Search results for: SAP2000 software
72 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department
Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov
Abstract:
Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess the analgesic prescribing practices in the ED through the execution of a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED’s electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data were then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids prescribed were in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time. Hydrocodone, oxycodone, and tramadol were each used only 10-15% of the time, and hydromorphone not at all.
Of opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, 2.0% of opioid D/C prescriptions are given by ED providers, versus the national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage, versus 90.7% and 9.3% for opioids, in the ED and at D/C, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology
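The metrics in this abstract reduce to grouping order counts and computing percentage shares. A minimal sketch of that aggregation logic (an illustration only, not the authors' actual Excel/Google Sheets workbook; the drug names and counts below are hypothetical):

```python
# Hypothetical analgesic order counts; "class" separates opioids from
# opioid alternatives, "setting" separates ED orders from discharge (D/C).
from collections import Counter

orders = [
    # (drug, class, setting, count) -- illustrative numbers only
    ("IV morphine", "opioid", "ED", 300),
    ("IV fentanyl", "opioid", "ED", 250),
    ("MSIR", "opioid", "D/C", 60),
    ("ibuprofen", "alternative", "ED", 900),
    ("lidocaine", "alternative", "ED", 200),
]

def pct_by(rows, key_index):
    """Percentage share of analgesic orders, grouped by one column."""
    totals = Counter()
    for row in rows:
        totals[row[key_index]] += row[3]
    grand = sum(totals.values())
    return {k: round(100 * v / grand, 1) for k, v in totals.items()}

print(pct_by(orders, 1))  # share of opioids vs. alternatives
print(pct_by(orders, 2))  # share of ED vs. D/C orders
```

The same two-line grouping, fed from an EMR query and refreshed periodically, is all a basic dashboard indicator needs.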
Procedia PDF Downloads 144
71 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language.
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, consistent with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
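The authors worked in R; a minimal Python sketch of the same pipeline shape (synthetic data standing in for the 37×15 network-signal matrix, scikit-learn in place of the R packages, and a linear-kernel SVM for the RFE step, since RFE needs per-feature coefficients) might look like:

```python
# Sketch of the RF / SVM-RFE comparison described above, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))       # 37 subjects x 15 network mean signals
y = np.array([0] * 19 + [1] * 18)   # 19 controls, 18 early-MS patients
X[y == 1, 0] += 2.0                 # make feature 0 the discriminative "network"

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Random forest: rank features by Gini-based importance
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top_rf = int(np.argmax(rf.feature_importances_))

# SVM: recursive feature elimination down to a single feature
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top_svm = int(np.argmin(rfe.ranking_))  # rank 1 = the selected feature

print(top_rf, top_svm, rf.score(X_te, y_te))
```

With the synthetic shift on feature 0, both selectors agree on it, mirroring how RF and RFE-SVM converged on the sensori-motor I network in the study.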
Procedia PDF Downloads 240
70 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading
Authors: Vaso K. Kapnopoulou, Piero Caridis
Abstract:
The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule, with the long-term distribution of stress range represented by a two-parameter Weibull distribution. The cumulative damage ratio was estimated from the fatigue damage caused by the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition. The dynamic load case that causes the highest stress range at each loading condition should then be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical, value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicular to the ship's centerline axis.
The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage at the location examined. The most severe one was identified to be the load case induced by a beam sea condition where the encountered wave comes from starboard. The cargo hold model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, displacements from the cargo hold model analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage at the location were found to be the various ballasting conditions.
Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle
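The Miner-sum calculation used here has a well-known closed form when the long-term stress-range distribution is two-parameter Weibull and the S-N curve is one-slope, N = K * S^(-m): the cumulative damage is D = (n_total / K) * q^m * Gamma(1 + m/h). The sketch below illustrates that formula with made-up parameters, not the paper's values:

```python
# Closed-form Miner damage sum for a Weibull long-term stress-range
# distribution (scale q, shape h) and a one-slope S-N curve N = K * S^(-m).
# All parameter values are illustrative, not taken from the study.
import math

def miner_damage(n_total, K, m, q, h):
    """Cumulative damage ratio D; D = 1 means the fatigue life is exhausted."""
    return n_total / K * q**m * math.gamma(1.0 + m / h)

n_total = 0.6e8     # stress cycles over the design period (illustrative)
K, m = 1.0e12, 3.0  # S-N curve parameters (illustrative)
q, h = 20.0, 1.0    # Weibull scale (MPa) and shape (illustrative)

D = miner_damage(n_total, K, m, q, h)
life = 25.0 / D     # fatigue life in years, for a 25-year design period
print(D, life)
```

In the rule-based workflow, one such damage contribution is evaluated per loading condition (using its governing dynamic load case) and the contributions are summed.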
Procedia PDF Downloads 419
69 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, urban resilience means adopting an approach that can respond to sensitive conditions in the risk management process. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the Covid-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the approaches that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. The general pattern of the research is descriptive-analytical, and since it tries to relate the components and indicators of urban resilience to pandemic crises and to explain scenarios, its futures studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (Covid-19), cross-impact structural analysis with the MICMAC software was used. The primary factors and variables affecting the resilience of Tehran's urban areas were organized into 5 main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables subjected to cross-impact analysis. Finally, the key factors and variables were categorized into five main areas: managerial-institutional with 5 variables; technology (smartness) with 3 variables; economic with 2 variables; socio-cultural with 3 variables; and physical-infrastructural with 7 variables.
These factors and variables were used as the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (Covid-19) in explaining and developing the scenarios. To develop the scenarios, intuitive logic, scenario planning as a futures research method, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn and selected using a creative method based on the metaphor of weather conditions, each indicating the general outline of the conditions of the Tehran metropolis in that situation. The scenarios are: 1) solar scenario (favorable governance and management, leading in smart technology); 2) cloud scenario (favorable governance and management, following in smart technology); 3) dark scenario (unfavorable governance and management, leading in smart technology); 4) storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst for the Tehran metropolis. According to the findings of this research, city managers can, in order to achieve a better tomorrow for the metropolis of Tehran, use futures research methods across all the factors and components of urban resilience against pandemic crises to form a coherent picture with the long-term horizon of 2050, chart a path for urban resilience, and create platforms for upgrading and increasing the capacity to deal with crises, so that the realization, development, and evolution of Tehran's urban areas guarantees long-term balance and stability in all dimensions and levels.
Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran
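The MICMAC step above boils down to ranking variables in an influence-dependence plane derived from a cross-impact matrix. A hedged sketch of that core computation (the variable names and matrix values below are hypothetical, not the study's 39-variable data):

```python
# MICMAC-style ranking from a (hypothetical) direct-influence matrix:
# M[i, j] = direct influence of variable i on variable j, on a 0-3 scale.
import numpy as np

variables = ["governance", "smart tech", "economy", "culture", "infrastructure"]
M = np.array([
    [0, 3, 2, 2, 3],
    [2, 0, 2, 1, 2],
    [1, 2, 0, 1, 2],
    [1, 1, 1, 0, 1],
    [1, 2, 2, 1, 0],
])

influence = M.sum(axis=1)    # driving power: how much each variable drives others
dependence = M.sum(axis=0)   # how strongly each variable is driven in turn

# Key drivers sit high on influence and comparatively low on dependence
order = np.argsort(-(influence - dependence))
ranked = [variables[i] for i in order]
print(ranked)
```

MICMAC additionally iterates powers of M to capture indirect influence, but the direct-influence ranking already shows how "key factors and driving forces" fall out of the matrix.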
Procedia PDF Downloads 67
68 Household Water Practices in a Rapidly Urbanizing City and Its Implications for the Future of Potable Water: A Case Study of Abuja Nigeria
Authors: Emmanuel Maiyanga
Abstract:
Access to sufficiently good quality freshwater has been a global challenge, but more notably in low-income countries, particularly the Sub-Saharan countries, of which Nigeria is one. Urban populations are soaring, especially in many low-income countries; the existing centralised water supply infrastructures are ageing and inadequate, and moreover, in households, people’s lifestyles have become more water-demanding. So people mostly devise coping strategies where municipal supply is perceived to have failed. This development threatens the future of groundwater and calls for a review of management strategy and research approach. The various issues associated with water demand management in low-income countries, and Nigeria in particular, are well documented in the literature. However, the way people use water daily in households, the reasons they do so, and how the situation is constructing demand among the middle-class population in Abuja, Nigeria, are poorly understood. This is what this research aims to unpack. This is achieved by using the social practices research approach (which is based on the Theory of Practices) to understand how this situation impacts the shared groundwater resource. A qualitative method was used for data gathering. This involved audio-recorded interviews with householders and water professionals in the private and public sectors. It also involved observation, note-taking, and document study. The data were analysed thematically using NVivo software. The research reveals the major household practices that draw on water on a domestic scale: water sourcing, body hygiene and sanitation, laundry, kitchen, and outdoor practices (car washing, domestic livestock farming, and gardening). Among all the practices, water sourcing, body hygiene, kitchen, and laundry practices are identified to impact groundwater most, with the scale of impact varying with household peculiarities.
Water sourcing practices involve people sourcing mostly from personal boreholes, because the municipal water supply is perceived as inadequate and unreliable in terms of service delivery and water quality, and people prefer the easier, unlimited access and control that boreholes give. Body hygiene practices reveal that every respondent prefers bucket bathing at least once daily, and the majority bathe twice or more every day. Frequency is determined by the feeling of heat and dirt on the skin; thus, people bathe to cool down, stay clean, and satisfy perceived social, religious, and hygiene demands. Kitchen practice consumes water significantly, as people run the tap for vegetable washing in daily food preparation and for dishwashing after each meal. Laundry practice reveals that most people wash clothes most frequently (twice a week) during hot and dusty weather, and washing by hand in basins and buckets is the most prevalent method and the most water-wasting, due to soap overdosing. The research also reveals poor water governance as a major cause of the current inadequate municipal water delivery. The implication of poor governance and widespread use of boreholes is an uncontrolled abstraction of groundwater to satisfy desired household practices, thereby putting the future of the shared aquifer at great risk of total depletion, with attendant multiplying effects on the people and the environment as the population continues to soar.
Keywords: boreholes, groundwater, household water practices, self-supply
Procedia PDF Downloads 123
67 Introducing Transport Engineering through Blended Learning Initiatives
Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi
Abstract:
Undergraduate students entering university across the last 2 to 3 years tend to have been born during the middle years of the 1990s. This generation of students has been exposed to the internet and a desire for and dependency on technology since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials’ perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship, and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of the students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering, and highway design is fundamental to the progression of the profession from a practice and research perspective. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent as a way to provide scalable education that satisfies the expectations of students.
This research study involved the development of a series of ‘blended learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources aimed to improve the learning environment for a large class size in excess of 450 students and to ensure that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large university courses.
Keywords: blended learning, highway design, teaching, transport planning
Procedia PDF Downloads 149
66 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers’ interest in farmed animal welfare standards has significantly increased in recent years, posing a huge challenge to producers to maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. The major bottleneck of standard aquaculture is that it considerably impairs fish welfare throughout the production cycle and, with it, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the blood stream, respectively; these are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve and there will be a better understanding of the mechanisms and metabolic pathways involved in the produced organism’s welfare. Due to its high economic importance in Portuguese aquaculture, gilthead seabream is the species elected for this study. Protein extracts from gilthead seabream muscle, liver, and plasma, from fish reared for a 3-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fish welfare protein marker fingerprint using a proteomics approach. Three tanks per condition and 3 biological replicates per tank are used for each analysis.
Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions are identified by mass spectrometry (LC-MS/MS) using the NCBInr (taxonomic level Actinopterygii) databank and the Mascot search engine. The statistical analysis is performed in the R software environment, using a one-tailed Mann-Whitney U-test (p < 0.05) to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression pattern with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling and high-density induced stress conditions are involved in several metabolic pathways, such as primary metabolism (i.e. glycolysis, gluconeogenesis), ammonia metabolism, cytoskeleton proteins, signaling proteins, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, are underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
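The statistical step above, a one-tailed Mann-Whitney U-test per protein, is easy to illustrate. The sketch below mirrors it in Python (SciPy rather than the authors' R environment; the spot-volume values are synthetic, not study data):

```python
# One-tailed Mann-Whitney U-test on one protein's normalized spot volumes:
# is expression in the control group stochastically less than in the
# stressed group? Values below are synthetic illustrations.
from scipy.stats import mannwhitneyu

control = [1.02, 0.95, 1.10, 0.98, 1.05, 1.01, 0.99, 1.03, 0.97]
stressed = [1.60, 1.75, 1.52, 1.68, 1.80, 1.55, 1.71, 1.66, 1.59]

stat, p = mannwhitneyu(control, stressed, alternative="less")
differentially_expressed = p < 0.05
print(stat, p, differentially_expressed)
```

In the full workflow this test is repeated for every differentially detected spot, and the significant ones go forward to LC-MS/MS identification.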
Procedia PDF Downloads 156
65 Effects of Heart Rate Variability Biofeedback to Improve Autonomic Nerve Function, Inflammatory Response and Symptom Distress in Patients with Chronic Kidney Disease: A Randomized Control Trial
Authors: Chia-Pei Chen, Yu-Ju Chen, Yu-Juei Hsu
Abstract:
The prevalence and incidence of end-stage renal disease in Taiwan rank the highest in the world. According to the statistical survey of the Ministry of Health and Welfare in 2019, kidney disease is the ninth leading cause of death in Taiwan. It leads to autonomic dysfunction, inflammatory response, and symptom distress, and further increases the damage to the structure and function of the kidneys, leading to increased demand for renal replacement therapy and risk of cardiovascular disease, which also imposes medical costs on society. A feasible intervention that effectively regulates autonomic nerve function in patients with chronic kidney disease (CKD), reduces the inflammatory response and symptom distress, and thereby slows the progression of the disease would serve the main goal of caring for CKD patients. This study aims to test the effect of heart rate variability biofeedback (HRVBF) on improving autonomic nerve function (heart rate variability, HRV), inflammatory response (interleukin-6 [IL-6], C-reactive protein [CRP]), and symptom distress (Piper Fatigue Scale, Pittsburgh Sleep Quality Index [PSQI], and Beck Depression Inventory-II [BDI-II]) in patients with CKD. This study was experimental research with convenience sampling. Participants were recruited from the nephrology clinic at a medical center in northern Taiwan. With signed informed consent, participants were randomly assigned to the HRVBF or control group by using the Excel BINOMDIST function. The HRVBF group received four weekly hospital-based HRVBF training sessions, and 8 weeks of home-based self-practice was done with a StressEraser. The control group received usual care.
We followed all participants for 3 months, repeatedly measuring their autonomic nerve function (HRV), inflammatory response (IL-6, CRP), and symptom distress (Piper Fatigue Scale, PSQI, and BDI-II) on their first day of study participation (baseline) and at 1 month and 3 months after the intervention to test the effects of HRVBF. The results were analyzed with SPSS version 23.0 statistical software. The data on demographics, HRV, IL-6, CRP, the Piper Fatigue Scale, PSQI, and BDI-II were analyzed by descriptive statistics. Differences between and within groups in all outcome variables were tested with the paired-sample t-test, independent-sample t-test, Wilcoxon signed-rank test, and Mann-Whitney U test. Results: Thirty-four patients with chronic kidney disease were enrolled, but three of them were lost to follow-up. The remaining 31 patients completed the study, including 15 in the HRVBF group and 16 in the control group. The characteristics of the two groups were not significantly different. The four-week hospital-based HRVBF training combined with eight-week home-based self-practice can effectively enhance parasympathetic nerve performance in patients with chronic kidney disease, which may counteract disease-related parasympathetic inhibition. In the inflammatory response, IL-6 and CRP in the HRVBF group did not improve significantly compared with the control group. Self-reported fatigue and depression decreased significantly in the HRVBF group, but still failed to achieve a significant difference between the two groups. HRVBF had no significant effect on improving sleep quality in CKD patients.
Keywords: heart rate variability biofeedback, autonomic nerve function, inflammatory response, symptom distress, chronic kidney disease
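The abstract does not name the specific HRV index used; a common time-domain marker of the parasympathetic performance discussed above is RMSSD, the root mean square of successive RR-interval differences. A minimal sketch (the RR intervals below are made-up, and RMSSD is offered only as an example index, not necessarily the one this study computed):

```python
# RMSSD: a standard time-domain HRV index of parasympathetic activity,
# computed from successive RR intervals in milliseconds (synthetic values).
import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 830, 805, 818, 795, 822]
print(round(rmssd(rr), 1))
```

Higher RMSSD generally reflects stronger vagal (parasympathetic) modulation, which is the direction of change the HRVBF group showed.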
Procedia PDF Downloads 180
64 Monte Carlo Risk Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% of CO2; it also boasts an estimated 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is smaller than that of its counterparts, and it achieves almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM reactor).
The MCM reactor is made up of the combustor, the low temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle model was developed in Fortran, and economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. This paper discusses the techno-economic analysis and Monte Carlo risk analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout – AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine – AZEP 85% (85% CO2 capture).
Keywords: gas turbine, global warming, green house gases, power plants
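The Monte Carlo risk analysis named in the title can be sketched generically: sample uncertain inputs from assumed distributions, push each sample through a cost model, and read off the spread of outcomes. The cost model, distributions, and numbers below are all illustrative assumptions, not the paper's Fortran/Excel/MATLAB models or data:

```python
# Generic Monte Carlo risk sketch for a capture plant's cost of electricity:
# uncertain fuel cost, capital cost, and capture efficiency penalty are
# propagated through a toy cost model. All values are illustrative.
import random
import statistics

random.seed(1)

def cost_of_electricity():
    fuel = random.gauss(30.0, 5.0)        # $/MWh fuel cost (assumed)
    capex = random.gauss(25.0, 4.0)       # $/MWh annualized capital (assumed)
    penalty = random.uniform(0.05, 0.12)  # capture efficiency penalty (assumed)
    return (fuel + capex) / (1.0 - penalty)

samples = [cost_of_electricity() for _ in range(10_000)]
mean = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]  # 95th-percentile cost
print(round(mean, 1), round(p95, 1))
```

Comparing the mean and upper-percentile cost across the four AZEP layouts is the kind of output such a risk analysis produces.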
Procedia PDF Downloads 472
63 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch
Authors: Eliska Smidova, Petr Kabele
Abstract:
This paper describes a non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along the fibers. For this purpose, we use a 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has recently been developed and implemented into the ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both the linear and non-linear range, (ii) elastic behavior until an anisotropic failure criterion is fulfilled, (iii) inelastic behavior after the failure criterion is satisfied, (iv) different post-failure responses for cracks along and across the grain, and (v) unloading/reloading behavior. The post-cracking response is treated by a fixed smeared crack model in which the Reinhardt-Hordijk function is used. The model requires in total 14 input parameters, which can be obtained from standard tests, off-axis test results, and iterative numerical simulation of compact tension (CT) or compact tension-shear (CTS) tests. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, three main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists of three numerical simulations of experiments involving Radiata Pine LVL and Yellow Poplar LVL. The first analysis deals with calibration and validation of the proposed model through off-axis tensile tests (at load-grain angles of 0°, 10°, 45°, and 90°) and CTS tests (at load-grain angles of 30°, 60°, and 90°), both of which were conducted on Radiata Pine LVL.
The second finite element simulation reproduces the load-CMOD curve of the compact tension (CT) test of Yellow Poplar, with the aim of obtaining cohesive-law parameters to be used as input in the third finite element analysis: a four point bending test of a small-size arch of 780 mm span made of Yellow Poplar LVL. The arch is designed with a through crack between the two middle layers in the crown. Curved laminated beams are exposed in the crown area to radial tensile stress that is high compared with the timber strength in radial tension; note that in this case the latter parameter stands for the tensile strength perpendicular to the grain. Standard tests deliver most of the relevant input data, whereas the traction-separation law for a crack along the grain can be obtained partly by inverse analysis of the compact tension (CT) or compact tension-shear (CTS) test. The initial crack was modeled as a narrow gap separating the two layers in the middle of the arch crown. The calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, the crack pattern given by the numerical simulation coincides with the most important observed crack paths.
Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model
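The Reinhardt-Hordijk softening function used in the fixed smeared crack model relates the crack-normal traction to the crack opening. A sketch of the standard form is below; the constants c1 = 3 and c2 = 6.93 are the usual fitted values for quasi-brittle softening, and the tensile strength and critical opening in the test values are placeholders, not the paper's calibrated LVL parameters.

```python
import math

def hordijk_softening(w, f_t, w_c, c1=3.0, c2=6.93):
    """Reinhardt-Hordijk crack-softening curve: traction across the crack as a
    function of crack opening w, tensile strength f_t, and the critical
    opening w_c at which the traction drops to zero."""
    if w >= w_c:
        return 0.0
    x = w / w_c
    return f_t * ((1.0 + (c1 * x) ** 3) * math.exp(-c2 * x)
                  - x * (1.0 + c1 ** 3) * math.exp(-c2))
```

The curve starts at f_t for a closed crack (w = 0) and decays smoothly to zero at w = w_c, which is what the test below checks.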
Procedia PDF Downloads 290
62 Oncolytic Efficacy of Thymidine Kinase-Deleted Vaccinia Virus Strain Tiantan (oncoVV-TT) in Glioma
Authors: Seyedeh Nasim Mirbahari, Taha Azad, Mehdi Totonchi
Abstract:
Oncolytic viruses, which replicate only in tumor cells, are being extensively studied for use in cancer therapy. The vaccinia virus, a member of the poxvirus family, has demonstrated oncolytic activity against glioma. Treating glioma with traditional methods such as chemotherapy and radiotherapy is quite challenging, and even though oncolytic viruses have shown immense potential in cancer treatment, their effectiveness in glioblastoma is still low; there is therefore a need to improve and optimize such immunotherapies. In this study, we designed oncoVV-TT, which targets tumor cells more effectively while minimizing replication in normal cells, by replacing the thymidine kinase gene with a luc-p2a-GFP gene expression cassette. The human glioblastoma cell line U251 MG, the rat glioblastoma cell line C6, and the non-tumor cell line HFF were plated at 10⁵ cells per well in 12-well plates in 2 mL of DMEM-F2 medium with 10% FBS and incubated at 37°C. After 16 hours, the cells were treated with oncoVV-TT at an MOI of 0.01 or 0.1 and incubated for a further 24, 48, 72 and 96 hours. A viral replication assay, fluorescence imaging, and viability tests, including trypan blue and crystal violet, were conducted to evaluate the cytotoxic effect of oncoVV-TT. The findings show that oncoVV-TT had significantly higher cytotoxicity and replication rates in tumor cells in a dose- and time-dependent manner, with the strongest effect observed in U251 MG. To conclude, oncoVV-TT has the potential to be a promising oncolytic virus for cancer treatment, with a stronger cytotoxic effect in human glioblastoma cells than in rat glioma cells. To assess the effectiveness of vaccinia virus-mediated viral therapy, we tested the U251 MG and C6 tumor cell lines, taken from human and rat gliomas, respectively.
The study evaluated oncoVV-TT's ability to replicate and lyse cells and analyzed the survival rates of the tested cell lines when treated with different doses of oncoVV-TT. Additionally, we compared the sensitivity of the human and rat glioma cell lines to the oncolytic vaccinia virus. All experiments involving viruses were conducted under biosafety level 2. We engineered a vaccinia-based oncolytic virus, oncoVV-TT, to replicate specifically in tumor cells. To propagate the oncoVV-TT virus, HeLa cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells, after which virus was added at 10 MOI. After 48 h, cells were harvested by scraping, and viruses were collected by three sequential freeze-thaw cycles followed by removal of cell debris by centrifugation (1500 rpm, 5 min). The supernatant was stored at −80 °C for the following experiments. To measure the replication of the virus in HeLa cells, cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells, after which virus at 5 MOI, or an equal dilution of PBS, was added. At treatment times of 0 h, 24 h, 48 h, 72 h and 96 h, the viral titers were determined under the fluorescence microscope (BZ-X700; Keyence, Osaka, Japan). Fluorescence intensity was quantified using ImageJ software according to the manufacturer's protocol. For the isolation of single-virus clones, HeLa cells were seeded in six-well plates (5 × 10⁵ cells/well). After 24 h (100% confluent), the cells were infected with a 10-fold dilution series of the TianTan green fluorescent protein (GFP) virus and incubated for 4 h. To examine the cytotoxic effect of the oncoVV-TT virus on U251 MG and C6 cells, trypan blue and crystal violet assays were used.
Keywords: oncolytic virus, immune therapy, glioma, vaccinia virus
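The MOI bookkeeping in the protocol above (e.g. adding virus at 10 MOI to 5 × 10⁴ HeLa cells per well) reduces to a one-line calculation. The stock titer in the example is an assumed value for illustration, since the abstract does not report one.

```python
def virus_volume_needed(moi, cell_count, titer_pfu_per_ml):
    """Volume of virus stock (mL) required to infect cell_count cells
    at the given multiplicity of infection (MOI, PFU per cell)."""
    return moi * cell_count / titer_pfu_per_ml

# Example (assumed 1e7 PFU/mL stock): 10 MOI on 5e4 cells -> 0.05 mL
volume_ml = virus_volume_needed(10, 5e4, 1e7)
```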
Procedia PDF Downloads 79
61 Explanation of the Main Components of the Unsustainability of Cooperative Institutions in Cooperative Management Projects to Combat Desertification in South Khorasan Province
Authors: Yaser Ghasemi Aryan, Firoozeh Moghiminejad, Mohammadreza Shahraki
Abstract:
Background: The cooperative institution is considered the first and most essential pillar of strengthening social capital, whose sustainability is the main guarantee of survival and continued participation of local communities in natural resource management projects. The Village Development Group and the Microcredit Fund are two important social and economic institutions in the implementation of the international project for the Restoration of Degraded Forest Lands (RFLDL) in Sarayan City, South Khorasan Province, which has drawn positive lessons from the participation of beneficiaries and has delivered more effective projects to deal with desertification. However, the low activity or liquidation of some of these institutions has become one of the important challenges and concerns of the project's executive experts. The current research was carried out with the aim of explaining the main components of the instability of these institutions. Materials and Methods: This research is descriptive-analytical in method and applied in purpose, and information was collected through both documentary and survey methods. The statistical population of the research included all members of the village development groups and microcredit funds in the target villages of the RFLDL project in Sarayan City; the statistical sample was selected based on the Cochran formula and checked against the Krejcie and Morgan table. After validity was confirmed by expert opinion, the reliability of the questionnaire was estimated at 0.83, which shows the appropriate reliability of the researcher-made questionnaire. Data analysis was done using SPSS software. Results: The extracted obstacles to the stability of social and economic networks were classified and prioritized into five groups: socio-cultural, economic, administrative, educational-promotional, and policy-management factors.
Based on this, the highest-priority items were: within the socio-cultural factors, 'not paying attention to the structural characteristics and composition of groups', 'lack of commitment and moral responsibility in some members of the group', and 'lack of a clear pattern for the preservation and survival of groups'; within the administrative factors, 'irregularity in holding group meetings' and 'irregular participation of members in meetings'; within the economic factors, 'small financial capital of the fund', 'the low amount of the fund's loans' and 'the fund's inability to conclude contracts and attract capital from other sources'; within the educational-promotional factors, 'job training not coinciding with the granting of loans to create jobs' and 'insufficient training for the effective use of loans and job creation'; and within the policy-management factors, 'failure to provide government support facilities to the funds'. Conclusion: In general, the results of this research show that policy-management factors and social factors, especially the structure and composition of social and economic institutions, are the most important obstacles to their sustainability. Therefore, it is suggested that cooperative institutions be formed on the basis of network analysis studies in order to achieve an appropriate composition of members.
Keywords: cooperative institution, social capital, network analysis, participation, Sarayan
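The sample-size step in the methods (Cochran's formula, checked against the Krejcie and Morgan table) can be sketched as follows; p = 0.5, a 5% margin of error, and z = 1.96 are the conventional defaults underlying that table, used here as assumptions since the abstract does not state them.

```python
def cochran_sample_size(population, p=0.5, margin=0.05, z=1.96):
    """Cochran's sample-size formula with finite-population correction,
    the usual basis of the Krejcie & Morgan (1970) table."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2          # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))   # finite-population correction
```

For example, a population of 10,000 yields a sample of 370 and a population of 100 yields 80, matching the Krejcie and Morgan table.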
Procedia PDF Downloads 55
60 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students
Authors: Lily Ranjbar, Haori Yang
Abstract:
Online education has been around for several decades, but its importance became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through delivering content and oversight processes, it has limitations in developing hands-on laboratory skills, especially in the STEM fields. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, and many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education: it can engage students and improve their learning outcomes. In addition, online education and virtual labs have gained substantial popularity in engineering and science education, so developing virtual labs is vital for institutions to deliver high-class education to their students, including their online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the SpectralLabs company, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics (MHP) program. It was essential to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was accomplished using the Realistic, Adaptive, Interactive Learning System (RAILS), a comprehensive simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server and a 3D simulation package that is downloaded locally to user machines.
Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact, and engage with a real STEM lab in all its dimensions, enabling them to feel like they are in a real lab environment and to see the same system they would in a lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces give students full functionality for data collection, changing the experimental setup, and live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab. Experimental results can then be tracked and analyzed in an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility of course modality helped attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them, and they have an unlimited amount of time to repeat and engage in experiments. This lab will also help speed up training in nuclear science and engineering.
Keywords: advanced radiation detection and measurement, virtual laboratory, Realistic Adaptive Interactive Learning System (RAILS), online education in STEM fields, student engagement, STEM online education, STEM laboratory, online engineering education
Procedia PDF Downloads 90
59 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to assess the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on a clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, modeled on the basis of a computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created with the ANSYS software, using hexahedral and tetrahedral variants of the SOLID185 element. A set of nonlinear contact conditions was applied to the common surfaces of the connecting devices and bone. The properties of each contact pair depend on the screw/mini-plate connection type and possible gaps between the fractured bone around the osteosynthesis region. Some of the investigated cases include prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were described by an orthotropic material model. The loading was a gauge force of magnitude 100 N applied at three different locations.
The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which locally exceeds the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with the doubled mini-plate shows increased stress within the connector due to an overly rigid connection, in which the main load path leads through the mini-plates instead of through the plates and connected bones. Clinical observations confirm more frequent plate failure in stiffer connections; some of these failures could be an effect of decreased low-cycle fatigue capability caused by overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions push the entire system to the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small tolerances on material capability. Comparison with clinical observations allows the entire connection to be optimized to prevent future incidents.
Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
Procedia PDF Downloads 275
58 The Use of Non-Parametric Bootstrap in Computing of Microbial Risk Assessment from Lettuce Consumption Irrigated with Contaminated Water by Sanitary Sewage in Infulene Valley
Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz
Abstract:
The metropolitan area of Maputo (Mozambique's capital city) is located in a semi-arid zone (800 mm annual rainfall) with 1,101,170 inhabitants. On the west side are the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site storm water contaminated with sanitary sewage from Maputo, transported through an open concrete channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the most commonly consumed salad crop, eaten daily in fast food, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of this lettuce, as quantified with Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work aims to determine the annual risk arising from the consumption of lettuce grown in the Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g), and the reduction of pathogens over time (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. The water quality samples (E. coli) were collected in the storm water channel from January 2016 to December 2018, comprising 65 samples, and urban lettuce consumption data were collected through a survey of 350 persons in the Maputo metropolis. A non-parametric bootstrap was performed, involving 10,000 iterations over the collected dataset, namely water quality (E. coli) and lettuce consumption.
The dose-response models were: exponential for Cryptosporidium, the Kummer confluent hypergeometric function (1F1) for Vibrio and Ascaris, and the Gaussian hypergeometric function (2F1(a,b;c;z)) for norovirus. The annual infection risk estimates were computed in R 3.6.0 (R Core Team) by Monte Carlo simulation (Latin hypercube sampling) with 10,000 iterations. The annual infection risks, expressed as the median and the 95th percentile per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹) and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in risks greater than the tolerable levels (< 10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the single-hit models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that, in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value of 0.69), followed by water quality. The decision-makers (the Mozambique government) must strengthen prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering.
Keywords: annual infection risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools
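The non-parametric bootstrap coupled with a single-hit dose-response model can be sketched as follows, shown here only for the exponential (Cryptosporidium-type) case. The dose-response parameter r, the indicator-to-pathogen ratio, and the water volume retained per serving are illustrative placeholders, not the study's fitted values.

```python
import math
import random

def annual_infection_risk(daily_dose, r):
    """Exponential single-hit dose-response, compounded over 365 daily exposures."""
    p_daily = 1.0 - math.exp(-r * daily_dose)
    return 1.0 - (1.0 - p_daily) ** 365

def bootstrap_median_risk(ecoli_counts, pathogen_ratio, retained_water_ml,
                          r, n_boot=10_000, seed=1):
    """Non-parametric bootstrap over monitored E. coli counts (CFU/100 mL).
    pathogen_ratio converts the indicator count to a pathogen count;
    retained_water_ml is the contaminated water left on the lettuce serving."""
    random.seed(seed)
    risks = []
    for _ in range(n_boot):
        resample = random.choices(ecoli_counts, k=len(ecoli_counts))
        mean_count = sum(resample) / len(resample)
        dose = mean_count * pathogen_ratio * retained_water_ml / 100.0
        risks.append(annual_infection_risk(dose, r))
    risks.sort()
    return risks[len(risks) // 2]  # bootstrap median
```

In the full study, separate dose-response functions (1F1, 2F1) would replace the exponential model for the other reference pathogens.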
Procedia PDF Downloads 120
57 Extracellular Polymeric Substances (EPS) Attribute to Biofouling of Anaerobic Membrane Bioreactor: Adhesion and Viscoelastic Properties
Authors: Kbrom Mearg Haile
Abstract:
Introduction: Membrane fouling is the bottleneck for robust continuous operation of the anaerobic membrane bioreactor (AnMBR). It is primarily caused by the characteristics of the mixed liquor suspended solids (MLSS), which are formed by aggregated flocs and a scaffold of microbial self-produced extracellular polymeric substances (EPS) that dictates floc integrity. Accordingly, the adhesion of EPS to the membrane surface, versus their role in forming firm, elastic, and mechanically stable flocs under the reactor's hydraulic shear, is critical for minimizing interactions of EPS and colloids originating from the MLSS flocs with the membrane. This study aims to gain insight into the effect of MLSS floc properties, EPS adhesion and viscoelasticity, and the viscoelastic properties of the sludge on membrane fouling propensity. Experimental: As a working hypothesis, to alter the aforementioned floc and EPS properties, either a coagulant or a surfactant was added during AnMBR operation. In the AnMBR, two flat-sheet 300 kDa pore size polyethersulfone (PES) membranes with a total filtration area of 352 cm² were immersed in the system treating municipal wastewater of Midreshet Ben-Gurion village in the Negev highlands, Israel. The system temperature, pH, biogas recirculation, and hydraulic retention time were regulated. TMP fluctuations during a 30-day experiment were recorded under three operating conditions: baseline (without the addition of a coagulating or dispersing agent), coagulant addition (FeCl3), and surfactant addition (sodium dodecyl sulfate). At the end of each experiment, EPS were extracted from the MLSS and from the fouled membrane, characterized for their protein, polysaccharide, and DOC contents, and correlated with the fouling tendency of the submerged UF membrane.
The EPS adhesion and viscoelastic properties were revealed using QCM-D, with a PES-coated gold sensor used as a membrane-mimicking surface providing detailed real-time EPS adhesion data. The associated shifts in resonance frequency and dissipation at different overtones were further modeled using the Voigt-based viscoelastic model (using Dfind software, Q-Sense, Biolin Scientific), from which the thickness, shear modulus, and shear viscosity of the EPS layers adsorbed on the PES-coated sensor were calculated. Results and discussion: The observations obtained from the QCM-D analysis indicate a greater decrease in the frequency shift for the elevated membrane fouling scenarios, likely due to the observed decrease in the calculated shear viscosity and shear modulus of the adsorbed EPS layer, coupled with an increase in the hydrated thickness and fluidity (ΔD/Δf slopes) of the EPS layer. Further analysis is being conducted for the three major operating conditions, examining their effects on sludge rheology, dewaterability (capillary suction time, CST) and settleability (SVI). The biofouling layer is further characterized microscopically using confocal laser scanning microscopy (CLSM) and scanning electron microscopy (SEM) to analyze the consistency of the biofouling layer's development with the sludge characteristics: a thicker biofouling layer on the membrane surface when operated with surfactant addition, due to flocs with reduced integrity and greater availability of EPS/colloids to the membrane, and conversely a thinner layer when operated with coagulant, compared with the baseline experiment, due to increased floc integrity.
Keywords: viscoelasticity, biofouling, AnMBR, EPS, floc integrity
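As context for the QCM-D frequency shifts discussed above, the rigid-film limit is given by the Sauerbrey relation. The sketch below uses the standard constant for a 5 MHz crystal; note that the abstract's use of the Voigt model reflects exactly the case where this rigid-film assumption breaks down (soft, dissipative EPS layers with high ΔD/Δf).

```python
def sauerbrey_mass(delta_f_hz, overtone=3, c_ng_per_cm2_hz=17.7):
    """Sauerbrey areal mass (ng/cm^2) from a QCM-D frequency shift at a given
    overtone. C = 17.7 ng cm^-2 Hz^-1 applies to a standard 5 MHz crystal and
    is valid only for thin, rigid films (low dissipation); viscoelastic EPS
    layers require Voigt-based modeling instead."""
    return -c_ng_per_cm2_hz * delta_f_hz / overtone
```

For example, a -30 Hz shift at the third overtone corresponds to about 177 ng/cm² of rigidly coupled mass; a soft EPS layer would underestimate its true (hydrated) mass this way.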
Procedia PDF Downloads 22
56 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to diagnose schistosomiasis among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study in which a total of 759 children, aged 5 to 14 years, provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity and other biomarkers; each strip was dipped into the urine sample and colour changes on the specific reagent pads observed. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, with a visible test line indicating a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%.
The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples serves as a critical biomarker for schistosomiasis infection, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of the inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts. The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area.
Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
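The correlation analysis reported above (e.g. r = -0.37 between specific gravity and pH) was performed in R; for illustration, a from-scratch Pearson coefficient is sketched below in Python, with made-up toy data in the test rather than the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # un-normalized covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```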
Procedia PDF Downloads 22
55 Identifying Effective Strategies to Promote Vietnamese Fashion Brands in an Internationally Dominated Market
Authors: Lam Hong Lan, Gabor Sarlos
Abstract:
It is hard to find best practices in promotion for local fashion brands in Vietnam, as the industry is still very young. Local fashion start-ups have grown quickly in the last five years, thanks in part to the internet and social media. However, local designer-owners can face a huge challenge when competing with international brands in the Vietnamese market, and few local case studies are available for guidance. In response, this paper studied how local small- to medium-sized enterprises (SMEs) promote to their target customers in order to compete with international brands. The knowledge of both successful and unsuccessful approaches generated by this study is intended both to contribute to the academic literature on local fashion in Vietnam and to help local designers learn from and improve their brand-building strategy. The primary study featured qualitative data collection via semi-structured in-depth interviews. Transcription and data analysis were conducted manually in order to identify success factors that local brands should consider as part of their promotion strategy. Purposive sampling of SMEs identified five designers in Ho Chi Minh City (the biggest city in Vietnam) and three designers in Hanoi (the second biggest) as interviewees. Participant attributes included: born in the 1980s or 1990s; familiar with the internet and social media; designer-owner of a successful local fashion brand in the key middle-market and/or mass-market segments (which are crucial to the growth of local brands). A secondary study was conducted using social listening software to gather further qualitative data on what were considered successful or unsuccessful approaches to local fashion brand promotion on social media. Both the primary and secondary studies indicated that local designers had maximized their promotion budgets by using owned and earned media instead of paid media.
Findings from the qualitative interviews indicate that the internet and social media have been used as effective promotion platforms by local fashion start-ups. Facebook and Instagram were the most popular social networks used by the SMEs interviewed, and these social platforms were believed to offer a more affordable promotional strategy than traditional media such as TV and/or print advertising. Online stores were considered an important factor in helping the SMEs reach customers beyond the physical store. Furthermore, a successful online store allowed some SMEs to reduce their rental costs by maintaining their physical store in a cheaper, less central city area rather than a more traditional city-center location. In addition, the comparatively small size of the SMEs allowed them to be more attentive to their customers, leading to higher customer satisfaction and rates of return. In conclusion, this study found that these kinds of cost savings helped the SMEs interviewed to focus their scarce resources on producing unique, high-quality collections in order to differentiate themselves from international brands. Facebook and Instagram were the main platforms used for promotion and brand-building. The main challenge to this promotion strategy identified by the SMEs interviewed was to continue finding innovative ways to maximize the impact of a limited marketing budget.
Keywords: Vietnam, SMEs, fashion brands, promotion, marketing, social listening
Procedia PDF Downloads 125
54 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking in concrete structures under loading, and finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of the crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, since cracks that exceed the allowable widths are unacceptable in an environment that is aggressive for the reinforcing steel. To simulate the crack width, the discrete crack approach was adopted by means of a Cohesive Zone Model (CZM), using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Structures and Materials Laboratory of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional SOLID65 solid elements, capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations, and the reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between the tractions and the separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called the CZM.
The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterised by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was found, and the numerical models also allowed the load-crack-width curves to be obtained. In both cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of the crack width. From a practical point of view, the results regarding the crack width can be considered good: the numerical models reproduced the load-displacement curve of the test and the location of the crack favorably.
Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
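As an illustration of the bilinear Mode I cohesive law used in this kind of CZM analysis, the traction-separation relation can be sketched in a few lines; the function names and the numerical values below are illustrative assumptions, not parameters taken from the study's ANSYS model.

```python
def bilinear_czm_traction(delta, sigma_max, delta_init, delta_final):
    """Mode I bilinear cohesive traction for a given normal separation delta.

    sigma_max  : maximum normal contact stress (damage initiates here)
    delta_init : separation at damage initiation (end of linear loading)
    delta_final: contact gap at the completion of debonding
    """
    if delta <= 0.0:
        return 0.0  # closed interface; compressive contact handled separately
    if delta <= delta_init:
        return sigma_max * delta / delta_init  # linear elastic loading branch
    if delta < delta_final:
        # linear softening: traction decays to zero at complete debonding
        return sigma_max * (delta_final - delta) / (delta_final - delta_init)
    return 0.0  # fully debonded, traction-free crack faces


def fracture_energy(sigma_max, delta_final):
    """Critical fracture energy = area under the triangular traction curve."""
    return 0.5 * sigma_max * delta_final
```

The triangle enclosed by the loading and softening branches equals the critical fracture energy, which, together with the maximum normal contact stress, is what defines the debonding behaviour of the bonded contact pair.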
53 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as 'sacrificial electrodes'. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section reactor configurations and one multiple concentric annular cross-section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the value assumed for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distribution inside the reactor is in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
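For context, the simplest of the theoretical approaches compared in studies of this kind, the Darcy-Weisbach equation with the hydraulic-diameter approximation for a concentric annulus, can be sketched as follows. The roughness, viscosity, and flow values are generic assumptions for illustration, not the reactor's measured properties, and the laminar constant 64/Re is itself an approximation (the exact annulus value depends on the radius ratio).

```python
import math

def annulus_head_loss(Q, D_out, D_in, L, nu=1.0e-6, eps=1.5e-6, g=9.81):
    """Darcy-Weisbach friction head loss (m) in a concentric annular pipe.

    Q     : flow rate (m^3/s)
    D_out : inner diameter of the outer pipe (m)
    D_in  : outer diameter of the inner pipe (m)
    L     : pipe length (m)
    nu    : kinematic viscosity (m^2/s), eps: wall roughness (m)
    """
    A = math.pi / 4.0 * (D_out**2 - D_in**2)   # annular flow area
    Dh = D_out - D_in                          # hydraulic diameter = 4A / wetted perimeter
    v = Q / A                                  # mean velocity
    Re = v * Dh / nu
    if Re < 2300.0:
        f = 64.0 / Re  # laminar approximation; exact annulus constant is geometry-dependent
    else:
        # Swamee-Jain explicit approximation to the Colebrook-White equation
        f = 0.25 / math.log10(eps / (3.7 * Dh) + 5.74 / Re**0.9) ** 2
    return f * (L / Dh) * v**2 / (2.0 * g)
```

Comparing this kind of prediction against the measured head loss (and against the CFD velocity field) is exactly where the one-dimensional uniform-flow assumptions show their limits.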
52 Experimental Characterisation of Composite Panels for Railway Flooring
Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho
Abstract:
Railway transportation is considered the most economical and sustainable way to travel. However, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions, ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means of transport (trains), and passengers satisfied with the level of travel comfort. These requirements are in line with the European rail sector's strategic agenda for 2020, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort. It is also one of the systems that takes the most time to install on the train, contributes substantially to the weight (mass) of the interior systems as a whole, and has a strong impact on manufacturing costs. The design of railway flooring, in the development phase, is usually done with design software that allows several solutions to be drawn and calculated in a short period of time. After obtaining the best solution, considering the goals previously defined, experimental data is always necessary and required. This experimental phase is so significant that its outcome can provoke a revision of the designed solution.
This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway applications. The mechanical tests were performed on unaged specimens and on specimens that had undergone some type of ageing, i.e. heat, cold and humidity cycles or freezing/thawing cycles. These conditionings aim to simulate not only the effect of time, but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (panel), the tests were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For the individual characterisation of the components, more specifically the covering, the following tests were performed: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test and dimensional stability. The main conclusion was that experimental characterisation contributes greatly to understanding the behaviour of the materials, both individually and assembled. This knowledge contributes to increasing the quality of premium solutions and to their improvement. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through the COMPETE 2020.
Keywords: durability, experimental characterization, mechanical tests, railway flooring system
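To make the four-point bending test concrete: the peak flexural stress in the panel follows from beam theory. The sketch below assumes the common geometry in which the loading span is one-half the support span; this span ratio, and the numbers in the example, are illustrative assumptions, since the paper does not state the fixture dimensions (a different span ratio changes the constant).

```python
def flexural_stress_4pt(F, L, b, d):
    """Peak flexural stress in four-point bending, loading span = L/2.

    F : total applied load (N)
    L : support span (mm)
    b : specimen width (mm)
    d : specimen thickness (mm)
    Returns stress in MPa (N/mm^2). For this geometry sigma = 3*F*L / (4*b*d^2).
    """
    return 3.0 * F * L / (4.0 * b * d**2)
```

For example, a hypothetical 50 mm wide, 10 mm thick panel strip loaded to 1 kN over a 200 mm support span would see a peak flexural stress of 30 MPa under these assumptions.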
51 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution for harbour or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features still rely on pseudo-static approaches, although the problem is a dynamic one. With this concern in mind, the study focuses in particular on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, applying a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, the study performs various simulations in Plaxis 2D, a well-known geotechnical software, and in CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes the sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that, due to its instantaneous nature, the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile.
These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of the relevant factors here is the selection of the seismic reduction factor. A large number of studies discuss its importance, but also all of its uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements when compared with more advanced methods. In this situation, the study calibrates the seismic reduction factor by fitting results from the pseudo-static to the dynamic analysis. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with some studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has room to improve its methodologies and approaches. Consequently, designs could propose better seismic solutions thanks to advanced methods such as those presented in this study.
Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
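The Westergaard formulation referred to above has a simple closed form. A minimal sketch, assuming a rigid vertical wall retaining water of depth H and a pseudo-static seismic coefficient kh (the parameter values in the example are illustrative, not the study's design values):

```python
import math

def westergaard_pressure(kh, H, z, gamma_w=9.81):
    """Westergaard hydrodynamic pressure (kPa) at depth z below the surface.

    kh      : horizontal seismic coefficient (dimensionless)
    H       : total water depth (m)
    gamma_w : unit weight of water (kN/m^3)
    p(z) = (7/8) * kh * gamma_w * sqrt(H * z), a parabolic distribution.
    """
    return 7.0 / 8.0 * kh * gamma_w * math.sqrt(H * z)


def westergaard_resultant(kh, H, gamma_w=9.81):
    """Total hydrodynamic thrust per unit wall width (kN/m).

    Integrating p(z) from 0 to H gives (7/12) * kh * gamma_w * H^2.
    """
    return 7.0 / 12.0 * kh * gamma_w * H**2
```

Evaluating p(z) with the instantaneous acceleration at each time step, rather than a single constant kh, is what makes the pressure fluctuate during the earthquake as the CFD results indicate.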
50 Assigning Moral Positions Caused by Environmental Degradation in San Buenaventura Public Housing Complex in Ixtapaluca, State of Mexico, Mexico
Authors: Ángel O. Aldape, José M. Bustos, José G. Guízar
Abstract:
Building companies providing public housing in Mexico, such as INFONAVIT, Casas GEO and Casas ARA, among others, provide low-interest home loans for thousands of Mexican families and individuals to buy a home. Once this goal is achieved, however, these companies are not responsible for the care and maintenance of green areas and waste collection services because, technically, it is the local municipalities' responsibility to provide these services to the community. In practice, local municipalities do not always do so. To study this problem, the San Buenaventura public housing complex was selected. This housing complex is located in the municipality of Ixtapaluca, State of Mexico (Estado de Mexico), Mexico. To the best of our knowledge, there are currently no formal studies about San Buenaventura that can offer effective options and/or better ways of sorting and disposing of household waste, or of improving local green areas (community gardens and parks). Only a few web blogs and periodical reports have addressed these serious problems, which directly affect the social and psychological well-being of residents. This research project aims to improve our understanding of the ontological elements that emerge from residents' discourses (in the form of informal talks and gossip) and to discover the socio-physical elements that residents use to assign moral positions to others or to themselves. The theoretical framework used in this study is based on two constructionist theories: positioning theory and site ontology. The first theory offered the opportunity to explore the rights, duties, and obligations assigned to a social role (or moral position) of the participants. The second theory provided a constructionist philosophical base that includes various socio-physical elements considered in assigning personal or community meanings to particular contexts.
Both theories contributed to defining personal dispositions and/or attitudes to carry out concrete social action or practice. The theoretical framework was guided by a relativistic ontology that allowed the researcher to better interpret the reality of the participants of this study. A descriptive-interpretative methodology was used, and two qualitative methods were arranged based on the proposed theoretical framework: a semi-structured focus group interview and direct observations. The semi-structured focus group was carried out with four residents of San Buenaventura, and covert observations of public spaces and houses were conducted. These were analysed and interpreted by the researcher with the assistance of NVivo software. The results suggest that all participants agreed in assigning moral traits of responsibility to other residents for the neglect of the green areas and for waste pollution, making those residents liable for the environmental degradation and the decay of the green areas. They assigned no moral duty or responsible moral traits to themselves with respect to environmental protection or destruction. Overall, the participants in this study pointed out that external ontological elements such as the local government, infrastructure or cleaning services were not the main cause of these environmental problems, but rather the general lack of moral duty and disposition of other residents.
Keywords: conversation, environment, housing, moral, ontology, position, public, site, talks
49 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software "Well Inventory Data Entry" (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, enhanced performance, and continuous improvement through performance tracking and midterm forecasting. This was previously hard to achieve, as various legacy methods were used. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other parts of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that enable the creation of a structured process which allows risk factors to be flagged and helps mitigate them. The tool dictates assigned responsibilities for all stakeholders in a manner that enables continuous updates of daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality for all asset groups and technical support groups to update the contents of their respective planning parameters. The home-grown tool was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of already existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset and technical support groups. These techniques enable enhanced integration of planning data workflows, ultimately laying the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals are created to ensure the constant improvement of the efficiency of the entire planning and operational structure.
Keywords: automation, integration, value, communication
48 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids
Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo
Abstract:
Ionic liquids offer excellent advantages over conventional solvents for the industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for the recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can be best exploited in practice. This work addresses gaps in research on process modelling and simulation to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. The parameters of the distribution coefficient models are estimated by fitting the models to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the surface-to-mean diameter of the liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor.
New experiments measure the interfacial tension between the aqueous and ionic liquid phases. The empirical models for predicting the density and viscosity of solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in gPROMS software for dynamic process simulation. The results of single-stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published data for the diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has the potential to be extended to account for other metals, acids, and solvents for the process development, design, and optimisation of extraction processes applying ionic liquids for metal separations, although a lack of experimental data currently limits the accuracy of the models within the whole framework. Future work will focus on process development more generally and on the extractive separation of rare earths using ionic liquids.
Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium
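The steady-state limiting case of such a multistage counter-current flowsheet, equilibrium stages with a constant distribution coefficient, can be sketched with a Kremser-type relation. This is a simplification for illustration only (the study's gPROMS model is dynamic, with finite mass transfer rates and composition-dependent coefficients), and the function name and numbers are assumptions, not values from the paper.

```python
def kremser_fraction_remaining(D, solvent_to_feed, n_stages):
    """Fraction of solute left in the aqueous raffinate after n counter-current
    equilibrium stages.

    D               : distribution coefficient (organic/aqueous), assumed constant
    solvent_to_feed : ratio of organic to aqueous volumetric flows
    n_stages        : number of ideal stages
    With extraction factor E = D * (S/F), the Kremser result is
    (E - 1) / (E**(n+1) - 1), reducing to 1/(n+1) when E = 1.
    """
    E = D * solvent_to_feed  # extraction factor
    if abs(E - 1.0) < 1e-12:
        return 1.0 / (n_stages + 1)
    return (E - 1.0) / (E ** (n_stages + 1) - 1.0)
```

For a single stage this reduces to the familiar 1/(1 + E), and it shows why selectivity compounds over stages: a species with a high D is depleted geometrically faster than one with a low D, which is the basis of Co/Ni separation.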
47 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings
Authors: Mukhtar Maigari
Abstract:
The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to managing Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), encompassing physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, CO2 levels, etc., conducted using different equipment, including data loggers, to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then develops a conceptual BIM-based framework, informed by the findings from the case studies and the empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control and automation, with real-time data handling capabilities. The framework in turn leads to the development and testing of a BIM-based prototype tool.
This prototype leverages software such as Autodesk Revit with its visual programming tool, Dynamo, together with an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications as well; it can serve as a tool for other sustainability assessments, such as energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by showing in practice how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions.
Keywords: BIM, POE, IEQ, HE-buildings
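The kind of real-time check such a prototype performs on incoming sensor readings can be illustrated with a minimal sketch. The thresholds below are generic comfort guidelines chosen for illustration (e.g. the common 1000 ppm CO2 guideline), not the values or logic of the study's Dynamo/Arduino pipeline, and the function name is an assumption.

```python
def assess_ieq(reading):
    """Flag IEQ parameters in a sensor reading against illustrative comfort bands.

    reading: dict with optional keys "co2_ppm", "temp_c", "rh_pct".
    Returns a list of alert strings; an empty list means all checked
    parameters are within the assumed comfort bands.
    """
    alerts = []
    if reading.get("co2_ppm", 0) > 1000:
        alerts.append("CO2 above 1000 ppm: increase ventilation")
    t = reading.get("temp_c")
    if t is not None and not (20.0 <= t <= 26.0):
        alerts.append("temperature outside 20-26 C comfort band")
    rh = reading.get("rh_pct")
    if rh is not None and not (30.0 <= rh <= 60.0):
        alerts.append("relative humidity outside 30-60% band")
    return alerts
```

In a BIM-integrated setup, alerts like these would be pushed back into the model (e.g. colouring the affected room in Revit via Dynamo) or forwarded to the building management system for automated ventilation control.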
46 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aimed at uncovering so-called "multimodal gestalts", patterns of linguistic and embodied conduct that reoccur in specific sequential positions employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that allows them to be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable the parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
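At its core, the chronological, cross-layer querying described above reduces to finding time-interval overlaps between annotation spans on different layers (e.g. which spoken tokens co-occur with which pointing gestures). A minimal sketch, where the span format and labels are illustrative assumptions rather than VIAN-DH's actual data model:

```python
def overlapping(spans_a, spans_b):
    """Return (label_a, label_b) pairs whose [start, end) intervals overlap.

    spans_a, spans_b: lists of dicts with "label", "start", "end" (seconds),
    representing two annotation layers of the same video (e.g. tokens
    from speech recognition and gestures from image recognition).
    """
    pairs = []
    for a in spans_a:
        for b in spans_b:
            # two half-open intervals overlap iff each starts before the other ends
            if a["start"] < b["end"] and b["start"] < a["end"]:
                pairs.append((a["label"], b["label"]))
    return pairs
```

A production system would use an interval tree or a sort-merge sweep instead of this quadratic loop, and would combine the temporal filter with token-level constraints (lemma, part of speech) to search for candidate multimodal gestalts.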
Procedia PDF Downloads 108
45 Project Management Practices and Operational Challenges in Conflict Areas: Case Study Kewot Woreda North Shewa Zone, Amhara Region, Ethiopia
Authors: Rahel Birhane Eshetu
Abstract:
This research investigates the complex landscape of project management practices and operational challenges in conflict-affected areas, with a specific focus on Kewot Woreda in the North Shewa Zone of the Amhara region in Ethiopia. The study aims to identify essential project management methodologies, the significant operational hurdles faced, and the adaptive strategies employed by project managers in these challenging environments. Utilizing a mixed-methods approach, the research combines qualitative and quantitative data collection. Initially, a comprehensive literature review was conducted to establish a theoretical framework. This was followed by the administration of questionnaires to gather empirical data, which was then analyzed using statistical software. This sequential approach ensures a robust understanding of the context and challenges faced by project managers. The findings reveal that project managers in conflict zones encounter a range of escalating challenges. Initially, they must contend with immediate security threats and the presence of displaced populations, which significantly disrupt project initiation and execution. As projects progress, additional challenges arise, including limited access to essential resources and environmental disruptions such as natural disasters. These factors exacerbate the operational difficulties that project managers must navigate. In response to these challenges, the study highlights the necessity for project managers to implement formal project plans while simultaneously adopting adaptive strategies that evolve over time. Key adaptive strategies identified include flexible risk management frameworks, change management practices, and enhanced stakeholder engagement approaches. These strategies are crucial for maintaining project momentum and ensuring that objectives are met despite the unpredictable nature of conflict environments. 
The research emphasizes that structured scope management, clear documentation, and thorough requirements analysis are vital components for effectively navigating the complexities inherent in conflict-affected regions. However, the ongoing threats and logistical barriers necessitate continuous adjustment of project management methodologies. This adaptability is essential not only for the immediate success of projects but also for fostering long-term resilience within the community. In conclusion, the study offers actionable recommendations aimed at improving project management practices in conflict zones. These include the adoption of adaptive frameworks specifically tailored to the unique conditions of conflict environments and targeted training for project managers. Such training should focus on equipping managers with the skills to better address the dynamic challenges presented by conflict situations. The insights gained from this research contribute significantly to the broader field of project management, providing a practical guide for practitioners operating in high-risk areas. By emphasizing sustainable and resilient project outcomes, this study underscores the importance of adaptive management strategies in ensuring the success of projects in conflict-affected regions. The findings serve not only to enhance the understanding of project management practices in Kewot Woreda but also to inform future research and practice in similar contexts, ultimately aiming to promote stability and development in areas beset by conflict.
Keywords: project management practices, operational challenges, conflict zones, adaptive strategies
Procedia PDF Downloads 15
44 Biosynthesis of Silver Nanoparticles Using Zataria multiflora Extract, and Study of Their Antibacterial Effects on Negative Bacillus Bacteria Causing Urinary Tract Infection
Authors: F. Madani, M. Doudi, L. Rahimzadeh Torabi
Abstract:
The irregular consumption of current antibiotics contributes to an escalation in antibiotic resistance among urinary pathogens on a global scale. The objective of this research was to investigate the biological synthesis of silver nanoparticles using Zataria multiflora extract and to evaluate the efficacy of the synthesized nanoparticles in inhibiting the growth of the multi-drug-resistant Gram-negative bacilli that commonly cause urinary tract infections. The botanical specimen used in the current investigation was Z. multiflora, and its extract was produced using the Soxhlet extraction technique. The study examined the green-synthesis conditions of the silver nanoparticles by considering three key parameters: the quantity of extract used, the concentration of silver nitrate salt, and the temperature. The particle dimensions were determined using the Zetasizer technique, and the synthesized silver nanoparticles were characterized by TEM, XRD, and FTIR. To evaluate the antibacterial effects of the biologically synthesized nanoparticles, different concentrations of silver nanoparticles were tested on 140 multiple-drug-resistant (MDR) bacterial strains of Escherichia coli, Klebsiella pneumoniae, Enterobacter aerogenes, Proteus vulgaris, Citrobacter freundii, Acinetobacter baumannii, and Pseudomonas aeruginosa (20 samples per genus), all of which were MDR and cause urinary tract infections. PCR tests and laboratory methods were used to identify the bacteria, and agar well diffusion and microdilution methods were used to assess their sensitivity to the nanoparticles. The data were analyzed using the statistical software SPSS, specifically employing the nonparametric Kruskal-Wallis and Mann-Whitney tests. The study yielded noteworthy findings regarding the effects of varying concentrations of silver nitrate, different quantities of Z. multiflora extract, and temperature on the nanoparticles. Specifically, increasing the silver nitrate concentration, extract amount, and temperature reduced the size of the synthesized nanoparticles; however, the impact of these factors on the particle dispersion index was statistically non-significant. According to the transmission electron microscopy (TEM) findings, the particles were predominantly spherical, with diameters ranging from 25 to 50 nanometers, and XRD indicated that the nanoparticles in the examined sample were silver nanocrystals. FTIR showed that the spectra of Z. multiflora and of the synthesized nanoparticles had clear peaks in the ranges of 1500-2000 and 3500-4000. In the agar well diffusion and microdilution assays, the biologically synthesized nanoparticles at 1000 mg/ml produced the highest mean inhibition zone diameter in E. coli (23 mm) and the lowest in A. baumannii (15 mm). The MIC was 125 mg/ml for all of the bacteria except A. baumannii, for which it was 250 mg/ml. Comparing the growth-inhibitory effects, both the chemically and the biologically synthesized nanoparticles exhibited notable growth inhibition, with the chemically synthesized nanoparticles demonstrating the highest level of growth inhibition at a concentration of 62.5 mg/mL. The present study thus demonstrated an inhibitory effect on the growth of the MDR bacteria that cause urinary tract infections.
Keywords: multiple drug resistance, negative bacillus bacteria, urine infection, Zataria multiflora
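The abstract reports that the group comparisons were run in SPSS with the nonparametric Kruskal-Wallis test. As an illustration of what that test computes, the sketch below re-implements the Kruskal-Wallis H statistic from scratch on hypothetical inhibition-zone data (the real data are not published in the abstract); it omits the tie correction and the chi-squared p-value that SPSS would also report.

```python
def average_ranks(pooled):
    """Map each value to its average 1-based rank across the pooled sample."""
    ordered = sorted(pooled)
    rank_of = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1  # advance past a run of tied values
        rank_of[ordered[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return rank_of

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    rank_of = average_ranks(pooled)
    rank_term = sum(sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_term - 3 * (n + 1)

# Hypothetical inhibition-zone diameters (mm) for three concentrations
zones = [[15, 16, 14], [18, 19, 17], [22, 23, 21]]
h = kruskal_wallis_h(zones)  # a large H suggests the groups differ
```

Under the null hypothesis H is approximately chi-squared distributed with (number of groups − 1) degrees of freedom, which is how a p-value would be obtained in practice.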
Procedia PDF Downloads 104
43 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic
Authors: Monique M. Keller, Roline Barnes, Corlia Brandt
Abstract:
Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the field of hand rehabilitation and robotics. Measuring the grasp forces and types produced by the individual fingers during daily functional tasks is valuable for informing and grading rehabilitation practices for second-to-fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) conducted the most extensive and complete grasp study, which resulted in the GRASP taxonomy. The Covid-19 pandemic changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional study investigated the hand forces of six healthy adult pilot participants aged 20 to 59 years during 105 tasks. The tasks were categorized into five sections, namely personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP taxonomy. Grasp forces were measured with 13 mm force-sensing resistors (FSRs) glued onto a glove attached to each of the individual fingers of the dominant and non-dominant hands. Testing equipment included FlexiForce 13 mm (0.5" circle) FSRs calibrated prior to testing, 10k 1/4 W resistors, an Arduino Pro Mini 5.0 V compatible board, an ESP-01 kit, an Arduino Uno R3 compatible board, a 1 m USB AB cable, an FTDI FT232 mini USB-to-serial adapter, SIL 40 inline connectors, ribbon cable with male header pins (female-to-female and male-to-female), two gloves, glue to attach the FSRs to the gloves, and the Arduino software downloaded onto a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing.
Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperature checks, wearing surgical gloves before putting on the testing gloves, and 1.5-metre-long wires attaching the FSRs to the Arduino to maintain social distance. Findings: The predominant grasps observed during the 105 tasks included adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). The ranges of forces applied per category were: personal care (1-25 N), transport and moving around (1-9 N), home environment and inside (1-41 N), gardening and outside (1-26.5 N), and office (1-20 N). Conclusion: Scientific measurement of finger forces, with careful consideration of the types of grasps used in daily tasks, should guide rehabilitation practices and robotic design to ensure the individual's full return to participation in the community.
Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces
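The FSR-plus-10k-resistor rig described above reads each sensor through a voltage divider into an Arduino analog pin. The sketch below (written in Python purely for illustration; the study's own acquisition code ran on the Arduino and is not published in the abstract) shows the standard conversion from a 10-bit ADC reading to FSR resistance, assuming a VCC -- FSR -- pin -- 10k -- GND divider. The calibration constant mapping conductance to newtons is hypothetical and would in practice come from the pre-test calibration the authors mention.

```python
VCC = 5.0          # Arduino supply voltage (V)
R_FIXED = 10_000   # 10 kΩ pull-down resistor, as in the rig described
ADC_MAX = 1023     # full scale of the Arduino's 10-bit ADC

def fsr_resistance(adc_reading):
    """FSR resistance (Ω) for a divider VCC -- FSR -- pin -- R_FIXED -- GND."""
    v_out = adc_reading * VCC / ADC_MAX
    if v_out <= 0.0:
        return float("inf")  # no pressure: the FSR is effectively open circuit
    return R_FIXED * (VCC - v_out) / v_out

def estimated_force(adc_reading, k=2.0e5):
    """Rough force estimate (N): FSR conductance is roughly linear in force.
    k is a hypothetical calibration constant obtained from bench calibration."""
    r = fsr_resistance(adc_reading)
    return 0.0 if r == float("inf") else k / r
```

With this divider, an unloaded sensor reads near 0 counts and the reading rises as the FSR's resistance drops under load; per-sensor calibration against known weights is what makes the force values trustworthy.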
Procedia PDF Downloads 190