Search results for: statistical optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6834


294 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation

Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo

Abstract:

Introduction: Energy restriction is an effective approach to preventing liver steatosis. However, due to social and economic reasons among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound that belongs to the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under a mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: 36 male six-week-old rats were fed a high-fat high-sucrose diet for 6 weeks in order to induce steatosis. Rats were then divided into four groups and fed a standard diet for 6 additional weeks: control group (C), resveratrol group (RSV, resveratrol 30 mg/kg/d), restricted group (R, 15% energy restriction) and combined group (RR, 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyl transferase 1a (CPT 1a) and citrate synthase (CS) activities were measured spectrophotometrically. TFAM (mitochondrial transcription factor A) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the ratio of acetylated peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) to total PGC1α, were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA with the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups in liver weight or cholesterol content, but the three treated groups showed reduced TG compared to the control group, with the restricted groups showing the lowest values (with no differences between them). Higher CPT 1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them. 
The acetylated PGC1α/total PGC1α ratio was lower in the treated groups (RSV, R and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group reached a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but a mild energy restriction is even more effective. The mechanisms of action of these two strategies are different. Thus, resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis does not seem to be induced. When both treatments (resveratrol administration and a mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton holds a fellowship from the Basque Government.
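The group comparison above rests on a one-way ANOVA. As an illustration only (the liver-TG values below are invented, and the Newman-Keuls post-hoc step is omitted), the F statistic can be computed from first principles:

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of independent samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares and mean square
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ms_between = ss_between / (k - 1)
    # Within-group sum of squares and mean square
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical liver-TG values (mg/g) for the C, RSV, R and RR groups
C   = [52.1, 49.8, 55.3, 50.6]
RSV = [44.2, 41.9, 43.5, 45.0]
R   = [38.7, 36.4, 37.9, 39.1]
RR  = [37.5, 38.2, 36.8, 39.0]
F = one_way_anova_F([C, RSV, R, RR])   # compared against F(k-1, n-k)
```

In practice the F value is compared against the F distribution with (k-1, n-k) degrees of freedom, and a post-hoc test such as Newman-Keuls then identifies which group pairs differ.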

Keywords: energy restriction, fat, liver, oxidation, resveratrol

Procedia PDF Downloads 193
293 Peripheral Neuropathy after Locoregional Anesthesia

Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou

Abstract:

The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various types of limb pain are common in these patients. They often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, one that can provide them with the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided locoregional anaesthesia (LRA) in this patient population. Methodology: The study is an observational analytical study conducted over a period of eight years, from 2010 to 2018. It includes a total of 955 cases of revisions performed on lower-limb stumps. The parameters analyzed in this study include the effectiveness of the block and the use of sedation, the duration of the block, post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The study findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to the psychological and bodily repair of lower-limb amputees. Additionally, the study emphasizes the use of alpha-2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to the existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees. It highlights the potential of ultrasound-guided LRA and the use of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. 
Data Collection and Analysis Procedures: Data for this study were collected through the analysis of medical records and relevant documentation related to the 955 cases included in the study. The effectiveness of the anaesthetic technique, duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique would be most suitable for lower-limb amputees to provide them with optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, along with the use of alpha2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach in providing comfort and improving post-operative analgesia for lower-limb amputees. This technique can potentially contribute to the psychological and bodily repair of these patients. The findings of this study have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.

Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 quiz, EMG

Procedia PDF Downloads 37
292 Basic Life Support Training in Rural Uganda: A Mixed Methods Study of Training and Attitudes towards Resuscitation

Authors: William Gallagher, Harriet Bothwell, Lowri Evans, Kevin Jones

Abstract:

Background: Worldwide, a third of adult deaths are caused by cardiovascular disease, a high proportion occurring in the developing world. Contributing to these poor outcomes are suboptimal assessment, treatment and monitoring of the acutely unwell patient. Successful training in trauma and neonatal care is recognised in the developing world, but there is little literature supporting adult resuscitation training. As far as the authors are aware, no literature has been published on resuscitation training in Uganda since 2000, when a resuscitation training officer ran sessions in neonatal and paediatric resuscitation. The aim of this project was to offer training in Basic Life Support (BLS) to staff and healthcare students based at Villa Maria Hospital in the Kalungu District, Central Uganda. This project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in their fourth year of the undergraduate programme. Methods: Semi-structured, informal interviews and focus groups were conducted with different clinicians in the hospital. These interviews were designed to focus on the level of training and understanding of BLS. A training session was devised which focused on BLS (excluding the use of an automatic external defibrillator), involving pre- and post-training questionnaires and clinical assessments. Three training sessions were run for different cohorts: a pilot session for 5 Ugandan medical students, a second session for a group of 8 nursing and midwifery students and, finally, a third devised for physicians. The data collected were analysed in Excel. Paired t-tests determined statistical significance between pre- and post-test scores and confidence before and after the sessions. Average clinical skill assessment scores were converted to percentages based on the area of BLS being assessed. Results: 27 participants were included in the analysis. 
14 received 'small group training' whilst 13 received 'large group training'. 88% of all participants had received some form of resuscitation training. Of these, 46% had received theory training, 27% practical training and only 15% both. 12% had received no training. On average, participants demonstrated a significant increase of 5.3 in self-assessed confidence (p < 0.05) and rated the session as very useful. Analysis of qualitative data from the clinician interviews is ongoing, but themes identified so far include rescue breaths being considered the most important aspect of resuscitation, and doubts about a 'good' outcome from resuscitation. Conclusions: The results of this small study reflect the need for regular formal training in BLS in low-resource settings. The active engagement and positive opinions concerning the utility of the training are promising, as is the evidence of improvement in knowledge.
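The pre/post comparison described above uses a paired t-test, since each participant is measured twice. A minimal sketch (the confidence scores below are invented, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post measurements on the same subjects.

    Computed as mean(differences) / standard error of the differences,
    with n - 1 degrees of freedom.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical self-assessed confidence (0-10) before and after training
pre  = [3, 4, 2, 5, 3, 4, 2, 3]
post = [8, 9, 7, 9, 8, 9, 8, 9]
t = paired_t(pre, post)   # compared against t with n-1 = 7 df
```

The resulting t value is then looked up against the t distribution with n-1 degrees of freedom to obtain the p-value reported in the abstract.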

Keywords: basic life support, education, resuscitation, sub-Saharan Africa, training, Uganda

Procedia PDF Downloads 117
291 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach

Authors: Geraldine G. Granados Vazquez

Abstract:

Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can be understood as the hazard of dying that a group or individual faces, making this irreversible damage the condition of vulnerability. Risk is a dynamic concept: it depends on environmental, social, economic and political conditions, so vulnerability may only be evaluated in terms of relative parameters. This research focuses specifically on building a model that evaluates the risk or propensity of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and life conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model that assesses the vulnerability of death in past urban groups. The model aims to be useful for evaluating the risk of death in light of both the sociohistorical context and intrinsic biological features. The model comprises four areas in which to assess vulnerability. The first three use statistical or quantitative methods, while the fourth, corresponding to embodiment, is based on qualitative analysis. The four areas and their techniques are: a) Demographic dynamics. From the distribution of age at the time of death, mortality is analyzed using life tables, from which four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between skeletal characteristics and age at death. 
Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias: height estimates may reflect an individual's nutrition and health history in specific groups, while enamel hypoplasias record the individual's first years of life. c) Inequality. Space reflects the various sectors of a society, including in ancient cities. In general terms, spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone's life story leaves some evidence on the body, even in the bones. That leads us to think about the individual's dynamic relations in terms of time and space; consequently, the micro-analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using some Mesoamerican examples as study cases, this research demonstrates that not only the intrinsic characteristics related to the age and sex of individuals are conducive to vulnerability, but also the social and historical context that determines their state of frailty before death. A complicating factor for past groups is that some basic aspects, such as the role individuals played in everyday life, escape our comprehension and are still under discussion.
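The life-table step in area (a) can be sketched numerically. The age classes and death counts below are invented for illustration, not drawn from any Mesoamerican series; real bioarchaeological life tables add further columns (person-years lived, life expectancy), omitted here:

```python
def life_table(deaths_by_age):
    """Build basic life-table columns from counts of deaths per age class.

    deaths_by_age: list of (age_class_label, number_of_deaths) in order.
    Returns rows (label, dx, lx, qx), where dx is the proportion dying in
    the class, lx the proportion surviving to its start, and qx = dx / lx
    the probability of dying in the class given survival to its start.
    """
    total = sum(d for _, d in deaths_by_age)
    rows, lx = [], 1.0
    for label, d in deaths_by_age:
        dx = d / total
        qx = dx / lx
        rows.append((label, dx, lx, qx))
        lx -= dx          # survivors entering the next age class
    return rows

# Hypothetical skeletal age-at-death distribution (counts per age class)
table = life_table([("0-15", 30), ("15-30", 25), ("30-45", 25), ("45+", 20)])
```

By construction, qx in the final open-ended age class is 1.0, since everyone who survives to it eventually dies within it.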

Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability

Procedia PDF Downloads 195
290 Correlation between the Levels of Some Inflammatory Cytokines/Haematological Parameters and Khorana Scores of Newly Diagnosed Ambulatory Cancer Patients

Authors: Angela O. Ugwu, Sunday Ocheni

Abstract:

Background: Cancer-associated thrombosis (CAT) is a cause of morbidity and mortality among cancer patients. Several risk factors for developing venous thromboembolism (VTE), such as chemotherapy and immobilization, coexist in cancer patients, contributing to their higher risk of VTE compared to non-cancer patients. This study aimed to determine whether there is any correlation between the levels of some inflammatory cytokines/haematological parameters and the Khorana scores of newly diagnosed chemotherapy-naïve ambulatory cancer patients (CNACP). Methods: This was a cross-sectional analytical study carried out from June 2021 to May 2022. Eligible newly diagnosed cancer patients aged 18 years and above (case group) were enrolled consecutively from the adult Oncology Clinics of the University of Nigeria Teaching Hospital, Ituku/Ozalla (UNTH). The control group comprised blood donors at the UNTH Ituku/Ozalla, Enugu blood bank and healthy members of the Medical and Dental Consultants Association of Nigeria (MDCAN), UNTH Chapter. Blood samples collected from the participants were assayed for IL-6, TNF-α, and haematological parameters such as haemoglobin, white blood cell count (WBC), and platelet count. Data were entered into an Excel worksheet and analyzed using Statistical Package for the Social Sciences (SPSS) software version 21.0 for Windows. A P value of < 0.05 was considered statistically significant. Results: A total of 200 participants (100 cases and 100 controls) were included in the study. The overall mean age of the participants was 47.42 ± 15.1 years (range 20-76). The sociodemographic characteristics of the two groups, including age, sex, educational level, body mass index (BMI), and occupation, were similar (P > 0.05). On one-way ANOVA, there were significant differences in the mean levels of interleukin-6 (IL-6) (p = 0.036) and tumor necrosis factor-α (TNF-α) (p = 0.001) among the three Khorana score groups of the case group. 
Pearson’s correlation analysis showed a significant positive correlation between the Khorana scores and IL-6 (r = 0.28, p = 0.031), TNF-α (r = 0.254, p = 0.011), and PLR (r = 0.240, p = 0.016). The mean serum level of IL-6 was significantly higher in CNACP than in the healthy controls [8.98 (8-12) pg/ml vs. 8.43 (2-10) pg/ml, P = 0.0005]. There were also significant differences in the mean haemoglobin (Hb) level (P < 0.001), white blood cell (WBC) count (P < 0.001), and platelet (PL) count (P = 0.005) between the two groups of participants. Conclusion: There is a significant positive correlation between the serum levels of IL-6, TNF-α, and PLR and the Khorana scores of CNACP. The mean serum levels of IL-6, TNF-α, PLR, WBC, and PL count were significantly higher in CNACP than in the healthy controls. Ambulatory cancer patients with high-risk Khorana scores may benefit from anti-inflammatory drugs because of the positive correlation with inflammatory cytokines. Recommendations: Ambulatory cancer patients with Khorana scores of 2 or more may benefit from thromboprophylaxis since they have higher Khorana scores. A multicenter study with a heterogeneous population and larger sample size is recommended in the future to further elucidate the relationship between IL-6, TNF-α, PLR, and the Khorana scores among cancer patients in the Nigerian population.
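The correlations reported above are Pearson product-moment coefficients. A minimal sketch, with invented Khorana-score/IL-6 pairs (not the study's data):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # unnormalized
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Khorana scores and matching IL-6 levels (pg/ml)
khorana = [0, 1, 1, 2, 2, 3]
il6     = [7.9, 8.4, 8.1, 9.0, 8.7, 9.6]
r = pearson_r(khorana, il6)
```

r ranges from -1 (perfect inverse linear association) through 0 (none) to +1 (perfect direct linear association); the p-values in the abstract come from testing r against zero given the sample size.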

Keywords: thromboprophylaxis, cancer, Khorana scores, inflammatory cytokines, haematological parameters

Procedia PDF Downloads 57
289 Prevalence and Risk Factors of Musculoskeletal Disorders among School Teachers in Mangalore: A Cross Sectional Study

Authors: Junaid Hamid Bhat

Abstract:

Background: Musculoskeletal disorders are one of the main causes of occupational illness. The mechanisms and factors, such as repetitive work, physical effort and posture, that raise the risk of musculoskeletal disorders now appear to have been properly identified. Teachers' exposure to work-related musculoskeletal disorders, however, appears to be insufficiently described in the literature: little research has investigated the prevalence and risk factors of musculoskeletal disorders in the teaching profession, very few studies are available in this regard, and there are no studies evident in India. Purpose: To determine the prevalence of musculoskeletal disorders and to identify and measure the association of the risk factors responsible for developing musculoskeletal disorders among school teachers. Methodology: An observational cross-sectional study was carried out. 500 school teachers from primary, middle, high and secondary schools were selected based on eligibility criteria. Signed consent was obtained and a self-administered, validated questionnaire was used. Descriptive statistics were used to compute the mean and standard deviation, frequency and percentage to estimate the prevalence of musculoskeletal disorders among school teachers. The data analysis was done using SPSS version 16.0. Results: Results indicated a high pain prevalence (99.6%) among school teachers during the past 12 months. Neck pain (66.1%), low back pain (61.8%) and knee pain (32.0%) were the most prevalent musculoskeletal complaints of the subjects. The prevalence of shoulder pain was also found to be high among school teachers (25.9%). 52.0% of subjects reported pain as disabling in nature, causing sleep disturbance (44.8%), and pain was found to be associated with work (87.5%). A significant association was found between musculoskeletal disorders and sick leave/absenteeism. 
Conclusion: Work-related musculoskeletal disorders, particularly neck pain, low back pain, and knee pain, are highly prevalent in school teachers, and identifiable risk factors are responsible for their development. There is little awareness of musculoskeletal disorders among school teachers, who face heavy workloads and prolonged or static postures. Further research should concentrate on specific risk factors such as repetitive movements, psychological stress, and ergonomic factors, should be carried out across the country, and should follow school teachers carefully over a period of time. An ergonomic investigation is also needed to reduce work-related musculoskeletal disorder problems. Implication: Recall bias and self-reporting can be considered limitations, and cause-and-effect inferences cannot be ascertained. Based on these results, it is important to disseminate general recommendations for the prevention of work-related musculoskeletal disorders with regard to the suitability of furniture, equipment and work tools, environmental conditions, work organization and rest time for school teachers. School teachers in the early stages of their careers should try to adopt ergonomically favorable positions whilst performing their work for a safe and healthy life later. Employers should be educated on practical aspects of prevention to reduce musculoskeletal disorders, since changes in workplace and work organization and physical/recreational activities are required.

Keywords: work related musculoskeletal disorders, school teachers, risk factors, medical and health sciences

Procedia PDF Downloads 245
288 Cardiac Rehabilitation Program and Health-Related Quality of Life; A Randomized Control Trial

Authors: Zia Ul Haq, Saleem Muhammad, Naeem Ullah, Abbas Shah, Abdullah Shah

Abstract:

Pakistan, as a developing country, faces a double burden of communicable and non-communicable disease. Secondary prevention of ischemic heart disease in developing countries is a dire need for public health specialists, clinicians and policy makers. There is some evidence that psychotherapeutic measures, including psychotherapy, recreation, exercise and stress management training, have a positive impact on the secondary prevention of cardiovascular diseases, but there are some contradictory findings as well. A cardiac rehabilitation program (CRP) has not yet been fully implemented in Pakistan; its psychological, physical and specific health-related quality of life (HRQoL) outcomes need assessment with respect to practicality, effectiveness, and success. Objectives: To determine the effect of a cardiac rehabilitation program (CRP) on the health-related quality of life (HRQoL) measures of post-MI patients compared to usual care. Hypothesis: Post-MI patients who receive the intervention (CRP) will have better HRQoL than those who receive usual care. Methods: The randomized control trial was conducted at the Cardiac Rehabilitation Unit of Lady Reading Hospital (LRH), Peshawar, the biggest hospital of the province of Khyber Pakhtunkhwa (KP). A total of 206 participants who had a recent first myocardial infarction were inducted into the study. Participants were randomly allocated into two groups, i.e. a usual care group (UCG) and a cardiac rehabilitation group (CRG), by the permuted-block randomization (PBR) method. CRP was conducted in the CRG in two phases. Three HRQoL outcomes, i.e. the general health questionnaire (GHQ), self-rated health (SRH) and the MacNew quality of life after myocardial infarction (MacNew QLMI), were assessed at baseline and follow-up visits in both groups. Data were entered and analyzed with appropriate statistical tests in STATA version 12. Results: A total of 195 participants were assessed at the follow-up period, owing to loss to follow-up. 
The mean age of the participants was 53.66 ± 8.3 years. Males were in the majority overall, i.e. 150 (76.92%). Regarding educational status, the majority of participants in both groups were illiterate, i.e. 128 (65.64%). Notably, 139 (71.28%) overall were non-smokers. Comorbid status was positive in 120 (61.54%) of all patients. SRH at follow-up in the UCG and CRG was 4.06 (95% CI: 3.93, 4.19) and 2.36 (95% CI: 2.2, 2.52) respectively (p < 0.001). GHQ at follow-up in the UCG and CRG was 20.91 (95% CI: 18.83, 21.97) and 7.43 (95% CI: 6.59, 8.27) respectively (p < 0.001). The MacNew QLMI at follow-up in the UCG and CRG was 3.82 (95% CI: 3.7, 3.94) and 5.62 (95% CI: 5.5, 5.74) respectively (p < 0.001). All the HRQoL measures showed strongly significant improvement in the CRG at follow-up. Conclusion: HRQoL improved in post-MI patients after a comprehensive CRP. Patient education and supervision are needed when patients are involved in their rehabilitation activities. Establishing a CRP in cardiac units, recruiting post-discharge MI patients and offering them CRP does not impose high costs and can result in significant improvement in HRQoL measures. Trial registration no: ACTRN12617000832370
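The permuted-block randomization used to allocate the 206 participants can be sketched as follows. The block size of 4 is an assumption for illustration (the abstract does not state it); each block contains an equal number of each arm in random order, which keeps the two groups balanced throughout recruitment:

```python
import random

def permuted_block_randomization(n, block_size=4, arms=("UCG", "CRG"), seed=None):
    """Allocate n participants to arms using permuted blocks.

    Each block holds block_size // len(arms) slots per arm, shuffled,
    so group sizes never drift apart by more than one block.
    """
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)               # random order within the block
        allocation.extend(block)
    return allocation[:n]                # truncate the final partial block

# 206 participants, as in the trial (seed chosen arbitrarily for the sketch)
alloc = permuted_block_randomization(206, block_size=4, seed=1)
```

Because 206 is not a multiple of 4, the final block is truncated, so the two arms can differ in size by at most two participants.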

Keywords: cardiovascular diseases, cardiac rehabilitation, health-related quality of life, HRQoL, myocardial infarction, quality of life, QoL, rehabilitation, randomized control trial

Procedia PDF Downloads 198
287 Hygrothermal Interactions and Energy Consumption in Cold Climate Hospitals: Integrating Numerical Analysis and Case Studies to Investigate and Analyze the Impact of Air Leakage and Vapor Retarding

Authors: Amir E. Amirzadeh, Richard K. Strand

Abstract:

Moisture-induced problems are a significant concern for building owners, architects, construction managers, and building engineers, as they can have substantial impacts on building enclosures' durability and performance. Computational analyses, such as hygrothermal and thermal analysis, can provide valuable information and demonstrate the expected relative performance of building enclosure systems but are not grounded in absolute certainty. This paper evaluates the hygrothermal performance of common enclosure systems in hospitals in cold climates. The study aims to investigate the impact of exterior wall systems on hospitals, focusing on factors such as durability, construction deficiencies, and energy performance. The study primarily examines the impact of air leakage and vapor retarding layers relative to energy consumption. While these factors have been studied in residential and commercial buildings, there is a lack of information on their impact on hospitals in a holistic context. The study integrates various research studies and professional experience in hospital building design to achieve its objective. The methodology involves surveying and observing exterior wall assemblies, reviewing common exterior wall assemblies and details used in hospital construction, performing simulations and numerical analyses of various variables, validating the model and mechanism using available data from industry and academia, visualizing the outcomes of the analysis, and developing a mechanism to demonstrate the relative performance of exterior wall systems for hospitals under specific conditions. The data sources include case studies from real-world projects and peer-reviewed articles, industry standards, and practices. This research intends to integrate and analyze the in-situ and as-designed performance and durability of building enclosure assemblies with numerical analysis. 
The study's primary objective is to provide a clear and precise roadmap to better visualize and comprehend the correlation between the durability and performance of common exterior wall systems used in the construction of hospitals and the energy consumption of these buildings under certain static and dynamic conditions. As the construction of new hospitals and renovation of existing ones have grown over the last few years, it is crucial to understand the effect of poor detailing or construction deficiencies on building enclosure systems' performance and durability in healthcare buildings. This study aims to assist stakeholders involved in hospital design, construction, and maintenance in selecting durable and high-performing wall systems. It highlights the importance of early design evaluation, regular quality control during the construction of hospitals, and understanding the potential impacts of improper and inconsistent maintenance and operation practices on occupants, owners, building enclosure systems, and Heating, Ventilation, and Air Conditioning (HVAC) systems, even if they are designed to meet the project requirements.

Keywords: hygrothermal analysis, building enclosure, hospitals, energy efficiency, optimization and visualization, uncertainty and decision making

Procedia PDF Downloads 41
286 Unifying RSV Evolutionary Dynamics and Epidemiology Through Phylodynamic Analyses

Authors: Lydia Tan, Philippe Lemey, Lieselot Houspie, Marco Viveen, Darren Martin, Frank Coenjaerts

Abstract:

Introduction: Human respiratory syncytial virus (hRSV) is the leading cause of severe respiratory tract infections in infants under the age of two. Genomic substitutions and the related evolutionary dynamics of hRSV strongly influence virus transmission behavior. The evolutionary patterns formed are due to a precarious interplay between the host immune response and RSV, thereby selecting the most viable and less immunogenic strains. Studying genomic profiles can teach us which genes, and consequently which proteins, play an important role in RSV survival and transmission dynamics. Study design: In this study, genetic diversity and evolutionary rate analyses were conducted on 36 RSV subgroup B and 37 subgroup A whole-genome sequences. Clinical RSV isolates were obtained from nasopharyngeal aspirates and swabs of children between 2 weeks and 5 years of age. These strains were collected during epidemic seasons from 2001 to 2011 in the Netherlands and Belgium and sequenced by either conventional or 454 sequencing. Sequences were analyzed for genetic diversity, recombination events, synonymous/non-synonymous substitution ratios and epistasis, and the translational consequences of mutations were mapped to known 3D protein structures. We used Bayesian statistical inference to estimate the rate of RSV genome evolution and the rate of variability across the genome. Results: The A and B profiles were described in detail and compared to each other. Overall, the majority of the RSV genome is highly conserved among all strains. The attachment protein G was the most variable protein, and its gene had, similar to the non-coding regions in RSV, elevated (two-fold) substitution rates compared to other genes. In addition, the G gene was identified as the major target of diversifying selection. Overall, less gene and protein variability was found within RSV-B than within RSV-A, and most protein variation between the subgroups was found in the F, G, SH and M2-2 proteins. 
For the F protein, mutations and correlated amino acid changes are largely located in the F2 ligand-binding domain. The small hydrophobic protein, the phosphoprotein and the nucleoprotein are the most conserved proteins. The evolutionary rates were similar in both subgroups (A: 6.47E-04, B: 7.76E-04 substitutions/site/year), but estimates of the time to the most recent common ancestor were much lower for RSV-B (B: 19, A: 46.8 yrs), indicating that there is more turnover in this subgroup. Conclusion: This study provides a detailed description of whole-genome RSV mutations and their effects on translation products, together with the first estimate of the tempo of RSV genome evolution. The immunogenic G protein seems to require high substitution rates in order to select less immunogenic strains, while other, conserved proteins are most likely essential to preserving RSV viability. The resulting G gene variability makes its protein a less interesting target for RSV intervention methods. The more conserved RSV F protein, with less antigenic epitope shedding, is therefore more suitable for developing therapeutic strategies or vaccines.
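The rate and tMRCA figures above come from Bayesian phylodynamic inference, which this sketch does not reproduce. As a rough, non-Bayesian sanity check only, a strict molecular clock ties mean pairwise distance, substitution rate and tMRCA together: each of two lineages accumulates substitutions independently since their common ancestor, so tMRCA ≈ (distance / 2) / rate. The pairwise distance below is hypothetical, chosen to match the reported 19-year RSV-B estimate:

```python
def tmrca_years(pairwise_distance, rate_per_site_per_year):
    """Strict-clock back-of-envelope estimate of time to the most recent
    common ancestor: half the pairwise distance (substitutions/site)
    divided by the clock rate (substitutions/site/year)."""
    return (pairwise_distance / 2) / rate_per_site_per_year

# Reported RSV-B rate from the abstract; the distance is hypothetical
rate_b = 7.76e-4        # substitutions/site/year
d_b = 0.0295            # assumed mean pairwise distance within RSV-B
t_b = tmrca_years(d_b, rate_b)
```

At the same rate, a larger within-subgroup distance implies an older common ancestor, which is the sense in which RSV-B's younger tMRCA indicates faster lineage turnover than RSV-A's.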

Keywords: drug target selection, epidemiology, respiratory syncytial virus, RSV

Procedia PDF Downloads 384
285 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper we propose a Neural Style Transfer solution: a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN) well suited to building filters for mobile phone cameras and edge devices that convert any image to the style of 2D animated comics such as He-Man, Superman or The Jungle Book. This can help 2D animation artists create new characters from images of real people without endless hours of manually drawing each and every pose of a cartoon, and it can even be used to create scenes from real-life images. This greatly reduces the turnaround time for making 2D animated movies and decreases cost in terms of manpower and time. In addition, being extremely lightweight, it can serve as a camera filter capable of taking comic-style shots using a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices because of their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices such as mobile phones, tablets and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and just 25 extra epochs, trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need mammoth amounts of data. 
Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; a pointwise convolution with a 1x1 kernel then brings the network back to the required channel count. This substantially reduces the number of parameters and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture described in the present paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network keep the easier training that batch norm provides while retaining non-linear feature capture through the learnable parameters.
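As a rough sketch of why this factorization shrinks the model, the parameter counts of a standard convolution and a depthwise separable one can be compared directly. The layer sizes below are illustrative assumptions, not the paper's actual layers:

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution up to c_out channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out        # 1x1 kernel
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 128 -> 256 channels (assumed values)
std = standard_conv_params(128, 256, 3)        # 294,912 parameters
sep = depthwise_separable_params(128, 256, 3)  # 1,152 + 32,768 = 33,920
print(std / sep)  # roughly 8.7x fewer parameters for this layer
```

The bigger the kernel and the channel counts, the larger the savings, which is consistent with the reported reduction from 170 MB to 31 MB.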

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 100
284 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between the different features of modern cities and information technologies improve smart-city efficiency, energy management, human and equipment resource management, quality of life, and utilization of resources for customers. One difficulty along this path is the use of, interfacing of, and linking between software, hardware, and other IT technologies to develop and optimize processes in business fields such as construction, supply chain management, and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological concerns. Energy management is one of the most important matters for smart houses in smart cities and communities because of the sensitivity of energy systems, the need to reduce energy wastage, and the need to maximize utilization of the required energy. In particular, the energy consumption of smart houses matters considerably to the economic balance and energy management of a smart city, since it brings significant gains in energy saving and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data-analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, a smart meter, and other major elements, interfacing software and hardware devices as well as IT technologies.
A second aim is to enhance energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to produce energy and increase its efficiency through the selected variables, with a comfortable and harmless atmosphere for customers within the smart city, combined with control over energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart-city samples is conducted to identify the more efficient system. The main variables involved in measuring the overall efficiency of the system are then analyzed to identify and prioritize the variables in accordance with their influence over the model. The resulting analysis can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the advantages of smart cities. Furthermore, given the expense and expected shortage of natural resources in the near future, the limited research conducted in the region, and the potential offered by the climate and governmental vision, the results of this study can serve as key indicators for selecting the most effective variables or devices during the construction and design phases.
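As a minimal sketch of how the chosen variables might be prioritized by their association with overall efficiency, one can rank them by correlation strength. All data and variable names below are hypothetical illustrations, not the study's observations:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical observations: overall efficiency plus two candidate variables
efficiency = [0.62, 0.71, 0.78, 0.85, 0.90]
variables = {
    "solar_pv_output": [2.1, 2.9, 3.4, 4.0, 4.6],
    "smart_meter_reads": [1.0, 1.1, 0.9, 1.2, 1.0],
}

# Rank variables by absolute correlation with efficiency (strongest first)
ranked = sorted(variables,
                key=lambda v: abs(pearson(variables[v], efficiency)),
                reverse=True)
print(ranked)
```

A real analysis would of course add significance testing before prioritizing variables, as the abstract's data-analysis processes imply.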

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 89
283 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 45
282 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way that facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send malignancy signals. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide spectrum of approaches, ranging from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has likewise found valuable support in different simulation tools that, covering the spectrum mentioned above, have allowed in silico experimentation with this phenomenon at the cellular and molecular levels. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell-signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way we propose key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
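The Gillespie stochastic simulation algorithm the tool builds on can be sketched in a few lines. The toy irreversible decay reaction below is purely illustrative and is not Cellulat's signaling model:

```python
import random

def gillespie(rates, update, state, t_end, seed=1):
    """Minimal Gillespie SSA sketch: `rates(state)` returns the propensity
    of each reaction, `update[i]` is the state change when reaction i fires."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while t < t_end:
        a = rates(state)
        a0 = sum(a)
        if a0 == 0:          # no reaction can fire; system is exhausted
            break
        t += rng.expovariate(a0)          # exponential waiting time
        r, acc, i = rng.random() * a0, 0.0, 0
        while acc + a[i] < r:             # choose reaction i with prob a[i]/a0
            acc += a[i]
            i += 1
        for j, dv in enumerate(update[i]):
            state[j] += dv
        trajectory.append((t, tuple(state)))
    return trajectory

# Toy reaction A -> B with rate constant k (illustrative only)
k = 0.5
traj = gillespie(lambda s: [k * s[0]], [(-1, 1)], [100, 0], t_end=50.0)
print(traj[-1])  # final (time, (A, B)) state
```

Each step draws an exponentially distributed waiting time and picks the next reaction in proportion to its propensity, which is exactly what makes the method statistically exact for well-mixed chemical systems.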

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 230
281 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; big-data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In the large-scale sensor networks of the future, the collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (depending on the sensor network structure, this may occur on the field gateway). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation of smart sensor networks.
For example, in a water level monitoring system, a weather forecast can be obtained from external sources; if heavy rainfall is expected, the server can instruct the sensor nodes to, for instance, increase the sampling rate, or conversely switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective is to use the deployed river-level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river-level sensors as a networked case study, a vision of the next generation of smart sensor networks is proposed, and each key component is discussed, which we hope will inspire researchers working in the sensor domain.
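The simple-thresholding method mentioned above can be sketched as on-board pre-processing that transmits an alert only after a sustained exceedance, filtering out transient spikes before anything is sent. The threshold and readings below are assumed for illustration:

```python
def threshold_alert(readings, threshold, min_consecutive=3):
    """Flag an event only after `min_consecutive` consecutive exceedances,
    so single-sample spikes never trigger a transmission."""
    run = 0
    alerts = []
    for i, level in enumerate(readings):
        run = run + 1 if level > threshold else 0
        if run == min_consecutive:
            alerts.append(i)   # index at which an alert would be transmitted
    return alerts

# Hypothetical water levels (metres); threshold chosen for illustration
levels = [0.4, 0.5, 1.3, 0.6, 1.2, 1.25, 1.3, 1.4, 0.7]
print(threshold_alert(levels, threshold=1.0))  # -> [6]
```

The isolated spike at index 2 is suppressed; only the sustained rise starting at index 4 raises an alert, at index 6. The statistical and MoPBAS methods in the case study replace the fixed threshold with adaptive models.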

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 353
280 Multivariate Ecoregion Analysis of Nutrient Runoff From Agricultural Land Uses in North America

Authors: Austin P. Hopkins, R. Daren Harmel, Jim A Ippolito, P. J. A. Kleinman, D. Sahoo

Abstract:

Field-scale runoff and water quality data are critical to understanding the fate and transport of nutrients applied to agricultural lands and to minimizing their off-site transport, because it is at that scale that agricultural management decisions are typically made based on hydrologic, soil, and land-use factors. However, regional influences such as precipitation, temperature, and prevailing cropping systems and land-use patterns also affect nutrient runoff. In the present study, the recently updated MANAGE (Measured Annual Nutrient loads from Agricultural Environments) database was used to conduct an ecoregion-level analysis of nitrogen and phosphorus runoff from agricultural lands in North America. Annual N and P runoff loads for cropland and grasslands in North American Level II EPA ecoregions are presented, and the impacts of factors such as land use, tillage, and fertilizer timing and placement on N and P runoff are analyzed. Specifically, we compiled annual N and P runoff load data (i.e., dissolved, particulate, and total N and P, kg/ha/yr) for each Level II EPA ecoregion and for various agricultural management practices (i.e., land use, tillage, fertilizer timing, fertilizer placement) within each ecoregion, to showcase the analyses possible with the data in MANAGE. Potential differences in N and P runoff loads were evaluated between and within ecoregions with statistical and graphical approaches. Because the data were not normally distributed, non-parametric analyses, mainly Mann-Whitney tests, were conducted in R on median values weighted by the site-years of data, and Dunn tests and box-and-whisker plots were used to evaluate significant differences visually and statistically. Of the 50 North American ecoregions, 11 had sufficient data and site-years to be included in the analysis.
When examining ecoregions alone, ER 9.2 (Temperate Prairies) had a significantly higher total N, at 11.7 kg/ha/yr, than ER 9.4 (South Central Semi-Arid Prairies), with a total N of 2.4 kg/ha/yr. For total P, ER 8.5 (Mississippi Alluvial and Southeast USA Coastal Plains) had a higher load, at 3.0 kg/ha/yr, than ER 8.2 (Southeastern USA Plains), with a load of 0.25 kg/ha/yr. Tillage and land use had strong impacts on nutrient loads. In ER 9.2 (Temperate Prairies), conventional tillage had a total N load of 36.0 kg/ha/yr, while conservation tillage had a total N load of 4.8 kg/ha/yr. In all relevant ecoregions, where corn was the predominant land use, total N was significantly higher than for grassland or other grains. In ER 8.4 (Ozark-Ouachita), corn had a total N of 22.1 kg/ha/yr, while grazed grassland had a total N of 2.9 kg/ha/yr. The intricacies of the interactions among agricultural management practices, combined with ecological conditions and their impacts on continental aquatic nutrient loads, still need to be explored. This research provides a stepping stone toward further understanding of land and resource stewardship and best management practices.
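A minimal sketch of the Mann-Whitney U statistic underlying those tests follows. The study's analyses were done in R with site-year weighting; the loads below are hypothetical small samples for illustration only:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic (ties counted as half).
    Sketch only: a real analysis also needs the p-value from the
    U distribution or its normal approximation."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    # By convention, report the smaller of U and its complement
    return min(u, len(xs) * len(ys) - u)

# Hypothetical annual total-N loads (kg/ha/yr) for two ecoregions
er_a = [11.7, 9.8, 12.4, 10.1]
er_b = [2.4, 3.1, 1.9, 2.8]
print(mann_whitney_u(er_a, er_b))  # -> 0.0 (complete separation of groups)
```

A U of zero means every observation in one group exceeds every observation in the other, the strongest possible evidence of a location shift for samples this size.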

Keywords: water quality, ecoregions, nitrogen, phosphorus, agriculture, best management practices, land use

Procedia PDF Downloads 57
279 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach

Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy

Abstract:

Finding a sustainable alternative to fossil fuels is an urgent need as environmental challenges mount worldwide. Formic acid (FA) decomposition has therefore become an attractive field at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in a prime position as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation of biomass into higher-value components. It is reported in the literature that the catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to recover the used catalyst. Our work suggests an approach that integrates the design of a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of grafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. Because an ordinary magnet suffices to collect the spent catalyst, core-shell magnetic nanoparticles form the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. In a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions.
The produced gas (H₂ + CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, measuring the gas continuously, and collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of catalyst preparation and opened new avenues for altering the nanostructure of the catalyst framework. The introduction of amine groups led to appreciable improvements in the dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters, namely temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%), for gas yield was assessed with a Taguchi design-of-experiments model. Experimental results showed that operating in a lower temperature range (35-50°C) yielded more gas, while catalyst loading and Pd doping wt.% were the most significant factors, with p-values of 0.026 and 0.031, respectively.
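A Taguchi-style screening of factor importance reduces, at its simplest, to a main-effects analysis: averaging the response at each level of each factor. The runs below are illustrative assumptions, not the paper's measurements:

```python
from collections import defaultdict

def main_effects(runs, factor):
    """Average response per level of one factor, as in a Taguchi
    main-effects table (response key assumed to be 'gas_yield')."""
    buckets = defaultdict(list)
    for run in runs:
        buckets[run[factor]].append(run["gas_yield"])
    return {level: sum(v) / len(v) for level, v in buckets.items()}

# Hypothetical design runs: two factor levels each, gas yield in mL
runs = [
    {"temp_C": 35, "catalyst_mg": 50,  "gas_yield": 410},
    {"temp_C": 35, "catalyst_mg": 200, "gas_yield": 520},
    {"temp_C": 85, "catalyst_mg": 50,  "gas_yield": 300},
    {"temp_C": 85, "catalyst_mg": 200, "gas_yield": 390},
]
print(main_effects(runs, "temp_C"))  # lower temperature level yields more gas
```

The gap between level means indicates a factor's influence; significance testing (the study's p-values) then decides which gaps are real.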

Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles

Procedia PDF Downloads 19
278 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, this signal is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could seek the best method of distinguishing normal signals from abnormal ones. The data include both genders, recording times vary between several seconds and several minutes, and every record is labeled normal or abnormal. Because of the limited positional accuracy and duration of the ECG signal, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. A new idea was then applied: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. Simulation results in the MATLAB environment showed that the AUC of the MLP and SVM classifiers was 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals gave better performance. Today, research aims at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, given the limited recording time and the fact that some information in this signal is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
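As a sketch of the time-domain side of HRV feature extraction, two standard features, SDNN and RMSSD, can be computed directly from the R-R intervals. The paper's full pipeline also uses return maps and nonlinear features; the R-R series below is hypothetical:

```python
import math

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from R-R intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# Hypothetical R-R interval series (ms) from a short recording
sdnn, rmssd = hrv_features([800, 810, 790, 805, 795])
print(round(sdnn, 2), round(rmssd, 2))
```

Features like these, concatenated with the nonlinear descriptors, would form the input vector handed to the MLP or SVM classifier.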

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 225
277 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second order linear ordinary differential equation with constant coefficients so that it can be utilized to conduct kinetic studies in packed bed tubular catalytic reactors at a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires the knowledge of the radial distribution of axial velocity. This is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But, ideal plug-flow is impossible to achieve and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. Factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. 
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and exhibit the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds that transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively termed the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation in compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
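As a hedged illustration only (this abstract does not give the explicit equation), the reduced model may be written schematically as a second-order linear ODE with constant coefficients in the mean radial cup-mixing concentration, with the cofactors multiplying the usual dispersion and convection terms:

```latex
\Omega_a \, D \, \frac{d^{2}\bar{C}}{dz^{2}}
\;-\; \Omega_c \, u \, \frac{d\bar{C}}{dz}
\;-\; r\!\left(\bar{C}, \bar{q}\right) \;=\; 0
```

where $\bar{C}(z)$ is the mean radial cup-mixing concentration, $D$ the axial dispersion coefficient, $u$ a characteristic axial velocity, $\bar{q}$ the mean cross-sectional concentration of adsorbed species, and $r$ the reaction/adsorption term; $\Omega_r$ would enter analogously through the radial dispersion contribution. All symbols here are illustrative assumptions, not the authors' notation.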

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 242
276 Bacteriophages for Sustainable Wastewater Treatment: Application in Black Water Decontamination with an Emphasis to DRDO Biotoilet

Authors: Sonika Sharma, Mohan G. Vairale, Sibnarayan Datta, Soumya Chatterjee, Dharmendra Dubey, Rajesh Prasad, Raghvendra Budhauliya, Bidisha Das, Vijay Veer

Abstract:

Bacteriophages are viruses that parasitize specific bacteria and multiply in metabolising host bacteria. Because bacteriophages hunt for a single bacterial species, or a subset of species, they are potential antibacterial agents. Harnessing the ability of phages to control bacterial populations has several applications, from medicine to agriculture, aquaculture, and the food industry. Phage-based techniques for wastewater treatment that improve the quality of the effluent and sludge released into the environment remain, however, a potential area for R&D application. Phage-mediated bactericidal effects in any wastewater treatment process have many controlling factors that determine treatment performance. Under laboratory conditions, the titer of bacteriophages (coliphages) isolated from the effluent water of a specially designed anaerobic digester of human night soil (DRDO Biotoilet) was successfully increased with a modified protocol of the classical double-layer agar technique. Enrichment was carried out, and the efficacy of the phage-enriched medium was evaluated under different conditions (specific media, temperature, storage conditions). A growth optimization study was carried out on different media: soybean casein digest medium (tryptone soya medium), Luria-Bertani medium, phage deca broth medium, and MNA (modified nutrient) medium. Further, the temperature-phage yield relationship was observed at three temperatures, 27˚C, 37˚C, and 44˚C, under laboratory conditions; the results showed higher coliphage activity at 27˚C and 37˚C. The addition of divalent ions (10 mM MgCl2, 5 mM CaCl2) and 5% glycerol resulted in a significant increase in phage titer. The effect of adding antibiotics such as ampicillin and kanamycin at different concentrations on plaque formation was also analysed; ampicillin at a concentration of 1 mg/ml stimulated phage infection and resulted in a greater number of plaques.
Experiments to test the viability of the phage showed that it can remain active for 6 months at 4˚C in fresh tryptone soya broth supplemented with a fresh culture of coliforms (early log phase). The application of bacteriophages (especially coliphages) to the treatment of effluent water contaminated with human faecal matter is unique. This environmentally friendly treatment system not only reduces the pathogenic coliforms but also decreases the competition between nuisance bacteria and functionally important microbial populations. Therefore, a phage-based cocktail to treat the faecal pathogenic bacteria present in black water has many implications for wastewater treatment processes, including the 'DRDO Biotoilet', an eco-friendly, appropriate, and affordable human faecal matter treatment technology for different climates and situations.

Keywords: wastewater, microbes, virus, biotoilet, phage viability

Procedia PDF Downloads 403
275 Common Misconceptions around Human Immunodeficiency Virus in Rural Uganda: Establishing the Role for Patient Education Leaflets Using Patient and Staff Surveys

Authors: Sara Qandil, Harriet Bothwell, Lowri Evans, Kevin Jones, Simon Collin

Abstract:

Background: Uganda suffers from high rates of HIV. Misconceptions around HIV are known to be prevalent in Sub-Saharan Africa (SSA). Two of the most common misconceptions in Uganda are that HIV can be transmitted by mosquito bites or by sharing food. The aim of this project was to establish the local misconceptions around HIV in a Central Ugandan population and identify whether there is a role for patient education leaflets. This project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in their fourth year of the undergraduate programme. Methods: The study was conducted at Villa Maria Hospital, a private, rural hospital in Kalungu District, Central Uganda. 36 patients, 23 from the hospital clinic and 13 from the community, were interviewed regarding their understanding of HIV and the channels through which they had obtained this understanding. Interviews were conducted using local student nurses as translators. Verbal responses were translated and then transcribed by the researcher. The same 36 patients then undertook a 'misconception' test consisting of 35 questions. Quantitative data were analysed using descriptive statistics, and results were scored on three components: 'transmission knowledge', 'prevention knowledge' and 'misconception rejection'. Each correct response to a question scored one point, otherwise zero; e.g., correctly rejecting a misconception scored one point, but answering ‘yes’ or ‘don’t know’ scored zero. Scores ≤ 27 (the average score) were classified as ‘poor understanding’. Mean scores were compared between participants seen at the HIV clinic and in the community, and p-values (including Fisher’s exact test) were calculated using Stata 2015. The level of significance was set at 0.05. Interviews with 7 members of staff working in the HIV clinic were undertaken to establish what methods of communication are used to educate patients.
Interviews were transcribed and thematic analysis undertaken. Results: The commonest misconceptions that failed to be rejected concerned transmission of HIV by kissing (78%), mosquitoes (69%) and touching (36%). 33% believed HIV may be prevented by praying. The overall mean scores for transmission knowledge (87.5%) and prevention knowledge (81.1%) were better than the misconception rejection score (69.3%). HIV clinic respondents tended to have higher scores, i.e. fewer misconceptions, although there was statistical evidence of a significant difference only for prevention knowledge (p=0.03). Analysis of the qualitative data is ongoing, but several patients expressed concerns about not being able to read, and leaflets therefore not having a helpful role. Conclusions: Results from this paper identified that a high proportion of the population studied held misconceptions about HIV. Qualitative data suggest that there may be a role for patient education leaflets, provided they are pictorial and suitable for those with low literacy skills.
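The scoring scheme described in the Methods can be sketched as follows (an illustrative sketch only; the function names, answer encoding and the 'adequate understanding' label are assumptions, not the authors' instrument):

```python
# Sketch of the 35-item scoring scheme: 1 point per correct response,
# 0 for an incorrect or "don't know" answer; totals at or below the
# average score (27) are classified as 'poor understanding'.

def score_response(answer: str, correct: str) -> int:
    """1 point for a correct response; anything else (incl. 'don't know') scores 0."""
    return 1 if answer == correct else 0

def classify(responses, answer_key, threshold=27):
    total = sum(score_response(a, c) for a, c in zip(responses, answer_key))
    label = "poor understanding" if total <= threshold else "adequate understanding"
    return total, label

# Example: a participant correctly rejects 30 of 35 misconception items,
# where "no" stands for a correct rejection.
answers = ["no"] * 30 + ["yes"] * 5
key = ["no"] * 35
total, label = classify(answers, key)  # 30 points, above the 27-point cutoff
```

Note how a 'don't know' response is scored identically to an incorrect one, matching the scheme described above.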

Keywords: HIV, human immunodeficiency virus, misconceptions, patient education, Sub-Saharan Africa, Uganda

Procedia PDF Downloads 231
274 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing.
For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios, compared against the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
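The inverse-identification idea in the first part, choosing model parameters to minimize the mismatch between simulated and measured FRFs, can be sketched schematically (a toy one-parameter damped oscillator with a brute-force search; the model form, natural frequency and parameter grid are assumptions for illustration, not the paper's FE-based procedure):

```python
import math

def simulated_frf(freqs, damping):
    # |H(f)| of a unit-mass oscillator with an assumed natural frequency of 100 Hz
    fn = 100.0
    return [1.0 / math.sqrt((1 - (f / fn) ** 2) ** 2 + (2 * damping * f / fn) ** 2)
            for f in freqs]

freqs = [80, 90, 100, 110, 120]
measured = simulated_frf(freqs, 0.05)   # stand-in for measured data, damping = 0.05

def misfit(damping):
    # Sum-of-squares difference between simulated and 'measured' FRFs
    sim = simulated_frf(freqs, damping)
    return sum((s - m) ** 2 for s, m in zip(sim, measured))

# Brute-force 1-D search standing in for the optimization algorithm.
candidates = [d / 1000 for d in range(10, 200)]
best = min(candidates, key=misfit)
```

In the actual work, the single damping parameter is replaced by the full generalized Maxwell or fractional-derivative parameter set, and the closed-form oscillator by the FE model.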

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 176
273 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, limited snapshots of handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
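As a minimal illustration of the convolution operation at the heart of a CNN-based detector (a generic sketch of the core operation only, unrelated to the specific commercial system described above):

```python
def conv2d(image, kernel):
    """Naive 2D 'valid' convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Slide the kernel over the image and take the elementwise dot product.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny image with a vertical edge, and a kernel that responds to
# left-to-right intensity changes; a trained CNN learns such kernels.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
feature_map = conv2d(image, kernel)  # strong response only at the edge column
```

Stacking many such learned filters, with nonlinearities and pooling between layers, is what lets a CNN progress from edges to whole vehicles, pedestrians and bikes.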

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 126
272 An Introspective Look into Hotel Employees' Career Satisfaction

Authors: Anastasios Zopiatis, Antonis L. Theocharous

Abstract:

In the midst of a fierce war for talent, the hospitality industry is seeking new and innovative ways to enrich its image as an employer of choice and not a necessity. Historically, the industry’s professions are portrayed as ‘unattractive’ due to their repetitious nature, long and unsocial working schedules, below-average remuneration, and the mental and physical demands of the job. Aligning with the industry, hospitality and tourism scholars embarked on a journey to investigate pertinent topics with the aim of enhancing our conceptual understanding of the elements that influence employees in the hospitality world of work. Topics such as job involvement, commitment, job and career satisfaction, and turnover intentions became the focal points of a multitude of relevant empirical and conceptual investigations. Nevertheless, gaps or inconsistencies in existing theories, resulting both from the volatile complexity of the relationships governing human behavior in the hospitality workplace and from the academic community’s unopposed acceptance of theoretical frameworks mainly propounded in the United States and United Kingdom years ago, necessitate our continuous vigilance. Thus, in an effort to enhance and enrich the discourse, we set out to investigate the relationship between intrinsic and extrinsic job satisfaction traits and the individual’s career satisfaction and subsequent intention to remain in the hospitality industry. Reflecting on existing literature, a quantitative survey was developed and administered, face-to-face, to 650 individuals working as full-time employees in 4- and 5-star hotel establishments in Cyprus, whereas a multivariate statistical analysis method, namely Structural Equation Modeling (SEM), was utilized to determine whether relationships existed between constructs as a means to either accept or reject the hypothesized theory.
Findings, of interest to both industry stakeholders and academic scholars, suggest that the individual’s future intention to remain within the industry is primarily associated with extrinsic job traits. Our findings revealed that positive associations exist between extrinsic job traits and both career satisfaction and future intention. In contrast, when investigating the relationship of intrinsic traits, a positive association was revealed only with career satisfaction. Apparently, the local industry’s environmental factors of seasonality, excessive turnover, and overdependence on seasonal and part-time migrant workers prevent industry stakeholders from effectively investing the time and resources in the development and professional growth of their employees. Consequently, intrinsic job satisfaction factors such as advancement, growth, and achievement take a back seat to the more materialistic extrinsic factors. Findings from the subsequent mediation analysis support the notion that intrinsic traits can positively influence future intentions indirectly, only through career satisfaction, whereas extrinsic traits can positively impact career satisfaction and future intention both directly and indirectly.

Keywords: career satisfaction, Cyprus, hotel employees, structural equation modeling, SEM

Procedia PDF Downloads 255
271 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management

Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal

Abstract:

Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse this massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at fast speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, it raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential in the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and also the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can likely address the drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explore BDA in the context of clinical healthcare; however, this presentation investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers’ insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality where managers may opt for analytics for improved decision-making in supply chain processes.

Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management

Procedia PDF Downloads 81
270 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals

Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova

Abstract:

Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. Municipal intakes involve 34 wells arranged 15 km apart in a sequence north-south along the foot of the left slope of the Protva river valley. Northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e. precipitation infiltration is responsible for a smaller part of the groundwater, and a greater amount is formed by overflow from the Protva. Water intakes are maintained by the Protva river runoff, the volume of which depends on the precipitation fallen out and the watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers when realizing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories has revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storages, could be considered as potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in water of springs and wells near the SRC-IPPE is about 17.4 – 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). The risk has been assessed to estimate the possible effect of the considered tritium concentrations on human health.
Data on tritium concentrations in pipeline drinking water were used for the calculations. The activity of 3H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for the population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the sphere of “risk optimization”, i.e. the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analysis was made of waters from all springs and wells near the SRC-IPPE. Chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of risk associated with tritium exposure, on the one hand, and the dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other hand, have confirmed that precisely these chemical pollutants are responsible for the health risk.

Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk

Procedia PDF Downloads 218
269 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies

Authors: Hai L. Tran

Abstract:

In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and due to a rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some of the common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative versus quantitative forms of evidence. 
Meanwhile, limited efforts are being undertaken to distinguish between sister terms, such as “data,” “statistical,” and “base-rate” on one side of the spectrum and “narrative,” “anecdotal,” and “exemplar” on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. Subsequently, the taxonomical classification arranges data versus narrative at the top of the hierarchy of types of evidence, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, the taxonomy helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.

Keywords: evidence, evidence forms, evidence types, taxonomy

Procedia PDF Downloads 39
268 Impact of Lined and Unlined Water Bodies on the Distribution and Abundance of Fresh Water Snails in Certain Governorates in Egypt

Authors: Nahed Mohamed Ismail, Bayomy Mostafa, Ahmed Abdel Kader, Ahmed Mohamed Azzam

Abstract:

The effect of lining watercourses on the distribution and abundance of freshwater snails in two Egyptian governorates, Baheria (a newly reclaimed area) and Giza, was studied. A seasonal survey of lined and unlined sites during two successive years was carried out. Samples of snails and water were collected from each examined site, and the ecological conditions were recorded. The snails collected from each site were placed in plastic aquaria and transferred to the laboratory, where they were sorted out, identified, counted and examined for natural infection. The size frequency distribution was calculated for each snail species. Results revealed that snails were represented in all examined watercourses (lined and unlined) at the two tested habitats by 14 species (Biomphalaria alexandrina, B. glabrata, Bulinus truncatus, Physa acuta, Helisoma duryi, Lymnaea natalensis, Planorbis planorbis, Cleopatra bulimoids, Lanistes carinatus, Bellamya unicolor, Melanoides tuberculata, Theodoxus nilotica, Succinia cleopatra and Gabbiella senaarensis). During spring, the percentage of live snail species (45%, versus 55% dead) was highly significantly lower (p<0.001) in lined water bodies compared to the unlined ones (93.5% live and 6.5% dead, respectively) in the examined sites at Baheria. At Giza, the percentage of live snail species from all lined watercourses (82.6% and 60.2% during winter and spring, respectively) was significantly lower (p<0.05 and p<0.01) than in unlined ones (91.1% and 79%, respectively). The size frequency distribution of snails collected from the lined and unlined water bodies at Baheria and Giza governorates during all seasons revealed that, during the survey, snail populations were stable and the recruitment of young to adult was continuing for some species, where recruits were observed with adults. However, there was no sign of small snail occurrence in the case of B. glabrata and B. alexandrina during autumn, winter and spring, and they disappeared during summer at Giza.
Meanwhile, they were completely absent during all seasons at Baheria governorate. Chemical analyses of some heavy metals in water samples collected from lined and unlined sites at Baheria and Giza governorates during autumn, winter and spring were approximately the same in both lined and unlined water bodies. However, Zn and Fe were higher in lined sites (0.78±0.37 and 17.4±4.3, respectively) than in unlined ones (0.4±0.1 and 10.95±1.93, respectively), and Cu was absent in both lined and unlined sites during summer at Baheria governorate. At Giza, Cu and Pb were absent, and Fe was higher in lined sites (4.7±4.2) than in unlined ones (2.5±1.4) during summer. Statistical analysis showed no significant difference in the physico-chemical parameters of water between lined and unlined water bodies at the two tested habitats during all seasons. However, water conductivity and TDS showed lower mean values in lined sites than in unlined ones. Thus, the present data support the concept of utilizing environmental modifications such as the lining of watercourses to help minimize the population density of certain vector snails and consequently reduce the transmission of snail-borne diseases.

Keywords: lining, fresh water, snails, watercourses

Procedia PDF Downloads 228
267 Bridging the Educational Gap: A Curriculum Framework for Mass Timber Construction Education and Comparative Analysis of Physical vs. Virtual Prototypes in Construction Management

Authors: Farnaz Jafari

Abstract:

The surge in mass timber construction represents a pivotal moment in sustainable building practices, yet the lack of comprehensive education in construction management poses a challenge in harnessing this innovation effectively. This research endeavors to bridge this gap by developing a curriculum framework integrating mass timber construction into undergraduate and industry certificate programs. To optimize learning outcomes, the study explores the impact of two prototype formats, Virtual Reality (VR) simulations and physical mock-ups, on students' understanding and skill development. The curriculum framework aims to equip future construction managers with a holistic understanding of mass timber, covering its unique properties, construction methods, building codes, and sustainable advantages. The study adopts a mixed-methods approach, commencing with a systematic literature review and leveraging surveys and interviews with educators and industry professionals to identify existing educational gaps. The iterative development process involves incorporating stakeholder feedback into the curriculum. The evaluation of prototype impact employs pre- and post-tests administered to participants engaged in pilot programs. Through qualitative content analysis and quantitative statistical methods, the study seeks to compare the effectiveness of VR simulations and physical mock-ups in conveying knowledge and skills related to mass timber construction. The anticipated findings will illuminate the strengths and weaknesses of each approach, providing insights for future curriculum development. The curriculum's expected contribution to sustainable construction education lies in its emphasis on practical application, bridging the gap between theoretical knowledge and hands-on skills. The research also seeks to establish a standard for mass timber construction education, contributing to the field through a unique comparative analysis of VR simulations and physical mock-ups.
The study's significance extends to the development of best practices and evidence-based recommendations for integrating technology and hands-on experiences in construction education. By addressing current educational gaps and offering a comparative analysis, this research aims to enrich the construction management education experience and pave the way for broader adoption of sustainable practices in the industry. The envisioned curriculum framework is designed for versatile integration, catering to undergraduate programs and industry training modules, thereby enhancing the educational landscape for aspiring construction professionals. Ultimately, this study underscores the importance of proactive educational strategies in preparing industry professionals for the evolving demands of the construction landscape, facilitating a seamless transition towards sustainable building practices.

Keywords: curriculum framework, mass timber construction, physical vs. virtual prototypes, sustainable building practices

Procedia PDF Downloads 32
266 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing stage also lasts 24 h), in order to determine the bacterial susceptibility. Other approaches, including genotyping methods, the E-test and automated methods, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. The new modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy becomes a powerful technique, which enables the detection of structural changes associated with resistivity.
The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a few minutes. UTI E. coli samples, identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
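The classify-into-sensitive/resistant step described above can be illustrated with a minimal sketch, not taken from the study: a nearest-centroid classifier applied to synthetic IR-like spectra. The spectral dimensions, class labels, and the resistance-associated absorbance band are all hypothetical stand-ins for the real FTIR data and multivariate analysis.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Compute one mean spectrum (centroid) per class label."""
    return {c: X[np.array(y) == c].mean(axis=0) for c in sorted(set(y))}

def nearest_centroid_predict(centroids, X):
    """Assign each spectrum to the class with the nearest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return [labels[i] for i in dists.argmin(axis=0)]

rng = np.random.default_rng(0)
n_wavenumbers = 200  # number of spectral channels (hypothetical)

# Synthetic "spectra": resistant samples carry a small extra absorbance band
sensitive = rng.normal(1.0, 0.05, (50, n_wavenumbers))
resistant = rng.normal(1.0, 0.05, (50, n_wavenumbers))
resistant[:, 80:120] += 0.15  # hypothetical resistance-associated band

X = np.vstack([sensitive, resistant])
y = ["S"] * 50 + ["R"] * 50

centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(centroids, X)
accuracy = sum(p == t for p, t in zip(pred, y)) / len(y)
```

In practice a held-out test set and richer multivariate models (e.g., dimensionality reduction followed by a discriminant classifier) would be used; this sketch only shows the final classification step on labeled spectra.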

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 153
265 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics equal those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. 
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. This methodology has been validated by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on variations in adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes), and thermodynamic (Gibbs free energy) properties across the adsorption sites.
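For reference, the constant-parameter Langmuir fit that FLS-PVLR generalizes can be sketched as follows; this is an illustration on synthetic data, not code from the study. The linearized form P/q = 1/(q_m K) + P/q_m reduces the fit to ordinary least squares, with all numerical values below chosen arbitrarily.

```python
import numpy as np

def fit_langmuir(P, q):
    """Fit the Langmuir isotherm q = qm*K*P / (1 + K*P) by linear least
    squares on its linearized form P/q = 1/(qm*K) + P/qm."""
    slope, intercept = np.polyfit(P, P / q, 1)
    qm = 1.0 / slope       # monolayer uptake capacity
    K = slope / intercept  # adsorption equilibrium constant
    return qm, K

# Synthetic isotherm generated from known (hypothetical) parameters
qm_true, K_true = 2.5, 0.8
P = np.linspace(0.1, 10.0, 50)               # pressures
q = qm_true * K_true * P / (1 + K_true * P)  # equilibrium uptakes

qm_fit, K_fit = fit_langmuir(P, q)  # recovers qm ≈ 2.5, K ≈ 0.8
```

The FLS-PVLR approach described in the abstract replaces the single (qm, K) pair with parameter vectors allowed to drift smoothly across pressure, minimizing measurement residuals together with a penalty on parameter changes between successive pressures.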

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 197