Search results for: prediction interval
315 A Randomized, Controlled Trial to Test Habit Formation Theory for Low Intensity Physical Exercise Promotion in Older Adults
Authors: Patrick Louie Robles, Jerry Suls, Ciaran Friel, Mark Butler, Samantha Gordon, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson
Abstract:
Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low. This is despite scientific evidence that increasing physical activity is positively associated with health benefits. Behavior change techniques (BCTs) have demonstrated some effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a personalized trials (N-of-1) design, delivered virtually, to evaluate the efficacy of using five BCTs to increase low-intensity physical activity (by 2,000 steps of walking per day) in adults aged 45-75 years. The five BCTs described in habit formation theory are goal setting, action planning, rehearsal, rehearsal in a consistent context, and self-monitoring. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day at least five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. Participants then engaged remotely with a clinical research coordinator to establish a “walking plan” that included a time and day interval (e.g., between 7 a.m. and 8 a.m., Monday to Friday), a location for the walk (e.g., a park), and how much time the plan would need to achieve a minimum of 2,000 steps over their baseline average step count (e.g., 20 minutes). All elements of the walking plan were required to remain consistent throughout the study. In the 10-week intervention phase of the study, participants received all five BCTs in a single, time-sensitive text message.
The text message was delivered 30 minutes prior to the established walk time and signaled participants to begin walking when the context (i.e., day of the week, time of day) they pre-selected was encountered. Participants were asked to log both the start and conclusion of their activity session by pressing a button on the Fitbit tracker. Within 30 minutes of the planned conclusion of the activity session, participants received a text message with a link to a secure survey. Here, they noted whether they engaged in the BCTs when prompted and completed an automaticity survey to identify how “automatic” their walking behavior had become. At the end of their trial, participants received a personalized summary of their step data over time, helping them learn more about their responses to the five BCTs. Whether the use of these five ‘habit formation’ BCTs in combination elicits a change in physical activity behavior among older adults will be reported. This study will inform the feasibility of a virtually-delivered N-of-1 study design to effectively promote physical activity as a component of healthy aging.
Keywords: aging, exercise, habit, walking
Procedia PDF Downloads 140
314 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total registered COVID cases, new cases encountered daily, total registered deaths, and daily deaths due to Coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms were chosen to study model performance in the prediction of new COVID-19 cases. Using evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID cases was evaluated. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, lower than that of the other predictive models used in this study. The experimental analysis indicates that the Random Forest algorithm can predict new COVID cases more effectively and efficiently, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
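As a minimal sketch, the 8:2 train/test split and the two evaluation metrics named above (mean squared error and the r-squared value) can be written in plain Python. The daily case series below is synthetic, not the WHO data used in the study.

```python
# Hypothetical sketch of the 8:2 split and evaluation metrics described
# in the abstract; the case counts are invented for illustration.

def split_80_20(series):
    """Chronological 8:2 split for a time-ordered case series."""
    cut = int(len(series) * 0.8)
    return series[:cut], series[cut:]

def mse(actual, predicted):
    """Mean squared error between observed and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

daily_cases = list(range(100, 200, 10))   # 10 synthetic daily counts
train, test = split_80_20(daily_cases)    # 8 training days, 2 test days
```

The chronological split (rather than a random shuffle) respects the time ordering of the case series when forecasting future cases.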
Procedia PDF Downloads 121
313 Negative Perceptions of Ageing Predicts Greater Dysfunctional Sleep Related Cognition Among Adults Aged 60+
Authors: Serena Salvi
Abstract:
Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors’ health and functioning. This set of studies has shown how a negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often proven unreliable when compared to objective sleep measures. Investigations of self-reported sleep quality among older adults suggest that this portion of the population tends to accept disrupted sleep if it is believed to be up to standard for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study investigates a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep in this portion of the population. Data in this study were statistically analysed in SPSS. Participants were recruited through the online participant recruitment system Prolific. Attention check questions were included throughout the questionnaire, and consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396).
Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. The Pearson coefficient was used for interval variables, the independent t-test for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved particularly relevant for the remit of this study. Regression analysis shows that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes towards sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of the variance in SLOC, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality
Procedia PDF Downloads 103
312 Data Analysis for Taxonomy Prediction and Annotation of 16S rRNA Gene Sequences from Metagenome Data
Authors: Suchithra V., Shreedhanya, Kavya Menon, Vidya Niranjan
Abstract:
Skin metagenomics has a wide range of applications with direct relevance to the health of the organism. It gives us insight into the diverse community of microorganisms (the microbiome) harbored on the skin. In recent years, it has become increasingly apparent that the interaction between the skin microbiome and the human body plays a prominent role in immune system development, cancer development, disease pathology, and many other biological processes. Next Generation Sequencing (NGS) has led to a faster and better understanding of environmental organisms and their mutual interactions. This project studies the human skin microbiome of different individuals with varied skin conditions. Bacterial 16S rRNA data of the skin microbiome were downloaded with the SRA toolkit provided by NCBI to perform metagenomics analysis. Twelve samples were selected, with two controls and three categories: sex (male/female), skin type (moist/intermittently moist/sebaceous), and occlusion (occluded/intermittently occluded/exposed). Read quality was improved using Cutadapt and assessed using FastQC. USearch, a tool for analyzing NGS data, provides a suitable platform to obtain taxonomy classification and bacterial abundance from metagenome data. The statistical tool used to analyze the USearch results was METAGENassist. The three most abundant genera found were Prevotella, Corynebacterium, and Anaerococcus. Prevotella is a known infectious bacterium found in wounds, tooth cavities, etc. Corynebacterium and Anaerococcus are opportunistic bacteria responsible for skin odor. This result suggests that Prevotella thrives easily in sebaceous skin conditions. It is therefore preferable to use intermittently occluded treatment (e.g., applying ointments or creams) for wounds on sebaceous skin. Leaving the wound exposed should be avoided, as it leads to an increase in Prevotella abundance.
Individuals with moist skin can opt for occluded or intermittently occluded treatment, as these have been shown to decrease bacterial abundance during treatment.
Keywords: bacterial 16S rRNA, next generation sequencing, skin metagenomics, skin microbiome, taxonomy
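The genus-abundance tally that tools such as USearch and METAGENassist report can be illustrated with a short sketch. The genus assignments below are invented reads, not the study's data.

```python
# Illustrative relative-abundance tally per genus; the read labels are
# hypothetical, chosen to mirror the three genera named in the abstract.
from collections import Counter

def top_genera(assignments, n=3):
    """Return the n most abundant genera with their relative abundance."""
    counts = Counter(assignments)
    total = len(assignments)
    return [(genus, count / total) for genus, count in counts.most_common(n)]

reads = (["Prevotella"] * 5 + ["Corynebacterium"] * 3 +
         ["Anaerococcus"] * 2)
top = top_genera(reads)   # most abundant first
```

With these toy reads, Prevotella comes out at 50% relative abundance, mirroring how the study's top three genera would be ranked.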
Procedia PDF Downloads 172
311 Replacement of the Distorted Dentition of the Cone Beam Computed Tomography Scan Models for Orthognathic Surgery Planning
Authors: T. Almutairi, K. Naudi, N. Nairn, X. Ju, B. Eng, J. Whitters, A. Ayoub
Abstract:
Purpose: At present, Cone Beam Computed Tomography (CBCT) imaging does not record dental morphology accurately, due to the scattering produced by metallic restorations and the reported magnification. The aim of this pilot study is the development and validation of a new method for replacing the distorted dentition of CBCT scans with the dental image captured by a digital intraoral scanner. Materials and Method: Six dried skulls with orthodontic brackets on the teeth were used in this study. Three intra-oral markers made of dental stone were constructed and attached to the orthodontic brackets. The skulls were CBCT scanned, and the occlusal surface was captured using a TRIOS® 3D intraoral scanner. Marker-based and surface-based registrations were performed to fuse the digital intra-oral scan (IOS) into the CBCT models. This produced a new composite digital model of the skull and dentition. The skulls were scanned again using a Faro® laser arm to produce the 'gold standard' model for assessing the accuracy of the developed method. The accuracy of the method was assessed by measuring the distance between the occlusal surfaces of the new composite model and the 'gold standard' 3D model of the skull and teeth. The procedure was repeated a week apart to measure the reproducibility of the method. Results: The results showed no statistically significant difference between the measurements on the first and second occasions. The absolute mean distance between the new composite model and the laser model ranged between 0.11 mm and 0.20 mm. Conclusion: The dentition of the CBCT scan can be accurately replaced with the dental image captured by the intra-oral scanner to create a composite model.
This method will improve the accuracy of orthognathic surgical prediction planning, with the final goal of fabricating a physical occlusal wafer to guide orthognathic surgery without the need for dental impressions.
Keywords: orthognathic surgery, superimposition, models, cone beam computed tomography
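The accuracy check described above, the mean absolute distance between occlusal-surface points of the composite model and the laser-scanned 'gold standard', can be sketched as a nearest-neighbour distance between two point clouds. The 3D coordinates below are toy values, not real scan data.

```python
# Minimal sketch of a mean surface-distance check between two point
# clouds (in mm); the coordinates are invented for illustration.
import math

def nearest_distance(point, cloud):
    """Distance from one point to its nearest neighbour in a cloud."""
    return min(math.dist(point, q) for q in cloud)

def mean_surface_distance(cloud_a, cloud_b):
    """Mean nearest-neighbour distance from cloud_a to cloud_b."""
    return sum(nearest_distance(p, cloud_b) for p in cloud_a) / len(cloud_a)

composite = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # composite-model points
gold      = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.2)]   # 'gold standard' points
mean_dist = mean_surface_distance(composite, gold)
```

A sub-millimetre mean distance, as in the 0.11 to 0.20 mm range reported above, would indicate close agreement between the two surfaces.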
Procedia PDF Downloads 198
310 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects. This technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use/land cover, soil groups, Curve Number (CN), and runoff index, with a spatial resolution of 30 m. The data used to derive the above layers include the 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the Harmonized World Soil Database (HWSD). The land use/land cover map was derived from Landsat 8 using supervised classification. Slope, the drainage network, and watersheds were delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method was implemented to estimate surface runoff from rainfall. Prior to this, an SCS-CN grid was developed by integrating the soil and land use/land cover rasters. These layers, together with some technical and ecological constraints, were assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), was adopted as the MCA technique for assigning weights to each decision element. All the parameters and groups of parameters were integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer was then classified into four classes: best suitable, suitable, moderate, and less suitable. This study demonstrates a contribution to decision-making about suitable site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can help water resource management organizations determine feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
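The weighted-overlay step described above can be sketched for a single cell: each criterion score is multiplied by its AHP weight, summed, normalised, and binned into the four suitability classes. The weights and scores below are invented, not the study's AHP values.

```python
# Hedged sketch of a per-cell weighted overlay with hypothetical
# criterion weights and scores (all on a 0-1 scale).

def weighted_overlay(scores, weights):
    """Weight-normalised sum of criterion scores for one cell."""
    total_w = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total_w

def classify(value):
    """Bin a suitability value into the four classes named above."""
    if value >= 0.75:
        return "best suitable"
    if value >= 0.50:
        return "suitable"
    if value >= 0.25:
        return "moderate"
    return "less suitable"

weights = {"slope": 0.3, "drainage_density": 0.2, "runoff": 0.3, "land_use": 0.2}
cell    = {"slope": 0.9, "drainage_density": 0.8, "runoff": 0.7, "land_use": 0.6}
```

In a GIS this computation runs per raster cell; the class thresholds here are arbitrary placeholders for the study's own class breaks.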
Procedia PDF Downloads 389
309 Investigating the Association between Escherichia Coli Infection and Breast Cancer Incidence: A Retrospective Analysis and Literature Review
Authors: Nadia Obaed, Lexi Frankel, Amalia Ardeljan, Denis Nigel, Anniki Witter, Omar Rashid
Abstract:
Breast cancer is the most common cancer among women, with a lifetime risk of one in eight for all women in the United States. Although breast cancer is prevalent throughout the world, the uneven distribution in incidence and mortality rates is shaped by variation in population structure, environment, genetics, and known lifestyle risk factors. Furthermore, the bacterial profile of healthy and cancerous breast tissue differs, with a higher relative abundance of bacteria capable of causing DNA damage in breast cancer patients. Previous bacterial infections may change the composition of the microbiome and partially account for the environmental factors promoting breast cancer. One study found that higher amounts of Staphylococcus, Bacillus, and Enterobacteriaceae, of which Escherichia coli (E. coli) is a member, were present in breast tumor tissue. Based on E. coli’s ability to damage DNA, it was hypothesized that there is an increased risk of breast cancer associated with previous E. coli infection. Therefore, the purpose of this study was to evaluate the correlation between E. coli infection and the incidence of breast cancer. Holy Cross Health, Fort Lauderdale, provided access to a Health Insurance Portability and Accountability Act (HIPAA) compliant national database for the purpose of academic research. International Classification of Diseases 9th and 10th revision codes (ICD-9, ICD-10) were then used to conduct a retrospective analysis using data from January 2010 to December 2019. All breast cancer diagnoses and all patients infected versus not infected with E. coli who underwent typical E. coli treatment were investigated. The obtained data were matched for age, Charlson Comorbidity Index (CCI) score, and antibiotic treatment. Standard statistical methods were applied to determine statistical significance, and an odds ratio was used to estimate the relative risk.
A total of 81,286 patients were identified and analyzed from the initial query, then reduced to 31,894 antibiotic-specific treated patients in each of the infected and control groups. The incidence of breast cancer was 2.51% (2,043 patients) in the E. coli group compared to 5.996% (4,874 patients) in the control group. The incidence of breast cancer was 3.84% (1,223 patients) in the treated E. coli group compared to 6.38% (2,034 patients) in the treated control group. The decreased incidence of breast cancer in the E. coli and treated E. coli groups was statistically significant, with p-values of 2.2×10⁻¹⁶ and 2.264×10⁻¹⁶, respectively. The odds ratios in the E. coli and treated E. coli groups were 0.784 (95% CI 0.756-0.813) and 0.787 (95% CI 0.743-0.833), respectively. The current study shows a statistically significant decrease in breast cancer incidence in association with previous Escherichia coli infection. Researching the relationship between single bacterial species and cancer risk is important, as only up to 10% of breast cancer risk is attributable to genetics, while the contribution of environmental factors, including previous infections, potentially accounts for the majority of the preventable risk. Further evaluation is recommended to assess the potential of E. coli to decrease the risk of breast cancer and the mechanism involved.
Keywords: breast cancer, Escherichia coli, incidence, infection, microbiome, risk
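The odds-ratio calculation used above (an OR below 1 indicating lower breast-cancer odds in the E. coli group) can be illustrated from a standard 2x2 table. The counts below are hypothetical, not the study's matched cohorts.

```python
# Illustrative odds-ratio from a 2x2 table with invented counts:
#                 cases   non-cases
#   exposed         a        b
#   unexposed       c        d

def odds_ratio(cases_exposed, noncases_exposed,
               cases_unexposed, noncases_unexposed):
    """OR = (a/b) / (c/d); OR < 1 suggests lower odds among the exposed."""
    return ((cases_exposed / noncases_exposed) /
            (cases_unexposed / noncases_unexposed))

example_or = odds_ratio(10, 90, 20, 80)   # hypothetical counts
```

With these counts the odds among the exposed (10/90) are less than half the odds among the unexposed (20/80), giving an OR of about 0.44.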
Procedia PDF Downloads 255
308 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Diabetes in India is growing at an alarming rate, and the complications it causes need to be controlled. Coronary heart disease (CHD) is one such complication and is the prediction target of this study. India has the second-highest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of CHD. Predictive risk scores for CHD events were designed by Cox proportional hazards regression. Model calibration and discrimination were assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model were checked by applying regularization techniques, and the best method was selected among ridge, lasso, and elastic net regression. Youden’s index was used to choose the optimal cut-off point for the scores. Five-year probability of CHD was predicted by both the survival function and a two-state Markov chain model, and the better technique was identified. The CHD risk scores developed can be calculated by doctors and patients for self-control of diabetes.
Furthermore, the five-year probabilities can also be used to forecast and maintain the condition of patients.
Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus
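The Youden's-index cut-off selection mentioned above (J = sensitivity + specificity - 1, maximised over candidate thresholds) can be sketched as follows. The risk scores and outcomes below are invented, not the study's data.

```python
# Hedged sketch of choosing an optimal cut-off by Youden's index on
# hypothetical risk scores (1 = CHD event, 0 = no event).

def youden_j(scores, outcomes, threshold):
    """J = sensitivity + specificity - 1 at a given score threshold."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

def optimal_cutoff(scores, outcomes):
    """Candidate threshold (an observed score) with maximal J."""
    return max(set(scores), key=lambda t: youden_j(scores, outcomes, t))

scores   = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
outcomes = [0,   0,   0,   1,   1,   1]
```

Here the scores separate events from non-events perfectly, so the optimal cut-off (0.6) achieves J = 1; with real data, J falls between 0 and 1.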
Procedia PDF Downloads 220
307 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker
Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya
Abstract:
The number of people who receive in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, yet it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). The reported differences between the morphokinetic behaviors of aneuploid and euploid embryos have shown promise in addressing this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos, which provide important insights and reaffirm the promise of such behaviors for developing non-invasive methodologies. Methodology: A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment at Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies.
The morphokinetic variables of each embryo were measured with the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the times to: pronuclei fading (tPNf), division to 2, 3, …, 9 cells (t2, t3, …, t9), start of embryo compaction (tSC), morula formation (tM), start of blastocyst formation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 - t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major Findings: The average age of the patients was 35.04 yrs. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 - t4 (P = 0.0298). Our results indicate that this interval on average lasts significantly longer for aneuploid embryos: c45(aneuploid) = 11.93 hr vs. c45(euploid) = 7.97 hr. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same results (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable differed between the two groups. Conclusion: Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and incorporate embryo quality in future studies.
Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic
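Deriving the interval variables compared above (such as c45 = t5 - t4) from an embryo's annotated division times is a simple subtraction, sketched below. The hours are invented, not EmbryoViewer measurements.

```python
# Minimal sketch of computing morphokinetic intervals from annotated
# event times (hours post-insemination); the values are hypothetical.

def interval(times, later, earlier):
    """Interval between two morphokinetic events, in hours."""
    return times[later] - times[earlier]

embryo_times = {"t2": 26.0, "t3": 37.5, "t4": 38.0, "t5": 50.0}
c45 = interval(embryo_times, "t5", "t4")   # division from 4 to 5 cells
```

Against the finding above, a toy c45 of 12.0 h would sit near the aneuploid group mean (11.93 h) rather than the euploid mean (7.97 h).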
Procedia PDF Downloads 88
306 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam
Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le
Abstract:
Introduction: In Vietnam, current monitoring and treatment for ordinary diabetic patients is based mostly on glucose monitoring with an HbA1c test every three months (the recommended goal is HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both the mother and the child. Moreover, GDM requires continuous glucose monitoring at least every two weeks, and an alternative marker of short-term glycemic control is therefore a potentially valuable tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin for GDM monitoring. Based on actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2), and glycosylated hemoglobin (HbA1c). Methods: A cohort study of pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray Hospital, Vietnam, from June 2014 to March 2015. Cho Ray Hospital is the referral destination for GDM patients in southern Vietnam; the study population comes from many other provinces, and the researchers therefore believe this demographic characteristic helps make the study result a reflection of the whole area. In this study, diabetic patients received a continuous glucose monitoring regimen consisting of bi-weekly on-site visits with glycosylated serum protein, fasting blood glucose, and 2-hour postprandial glucose tests; an HbA1c test every 3 months; and nutritional consultation for a daily diet program. The subjects also received routine treatment at the hospital, with tight follow-up from their healthcare providers.
Researchers recorded bi-weekly health conditions, serum fructosamine levels, and delivery outcomes of the pregnant women, using Stata 13 for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. Serum fructosamine level was found to have a weak correlation with G0 (r = 0.3458, p < 0.001) and HbA1c (r = 0.3544, p < 0.001), and was moderately correlated with G2 (r = 0.4379, p < 0.001). During the study, the delivery outcomes of 287 women were recorded, with an average gestational age at delivery of 38.5 ± 1.5 weeks; 9% of the infants had macrosomia, 2.8% were born prematurely before week 35 and 9.8% before week 37; 64.8% were delivered by cesarean section, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with positive delivery outcomes (0% perinatal or neonatal mortality). The reference value of serum fructosamine measurement provides a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray Hospital are considering conducting more studies to test this reference as a target value in their GDM treatment and monitoring.
Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam
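The Pearson correlations reported above (e.g., fructosamine vs. G2, r = 0.4379) can be computed directly from paired measurements, as in this pure-Python sketch. The paired values below are toy data, not the study's measurements.

```python
# Hedged sketch of a Pearson correlation between serum fructosamine and
# a glucose measure; the four paired values are invented.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

fructosamine = [100.0, 110.0, 120.0, 130.0]   # μmol/dL, hypothetical
glucose_g2   = [5.0, 6.0, 6.5, 8.0]           # mmol/L, hypothetical
r = pearson_r(fructosamine, glucose_g2)
```

In the study's terms, an r around 0.3 would count as a weak correlation and one around 0.44 as moderate.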
Procedia PDF Downloads 281
305 Predicting Child Attachment Style Based on Positive and Safe Parenting Components and Mediating Maternal Attachment Style in Children With ADHD
Authors: Alireza Monzavi Chaleshtari, Maryam Aliakbari
Abstract:
Objective: The aim of this study was to investigate the prediction of child attachment style based on a positive and safe combined parenting method, mediated by maternal attachment styles, in children with attention deficit hyperactivity disorder (ADHD). Method: The design of the present study was descriptive-correlational using structural equations, and applied in terms of purpose. The population of this study includes all children with attention deficit hyperactivity disorder living in Chaharmahal and Bakhtiari province and their mothers. The sample comprises 165 children with attention deficit hyperactivity disorder in Chaharmahal and Bakhtiari province, together with their mothers, who were selected by purposive sampling based on the inclusion criteria. The obtained data were analyzed with both descriptive and inferential statistics. In the descriptive section, statistical indices of mean, standard deviation, and frequency distribution tables and graphs were used. In the inferential section, according to the nature of the hypotheses and objectives of the research, the data were analyzed using Pearson correlation coefficients, the bootstrap test, and a structural equation model. Findings: The results of structural equation modeling showed that the research model fits and that a positive and safe combined parenting style, mediated by the mother's attachment style, has an indirect effect on the child's attachment style. A positive and safe combined parenting style also has a direct relationship with the child's attachment style, as does the mother's attachment style. Conclusion: The results and findings of the present study show that there is a significant relationship between positive and safe combined parenting methods and the attachment styles of children with attention deficit hyperactivity disorder, mediated by the maternal attachment style.
Therefore, it can be expected that parents using a positive and safe combined parenting method can effectively foster secure attachment in children with attention deficit hyperactivity disorder.
Keywords: child attachment style, positive and safe parenting, maternal attachment style, ADHD
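The mediation logic behind the structural-equation result above can be sketched with the product-of-coefficients rule: the indirect effect of parenting on child attachment through maternal attachment is the product of the two path coefficients. The standardised coefficients below are invented for illustration.

```python
# Hedged sketch of the product-of-coefficients mediation rule with
# hypothetical standardised path coefficients:
#   a_path: parenting -> maternal attachment (mediator)
#   b_path: maternal attachment -> child attachment (outcome)
#   direct: parenting -> child attachment (direct path)

def indirect_effect(a_path, b_path):
    """Indirect (mediated) effect as the product of the two paths."""
    return a_path * b_path

def total_effect(direct, a_path, b_path):
    """Total effect = direct effect + indirect effect."""
    return direct + indirect_effect(a_path, b_path)

a_path, b_path, direct = 0.5, 0.4, 0.3   # hypothetical coefficients
```

In practice, the significance of the indirect effect is usually tested with a bootstrap confidence interval, as the abstract's bootstrap test suggests.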
Procedia PDF Downloads 67
304 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya
Authors: Abdel Rahman Khider Hassan
Abstract:
This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made factors). The intrinsic dataset included: lithology, geometry of slope (slope inclination, aspect, elevation, and curvature) and land use/land cover. The landslide-triggering factors included: rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas that are susceptible to landslides. No published study on landslides has been obtained for this area. Thus, digital datasets of the above spatial parameters were conveniently acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). Deduction of the landslide hazard zonation is done by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total surface of 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failures.
The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha
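The weighted grid-overlay step described above can be sketched in a few lines; the factor names, scores, and weights below are illustrative assumptions, not the study's values:

```python
import numpy as np

def weighted_overlay(factor_grids, weights):
    """Combine reclassified factor grids (same shape, hazard scores 1-4)
    into a single score grid using normalized weights."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    stacked = np.stack(factor_grids, axis=0)   # shape (n_factors, rows, cols)
    return np.tensordot(weights, stacked, axes=1)

def classify_lhz(score, breaks=(1.5, 2.5, 3.5)):
    """Bin the continuous score into zones: 0=low, 1=moderate, 2=high, 3=very high."""
    return np.digitize(score, bins=breaks)

# Toy 2x2 factor grids, scored 1 (stable) to 4 (unstable); weights are assumed.
slope     = np.array([[4, 3], [2, 1]])
lithology = np.array([[3, 3], [2, 2]])
rainfall  = np.array([[4, 2], [3, 1]])

score = weighted_overlay([slope, lithology, rainfall], weights=[0.5, 0.3, 0.2])
zones = classify_lhz(score)
```

In ArcGIS the same operation is performed by the Weighted Overlay geoprocessing tool; the sketch only shows the arithmetic behind it.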
Procedia PDF Downloads 207
303 Blood Chemo-Profiling in Workers Exposed to Occupational Pyrethroid Pesticides to Identify Associated Diseases
Authors: O. O. Sufyani, M. E. Oraiby, S. A. Qumaiy, A. I. Alaamri, Z. M. Eisa, A. M. Hakami, M. A. Attafi, O. M. Alhassan, W. M. Elsideeg, E. M. Noureldin, Y. A. Hobani, Y. Q. Majrabi, I. A. Khardali, A. B. Maashi, A. A. Al Mane, A. H. Hakami, I. M. Alkhyat, A. A. Sahly, I. M. Attafi
Abstract:
According to the Food and Agriculture Organization (FAO) Pesticides Use Database, pesticide use in agriculture in Saudi Arabia has more than doubled, from 4539 tons in 2009 to 10496 tons in 2019. Among pesticides, pyrethroids are commonly used in Saudi Arabia. Pesticides may increase susceptibility to a variety of diseases, particularly among pesticide workers, due to their extensive use, indiscriminate use, and long-term exposure. Therefore, analyzing blood chemo-profiles and evaluating the detected substances as biomarkers for pyrethroid pesticide exposure may assist in identifying and predicting adverse effects of exposure, which may be used for both preventative and risk assessment purposes. The purpose of this study was to (a) analyze chemo-profiling by Gas Chromatography-Mass Spectrometry (GC-MS) analysis, (b) identify the most commonly detected chemicals in a time-exposure-dependent manner using a Venn diagram, and (c) identify their associated diseases among pesticide workers using analyzer tools on the Comparative Toxicogenomics Database (CTD) website. A total of 250 healthy male volunteers (20-60 years old) who deal with pesticides in the Jazan region of Saudi Arabia (exposure intervals: 1-2, 4-6, 6-8, and more than 8 years) were included in the study. A questionnaire was used to collect demographic information, the duration of pesticide exposure, and the existence of chronic conditions. Blood samples were collected for biochemistry analysis and extracted by solid-phase extraction for gas chromatography-mass spectrometry (GC-MS) analysis. Biochemistry analysis revealed no significant changes in response to the exposure period; however, an inverse association between the albumin level and the exposure interval was observed. The blood chemo-profiles were differentially expressed in an exposure-time-dependent manner. This analysis identified the common chemical set associated with each group and their associated significant occupational diseases.
While some of these chemicals are associated with a variety of diseases, the distinguishing feature of these chemically associated disorders is their applicability to prevention measures. The most interesting finding was the identification of several chemicals (erucic acid, pelargonic acid, alpha-linolenic acid, dibutyl phthalate, diisobutyl phthalate, dodecanol, myristic acid, pyrene, and 8,11,14-eicosatrienoic acid) associated with pneumoconiosis, asbestosis, asthma, silicosis and berylliosis. The chemical-disease association study also found that cancer, digestive system disease, nervous system disease, and metabolic disease were the most often recognized disease categories in the common chemical set. The hierarchical clustering approach was used to compare the expression patterns and exposure intervals of the commonly found chemicals. More study is needed to validate these chemicals as early markers of pyrethroid-insecticide-related occupational disease, which might assist in evaluating and reducing risk. The current study contributes valuable data and recommendations to public health.
Keywords: occupational, toxicology, chemo-profiling, pesticide, pyrethroid, GC-MS
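The Venn-diagram step — finding the chemicals common to all exposure groups and those unique to a single group — reduces to plain set operations; the group labels and chemical sets below are illustrative, not the measured profiles:

```python
# Group labels and chemical sets are illustrative assumptions.
groups = {
    "1-2 yr": {"erucic acid", "pelargonic acid", "pyrene", "dodecanol"},
    "4-6 yr": {"erucic acid", "dibutyl phthalate", "pyrene", "dodecanol"},
    ">8 yr":  {"erucic acid", "myristic acid", "pyrene", "dodecanol"},
}

# Chemicals detected in every exposure group (the core of the Venn diagram).
common = set.intersection(*groups.values())

# Chemicals unique to a single exposure group.
unique = {
    name: members - set.union(*(g for n, g in groups.items() if n != name))
    for name, members in groups.items()
}
```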
Procedia PDF Downloads 103
302 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment.
The model allows prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably reduced computational effort.
Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
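A minimal sketch of the finite-volume discretization idea: one explicit update of 1-D transient conduction with fixed boundary temperatures. Phase change, the moving solidification front, and the segment-wise boundary update described in the abstract are omitted; grid size and material properties are assumed values:

```python
import numpy as np

def fv_conduction_step(T, dx, dt, alpha, T_left, T_right):
    """One explicit finite-volume update of 1-D conduction dT/dt = alpha*d2T/dx2,
    with fixed (Dirichlet) temperatures one cell beyond each end."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is stable only for r <= 0.5"
    Tfull = np.concatenate(([T_left], T, [T_right]))
    return T + r * (Tfull[2:] - 2.0 * T + Tfull[:-2])

# Assumed grid and material values (order of magnitude of a PCM).
dx, alpha = 0.01, 1.0e-6          # cell size (m), thermal diffusivity (m^2/s)
dt = 0.4 * dx ** 2 / alpha        # time step chosen at stability ratio r = 0.4
T = np.full(5, 20.0)              # initial cell temperatures, deg C

for _ in range(2000):             # march to the (linear) steady-state profile
    T = fv_conduction_step(T, dx, dt, alpha, T_left=80.0, T_right=20.0)
```

At steady state the profile becomes linear between the two boundary values, which serves as a quick sanity check on the update formula.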
Procedia PDF Downloads 269
301 Teacher-Student Interactions: Case-Control Studies on Teacher Social Skills and Children’s Behavior
Authors: Alessandra Turini Bolsoni-Silva, Sonia Regina Loureiro
Abstract:
It is important to evaluate such variables simultaneously while differentiating among types of behavior problems: internalizing, externalizing, and comorbid internalizing and externalizing. The objective was to compare, correlate and predict teacher educational practices (educational social skills and negative practices) and children's behaviors (social skills and behavior problems) of children with internalizing, externalizing and combined internalizing and externalizing problems, controlling for child variables (gender and education). A total of 262 children were eligible to compose the participants, considering preschool age from 3 to 5 years old (n = 109) and school age from 6 to 11 years old (n = 153), and their teachers, who were distributed in case-control designs (non-clinical, internalizing, externalizing, and combined internalizing and externalizing problems) using the Teacher's Report Form (TRF) as a criterion. The instruments were applied with the teachers, after consent from the parents/guardians: (a) Teacher’s Report Form (TRF); (b) Educational Social Skills Interview Guide for Teachers (RE-HSE-Pr); (c) Socially Skilled Response Questionnaire – Teachers (QRSH-Pr). The data were treated by univariate and multivariate analyses, proceeding with comparisons, correlations and predictions regarding the outcomes of children with and without behavioral problems, considering the types of problems.
The main results stand out as follows: (a) group comparison studies: in the internalizing group there is an emphasis on behavior problems in affection interactions, which does not happen in the other groups; as for positive practices, they discriminate between the groups with externalizing and combined problems, but not the internalizing one; positive educational practices (educational social skills) are more frequent in the G-Exter and G-Inter+Exter groups; negative practices differed only in the G-Exter and G-Inter+Exter groups; (b) correlation studies: the Inter+Exter group presents a greater number of correlations between behavioral problems/complaints and negative practices, and between children's social skills and positive practices/contexts; (c) prediction studies: children's social skills predict internalizing, externalizing and combined problems; negative practices also enter the multivariate model for the externalizing and combined problems. This investigation collaborates in the identification of risk and protective factors for specific problems, helping in interventions for different problems.
Keywords: development, educational practices, social skills, behavior problems, teacher
Procedia PDF Downloads 94
300 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited a lot from deep learning, particularly Deep Convolutional Neural Networks (CNNs). Deep learning holds great promise to fulfill the challenging needs of remote sensing, solving various problems within different fields and applications. The use of Unmanned Aerial Systems (UAS) for acquiring aerial photos has become widespread and is preferred by most organizations to support their activities because of the photos' high resolution and accuracy, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era for deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from flowlines (onshore) using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and prevent losses to the company and, most importantly, environmental damage. Here, two different approaches using different DL methods are demonstrated. The first approach focuses on detecting the oil leaks from the raw aerial photos (not processed) using a deep learning model called Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting the oil leaks from ortho-mosaiced images (georeferenced images) by developing three deep learning models (using MaskRCNN, U-Net and a PSP-Net classifier).
Then, post-processing is performed to combine the results of these three deep learning models to achieve a better detection result and improved accuracy. Although there is a relatively small number of datasets available for training purposes, the trained DL models have shown good results in extracting the extent of the oil leaks and obtaining excellent and accurate detection.
Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
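The abstract does not specify how the three model outputs are combined; one plausible post-processing scheme is a per-pixel majority vote over the binary leak masks, sketched here as an assumption rather than the authors' method:

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from several models: a pixel is
    flagged as leak only if more than half of the models agree."""
    votes = np.stack(masks, axis=0).sum(axis=0)
    return (2 * votes > len(masks)).astype(np.uint8)

# Toy 2x2 binary masks standing in for MaskRCNN, U-Net and PSP-Net outputs.
m1 = np.array([[1, 0], [1, 1]])
m2 = np.array([[1, 1], [0, 1]])
m3 = np.array([[0, 0], [1, 1]])
fused = majority_vote([m1, m2, m3])
```

Voting suppresses isolated false positives from any single model while keeping regions that at least two models agree on.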
Procedia PDF Downloads 34
299 The Contact between a Rigid Substrate and a Thick Elastic Layer
Authors: Nicola Menga, Giuseppe Carbone
Abstract:
Although contact mechanics has been widely focused on the study of contacts between half-spaces, it has recently been pointed out that in the presence of finite-thickness elastic layers the results of the contact problem show significant differences in terms of the main contact quantities (e.g. contact area, penetration, mean pressure, etc.). Actually, there exists a wide range of industrial applications demanding this kind of study, such as seal leakage prediction or pressure-sensitive coatings for electrical applications. In this work, we focus on the contact between a rigid profile and an elastic layer of thickness h confined under two different configurations: rigid constraint and applied uniform pressure. The elastic problem at hand has been formalized following Green’s function method and then numerically solved by means of a matrix inversion. We study different contact conditions, both considering and neglecting adhesive interactions at the interface. This leads to different solution techniques: the adhesive contact equilibrium solution is found, in terms of contact area for a given penetration, by making the total free energy of the system stationary; whereas adhesiveless contacts are addressed by defining an equilibrium criterion, again on the contact area, relying on the fracture mechanics stress intensity factor KI. In particular, we make KI vanish at the edges of the contact area, as is peculiar for adhesiveless elastic contacts. The results are obtained in terms of contact area, penetration, and mean pressure for both adhesive and adhesiveless contact conditions. As expected, in the case of a uniform applied pressure the slab turns out to be much more compliant than the rigidly constrained one. Indeed, we have observed that the peak value of the contact pressure, for both the adhesive and adhesiveless conditions, is much higher for the rigidly constrained configuration than in the case of applied uniform pressure.
Furthermore, we observed that, for small contact areas, both systems behave in the same way and the pull-off occurs at approximately the same contact area and mean contact pressure. This is an expected result since, in this condition, the ratio between the layer thickness and the contact area is very high and both layer configurations recover the half-space behavior, where the pull-off occurrence is mainly controlled by the adhesive interactions, which are kept constant among the cases.
Keywords: contact mechanics, adhesion, friction, thick layer
Procedia PDF Downloads 513
298 Numerical Investigation of Indoor Environmental Quality in a Room Heated with Impinging Jet Ventilation
Authors: Mathias Cehlin, Arman Ameen, Ulf Larsson, Taghi Karimipanah
Abstract:
The indoor environmental quality (IEQ) is increasingly recognized as a significant factor influencing the overall level of building occupants’ health, comfort and productivity. An air-conditioning and ventilation system is normally used to create and maintain good thermal comfort and indoor air quality. Providing occupant thermal comfort and well-being with minimized use of energy is the main purpose of a heating, ventilating and air conditioning system. Among the different types of ventilation systems, the most widely known and used are mixing ventilation (MV) and displacement ventilation (DV). Impinging jet ventilation (IJV) is a promising ventilation strategy developed at the beginning of the 2000s. IJV has the advantage of supplying air downwards close to the floor with high momentum, thereby delivering fresh air further out into the room compared to DV. Operating in cooling mode, IJV systems can have higher ventilation effectiveness and heat removal effectiveness compared to MV, and therefore a higher energy efficiency. However, how does IJV perform when operating in heating mode? This paper presents the function of IJV in a typical office room under winter conditions (heating mode). In this paper, a validated CFD model, which uses the v2-f turbulence model, is used for the prediction of the air flow pattern, thermal comfort and air change effectiveness. The office room under consideration has the dimensions 4.2×3.6×2.5 m and can be designed as a single-person or two-person office. A number of important factors influencing the room environment with IJV are studied. The considered parameters are: heating demand, number of occupants and supply air conditions. A total of 6 simulation cases are carried out to investigate the effects of the considered parameters. The heat load in the room is contributed by occupants, a computer and lighting. The model includes one external wall with a window.
The interaction effects of heat sources, supply air flow and downdraught from the window result in a complex flow phenomenon. Preliminary results indicate that IJV can be used for heating of a typical office room. The IEQ seems to be suitable in the occupied region for the studied cases.
Keywords: computational fluid dynamics, impinging jet ventilation, indoor environmental quality, ventilation strategy
Procedia PDF Downloads 180
297 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers
Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker
Abstract:
The Advanced Heavy Water Reactor (AHWR) is a vertical pressure-tube-type, heavy-water-moderated and boiling-light-water-cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18 and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during the startup condition, and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during the startup condition has necessitated the determination of the turbulent mixing rate of coolant amongst these subchannels. Thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, the computational fluid dynamics (CFD) methodology can be useful for investigating the thermal–hydraulic phenomena in the nuclear fuel assembly. The present report covers the results of a simulation of pressure drop, velocity variation and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water for a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. In the present investigation, a benchmark k-ε model has been used as the turbulence model, and the symmetry condition is set as the boundary condition. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The size of the rods and the pitch in the test have been the same as those of the actual rod bundle in the prototype.
Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal–hydraulic
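As a quick consistency check on the reported flow conditions, the Reynolds number can be recomputed from the stated hydraulic diameter and the upper end of the velocity range; the water density and viscosity below are assumed room-temperature values, which accounts for the small difference from the reported Re of 9861.83:

```python
def reynolds_number(velocity, d_h, rho=998.0, mu=1.0e-3):
    """Re = rho * v * D_h / mu; water properties at ~20 C are assumed values."""
    return rho * velocity * d_h / mu

d_h = 8.105e-3   # hydraulic diameter reported in the abstract, m
v = 1.2          # upper end of the reported subchannel velocity range, m/s
re = reynolds_number(v, d_h)   # lands close to the reported Re of 9861.83
```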
Procedia PDF Downloads 322
296 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections
Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang
Abstract:
Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as economic and architectural advantages. However, the use of this configuration in construction is limited due to the difficulties in connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcoming this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. Over recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms to generate metamodels. This method reduces the need to develop complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data fusion approach use numerical and experimental results by combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction by means of the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness based on the design parameters when using Extended Hollo-Bolt blind bolts.
It also provides consideration of the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.
Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling
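The metamodeling idea — fitting a cheap surrogate that maps design parameters to a connection response — can be illustrated without a neural network. The sketch below fits a quadratic least-squares surrogate to a synthetic response; it is an assumption-laden stand-in for the paper's ANN metamodels, and the response function, parameter ranges, and names are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(50, 2))     # two hypothetical design parameters
y = 3.0 + 2.0 * x[:, 0] - x[:, 1] ** 2      # synthetic "connection stiffness"

def design_matrix(x):
    """Quadratic basis in two variables: [1, x0, x1, x0^2, x1^2, x0*x1]."""
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

# Fit the surrogate coefficients by linear least squares.
coef, *_ = np.linalg.lstsq(design_matrix(x), y, rcond=None)

def metamodel(x_new):
    """Cheap surrogate prediction for new design points."""
    return design_matrix(np.atleast_2d(x_new)) @ coef

pred = metamodel([0.5, 0.5])[0]
```

Once fitted, evaluating the surrogate costs a single matrix product per design point, which is what makes metamodel-driven design-space searches tractable compared to rerunning finite element models.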
Procedia PDF Downloads 158
295 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population
Authors: Altayeb Abdalla Ahmed
Abstract:
Skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout different life stages. Therefore, interlimb proportions vary between populations. Although interlimb proportion indices have been used in anthropology for assessing the influence of various environmental factors on limbs, an extensive literature review revealed a paucity of published research assessing interlimb part correlations and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper and lower limb parts and develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females (aged 25-35 years), were measured. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then sex-specific simple and multiple linear regression models were used to estimate upper limb parts from lower limb parts and vice versa. The results of this study indicated significant sexual dimorphism for all variables. They also indicated a significant correlation between the upper and lower limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb. Multiple stepwise regression equations generated better reconstructions than simple equations. These results are significant in forensics, as they can aid in the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberment. Although DNA analysis is the most reliable tool for identification, its usage has multiple limitations in undeveloped countries, e.g., cost, facility availability, and trained personnel.
Furthermore, it has important implications in plastic and orthopedic reconstructive surgeries. This study is the only reported study assessing the correlation and prediction capabilities between many of the upper and lower limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.
Keywords: anthropometry, correlation, limb, Sudanese
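A minimal sketch of the simple-regression reconstruction described above; the dimensions, values, and coefficients are synthetic illustrations, not the study's Sudanese-population equations:

```python
import numpy as np

# Synthetic, noiseless example: reconstruct hand length from foot length (mm).
foot = np.array([230.0, 240.0, 250.0, 260.0, 270.0])
hand = 5.0 + 0.7 * foot                        # assumed illustrative relation

slope, intercept = np.polyfit(foot, hand, 1)   # least-squares fit y = a + b*x

def predict_hand(foot_length):
    """Reconstruct hand length from a single foot-length measurement."""
    return intercept + slope * foot_length
```

The study's multiple (stepwise) equations extend this to several predictor dimensions at once, which is why they reconstruct better than single-predictor fits.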
Procedia PDF Downloads 295
294 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City
Authors: Lia Mania, Ketevan Nanobashvili
Abstract:
Background: The coronavirus disease COVID-19 has become the cause of a global health crisis during the current pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Georgia, in Tbilisi (the capital of Georgia), among 7- to 12-year-old PCR- or rapid-test-confirmed post-Covid populations in all districts of Tbilisi (10 districts in total). 332 beneficiaries who had been infected with Covid within one year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection. A simple random selection took place in the selected clusters. According to this principle, an equal number of beneficiaries were selected in all districts of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of the data was done with SPSS-23. Risk factors were estimated by odds ratio and logistic regression with 95% confidence intervals. Results: Statistically reliable differences between the means of the oral health indicators in the asymptomatic and symptomatic covid-infected groups are: for caries intensity (DMF+def), t=4.468, p=0.000; for the modified gingival index (MGI), t=3.048, p=0.002; for the simplified oral hygiene index (S-OHI), t=4.853, p=0.000.
Symptomatic covid-infection has a reliable effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) (n=332; 77.3% vs. 58.0%; OR=2.46, 95% CI: 1.318-4.617). According to the logistic regression, it was found that the severity of the covid infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity, B=0.903, AOR=2.467 (95% CI: 1.318-4.617). Symptomatic covid-infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth brushing behaviors, carbohydrate consumption, and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-Covid population of 7- to 12-year-old children. However, a risk factor such as symptomatic covid-infection affected the oral microbiome in terms of the abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) and further worsened oral health indicators. Thus, a close association was established between symptomatic covid-infection and microbiome changes in the post-covid period, and also between the oral health indicator variables and the symptomatic course of covid-infection.
Keywords: oral microbiome, COVID-19, population based research, oral health indicators
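Effect sizes of the form reported above (e.g., OR=2.46, 95% CI: 1.318-4.617) come from a standard 2×2-table calculation; the sketch below shows that calculation with invented counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table:
    a, b = exposed with / without the outcome; c, d = unexposed with / without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts for illustration only.
or_, lo, hi = odds_ratio_ci(40, 10, 20, 30)
```

An interval that excludes 1 (as in the reported CI) is what marks the association as statistically significant.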
Procedia PDF Downloads 70
293 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions about their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall, and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is that a database of building component fragility curves can be put to use in developing new wind vulnerability models for building typologies not yet adequately covered by existing works, whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
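The component-based simplification described in this abstract can be sketched in a few lines: combine per-component fragility curves into a building-level damage probability under an assumed independence (series-system) simplification. All fragility parameters below are hypothetical illustrations, not values from ERMESS.

```python
import math

def lognormal_fragility(v, median, beta):
    """P(component damage | wind speed v) for a lognormal fragility curve."""
    return 0.5 * (1.0 + math.erf(math.log(v / median) / (beta * math.sqrt(2.0))))

# Hypothetical fragility parameters for the four components in the example:
# (median gust speed in m/s, logarithmic standard deviation)
components = {
    "roof_covering":     (40.0, 0.30),
    "roof_structure":    (55.0, 0.30),
    "envelope_wall":     (60.0, 0.25),
    "envelope_openings": (45.0, 0.35),
}

def p_building_damage(v):
    """Probability that at least one component is damaged at wind speed v,
    under the simplifying assumption that components behave independently."""
    p_none = 1.0
    for median, beta in components.values():
        p_none *= 1.0 - lognormal_fragility(v, median, beta)
    return 1.0 - p_none

for v in (30.0, 45.0, 60.0):
    print(f"v = {v:5.1f} m/s -> P(building damage) = {p_building_damage(v):.3f}")
```

In an actual risk calculation, this curve would then be convolved with the wind hazard (intensity versus return period) to yield expected losses; the independence assumption is one of the "simplifying assumptions for their interactions" the abstract refers to.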
Procedia PDF Downloads 181
292 Scale-Up Study of Gas-Liquid Two Phase Flow in Downcomer
Authors: Jayanth Abishek Subramanian, Ramin Dabirian, Ilias Gavrielatos, Ram Mohan, Ovadia Shoham
Abstract:
Downcomers are important conduits for multiphase flow transfer from offshore platforms to the seabed. Uncertainty in the predictions of the pressure drop of multiphase flow between platforms is often dominated by the uncertainty associated with the prediction of holdup and pressure drop in the downcomer. The objective of this study is to conduct an experimental and theoretical scale-up study of the downcomer. A 4-in. diameter vertical test section was designed and constructed to study two-phase flow in a downcomer. The facility is equipped with baffles for flow area restriction, enabling interchangeable annular slot openings between 30% and 61.7%. Also, state-of-the-art instrumentation, the capacitance Wire-Mesh Sensor (WMS), was utilized to acquire the experimental data. A total of 76 experimental data points were acquired, including falling film under 30% and 61.7% annular slot openings for air-water and air-Conosol C200 oil cases, as well as gas carry-under for 30% and 61.7% openings utilizing air-Conosol C200 oil. For all experiments, parameters such as the falling film thickness and velocity, the entrained liquid holdup in the core, the gas void fraction profiles across the cross-sectional area of the liquid column, the void fraction, and the gas carry-under were measured. The experimental results indicated that the film thickness and film velocity increase as the flow area is reduced. Also, the increase in film velocity enhances the gas entrainment process. Furthermore, the results confirmed that an increase in gas entrainment for the same liquid flow rate leads to an increase in gas carry-under. A power comparison method was developed to enable evaluation of the Lopez (2011) model, which was created for a full-bore downcomer, against the novel scale-up experimental data acquired from the downcomer with the restricted flow area. 
Comparison between the experimental data and the model predictions shows a maximum absolute average discrepancy of 22.9% and 21.8% for the falling film thickness and velocity, respectively, and a maximum absolute average discrepancy of 22.2% for the fraction of gas carried with the liquid (oil).
Keywords: two phase flow, falling film, downcomer, wire-mesh sensor
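The model-versus-experiment comparison quoted above rests on an absolute average discrepancy metric; a minimal sketch of that calculation follows (the film-thickness values are invented for illustration, not the study's data):

```python
def absolute_average_discrepancy(measured, predicted):
    """Mean of |measured - predicted| / measured, expressed as a percentage."""
    pairs = list(zip(measured, predicted))
    return 100.0 * sum(abs(m - p) / m for m, p in pairs) / len(pairs)

# Invented falling-film thickness data (mm): experiment vs. model prediction
measured = [2.1, 2.6, 3.0, 3.4]
predicted = [1.8, 2.4, 3.3, 3.0]

print(f"absolute average discrepancy = "
      f"{absolute_average_discrepancy(measured, predicted):.1f}%")
```

The reported 22.9%, 21.8%, and 22.2% figures are the maxima of this kind of per-quantity average over the experimental data sets.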
Procedia PDF Downloads 167
291 Simon Says: What Should I Study?
Authors: Fonteyne Lot
Abstract:
SIMON (Study capacities and Interest Monitor) is a freely accessible online self-assessment tool that allows secondary education pupils to evaluate their interests and capacities in order to choose a post-secondary major that maximally suits their potential. The tool consists of two broad domains that correspond with two general questions pupils ask: 'What study fields interest me?' and 'Am I capable of succeeding in this field of study?'. The first question is addressed by a RIASEC-type interest inventory that links personal interests to post-secondary majors. Pupils are provided with a personal profile and an overview of majors with their degree of congruence. The output is dynamic: respondents can manipulate their scores, and they can compare their results to the profile of all fields of study. That way, they are stimulated to explore the broad range of majors. To answer whether pupils are capable of succeeding in a preferred major, a battery of tests is provided. This battery comprises a range of factors that are predictive of academic success. Traditional predictors such as (educational) background and cognitive variables (mathematical and verbal skills) are included. Moreover, non-cognitive predictors of academic success (such as motivation, test anxiety, academic self-efficacy, and study skills) are assessed. These non-cognitive factors are generally not included in admission decisions, although research shows they are incrementally predictive of success and are less discriminating. These tests inform pupils on potential causes of success and failure. More importantly, pupils receive their personal chances of success per major. These differential probabilities are validated through the underlying research on academic success of students. For example, the research has shown that we can identify 22% of the failing students in psychology and educational sciences. In this group, our prediction is 95% accurate. 
SIMON guides more students to a suitable major, which in turn improves student success and retention. Apart from these benefits, the instrument grants insight into risk factors for academic failure. It also supports and fosters the development of evidence-based remedial interventions, and therefore enables a more efficient use of resources.
Keywords: academic success, online self-assessment, student retention, vocational choice
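A per-major chance of success over cognitive and non-cognitive predictors can be pictured as a logistic model; the sketch below is purely illustrative, with invented coefficients rather than SIMON's validated weights:

```python
import math

def success_probability(features, weights, intercept):
    """Logistic chance-of-success from standardized predictor scores."""
    z = intercept + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights for illustration only (predictors assumed standardized):
weights = {
    "math_skills": 0.8,      # cognitive predictors
    "verbal_skills": 0.5,
    "motivation": 0.6,       # non-cognitive predictors
    "test_anxiety": -0.4,
    "self_efficacy": 0.5,
}

pupil = {"math_skills": 0.3, "verbal_skills": 1.1, "motivation": -0.2,
         "test_anxiety": 0.9, "self_efficacy": 0.4}

p = success_probability(pupil, weights, intercept=0.2)
print(f"estimated chance of success: {p:.0%}")
```

The incremental value of the non-cognitive predictors corresponds to the extra terms beyond the cognitive ones: dropping them changes the estimated probability, which is what "incrementally predictive" means in practice.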
Procedia PDF Downloads 405
290 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks
Authors: Mehrdad Shafiei Dizaji, Hoda Azari
Abstract:
The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and a ConvLSTM with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel and temporal attention weights through self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of PINNs has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. 
Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven
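The physics embedding the abstract describes amounts to adding a PDE-residual penalty to the training loss. The NumPy sketch below illustrates the idea for a 1D wave equation with a finite-difference residual; the field, grid, and loss weighting are invented for illustration and do not reproduce the paper's GPR model:

```python
import numpy as np

c, dx, dt = 1.0, 0.1, 0.05               # wave speed and grid steps (illustrative)
x = np.arange(0.0, 2.0, dx)
t = np.arange(0.0, 1.0, dt)
X, T = np.meshgrid(x, t, indexing="ij")  # axis 0: space, axis 1: time

# An exact traveling wave u(x, t) = sin(x - c t) stands in for the network
# output; synthetic "observations" are the same field plus noise.
u_pred = np.sin(X - c * T)
u_obs = u_pred + 0.01 * np.random.default_rng(0).normal(size=u_pred.shape)

def pinn_loss(u_pred, u_obs, lam=1.0):
    """Data misfit + residual of the wave equation u_tt - c^2 u_xx = 0,
    with second derivatives approximated by central finite differences."""
    u_tt = (u_pred[:, 2:] - 2 * u_pred[:, 1:-1] + u_pred[:, :-2]) / dt**2
    u_xx = (u_pred[2:, :] - 2 * u_pred[1:-1, :] + u_pred[:-2, :]) / dx**2
    residual = u_tt[1:-1, :] - c**2 * u_xx[:, 1:-1]
    data_loss = np.mean((u_pred - u_obs) ** 2)
    physics_loss = np.mean(residual**2)
    return data_loss + lam * physics_loss

print(f"composite loss: {pinn_loss(u_pred, u_obs):.6f}")
```

Because the sample field satisfies the wave equation exactly, the physics term is near zero and the loss is dominated by the observation noise; a field that violated the physics would be penalized even where it fit the data.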
Procedia PDF Downloads 43
289 The Sustainable Development for Coastal Tourist Building
Authors: D. Avila
Abstract:
The tourism industry is a phenomenon with a growing presence in international socio-economic dynamics, one that in most cases exceeds the control parameters of existing environmental and sustainability regulations. Because of this, the effects on the natural environment at the regional and national levels represent a challenge, and a number of strategies are necessary to minimize the environmental impact generated by the occupation of the territory. Tourist hotel buildings and sustainable development in the coastal zone have an important impact on the environment and on the physical and psychological health of the inhabitants. Environmental quality links human comfort to the sustainable development of natural resources; applied to hotel architecture, this concept involves the incorporation of new demands throughout the construction process of a building, changing the customs of developers and users. The methodology developed provides an initial analysis to identify and rank the different tourist buildings; with this, it becomes feasible to establish methods of study and environmental impact assessment. Finally, it is necessary to establish an overview of the best way to implement tourism development on the coast, with guidelines to improve and protect the natural environment. This paper analyzes the parameters and strategies to reduce environmental impacts derived from tourism developments on the coast, through a series of recommendations towards sustainability, in the context of the Bahia de Banderas, Puerto Vallarta, Jalisco. Assessing the environmental impact caused by the implementation of tourism development in a coastal environment requires a series of processes, ranging from the identification of impacts to their prediction and evaluation. For this purpose, different techniques and valuation procedures are described below, beginning with the identification of impacts. 
Methods for the identification of damage caused to the environment pursue the general purpose of obtaining a group of negative indicators that are subsequently used in the study of environmental impact. There are several systematic methods to identify the impacts caused by human activities. The present work develops a procedure based on, and adapted from, the Ministry of Public Works' urban reference for environmental impact studies; the representative methods are: checklists, matrices and networks, the method of transparencies, and the superposition of maps.
Keywords: environmental impact, physical health, sustainability, tourist building
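The matrix technique mentioned among these identification methods is typically a Leopold-style cross of project actions against environmental factors. The sketch below illustrates the idea with invented actions, factors, and scores, not data from the Bahia de Banderas study:

```python
# Leopold-style impact matrix: each cell holds (magnitude, importance) on a
# 1-10 scale, with negative magnitude denoting an adverse impact.
# All entries are illustrative.
factors = ["water quality", "dune vegetation", "landscape"]

matrix = {
    ("earthworks", "water quality"): (-6, 7),
    ("earthworks", "dune vegetation"): (-8, 8),
    ("building construction", "landscape"): (-5, 6),
    ("hotel operation", "water quality"): (-4, 9),
}

def factor_scores(matrix, factors):
    """Aggregate magnitude x importance per environmental factor, producing
    the 'group of negative indicators' the identification stage seeks."""
    totals = {f: 0 for f in factors}
    for (_, factor), (magnitude, importance) in matrix.items():
        totals[factor] += magnitude * importance
    return totals

scores = factor_scores(matrix, factors)
for factor, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{factor:15s} {score}")
```

Ranking the totals highlights which environmental factor is most affected, which feeds the subsequent prediction and evaluation stages.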
Procedia PDF Downloads 330
288 Structural Health Monitoring-Integrated Structural Reliability Based Decision Making
Authors: Caglayan Hizal, Kutay Yuceturk, Ertugrul Turker Uzun, Hasan Ceylan, Engin Aktas, Gursoy Turan
Abstract:
Monitoring concepts for structural systems have been investigated by researchers for decades, since such tools are quite convenient for intervention planning. Despite considerable development in this regard, the efficient use of monitoring data in reliability assessment and prediction models still needs improvement. More specifically, reliability-based seismic risk assessment of engineering structures may play a crucial role in the post-earthquake decision-making process. After an earthquake, professionals can identify heavily damaged structures based on visual observations. Among the remaining structures, it is hard to identify the ones with minimal signs of damage, even if they have experienced considerable structural degradation. Besides, visual observations are open to human interpretation, which makes the decision process controversial and, thus, less reliable. In this context, when a continuous monitoring system has previously been installed on the structure, this decision process can be completed rapidly and with higher confidence by means of the observed data. At this stage, the Structural Health Monitoring (SHM) procedure plays an important role, since it makes it possible to estimate the system reliability based on a recursively updated mathematical model. Therefore, integrating an SHM procedure into the reliability assessment process comes forward as an important challenge, due to the uncertainties arising in the updated model in case of environmental, material, and earthquake-induced changes. In this context, this study presents a case study on the SHM-integrated reliability assessment of continuously monitored, progressively damaged systems. The objective of this study is to obtain instant feedback on the current state of the structure after an extreme event, such as an earthquake, by involving the observed data rather than visual inspections. 
Thus, the decision-making process after such an event can be carried out on a rational basis. In the near future, this could pave the way for the design of self-reporting structures that can warn about their current condition after an extreme event.
Keywords: condition assessment, vibration-based SHM, reliability analysis, seismic risk assessment
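The monitoring-to-reliability link can be illustrated with a simple first-order reliability computation, where an SHM-identified stiffness reduction updates the resistance distribution. The distributions, the proportional capacity-degradation assumption, and all numbers below are illustrative, not the study's model:

```python
import math

def reliability_index(mu_R, sigma_R, mu_S, sigma_S):
    """First-order reliability index beta for the limit state g = R - S,
    with independent normal resistance R and load effect S."""
    return (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)

def failure_probability(beta):
    """P(g < 0) = Phi(-beta), using the standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Illustrative pre-event capacity and demand (normal, arbitrary units)
mu_R, sigma_R = 100.0, 10.0
mu_S, sigma_S = 60.0, 12.0

# Suppose SHM identifies a 20% stiffness loss after the earthquake, and we
# (simplistically) assume capacity degrades in proportion.
damage_ratio = 0.20
mu_R_updated = (1.0 - damage_ratio) * mu_R

beta_before = reliability_index(mu_R, sigma_R, mu_S, sigma_S)
beta_after = reliability_index(mu_R_updated, sigma_R, mu_S, sigma_S)
print(f"beta: {beta_before:.2f} -> {beta_after:.2f}")
print(f"P_f:  {failure_probability(beta_before):.2e} -> "
      f"{failure_probability(beta_after):.2e}")
```

The drop in beta (rise in failure probability) is the kind of instant, data-driven feedback the abstract contrasts with visual inspection.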
Procedia PDF Downloads 145
287 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept
Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua
Abstract:
Rivers are our main source of water; river flow is a form of open channel flow, and flow in an open channel presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of the slow water flow or the built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method had not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. For estimating the total discharge, the channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From the error analysis of this method, it is observed that the relative error is lower for data sets with smooth floodplains than for those with rough floodplains. 
Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel
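The Divided Channel Method used as a benchmark splits the compound section into sub-areas and sums their Manning-equation discharges. A minimal sketch follows, with invented geometry and roughness values (not the paper's experimental channel):

```python
def manning_discharge(area, wetted_perimeter, n, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2), SI units."""
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2.0 / 3.0) * slope**0.5 / n

# Invented compound-channel geometry: main channel below bank-full level
# plus the region above it (floodplain), as in the two-sub-area division.
slope = 0.001
subsections = [
    # (area m^2, wetted perimeter m, Manning's n)
    (1.20, 2.60, 0.010),   # main channel
    (0.80, 4.00, 0.014),   # region above bank-full level
]

total_q = sum(manning_discharge(a, p, n, slope) for a, p, n in subsections)
print(f"total discharge = {total_q:.3f} m^3/s")
```

DCM's known weakness, which energy-based methods such as ECM aim to address, is that treating the sub-areas independently neglects the apparent shear stress at their interface.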
Procedia PDF Downloads 127
286 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities
Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho
Abstract:
In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to promote the detention of the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney’s northern suburbs and by Wollongong City Council. Many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, the existing policies were compared in Australian cities, a country considered experienced in the use of this technique, and in Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, may cause an increase in peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models. 
Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD design among Brazilian and Australian municipalities, especially with regard to PSD values, which lie on opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow for determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems, and the parameters related to rain-flow methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
Keywords: on-site stormwater detention, source control, stormwater, urban drainage
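The inverse relationship between PSD and SSR can be illustrated with a toy mass-balance sizing: cap the outflow at the permissible site discharge and accumulate the excess inflow as required storage. The hydrograph, time step, and PSD value below are invented for illustration only:

```python
def site_storage_requirement(hydrograph, psd, dt):
    """Storage volume (m^3) needed when outflow is capped at the
    Permissible Site Discharge (PSD): accumulate inflow exceeding PSD."""
    storage, peak_storage = 0.0, 0.0
    for q_in in hydrograph:
        storage += (q_in - psd) * dt
        storage = max(storage, 0.0)          # the tank cannot go negative
        peak_storage = max(peak_storage, storage)
    return peak_storage

# Invented triangular post-development hydrograph (m^3/s) at 5-min steps
dt = 300.0
hydrograph = [0.00, 0.05, 0.10, 0.15, 0.20, 0.15, 0.10, 0.05, 0.00]
psd = 0.08  # invented permissible site discharge (m^3/s)

ssr = site_storage_requirement(hydrograph, psd, dt)
print(f"required on-site storage: {ssr:.0f} m^3")
```

Lowering the PSD in this sketch raises the required SSR, which mirrors the pattern the study reports: the Brazilian cities combine low PSD with high SSR, while the Australian catchment-based values sit at the opposite end.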
Procedia PDF Downloads 181