Search results for: the health belief model
18562 Parent’s Expectations and School Achievement: Longitudinal Perspective among Chilean Pupils
Authors: Marine Hascoet, Valentina Giaconi, Ludivine Jamain
Abstract:
The aim of our study is to examine whether family socio-economic status (SES) influences students’ academic achievement. We first hypothesize that the more financial and social resources a family has, the better its children succeed at school. We second hypothesize that family SES also affects parents’ expectations about their children’s educational outcomes. Moreover, we examine whether parents’ expectations mediate the relationship between family SES and students’ self-concept and academic outcomes. We test this model with a longitudinal design using the census-based assessment from the System of Measurement of the Quality of Education (SIMCE). The SIMCE tests assess all students attending regular education at a given level. The sample used in this study came from the SIMCE assessments administered three times: in 4th, 8th and 11th grade, during the years 2007, 2011 and 2014 respectively. It includes 156,619 students (75,084 boys and 81,535 girls) with valid responses for all three years. Family socio-economic status was measured at the first assessment (4th grade). Parents’ educational expectations and students’ self-concept were measured at the second assessment (8th grade). The achievement score was measured twice: once when children were in 4th grade and again when they were in 11th grade. To test our hypotheses, we defined a structural equation model. We found that our model fit the data well (CFI = 0.96, TLI = 0.95, RMSEA = 0.05, SRMR = 0.05). Both family SES and prior achievement predict parents’ educational expectations, and the effect of SES is large relative to the other coefficients. These expectations predict students’ achievement three years later (controlling for prior achievement) but not their self-concept. Our model explains 51.9% of the variance in achievement in the 11th grade.
Our results confirm the importance of parents’ expectations and the significant role of socio-economic status in students’ academic achievement in Chile.
Keywords: Chilean context, parent’s expectations, school achievement, self-concept, socio-economic status
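The mediation structure described above (SES → parental expectations → later achievement, controlling for prior achievement) can be sketched with ordinary least squares on simulated data. This is an illustrative toy, not the authors’ SEM, and every coefficient value below is an invented assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Simulate the hypothesized causal chain (all coefficients are illustrative).
ses = rng.normal(size=n)                                    # family SES (standardized)
prior = 0.4 * ses + rng.normal(size=n)                      # 4th-grade achievement
expect = 0.5 * ses + 0.3 * prior + rng.normal(size=n)       # parental expectations
achieve = 0.4 * expect + 0.5 * prior + rng.normal(size=n)   # 11th-grade achievement

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: SES -> expectations (controlling for prior achievement)
a = ols([ses, prior], expect)[1]
# Path b: expectations -> achievement (controlling for prior achievement)
b = ols([expect, prior], achieve)[1]

indirect = a * b   # mediated (indirect) effect of SES via expectations
```

With a large simulated sample the recovered paths sit close to the generating values (a ≈ 0.5, b ≈ 0.4, indirect ≈ 0.2); a full SEM additionally models latent variables and fit indices, which this sketch omits.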
Procedia PDF Downloads 141
18561 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach
Authors: Godwin Chigozie Okpara
Abstract:
This paper estimates the VaR of standard GARCH, EGARCH, and TARCH model variants using a day-of-the-week return series (246 days) from the Nigerian stock market. To account for asymmetric return distributions and the fat-tail phenomenon in financial time series, the models were estimated with normal, Student’s t, and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with Student’s t innovations furnishes the most accurate estimate of VaR. In light of this, we apply the Kupiec likelihood ratio test of proportional failure rates to the VaR derived from the EGARCH model in order to assess short- and long-position VaR performance. The result shows that as alpha ranges from 0.05 to 0.005 for short positions, the failure rate significantly exceeds the prescribed quantiles, while there is no significant difference between the failure rate and the prescribed quantiles for long positions. This suggests that investors and portfolio managers in the Nigerian stock market hold long trading positions, i.e., they can buy assets while attending to when asset prices will fall. Specifically, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models
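The Kupiec proportion-of-failures (POF) test referenced above compares the observed VaR exception rate with the nominal level via a likelihood ratio. A minimal sketch follows; the backtest counts (18 exceptions in 246 days) are hypothetical, not figures from the paper:

```python
import math

def kupiec_pof(failures: int, obs: int, p: float) -> float:
    """Kupiec LR_POF statistic: -2 times the log-ratio of the binomial
    likelihood at the nominal exception rate p to the likelihood at the
    observed rate x/T. Asymptotically chi-square with 1 degree of freedom."""
    x, t = failures, obs
    phat = x / t
    def loglik(q):
        return (t - x) * math.log(1 - q) + x * math.log(q)
    return -2.0 * (loglik(p) - loglik(phat))

# Hypothetical backtest: 18 exceptions over 246 days at a 5% VaR level.
lr = kupiec_pof(18, 246, 0.05)
# Reject correct coverage if lr exceeds the 1-df chi-square critical
# value (3.84 at the 5% significance level); here lr is about 2.45.
```

When the observed failure rate equals the nominal rate exactly, the statistic is zero, as the likelihood ratio collapses to one.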
Procedia PDF Downloads 443
18560 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise
Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim
Abstract:
Previous work on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g., active-passive, individual-group). To overcome these problems, this study proposed a matrix model of Korean leisure activities that considers their multidimensional features, comprising 3 main factors and 6 sub-factors: (a) Active (physical, mental), (b) Relational (quantity, quality), (c) Expert (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated the presented dimensions of a total of 78 leisure activities that have been enjoyed by most Koreans recently (e.g., watching movies, taking a walk, consuming media). The study sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to elderly people in their seventies. The study gathered 60 responses for each leisure activity, a total of 4,860 data points, which were used for statistical analysis. First, this study compared the fit of the 3-factor model (Activity, Relation, Expertise) with that of the 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model fit the data better. This result indicates that enough dimensions of leisure activities (6 in our study) must be taken into account to apprehend each activity’s attributes specifically. In addition, the 78 leisure activities were cluster-analyzed with scores calculated from the 6-factor model, which resulted in 8 leisure activity groups. Cluster 1 (e.g., group sports, group musical activity) and Cluster 5 (e.g., individual sports) had generally higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g., SNS, shopping) and Cluster 6 (e.g., playing the lottery, taking a nap) had low scores on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g., machine operating, handwork/invention) required high expertise and mental activity, but low physical activity. Cluster 4 indicated high mental activity and relational quantity despite low expertise. Cluster 7 (e.g., touring, joining festivals) required moderate degrees of physical activity and relation, but low expertise. Lastly, Cluster 8 (e.g., meditation, information searching) showed high mental activity. Even though the clusters of our study had a few similarities with preexisting taxonomies of leisure activities, there was clear distinctiveness between them. Unlike preexisting taxonomies that were created subjectively, we sorted the 78 leisure activities based on objective figures for the 6 dimensions. We also found that some leisure activities that used to belong to the same leisure group were included in different clusters (e.g., field ball sports, net sports) because of differing features. In other words, the results can provide a different perspective on leisure activities research and be helpful for figuring out the various characteristics leisure participants have.
Keywords: leisure, dimensional model, activity, relationship, expertise
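A clustering step like the one above (grouping activities by their six dimension scores) can be sketched with a minimal k-means in NumPy. The three synthetic “activity profiles” and the simple one-per-cluster initialization are invented for illustration, not the study’s data:

```python
import numpy as np

def kmeans(X, k, init, iters=100):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points.
    `init` gives the row indices used as starting centroids."""
    centroids = X[init].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

rng = np.random.default_rng(1)
# Three well-separated synthetic clusters of 6-dimensional "activity scores".
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(20, 6))
               for m in (1.0, 3.0, 5.0)])
labels, _ = kmeans(X, 3, init=[0, 20, 40])
```

A real analysis would also run cluster validity indices over several values of k; this sketch only shows the assignment mechanics.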
Procedia PDF Downloads 311
18559 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang
Abstract:
Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models such as Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique called DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model using DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate the promising potential of this approach for the early detection of diseases in CXR images.
Keywords: CNN, classification, deep learning, GAN, ResNet50
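The reported scores (accuracy 0.961, precision 0.955, recall 0.970, F1 0.963) are standard confusion-matrix metrics. A minimal sketch of how they are computed; the toy label vectors below are made up, not the study’s predictions:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary labeling
    (here, 1 = abnormal CXR, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Toy example: 8 predictions against ground truth (one FN, one FP).
acc, prec, rec, f1 = classification_metrics(
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0, 0, 1])
```

On this toy vector all four metrics come out to 0.75; in the study they are computed over the full CXR test set.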
Procedia PDF Downloads 88
18558 Physico-Chemical Characterization of an Algerian Biomass: Application in the Adsorption of an Organic Pollutant
Authors: Djelloul Addad, Fatiha Belkhadem Mokhtari
Abstract:
The objective of this work is to study the retention of methylene blue (MB) by biomass. The biomass is characterized by X-ray diffraction (XRD) and Fourier-transform infrared spectroscopy (FTIR). Results show that the biomass contains organic and mineral substances. The effect of certain physicochemical parameters (such as pH) on the adsorption of MB is studied. This study shows that an increase in the initial concentration of MB leads to an increase in the adsorbed quantity. The adsorption efficiency of MB decreases with increasing biomass mass. The adsorption kinetics show that adsorption is rapid, with the maximum amount reached after 120 min of contact time. It is noted that pH has no great influence on the adsorption. The isotherms are best modelled by the Langmuir model. The adsorption kinetics follow the pseudo-second-order model. The thermodynamic study shows that the adsorption is spontaneous and exothermic.
Keywords: dyes, adsorption, biomass, methylene blue, Langmuir
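Fitting the Langmuir isotherm mentioned above is commonly done through its linearized form C/q = C/q_max + 1/(K·q_max). A sketch on synthetic equilibrium data; the q_max and K values are invented, not measurements from this work:

```python
import numpy as np

q_max, K = 50.0, 0.2          # invented "true" Langmuir parameters
C = np.array([5, 10, 20, 40, 80, 160], dtype=float)  # equilibrium conc. (mg/L)
q = q_max * K * C / (1 + K * C)                      # adsorbed amount (mg/g)

# Linearized Langmuir: C/q = (1/q_max) * C + 1/(K * q_max)
# so a straight-line fit of C/q against C recovers both parameters.
slope, intercept = np.polyfit(C, C / q, 1)
q_max_fit = 1.0 / slope
K_fit = slope / intercept     # = 1 / (intercept * q_max_fit)
```

On noise-free synthetic data the fit recovers q_max = 50 and K = 0.2 exactly; with real isotherm measurements one would compare this fit against Freundlich and other models, as the abstract describes.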
Procedia PDF Downloads 67
18557 Structural Analysis and Modelling in an Evolving Iron Ore Operation
Authors: Sameh Shahin, Nannang Arrys
Abstract:
Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With growing demand for minerals and increasing extraction costs, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping, and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Utilizing innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection to a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without the consideration of complexity and the capability to capture these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in newly identified dip-slope and rock-mass-controlled sectors of the geotechnical design, rather than a ubiquitous dip-slope sector across the pit. The reward is twofold: 1) providing sectors of rock-mass-controlled design in previously interpreted structurally controlled domains, and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio. A further result is a high-confidence model with structures and geometries that can account for historic slope instabilities in structurally controlled domains where design assumptions failed.
Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation
Procedia PDF Downloads 47
18556 Effect of Climate Change on Runoff in the Upper Mun River Basin, Thailand
Authors: Preeyaphorn Kosa, Thanutch Sukwimolseree
Abstract:
Climate change is a main factor affecting elements of the hydrological cycle, especially runoff. The purpose of this study is therefore to determine the impact of climate change on surface runoff, using a 2008 land-use map and daily weather data from January 1, 1979 to September 30, 2010 as inputs to the SWAT model. SWAT is a continuous-time simulation model that operates on a daily time step at the basin scale. The results show that the effect of temperature change on runoff is not clearly discernible, while rainfall, relative humidity, and evaporation are the parameters to consider for runoff change. Increases in rainfall and relative humidity are accompanied by an increase in runoff. On the other hand, an increase in evaporation leads to a decrease in runoff.
Keywords: climate, runoff, SWAT, upper Mun River basin
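SWAT’s default surface-runoff estimate is the SCS curve-number method, which converts daily rainfall into runoff depth through a single retention parameter. A minimal sketch of that relation; the curve number and storm depth below are arbitrary examples, not values from this basin:

```python
def scs_runoff(precip_mm: float, cn: float) -> float:
    """SCS curve-number runoff. S is the potential retention (mm)
    derived from the curve number CN; runoff Q (mm) is zero until
    rainfall exceeds the initial abstraction 0.2 * S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm + 0.8 * s)

# Example: a 50 mm storm on a catchment with an assumed CN of 75
# yields roughly 9.3 mm of surface runoff.
q = scs_runoff(50.0, 75.0)
```

The limiting cases behave as expected: below the initial abstraction no runoff occurs, and at CN = 100 (zero retention) all rainfall becomes runoff.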
Procedia PDF Downloads 396
18555 Impact of Urbanization on the Performance of Higher Education Institutions
Authors: Chandan Jha, Amit Sachan, Arnab Adhikari, Sayantan Kundu
Abstract:
The purpose of this study is to evaluate the performance of Higher Education Institutions (HEIs) in India and examine the impact of urbanization on their performance. In this study, Data Envelopment Analysis (DEA) has been used, and the authors collected the required performance data from the National Institutional Ranking Framework web portal. The authors evaluated the performance of HEIs using two different DEA models. In the first model, the geographic locations of the institutes are categorized into two categories, i.e., urban vs. non-urban. In the second model, these geographic locations are classified into three categories, i.e., urban, semi-urban, and non-urban. The findings of this study provide several insights into the relationship between the degree of urbanization and the performance of HEIs.
Keywords: DEA, higher education, performance evaluation, urbanization
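The DEA evaluation described above solves, for each institution, a small linear program. A sketch of the input-oriented CCR formulation using SciPy; the single-input/single-output data are invented, not the ranking-framework figures:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(x, y, o):
    """Input-oriented CCR efficiency of DMU o: minimize theta such that
    a non-negative combination of all DMUs uses at most theta * o's
    inputs while producing at least o's outputs.
    x: (n_dmu, n_in) input matrix, y: (n_dmu, n_out) output matrix."""
    n, m = x.shape
    _, s = y.shape
    # Decision variables: [lambda_1 .. lambda_n, theta]
    c = np.r_[np.zeros(n), 1.0]                  # minimize theta
    a_ub, b_ub = [], []
    for i in range(m):                           # sum_j l_j x_ji <= theta * x_oi
        a_ub.append(np.r_[x[:, i], -x[o, i]])
        b_ub.append(0.0)
    for r in range(s):                           # sum_j l_j y_jr >= y_or
        a_ub.append(np.r_[-y[:, r], 0.0])
        b_ub.append(-y[o, r])
    res = linprog(c, A_ub=a_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]

# Invented data: one input (e.g. budget) and one output (e.g. graduates).
x = np.array([[2.0], [4.0], [4.0]])
y = np.array([[4.0], [4.0], [2.0]])
eff = [ccr_efficiency(x, y, o) for o in range(3)]
```

Here the first DMU is efficient (theta = 1) because no combination of peers can produce its output with less input, while the others score 0.5 and 0.25 against that frontier.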
Procedia PDF Downloads 215
18554 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used to build clusters of documents for more than a decade, yet problems remain in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. From an analysis of previous works, the authors note that most models used in determining the number of topics are non-parametric, with topic quality determined using perplexity and coherence measures, and conclude that these are not applicable to this problem. In this paper, we used a parametric method, an extension of the traditional topic model with visual access tendency, to visualize the number of topics (clusters), complement clustering, and choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets covering various topics, both to obtain the optimal number of topics and to measure the quality of clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring the quality of clusters with validity indices.
Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
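The NMF step underlying a model like VNMF factorizes a non-negative document-term matrix into document-topic and topic-term factors. A minimal sketch with scikit-learn; the tiny matrix and the choice of two components are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import NMF

# Tiny document-term count matrix: 4 "tweets", 6 terms.
# Docs 0-1 mostly use terms 0-2, docs 2-3 terms 3-5 (two clear topics).
X = np.array([[3, 2, 4, 0, 0, 0],
              [2, 3, 3, 0, 0, 1],
              [0, 0, 1, 4, 3, 2],
              [0, 1, 0, 3, 4, 3]], dtype=float)

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=1000)
W = model.fit_transform(X)     # document-topic weights
H = model.components_          # topic-term weights

labels = W.argmax(axis=1)      # dominant topic per document
```

Choosing the number of components is exactly the problem the paper addresses; in practice one would sweep n_components and score each factorization with cluster validity indices rather than fix it at two.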
Procedia PDF Downloads 134
18553 Planning the Journey of Unifying Medical Record Numbers in Five Facilities and the Expected Challenges: Case Study in Saudi Arabia
Authors: N. Al Khashan, H. Al Shammari, W. Al Bahli
Abstract:
Patients who are eligible to receive treatment at National Guard Health Affairs (NGHA), Saudi Arabia, typically have four medical record numbers (MRNs), one in each of its geographical areas. More hospitals and primary healthcare facilities in other geographical areas will launch soon, which means more MRNs. When patients hold four MRNs, major drawbacks to quality of care follow, such as the creation of new medical files in different regions for relocated patients and the use of a referral system among regions. Consequently, access to a patient’s medical record from other regions and the interoperability of health information between the four hospitals’ information systems are challenging. Thus, there is a need to unify medical records among these five facilities. As part of the effort to increase the quality of care, a new Hospital Information System (HIS) was implemented in all NGHA facilities by the end of 2016. NGHA’s plan is aligned with the Saudi Arabian national transformation program 2020, whereby 70% of citizens and residents of Saudi Arabia would have a unified medical record number that enables transactions between multiple Electronic Medical Record (EMR) vendors. The aim of the study is to explore the plan, challenges, and barriers of unifying the 4 MRNs into one Enterprise Patient Identifier (EPI) in NGHA hospitals by December 2018. A descriptive study methodology was used. A journey map and a project plan were created to be followed by the project team to ensure a smooth implementation of the EPI, including: 1) an approved project charter, 2) a project management plan, 3) a change management plan, and 4) project milestone dates. Currently, the HIS uses the regional MRN. Therefore, the HIS and all integrated health care systems in all regions will need modification to move from MRN to EPI without interfering with patient care. For now, the NGHA has successfully implemented an EPI connected with the 4 MRNs, which works in the back end in the systems’ database.
Keywords: consumer health, health informatics, hospital information system, universal medical record number
Procedia PDF Downloads 196
18552 The Mental Workload of Intensive Care Unit Nurses in Performing Human-Machine Tasks: A Cross-Sectional Survey
Authors: Yan Yan, Erhong Sun, Lin Peng, Xuchun Ye
Abstract:
Aims: The present study aimed to explore Intensive Care Unit (ICU) nurses’ mental workload (MWL) and the factors associated with it when performing human-machine tasks. Background: A wide range of emerging technologies have penetrated the field of health care, and ICU nurses are facing a dramatic increase in nursing human-machine tasks. However, there is still a paucity of literature reporting on the general MWL of ICU nurses performing human-machine tasks and the associated influencing factors. Methods: A cross-sectional survey was employed. The data were collected from January to February 2021 from 9 tertiary hospitals in 6 provinces (Shanghai, Gansu, Guangdong, Liaoning, Shandong, and Hubei). Two-stage sampling was used to recruit eligible ICU nurses (n = 427). The data were collected with an electronic questionnaire comprising sociodemographic characteristics and measures of MWL, self-efficacy, system usability, and task difficulty. Univariate analysis, two-way analysis of variance (ANOVA), and a linear mixed model were used for data analysis. Results: Overall, the mental workload of ICU nurses performing human-machine tasks was medium (score 52.04 on a 0-100 scale). Among the typical nursing human-machine tasks selected, the MWL of ICU nurses completing first aid and life support tasks (‘using a defibrillator to defibrillate’ and ‘use of a ventilator’) was significantly higher than for the other tasks (p < .001). ICU nurses’ MWL in performing human-machine tasks was also associated with age (p = .001), professional title (p = .002), years of working in the ICU (p < .001), willingness to study emerging technology actively (p = .006), task difficulty (p < .001), and system usability (p < .001). Conclusion: The MWL of ICU nurses is at a moderate level in the context of a rapid increase in nursing human-machine tasks. However, there are significant differences in MWL when performing different types of human-machine tasks, and MWL can be influenced by a combination of factors. Nursing managers need to develop intervention strategies in multiple ways. Implications for practice: Multidimensional approaches are required to perform human-machine tasks better, including enhancing nurses’ willingness to learn emerging technologies actively, developing training strategies that vary with tasks, and identifying obstacles in the process of human-machine system interaction.
Keywords: mental workload, nurse, ICU, human-machine tasks, cross-sectional study, linear mixed model, China
Procedia PDF Downloads 70
18551 Capacity for Care: A Management Model for Increasing Animal Live Release Rates, Reducing Animal Intake and Euthanasia Rates in an Australian Open Admission Animal Shelter
Authors: Ann Enright
Abstract:
More than ever, animal shelters need to identify ways to reduce the number of animals entering shelter facilities and the incidence of euthanasia. Managing animal overpopulation using euthanasia can have detrimental health and emotional consequences for the shelter staff involved. There are also community expectations with moral and financial implications to consider. To achieve the goals of reducing animal intake and the incidence of euthanasia, shelter best practice involves combining programs, procedures and partnerships to increase live release rates (LRR), reduce the incidence of disease, length of stay (LOS) and shelter intake whilst overall remaining financially viable. Analysing daily processes, tracking outcomes and implementing simple strategies enabled shelter staff to more effectively focus their efforts and achieve amazing results. The objective of this retrospective study was to assess the effect of implementing the capacity for care (C4C) management model. Data focusing on the average daily number of animals on site for a two year period (2016 – 2017) was exported from a shelter management system, Customer Logic (CL) Vet to Excel for manipulation and comparison. Following the implementation of C4C practices the average daily number of animals on site was reduced by >50%, (2016 average 103 compared to 2017 average 49), average LOS reduced by 50% from 8 weeks to 4 weeks and incidence of disease reduced from ≥ 70% to less than 2% of the cats on site at the completion of the study. The total number of stray cats entering the shelter due to council contracts reduced by 50% (486 to 248). Improved cat outcomes were attributed to strategies that increased adoptions and reduced euthanasia of poorly socialized cats, including foster programs. To continue to achieve improvements in LRR and LOS, strategies to decrease intake further would be beneficial, for example, targeted sterilisation programs. 
In conclusion, the study highlighted the benefits of using C4C as a management tool, delivering a significant reduction in animal intake and euthanasia with positive emotional, financial and community outcomes.
Keywords: animal welfare, capacity for care, cat, euthanasia, length of stay, managed intake, shelter
Procedia PDF Downloads 139
18550 Moderating and Mediating Effects of Business Model Innovation Barriers during Crises: A Structural Equation Model Tested on German Chemical Start-Ups
Authors: Sarah Mueller-Saegebrecht, André Brendler
Abstract:
Business model innovation (BMI), as an intentional change of an existing business model (BM) or the design of a new BM, is essential to a firm's development in dynamic markets. The relevance of BMI is also evident in the ongoing COVID-19 pandemic, in which start-ups, in particular, are affected by limited access to resources. However, first studies also show that they react faster to the pandemic than established firms. BMI represents a strategy to successfully handle such threatening dynamic changes. The entrepreneurship literature shows how and when firms should utilize BMI in times of crisis and which barriers one can expect during the BMI process. Nevertheless, research merging BMI barriers and crises is still underexplored. Specifically, further knowledge about antecedents and the effect of moderators on the BMI process is necessary for advancing BMI research. The research gap addressed by this study is twofold. First, foundational work exists on how different crises impact the intention to change a BM, yet its analysis lacks the inclusion of barriers. In particular, the entrepreneurship literature lacks knowledge about the individual perception of BMI barriers, which is essential to predict managerial reactions. Moreover, internal BMI barriers have been the focal point of current research, while external BMI barriers remain virtually understudied. Second, to date, BMI research has been based on qualitative methodologies; quantitative work is needed to specify and confirm these qualitative findings. By focusing on the crisis context, this study contributes to the BMI literature by offering a first quantitative attempt to embed BMI barriers into a structural equation model. It measures managers' perception of BMI development and implementation barriers in the BMI process, asking the following research question: How does a manager's perception of BMI barriers influence BMI development and implementation in times of crisis? Two distinct research streams in the economic literature explain how individuals react when perceiving a threat. "Prospect Theory" claims that managers demonstrate risk-seeking tendencies when facing a potential loss, while the opposing "Threat-Rigidity Theory" suggests that managers demonstrate risk-averse behavior when facing a potential loss. This study quantitatively tests which theory best predicts managers' BM reaction to a perceived crisis. From three in-depth interviews in the German chemical industry, 60 past BMIs were identified. The participating start-up managers gave insights into their start-up's strategic and operational functioning. Afterwards, each interviewee described crises that had already affected their BM. The participants explained how they conducted BMI to overcome these crises, which development and implementation barriers they faced, and how severe they perceived them, assessed on a 5-point Likert scale. In contrast to current research, results reveal that a higher perceived threat level of a crisis harms BM experimentation. Managers seem to conduct less BMI in times of crisis, and BMI development barriers dampen this relation. The structural equation model unveils a mediating role of BMI implementation barriers on the link between the intention to change a BM and the concrete BMI implementation. In conclusion, this study confirms the threat-rigidity theory.
Keywords: barrier perception, business model innovation, business model innovation barriers, crises, prospect theory, start-ups, structural equation model, threat-rigidity theory
Procedia PDF Downloads 94
18549 Pollution by Iron of the Quaternary Drinking Water and its Effect on Human Health
Authors: Raafat A. Mandour
Abstract:
Background: Water may be regarded as polluted if it contains substances that render it unsafe for public use. Surface waters, subsoil waters, and shallow water-bearing geologic formations are most subject to pollution due to their closeness to daily human activity. Aim of the work: to determine the distribution of iron levels in drinking water and their relation to blood iron levels in patients suffering from liver diseases. Materials and Methods: For the present study, a total of 71 drinking water samples (surface, well, and tap) were collected, and blood samples were taken from 71 selected inhabitants from different localities who attended different hospitals and suffered from liver diseases. Serum iron level in these patients was estimated using the IRON-B kit of Biocon (Germany) and the 1,10-phenanthroline method. Results: The water samples analyzed for iron were found suitable for drinking, except for two samples in the Mit-Ghamr district showing values higher than the permissible limits of the Egyptian Ministry of Health (EMH) and the World Health Organization (WHO). The comparison between iron concentrations in drinking water and human blood samples shows a positive relationship. Conclusion: Groundwater samples from the polluted areas should receive special attention for treatment.
Keywords: water samples, blood samples, EMH, WHO
Procedia PDF Downloads 468
18548 Microwave-Assisted Chemical Pre-Treatment of Waste Sorghum Leaves: Process Optimization and Development of an Intelligent Model for Determination of Volatile Compound Fractions
Authors: Daneal Rorke, Gueguim Kana
Abstract:
The shift towards renewable energy sources for biofuel production has received increasing attention. However, the use and pre-treatment of lignocellulosic material are hampered by the generation of fermentation inhibitors, which severely impact the feasibility of bioprocesses. This study reports the profiling of all volatile compounds generated during microwave-assisted chemical pre-treatment of sorghum leaves. Furthermore, the optimization of reducing sugar (RS) yield from microwave-assisted acid pre-treatment of sorghum leaves was assessed, giving a coefficient of determination (R2) of 0.76 and an optimal RS yield of 2.74 g FS/g substrate. The development of an intelligent model to predict volatile compound fractions gave R2 values of up to 0.93 for 21 volatile compounds. Sensitivity analysis revealed that furfural and phenol exhibited high sensitivity to acid concentration, alkali concentration, and S:L ratio, while phenol also showed high sensitivity to microwave duration and intensity. These findings illustrate the potential of using an intelligent model to predict the volatile compound fraction profile generated during pre-treatment of sorghum leaves, in order to establish a more robust and efficient pre-treatment regime for biofuel production.
Keywords: artificial neural networks, fermentation inhibitors, lignocellulosic pre-treatment, sorghum leaves
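A sensitivity analysis like the one described (how strongly each pre-treatment input moves a predicted compound fraction) can be sketched as one-at-a-time perturbation of a tiny feed-forward network. The network weights are random and the input names are only labels borrowed from the abstract, not the authors’ trained ANN:

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, linear output."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
# Inputs: [acid conc., alkali conc., S:L ratio, microwave duration, intensity]
w1 = rng.normal(size=(5, 8)); b1 = rng.normal(size=8)
w2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

x0 = np.zeros(5)                      # baseline operating point
base = mlp_forward(x0, w1, b1, w2, b2)

# One-at-a-time sensitivity: nudge each input and record the output shift.
eps = 1e-4
sens = np.array([
    (mlp_forward(x0 + eps * np.eye(5)[i], w1, b1, w2, b2) - base)[0] / eps
    for i in range(5)])
```

The finite-difference sensitivities match the network’s analytic input gradient at the baseline, which is the quantity such analyses rank to decide which pre-treatment parameters matter most.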
Procedia PDF Downloads 248
18547 Interplay of Physical Activity, Hypoglycemia, and Psychological Factors: A Longitudinal Analysis in Diabetic Youth
Authors: Georges Jabbour
Abstract:
Background and aims: This two-year follow-up study explores the long-term sustainability of physical activity (PA) levels in young people with type 1 diabetes, focusing on the relationship between PA, hypoglycemia, and behavioral scores. The literature highlights the importance of PA and its health benefits, as well as the barriers to engaging in PA practices. Studies have shown that individuals with high levels of vigorous physical activity have higher fear of hypoglycemia (FOH) scores and more hypoglycemia episodes. Considering that hypoglycemia episodes are a major barrier to physical activity, and many studies reported a negative association between PA and high FOH scores, it cannot be guaranteed that those experiencing hypoglycemia over a long period will remain active. Building on that, the present work assesses whether high PA levels, despite elevated hypoglycemia risk, can be maintained over time. The study tracks PA levels at one and two years, correlating them with hypoglycemia instances and Fear of Hypoglycemia (FOH) scores. Materials and methods: A self-administered questionnaire was completed by 61 youth with T1D, and their PA was assessed. Hypoglycemia episodes, fear of hypoglycemia scores and HbA1C levels were collected. All assessments were realized at baseline (visit 0: V0), one year (V1) and two years later (V2). For the purpose of the present work, we explore the relationships between PA levels, hypoglycemia episodes, and FOH scores at each time point. We used multiple linear regression to model the mean outcomes for each exposure of interest. Results: Findings indicate no changes in total moderate to vigorous PA (MVPA) and VPA levels among visits, and HbA1c (%) was negatively correlated with the total amount of VPA per day in minutes (β= -0.44; p=0.01, β= -0.37; p=0.04, and β= -0.66; p=0.01 for V0, V1, and V2, respectively). 
Our linear regression model showed a significant negative correlation between VPA and FOH across the visits (β=-0.59, p=0.01; β=-0.44, p=0.01; and β=-0.34, p=0.03 for V0, V1, and V2, respectively), and HbA1c (%) was influenced by both the number of hypoglycemic episodes and the FOH score at V2 (β=0.48, p=0.02 and β=0.38, p=0.03, respectively). Conclusion: The sustainability of PA levels and HbA1c (%) in young individuals with type 1 diabetes is influenced by various factors, including fear of hypoglycemia. Understanding these complex interactions is essential for developing effective interventions to promote sustained PA levels in this population. Our results underline the necessity of a multi-strategic approach to promoting active lifestyles among diabetic youths, one that synergizes PA enhancement with vigilant glucose monitoring and effective FOH management. Keywords: physical activity, hypoglycemia, fear of hypoglycemia, youth
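Regressions of the kind reported above can be sketched with NumPy's least-squares solver. The variables below (FOH score, episode count, VPA minutes) are synthetic placeholders standing in for the study's measures, not its data; the coefficients are invented for illustration.

```python
import numpy as np

# Multiple linear regression on synthetic data mimicking the abstract's
# setup: outcome VPA (min/day) modelled on FOH score and hypo episodes.
rng = np.random.default_rng(0)

n = 200
foh = rng.uniform(20.0, 80.0, n)               # hypothetical FOH scores
episodes = rng.poisson(3.0, n)                 # hypothetical hypo episodes
X = np.column_stack([np.ones(n), foh, episodes])
beta_true = np.array([60.0, -0.4, 1.5])        # VPA falls as FOH rises
vpa = X @ beta_true + rng.normal(0.0, 5.0, n)  # simulated outcome

beta_hat, *_ = np.linalg.lstsq(X, vpa, rcond=None)
```

With enough observations the fitted slope on the FOH column recovers the negative association the study reports.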
Procedia PDF Downloads 26
18546 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with several different models proposed. Errors from modeling, and statistical errors which arise from fitting these models, will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. 
As far as the authors are aware, this represents the first stochastic finite element implementation in composite process modelling. Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models
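The core MCMC step of such a calibration can be illustrated with a plain random-walk Metropolis sampler. The model, data, and flat prior below are synthetic placeholders, not the paper's hyperelastic consolidation model or its hierarchical structure.

```python
import numpy as np

# Minimal Metropolis sampler: learn the posterior of one model parameter
# theta from noisy observations of y = theta * x^2. Illustrative only.
rng = np.random.default_rng(42)

true_theta = 2.0
x = np.linspace(0.0, 1.0, 50)
y = true_theta * x**2 + rng.normal(0.0, 0.1, size=x.size)  # noisy "experiment"

def log_posterior(theta, sigma=0.1):
    # Gaussian likelihood around the model prediction, flat prior on theta
    resid = y - theta * x**2
    return -0.5 * np.sum(resid**2) / sigma**2

samples, theta, lp = [], 0.0, log_posterior(0.0)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.2)          # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])             # drop burn-in
```

In the hierarchical setting of the paper, each experiment would carry its own parameters drawn from a population distribution whose hyperparameters are sampled in the same accept/reject fashion.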
Procedia PDF Downloads 146
18545 Applying and Connecting the Microgrid of Artificial Intelligence in the Form of a Spiral Model to Optimize Renewable Energy Sources
Authors: PR
Abstract:
Renewable energy is a sustainable substitute for fossil fuels, which are depleting and contributing to global warming and greenhouse gas emissions. Renewable energy technologies, including solar, wind, and geothermal, have grown significantly in recent years and play a critical role in meeting energy demands. Artificial Intelligence (AI) could further enhance the benefits of renewable energy systems: the combination of renewable technologies and AI could facilitate the development of smart grids that better manage energy distribution and storage. AI thus has the potential to optimize the efficiency and reliability of renewable energy systems, reduce costs, and improve their overall performance. The conventional methods of connecting smart micro-grids are in series, in parallel, or in a combination of the two; each of these methods has its advantages and disadvantages. In this study, the proposal of connecting microgrids in a spiral manner is investigated. One of the important reasons for choosing this type of structure is the two-way reinforcement and exchange of each inner layer with the outer, upstream layer. With this model, energy can be increased from a small amount to a significant amount based on exponential functions. The geometry used to arrange the smart microgrids is based on nature. This study also provides an overview of the applications of AI algorithms and models, as well as their advantages and challenges, in renewable energy systems. Keywords: artificial intelligence, renewable energy sources, spiral model, optimize
Procedia PDF Downloads 9
18544 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India
Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel
Abstract:
Planning and drawing are important aspects of civil engineering. For testing theories about spatial location and the interaction between land uses and related activities, computer-based urban models are used. The planner’s primary interest is in the creation of 3D models of buildings and in obtaining the terrain surface, so that urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc. can be carried out. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m2, between geographical coordinates North Latitude 23°9’5’’N to 23°10’55’’N and East Longitude 72°42’2’’E to 72°42’16’’E. To develop 3D city models of GIFT city, the base map of the city was collected from the GIFT office. A Differential Global Positioning System (DGPS) was used to collect Ground Control Points (GCPs) in the field. The GCPs are used for the registration of the base map in QGIS. The registered map is projected in the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m x 30 m) grid data is used to generate the terrain of GIFT city, and the Google Satellite Map is placed in the background to get the exact location of the GIFT city. Various plugins and tools in QGIS are used to convert the raster layer of the base map of GIFT city into a 3D model, and the fly-through tool is used for capturing and viewing the entire area of the city in 3D. 
This paper discusses all of these techniques and their usefulness in 3D city model creation from GCPs, the base map, SRTM data and QGIS. Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM
Procedia PDF Downloads 247
18543 Ranking All of the Efficient DMUs in DEA
Authors: Elahe Sarfi, Esmat Noroozi, Farhad Hosseinzadeh Lotfi
Abstract:
One of the important issues in Data Envelopment Analysis (DEA) is the ranking of Decision Making Units (DMUs). In this paper, a method for ranking DMUs is presented in which the weights related to efficient units are chosen so that the other units preserve a certain percentage of their efficiency under those weights. To this end, a model is presented for ranking DMUs on the basis of their super-efficiency while respecting the mentioned weight restrictions. The percentage can be determined by the decision maker; if a specific percentage is unsuitable, a suitable and feasible one can be found for ranking the DMUs accordingly. Furthermore, the presented model is capable of ranking all of the efficient units, including non-extreme efficient ones. Finally, the presented models are applied to two sets of data, and the results are reported. Keywords: data envelopment analysis, efficiency, ranking, weight
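Super-efficiency scores of the kind this paper builds on reduce to small linear programs. The sketch below implements the standard input-oriented Andersen-Petersen super-efficiency model (not the paper's weight-restricted variant) with `scipy.optimize.linprog`, on a tiny synthetic data set.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Andersen-Petersen super-efficiency score for DMU `o`.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). The evaluated DMU is
    excluded from the reference set, so efficient units can score > 1
    and hence be ranked among themselves."""
    n, m = X.shape
    s = Y.shape[1]
    peers = [j for j in range(n) if j != o]
    c = np.zeros(1 + len(peers))
    c[0] = 1.0                                  # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                          # inputs: sum lam*x_j <= theta*x_o
        row = np.zeros(1 + len(peers))
        row[0] = -X[o, i]
        for k, j in enumerate(peers):
            row[1 + k] = X[j, i]
        A_ub.append(row)
        b_ub.append(0.0)
    for r in range(s):                          # outputs: sum lam*y_j >= y_o
        row = np.zeros(1 + len(peers))
        for k, j in enumerate(peers):
            row[1 + k] = -Y[j, r]
        A_ub.append(row)
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + len(peers)), method="highs")
    return res.fun

# toy example: three DMUs, one input, identical output
X = np.array([[1.0], [2.0], [3.0]])
Y = np.array([[1.0], [1.0], [1.0]])
scores = [super_efficiency(X, Y, o) for o in range(3)]
```

For the efficient unit (lowest input) the score exceeds 1; for inefficient units it coincides with the ordinary CCR efficiency.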
Procedia PDF Downloads 457
18542 Habitat Model Review and a Proposed Methodology to Value Economic Trade-Off between Cage Culture and Habitat of an Endemic Species in Lake Maninjau, Indonesia
Authors: Ivana Yuniarti, Iwan Ridwansyah
Abstract:
This paper presents a review of methodologies for habitat assessment and proposes a methodology for assessing the habitat of an endemic fish species in Lake Maninjau, Indonesia, as part of a Ph.D. project. The application is mainly aimed at assessing the trade-off between the economic value of aquaculture and that of the fisheries. The proposed methodology is a generalized linear model (GLM) combined with GIS to assess presence-absence data, or a habitat suitability index (HSI) combined with the analytic hierarchy process (AHP). Further, a habitat replacement cost approach is planned to be used to calculate the habitat value as well as its trade-off with the economic value of aquaculture. The results of the study are expected to provide scientific input to local decision making and a reference for other areas in the country. Keywords: AHP, habitat, GLM, HSI, Maninjau
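A GLM for presence-absence data is typically a logistic regression, which can be fitted by iteratively reweighted least squares (IRLS). The covariate below (depth) and its effect are hypothetical stand-ins, not the Lake Maninjau survey data.

```python
import numpy as np

# Logistic-regression GLM for presence-absence data, fitted by IRLS.
# Synthetic data: presence probability declines with (hypothetical) depth.
rng = np.random.default_rng(1)

n = 2000
depth = rng.uniform(0.0, 10.0, n)              # hypothetical habitat covariate
X = np.column_stack([np.ones(n), depth])       # intercept + covariate
true_beta = np.array([2.0, -0.6])              # presence declines with depth
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)                    # observed presence/absence

beta = np.zeros(2)
for _ in range(25):                            # IRLS / Newton iterations
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
    W = mu * (1.0 - mu)                        # binomial variance weights
    z = eta + (y - mu) / W                     # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```

The fitted `beta` then maps, cell by cell in a GIS layer, to a predicted probability of presence.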
Procedia PDF Downloads 152
18541 Predicting Survival in Cancer: How Cox Regression Model Compares to Artificial Neural Networks?
Authors: Dalia Rimawi, Walid Salameh, Amal Al-Omari, Hadeel AbdelKhaleq
Abstract:
Prediction of the survival time of patients with cancer is a core factor that influences oncologists’ decisions in different aspects, such as offered treatment plans, patients’ quality of life, and medication development. For a long time, proportional hazards Cox regression (ph. Cox) was, and still is, the best-known statistical method for predicting survival outcomes. But with the revolution of data science, new prediction models have been employed and proved to be more flexible and more accurate in this type of study. The artificial neural network is one of those models suited to time-to-event prediction. In this study, we aim to compare ph. Cox regression with the artificial neural network method in terms of data handling and the accuracy of each model. Keywords: Cox regression, neural networks, survival, cancer
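The classical side of this comparison, Cox regression, is fitted by maximising the partial likelihood. The sketch below does this directly on simulated right-censored data (one covariate, Breslow-style risk sets), not on the study's clinical dataset.

```python
import numpy as np
from scipy.optimize import minimize

# Cox proportional-hazards fit by maximising the partial likelihood on
# synthetic survival data. Illustrative sketch, not the study's data.
rng = np.random.default_rng(7)

n = 400
x = rng.normal(size=n)                              # one covariate
true_beta = 0.8                                     # higher x -> higher hazard
t = rng.exponential(1.0 / np.exp(true_beta * x))    # latent event times
c = rng.exponential(2.0, n)                         # censoring times
time = np.minimum(t, c)
event = (t <= c)                                    # True = event observed

def neg_log_partial_lik(beta):
    eta = beta[0] * x
    nll = 0.0
    for i in np.where(event)[0]:
        risk = time >= time[i]                      # risk set at event time
        nll -= eta[i] - np.log(np.sum(np.exp(eta[risk])))
    return nll

res = minimize(neg_log_partial_lik, x0=[0.0], method="BFGS")
beta_hat = res.x[0]
```

A neural-network alternative would replace the linear predictor `beta * x` with a learned nonlinear function, which is exactly the flexibility the abstract refers to.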
Procedia PDF Downloads 200
18540 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with a covariate, where covariates are incorporated into the Weibull model. Under this regression model, the parameters of the covariate, the shape parameter, the survival function, and the hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to assess the performance of the Weibull regression estimates. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter. Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
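The censored Weibull likelihood at the heart of this setup is easy to write down: observed events contribute the log density, censored observations contribute the log survival function. The sketch below estimates shape and scale on simulated data, without the covariate for brevity (the regression version would set the scale to `exp(x @ beta)`); all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum likelihood for a Weibull model under right censoring,
# on simulated data. Shape k and scale lam estimated jointly.
rng = np.random.default_rng(3)

true_k, true_lam = 1.5, 2.0
n = 2000
t = true_lam * rng.weibull(true_k, n)          # latent event times
c = rng.uniform(0.0, 6.0, n)                   # right-censoring times
time = np.minimum(t, c)
delta = (t <= c).astype(float)                 # 1 = event observed

def neg_loglik(params):
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = (time / lam) ** k
    # events contribute log f(t); censored points contribute log S(t) = -z
    log_f = np.log(k / lam) + (k - 1.0) * np.log(time / lam) - z
    return -np.sum(delta * log_f - (1.0 - delta) * z)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
k_hat, lam_hat = res.x
```

From `k_hat` and `lam_hat`, the survival function `exp(-(t/lam)^k)` and hazard rate `(k/lam)(t/lam)^(k-1)` follow directly, as in the abstract.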
Procedia PDF Downloads 441
18539 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity
Authors: M. O. Durojaye, J. T. Agee
Abstract:
This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and to solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited to the electrical conductivity. The numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions. Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines
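The method of lines can be shown on a toy thermistor-type equation: discretise space by finite differences and hand the resulting ODE system to a stiff integrator. The equation, the tanh switch, and all coefficients below are illustrative simplifications, not the paper's nondimensional model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for a toy 1-D thermistor-type equation
#   u_t = u_xx + sigma(u),  sigma(u) = 0.5 * (1 - tanh(5 * (u - 1))),
# with u = 0 held at both ends. The tanh term mimics a PTC conductivity
# whose Joule heating shuts off above a threshold temperature.
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def sigma(u):
    return 0.5 * (1.0 - np.tanh(5.0 * (u - 1.0)))

def rhs(t, u):
    du = np.zeros_like(u)
    # centred second difference on the interior; boundary values stay 0
    du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + sigma(u[1:-1])
    return du

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(N), method="BDF", rtol=1e-6)
u_final = sol.y[:, -1]
```

By t = 5 the profile has settled onto the steady state, which for this choice of source is close to the parabolic profile u ≈ x(1-x)/2 with maximum about 0.125.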
Procedia PDF Downloads 322
18538 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt measures on a large scale aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be embodied in pavement management, so planning methods for these measures are increasingly in demand. Deterioration of the layers around the road surface, such as the surface course and binder course, occurs in the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they are supposed to. As expressways today carry serious time-related deterioration deriving from the long period they have been in use, it is obvious that the idea of repairing layers deep in pavements, such as the base course and subgrade, must be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements. 
One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers around the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by estimating a deterioration hazard model with continuous indexes. This model avoids the loss of information that occurs when rating categories are set in a Markov deterioration hazard model to evaluate degrees of deterioration in roadbeds and subgrades. By portraying continuous indexes, the model can predict deterioration in each layer of a pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point in time and allows a risk control level to be set arbitrarily, it is expected that this study will provide knowledge, such as life cycle cost, that informs decisions on where and when to perform maintenance. Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
18537 Finite Element and Experimental Investigation of Ductile Crack Growth of Surface Cracks
Authors: Osama A. Terfas, Abdelhakim A. Hameda, Abdusalam A. Alktiwi
Abstract:
An investigation of ductile crack growth of shallow semi-elliptical surface cracks with a/w = 0.2 and a/c = 0.33 under bending was carried out, where a is the crack depth, w is the plate thickness, and c is the crack length at the surface. Finite element analyses and experiments were performed, and the crack growth model was verified against the experimental data. The results showed that the initial crack shape was no longer maintained as the crack developed under ductile tearing. The maximum growth, at the deepest point in the early stages, stopped when the crack depth reached half the thickness, after which growth occurred beneath the surface. Excellent agreement in the crack shape patterns was observed between the experiments and the crack growth model. Keywords: crack growth, ductile tearing, mean stress, surface cracks
Procedia PDF Downloads 488
18536 Phase Behavior Modelling of Libyan Near-Critical Gas-Condensate Field
Authors: M. Khazam, M. Altawil, A. Eljabri
Abstract:
Fluid properties in states near the vapor-liquid critical region are the most difficult to measure and to predict with EoS models. The principal difficulty is that near-critical property variations do not follow the same mathematics as conditions far away from the critical region. The Libyan NC98 field in the Sirte basin is a typical example of a near-critical fluid, characterized by a high initial condensate gas ratio (CGR) greater than 160 bbl/MMscf and a maximum liquid drop-out of 25%. The objective of this paper is to model the NC98 phase behavior with a proper selection of EoS parameters and also to model reservoir depletion versus a gas cycling option using measured PVT data and EoS models. The outcomes of our study revealed that, for an accurate gas and condensate recovery forecast during depletion, the most important PVT data to match are the gas-phase Z-factor and the C7+ fraction as functions of pressure. A reasonable match, within -3% error, was achieved for ultimate condensate recovery at an abandonment pressure of 1500 psia. The smooth transition from gas condensate to volatile oil was fairly well simulated by the tuned PR-EoS. The predicted GOC was at approximately 14,380 ftss. The optimum gas cycling scheme, in order to maximize condensate recovery, should not be performed at pressures below 5700 psia. The contribution of condensate vaporization for such a field is marginal, within 8% to 14%, compared to gas-gas miscible displacement. Therefore, if a gas recycling scheme is to be considered for this field, it is recommended to start it at an early stage of field development. Keywords: EoS models, gas-condensate, gas cycling, near critical fluid
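The single-component building block behind a tuned PR-EoS is the cubic in the Z-factor. The sketch below solves it for pure methane at mild conditions using the standard Peng-Robinson (1976) correlations; a real gas-condensate study would extend this to a multi-component mixture with mixing rules and tuned C7+ properties.

```python
import numpy as np

# Peng-Robinson Z-factor for a pure component (illustrative, SI units).
R = 8.314  # J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # cubic in Z: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B,
              -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()          # largest real root = vapour-phase Z

# methane near ambient conditions: Z should be close to, but below, 1
z = pr_z_factor(T=300.0, P=1.0e5, Tc=190.56, Pc=4.599e6, omega=0.011)
```

Near the critical point the three real roots of this cubic approach one another, which is precisely why near-critical fluids such as NC98 are hard for EoS models to match without careful tuning.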
Procedia PDF Downloads 318
18535 Spatial Distribution of Ambient BTEX Concentrations at an International Airport in South Africa
Authors: Raeesa Moolla, Ryan S. Johnson
Abstract:
Air travel, and the use of airports, has experienced proliferative growth in the past few decades, with the concomitant release of air pollutants. Air pollution needs to be monitored because of the known relationship between exposure to air pollutants and increased adverse effects on human health. This study monitored a group of volatile organic compounds (VOCs), specifically BTEX (viz. benzene, toluene, ethylbenzene and xylenes), as many are detrimental to human health. Through the use of passive sampling methods, the spatial variability of BTEX within an international airport was investigated in order to determine ‘hotspots’ where occupational exposure to BTEX may be intensified. The passive sampling campaign revealed that BTEXtotal concentrations ranged between 12.95 and 124.04 µg m-3. Furthermore, BTEX concentrations were dispersed heterogeneously within the airport. Due to the slow wind speeds recorded (1.13 m s-1), the hotspots were located close to their main BTEX sources, with the main hotspot located over the main apron of the airport. Employees working in this area may be chronically exposed to these emissions, which could be detrimental to their health. Keywords: air pollution, air quality, hotspot monitoring, volatile organic compounds
Procedia PDF Downloads 172
18534 Use of Technology Based Intervention for Continuous Professional Development of Teachers in Pakistan
Authors: Rabia Aslam
Abstract:
Overwhelming evidence from around the world suggests that high-quality teacher professional development facilitates the improvement of teaching practices, which in turn can improve student learning outcomes. The new Continuous Professional Development (CPD) model for primary school teachers in Punjab uses a blended approach in which pedagogical content knowledge is delivered through technology (high-quality instructional videos and lesson plans delivered to school tablets or mobile phones) with face-to-face support by Assistant Education Officers (AEOs). The model also develops Communities of Practice, operationalized through formal meetings led by the AEOs and informal interactions through social media groups, to provide opportunities for teachers to engage with each other, share their ideas, reflect on learning, and come up with solutions to issues they experience. Using Kirkpatrick’s four-level learning evaluation model, this paper investigates how school tablets and teachers’ mobile phones may act as transformational cultural tools to potentially expand perceptions of, and access to, teaching and learning resources, and explores some of the affordances of social media (Facebook and WhatsApp groups) for learning in an informal context. The results will be used to inform policy-level decisions on what shape the CPD of all teachers could take in the context of a developing country like Pakistan. Keywords: CPD, teaching & learning, blended learning, learning technologies
Procedia PDF Downloads 84
18533 An Example of University Research Driving University-Industry Collaboration
Authors: Stephen E. Cross, Donald P. McConnell
Abstract:
In the past decade, market pressures and decreasing U.S. federal budgets for science and technology have led to a fundamental change in expectations for corporate investments in innovation. The trend toward significant, sustained corporate research collaboration with major academic centres has called for rethinking the balance between academic and corporate roles in these relationships. The Georgia Institute of Technology has developed a system-focused strategy for transformational research centred on grand challenges in areas of importance both to faculty and to industry collaborators. A model of an innovation ecosystem is used to guide both the research and the university-industry collaboration. The paper describes the strategy, the model, and the results to date, including the benefits to both university research and industry collaboration. Key lessons learned are presented based on this experience. Keywords: ecosystem, industry collaboration, innovation, research strategy
Procedia PDF Downloads 420