Search results for: TensorFlow probability
122 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene has emerged as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages of using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar and N2) treatment. The plasma treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few layer thick sheets of the nanocomposite. The highest current density of ~800 µA/cm2 is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states appear in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 (4.8 V/µm) nanosheets. The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS2/RGO composites arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
Keywords: graphene, layered material, field emission, plasma, doping
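The field enhancement factor β quoted above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. A minimal sketch of that extraction, assuming a hypothetical work function of 4.5 eV and illustrative current-field data rather than the measured values from this work:

```python
import numpy as np

# Fowler-Nordheim (FN) analysis sketch: estimate the field enhancement
# factor beta from the slope of an FN plot, ln(J/E^2) vs 1/E.
# Hypothetical data: E in V/um, J in mA/cm^2 (only the 1/E axis units,
# um/V, matter for the constant B below).
E = np.array([0.20, 0.22, 0.25, 0.28, 0.30, 0.33, 0.35])   # applied field, V/um
J = np.array([0.01, 0.03, 0.09, 0.25, 0.55, 1.10, 1.89])   # current density, mA/cm^2

x = 1.0 / E                      # 1/E, um/V
y = np.log(J / E**2)             # ln(J/E^2)
slope, intercept = np.polyfit(x, y, 1)

# FN slope = -B * phi^(3/2) / beta, with B ~ 6830 V um^-1 eV^-3/2
B = 6830.0
phi = 4.5                        # assumed work function of the emitter, eV
beta = -B * phi**1.5 / slope
print(f"FN slope = {slope:.1f} um/V, field enhancement factor beta ~ {beta:.0f}")
```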
Procedia PDF Downloads 361
121 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa
Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean value shows the dominance of fine sand-size particles, which points to relatively low energy conditions of deposition. In addition, the LDF results point to low energy conditions during the deposition of the Prince Albert, Collingham and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high energy conditions. The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension and rolling/saltation, and graded suspension. Furthermore, the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation is the major process of transportation, although suspension and traction also played some role during deposition of the sediments. The sediments were mainly in saltation and suspension before being deposited.
Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa
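The graphic statistics referred to here (mean, sorting, skewness, kurtosis) are typically the Folk and Ward (1957) measures computed from percentiles of the cumulative grain-size curve in phi units. A small sketch using a hypothetical cumulative curve, not the actual Ecca Group data:

```python
import numpy as np

def folk_ward(phi, cum_wt_pct):
    """Folk & Ward graphic statistics from a cumulative grain-size curve.
    phi: grain sizes in phi units (ascending); cum_wt_pct: cumulative weight %."""
    p = lambda q: np.interp(q, cum_wt_pct, phi)    # phi value at a given percentile
    p5, p16, p25, p50, p75, p84, p95 = (p(q) for q in (5, 16, 25, 50, 75, 84, 95))
    mean = (p16 + p50 + p84) / 3.0                              # graphic mean
    sorting = (p84 - p16) / 4.0 + (p95 - p5) / 6.6              # inclusive std. dev.
    skew = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))          # inclusive skewness
    kurtosis = (p95 - p5) / (2.44 * (p75 - p25))                # graphic kurtosis
    return mean, sorting, skew, kurtosis

# Hypothetical cumulative curve for a very fine, moderately well sorted sandstone
phi = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
cum = np.array([2.0, 10.0, 30.0, 55.0, 80.0, 95.0, 100.0])
mz, sigma, sk, kg = folk_ward(phi, cum)
print(f"mean={mz:.2f} phi, sorting={sigma:.2f}, skewness={sk:.2f}, kurtosis={kg:.2f}")
```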
Procedia PDF Downloads 483
120 Social Ties and the Prevalence of Single Chronic Morbidity and Multimorbidity among the Elderly Population in Selected States of India
Authors: Sree Sanyal
Abstract:
Research on ageing often highlights the age-related health dimension more than the psycho-social characteristics of the elderly, which also influence and challenge health outcomes. Multimorbidity is defined as a person having more than one chronic non-communicable disease, and its prevalence increases with ageing. The study aims to evaluate the influence of social ties on the self-reported prevalence of multimorbidity (selected chronic non-communicable diseases) among the elderly population in selected states of India. The data are drawn from Building Knowledge Base on Population Ageing in India (BKPAI), collected in 2011, covering self-reported chronic non-communicable diseases such as arthritis, heart disease, diabetes, lung disease with asthma, hypertension, cataract, depression, dementia, Alzheimer's disease, and cancer. The data on the above diseases were taken together and categorized as: 'no disease', 'one disease' and 'multimorbidity'. The predictor variables were demographic and socio-economic characteristics, residential type, and social ties, the latter including social support, social engagement, perceived support, connectedness, and the perceived importance of the elderly. Predicted probabilities from multiple logistic regression were used to relate the background characteristics of the old to chronic morbidity and multimorbidity. The findings suggest that 24.35% of the elderly suffer from multimorbidity. With reference to 'no disease', and according to the socio-economic characteristics of the old, the female oldest old (80+) belonging to other castes and religions, widowed, with no formal education, who have ever worked in their life, from the second wealth quintile, and from rural Maharashtra are more prone to 'one disease'. From the social ties background, the elderly who perceive that they are important to the family, whose decision-making status changed after getting older, who prefer to stay with son and spouse only, and who are satisfied with the communication from their children are less likely to have single morbidity, and the results are significant. Again, with respect to 'no disease', the female oldest old (80+) who belong to other castes, are Christian, widowed, have less than 5 years of education completed, have ever worked, are from the highest wealth quintile, and reside in urban Kerala are more associated with multimorbidity. The elderly population who are more socially connected through family visits and public gatherings, who get support in decision making, and who prefer to spend their later years with son and spouse only but stay alone show a lower prevalence of multimorbidity. In conclusion, received and perceived social integration and support from the surrounding neighborhood in old age, together with awareness of one's own needs in life, facilitate better health and wellbeing of the elderly population in selected states of India.
Keywords: morbidity, multi-morbidity, prevalence, social ties
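The three-category outcome described above (no disease, one disease, multimorbidity) lends itself to a multinomial logit. A minimal sketch of such a model on simulated stand-in data; the variable names are illustrative, not BKPAI fields:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Multinomial logit sketch for a 3-category morbidity outcome:
# 0 = no disease, 1 = one disease, 2 = multimorbidity.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age80plus":      rng.integers(0, 2, n),
    "female":         rng.integers(0, 2, n),
    "lives_with_son": rng.integers(0, 2, n),
    "wealth_q":       rng.integers(1, 6, n),      # wealth quintile 1..5
})
# Purely illustrative outcome with roughly the reported marginal shares
df["morbidity"] = rng.choice([0, 1, 2], size=n, p=[0.45, 0.31, 0.24])

X = sm.add_constant(df[["age80plus", "female", "lives_with_son", "wealth_q"]])
model = sm.MNLogit(df["morbidity"], X).fit(disp=False)
print(model.summary())

# Predicted probability of each category for every respondent
probs = model.predict(X)
print(probs.mean(axis=0))    # average predicted probability per category
```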
Procedia PDF Downloads 122
119 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
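The register size and the iterative threshold-lowering loop described above can be sketched classically. The outline below only counts the expected oracle calls per threshold, roughly π/4 √(S/M) for M marked states in a search space of size S; the graph size and costs are hypothetical, and the Grover search itself is not simulated:

```python
import math
import random

# Classical back-of-the-envelope sketch: N*log2(K) qubits encode one edge
# choice per node; Grover search is repeated with a decreasing cost threshold
# so that the decision oracle becomes a minimizer.
N, K = 5, 4                                   # nodes, bounded degree (hypothetical)
n_qubits = N * math.ceil(math.log2(K))
print(f"register size: {n_qubits} qubits")

def grover_iterations(space_size, n_marked):
    """Optimal number of oracle calls, roughly (pi/4) * sqrt(S/M)."""
    return max(1, math.floor(math.pi / 4 * math.sqrt(space_size / n_marked)))

random.seed(1)
S = 2 ** n_qubits
costs = {x: random.randint(50, 500) for x in range(S)}   # stand-in cycle costs

threshold = max(costs.values()) + 1
while True:
    marked = [x for x, c in costs.items() if c < threshold]
    if not marked:                            # nothing cheaper: threshold is the minimum
        break
    print(f"threshold {threshold}: {len(marked)} marked states, "
          f"~{grover_iterations(S, len(marked))} Grover iterations")
    threshold = costs[random.choice(marked)]  # 'measure' one marked state, lower the bar
print(f"minimum cost found: {threshold}")
```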
Procedia PDF Downloads 192
118 Slope Instability Study Using Kinematic Analysis and Lineament Density Mapping along a Part of National Highway 58, Uttarakhand, India
Authors: Kush Kumar, Varun Joshi
Abstract:
Slope instability is a major problem of mountainous regions, especially in parts of the Indian Himalayan Region (IHR). The on-going tectonics, rugged topography, steep slopes, heavy precipitation, toe erosion, structural discontinuities, and deformation are the main triggering factors of landslides in this region. Besides the loss of life, property, and infrastructure caused by a landslide, it also results in various environmental problems, i.e., degradation of slopes and land use, deterioration of river quality by increased sediment, and loss of well-established vegetation. The Indian state of Uttarakhand, being a part of the active Himalayas, also faces numerous cases of slope instability. Therefore, the vulnerable landslide zones need to be delineated to safeguard against such losses. The study area lies in the Garhwal and Tehri-Garhwal districts of Uttarakhand state along National Highway 58, which is a strategic road and also connects the four important sacred pilgrimage sites (Char Dham) of India. The lithology of these areas mainly comprises sandstone and quartzite of the Chakrata Formation and phyllites of the Chandpur Formation. The greywacke and sandstone of the Saknidhar Formation dip northerly and are overlain by phyllite of the Chandpur Formation. The present research incorporates lineament density mapping using remote sensing satellite data, supplemented by a detailed field study via kinematic analysis. The DEM data of ALOS PALSAR (12.5 m resolution) were resampled to 10 m resolution and used for preparing various thematic maps such as slope, aspect, drainage, hill shade, lineament, and lineament density using ArcGIS 10.6 software. Furthermore, detailed field mapping, including structural and geomorphological mapping, is integrated for kinematic analysis of the slopes using the Dips 6.0 software of Rocscience. The kinematic analysis of 40 locations was carried out, among which 15 show planar failure, five show wedge failure, and the remaining 20 show no failure. The lineament density map is overlaid with the locations of unstable slopes inferred from kinematic analysis to relate the field information to the remote-sensing-derived information, and significant agreement was observed. With the help of the present study, location-specific mitigation measures could be suggested. The mitigation measures would help minimize the probability of slope instability, especially during the rainy season, and reduce disruption to road traffic.
Keywords: Indian Himalayan Region, kinematic analysis, lineament density mapping, slope instability
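The planar-failure cases identified in the kinematic analysis follow the usual daylighting criteria: the discontinuity dips roughly towards the slope face, less steeply than the face but more steeply than the friction angle. A simplified, hypothetical Markland-type check of this kind, not the actual Dips workflow or NH-58 data:

```python
# Simplified kinematic check for planar failure (Markland-type test).
# Angles in degrees; example slope and joint values are hypothetical.
def planar_failure_possible(slope_dip, slope_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle, lateral_limit=20.0):
    # (1) joint must dip roughly in the same direction as the slope face
    dir_diff = abs((joint_dip_dir - slope_dip_dir + 180) % 360 - 180)
    same_direction = dir_diff <= lateral_limit
    # (2) joint must daylight in the face: dip less steep than the slope face
    # (3) joint dip must exceed the friction angle for sliding to be possible
    return same_direction and friction_angle < joint_dip < slope_dip

# Hypothetical example: a 60/045 slope face, a joint set at 35/050, phi = 30 deg
print(planar_failure_possible(slope_dip=60, slope_dip_dir=45,
                              joint_dip=35, joint_dip_dir=50,
                              friction_angle=30))   # -> True
```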
Procedia PDF Downloads 139
117 Family Cohesion, Social Networks, and Cultural Differences in Latino and Asian American Help Seeking Behaviors
Authors: Eileen Y. Wong, Katherine Jin, Anat Talmon
Abstract:
Background: Help-seeking behaviors are highly contingent on socio-cultural factors such as ethnicity. Both Latino and Asian Americans underutilize mental health services compared to their White American counterparts. This difference may be related to the composition of one’s social support system, which includes family cohesion and social networks. Previous studies have found that Latino families are characterized by higher levels of family cohesion and social support, and that Asian American families with greater family cohesion exhibit lower levels of help-seeking behaviors. While both are broadly considered collectivist communities, within-culture variability is also significant. Therefore, this study aims to investigate the relationship between help-seeking behaviors in the two cultures and levels of family cohesion and strength of social network. We also consider such relationships in light of previous traumatic events and diagnoses, particularly post-traumatic stress disorder (PTSD), to understand whether clinically diagnosed individuals differ in their strength of network and help-seeking behaviors. Method: An adult sample (N = 2,990) from the National Latino and Asian American Study (NLAAS) provided data on participants’ social network, family cohesion, likelihood of seeking professional help, and DSM-IV diagnoses. T-tests compared Latino American (n = 1,576) and Asian American respondents (n = 1,414) in strength of social network, level of family cohesion, and likelihood of seeking professional help. Linear regression models were used to identify the probability of help-seeking behavior based on ethnicity, PTSD diagnosis, and strength of social network. Results: Help-seeking behavior was significantly associated with family cohesion and strength of social network. Higher frequency of expressing one’s feelings with family significantly predicted lower levels of help-seeking behaviors (β = -.072, p = .017), while higher frequency of spending free time with family significantly predicted higher levels of help-seeking behaviors (β = .129, p = .002) in the Asian American sample. Subjective importance of family relations compared to that of one’s peers also significantly predicted higher levels of help-seeking behaviors (β = .095, p = .011) in the Asian American sample. Frequency of sharing one’s problems with relatives significantly predicted higher levels of help-seeking behaviors (β = .113, p < .01) in the Latino American sample. A PTSD diagnosis did not have any significant moderating effect. Conclusion: Considering the underutilization of mental health services in Latino and Asian American minority groups, it is crucial to understand ways in which help-seeking behavior can be encouraged. Our findings suggest that different dimensions within family cohesion and social networks have differential impacts on help-seeking behavior. Given the multifaceted nature of family cohesion and its cultural relevance, the implications of our findings for theory and practice will be discussed.
Keywords: family cohesion, social networks, Asian American, Latino American, help-seeking behavior
Procedia PDF Downloads 70
116 A Comparison between Five Indices of Overweight and Their Association with Myocardial Infarction and Death, 28-Year Follow-Up of 1000 Middle-Aged Swedish Employed Men
Authors: Lennart Dimberg, Lala Joulha Ian
Abstract:
Introduction: Overweight (BMI 25-30) and obesity (BMI 30+) have consistently been associated with cardiovascular (CV) risk and death since the Framingham Heart Study in 1948, and BMI was included in the original Framingham risk score (FRS). Background: Myocardial infarction (MI) poses a serious threat to the patient's life. In addition to BMI, several other indices of overweight have been presented and argued to replace FRS as more relevant measures of CV risk. These indices include waist circumference (WC), waist/hip ratio (WHR), sagittal abdominal diameter (SAD), and sagittal abdominal diameter to height (SADHtR). Specific research question: The research question of this study is to evaluate the interrelationship between the various body measurements, BMI, WC, WHR, SAD, and SADHtR, and to determine which measurement is most strongly associated with MI and death. Methods: In 1993, 1,000 middle-aged, Caucasian, randomly selected working men of the Swedish Volvo-Renault cohort were surveyed at a nurse-led health examination with a questionnaire, EKG, laboratory tests, blood pressure, height, weight, waist, and sagittal abdominal diameter measurements. Outcome data on myocardial infarction over 28 years come from Swedeheart (the Swedish national myocardial infarction registry) and the Swedish death registry. The Aalen-Johansen and Kaplan-Meier methods were used to estimate the cumulative incidences of MI and death. Multiple logistic regression analyses were conducted to compare BMI with the other four body measurements. The risk for the various measures of obesity was calculated with outcomes of accumulated first-time myocardial infarction and death as odds ratios (OR) in quartiles. The ORs between the 4th and the 1st quartile of each measure were calculated to estimate the association between the body measurement variables and the probability of cumulative incidence of myocardial infarction (MI) over time. Double-sided p values below 0.05 were considered statistically significant. Unadjusted odds ratios were calculated for the obesity indicators, MI, and death. Adjustments for age, diabetes, SBP, the ratio of total cholesterol/HDL-C, and blue/white collar status were performed. Results: Out of 1,000 people, 959 subjects had full information on the five different body measurements. Of those, 90 participants had a first MI, and 194 persons died. The study showed a high and significant correlation among the five different body measurements, and all were associated with CVD risk factors. All body measurements were significantly associated with MI, with the highest OR (3.6) seen for SADHtR and WC. After adjustment, all but SADHtR remained significant, with weaker ORs. As for all-cause mortality, WHR (OR=1.7), SAD (OR=1.9), and SADHtR (OR=1.6) were significantly associated, but not WC and BMI. However, after adjustment, only WHR and SAD remained significantly associated with death, with attenuated ORs.
Keywords: BMI, death, epidemiology, myocardial infarction, risk factor, sagittal abdominal diameter, sagittal abdominal diameter to height, waist circumference, waist-hip ratio
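The quartile comparison in the methods (ORs between the 4th and 1st quartiles of each index) can be illustrated with a short sketch on simulated data; the values below are stand-ins, not the Volvo-Renault measurements, and a 0.5 continuity correction is added only to guard against empty cells:

```python
import numpy as np
import pandas as pd

# Unadjusted OR for MI between the 4th and 1st quartiles of one obesity index
# (here SAD, sagittal abdominal diameter), on simulated stand-in data.
rng = np.random.default_rng(42)
n = 959
sad = rng.normal(20.5, 2.5, n)                          # cm, hypothetical
p_mi = 1 / (1 + np.exp(-(-3.4 + 0.12 * (sad - 20))))    # MI risk rising with SAD
mi = rng.binomial(1, p_mi)

df = pd.DataFrame({"sad": sad, "mi": mi})
df["quartile"] = pd.qcut(df["sad"], 4, labels=[1, 2, 3, 4])

q1, q4 = df[df.quartile == 1], df[df.quartile == 4]
a, b = q4.mi.sum(), (q4.mi == 0).sum()   # events / non-events in Q4
c, d = q1.mi.sum(), (q1.mi == 0).sum()   # events / non-events in Q1
odds_ratio = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
print(f"unadjusted OR (Q4 vs Q1 of SAD) = {odds_ratio:.2f}")
```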
Procedia PDF Downloads 98
115 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs), and on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further studied in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: - with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in other organs (kidneys, spleen, testes and brain), the lead content was lower than in animals of the control group; - studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs; for other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; - with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, it can be assumed that there are at least three scenarios of the combined effects of ENMs and chemical contaminants on the body: - ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or less accessible) for absorption; in this case, it can be expected that the toxicity of both ENMs and contaminants will decrease; - the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants and thus increase their penetration into the internal environment of the body, thereby increasing the toxicity of contaminants; - ENMs and contaminants do not interact with each other in any way, so the toxicity of each of them is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
Keywords: absorption, cadmium, engineered nanomaterials, lead
Procedia PDF Downloads 87
114 The Social Structuring of Mate Selection: Assortative Marriage Patterns in the Israeli Jewish Population
Authors: Naava Dihi, Jon Anson
Abstract:
Love, so it appears, is not socially blind. We show that partner selection is socially constrained and that the freedom to choose is limited by at least two major factors or capitals: on the one hand, material resources and education, locating the partners on a scale of personal achievement and economic independence; on the other, the partners' ascriptive belonging to particular ethnic, or origin, groups, differentiated by the groups' social prestige as well as by their culture, history and even physical characteristics. However, the relative importance of achievement and ascriptive factors, as well as the overlap between them, varies from society to society, depending on the society's structure and the factors shaping it. Israeli social structure has been shaped by the waves of new immigrants who arrived over the years. The timing of their arrival, their patterns of physical settlement and their occupational inclusion or exclusion have together created a mosaic of social groups whose principal common feature has been the country of origin from which they arrived. The analysis of marriage patterns helps illuminate the social meanings of the groups and their borders. To the extent that ethnic group membership has meaning for individuals and influences their life choices, the ascriptive factor will gain in importance relative to the achievement factor in their choice of marriage partner. In this research, we examine Jewish Israeli marriage patterns by looking at the marriage choices of 5,041 women aged 15 to 49 who were single at the census in 1983 and who were married at the time of the 1995 census, 12 years later. The database for this study was a file linking respondents from the 1983 and the 1995 censuses. In both cases, 5 percent of households were randomly chosen, so that our sample includes about 4 percent of women in Israel in 1983. We present three basic analyses: (1) who was still single in 1983, using personal and household data from the 1983 census (binomial model); (2) who married between 1983 and 1995, using personal and household data from the 1983 census (binomial model); (3) what were the personal characteristics of the women's partners in 1995, using data from the 1995 census (loglinear model). We show (i) that material and cultural capital both operate to delay marriage and to increase the probability of remaining single; and (ii) that while there is a clear association between ethnic group membership and education, endogamy and homogamy both operate as separate forces which constrain (but do not determine) the choice of marriage partner, and thus both serve to reproduce the current pattern of relationships, as well as identifying patterns of proximity and distance between the different groups.
Keywords: Israel, nuptiality, ascription, achievement
Procedia PDF Downloads 117
113 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
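The core idea of reconstructing subfilter-scale stress by (approximately) inverting the filter can be shown in one dimension. The sketch below uses a spectral Gaussian filter and a few van Cittert iterations as a stand-in for the exact inverse used by the DDM; the field and parameters are purely illustrative:

```python
import numpy as np

# 1D sketch of SFS stress reconstruction by deconvolution. The DDM inverts the
# filter directly; here the inverse is approximated with van Cittert iterations
# u* <- u* + (u_bar - G*u*), on a synthetic periodic signal.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(7 * x) + 0.2 * np.sin(23 * x)   # stand-in velocity field

def gaussian_filter(f, delta=8 * (2 * np.pi / n)):
    """Spectral Gaussian filter G with filter width delta on a periodic domain."""
    k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi
    G = np.exp(-(k * delta) ** 2 / 24.0)          # Gaussian filter transfer function
    return np.real(np.fft.ifft(G * np.fft.fft(f)))

u_bar = gaussian_filter(u)

# Van Cittert iterative deconvolution of the filtered field
u_star = u_bar.copy()
for _ in range(5):
    u_star = u_star + (u_bar - gaussian_filter(u_star))

# Reconstructed subfilter-scale "stress" (1D analogue): tau = bar(u u) - u_bar u_bar
tau_exact = gaussian_filter(u * u) - u_bar * u_bar
tau_model = gaussian_filter(u_star * u_star) - u_bar * u_bar
corr = np.corrcoef(tau_exact, tau_model)[0, 1]
print(f"correlation between exact and reconstructed SFS stress: {corr:.3f}")
```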
Procedia PDF Downloads 76
112 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)
Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg
Abstract:
One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
Keywords: arsenic, fluoride, groundwater contamination, logistic regression
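A minimal sketch of the kind of exceedance model described here: logistic regression of a binary indicator (arsenic above the WHO limit of 10 µg/L) on environmental predictors, then predicting the exceedance probability for new locations. The predictors and data below are synthetic stand-ins, not the GAP datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic regression of an exceedance indicator on environmental predictors.
rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.normal(7.0, 0.8, n),    # soil pH (hypothetical predictor)
    rng.normal(30, 10, n),      # clay fraction, %
    rng.normal(0.5, 0.2, n),    # aridity index
])
logit = -14 + 1.8 * X[:, 0] + 0.02 * X[:, 1] - 1.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1 = well exceeds 10 ug/L arsenic

model = LogisticRegression().fit(X, y)

# "Hazard map" step: predicted probability of exceedance for new grid cells
grid = np.array([[6.5, 20, 0.7],
                 [7.8, 45, 0.3]])
print(model.predict_proba(grid)[:, 1])
```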
Procedia PDF Downloads 348
111 Environment Patterns and Mental Health of Older Adults in Long-Term Care Facilities: The Role of Activity Profiles
Authors: Shiau-Fang Chao, Yu-Chih Chen
Abstract:
Owing to physical limitations and a restricted lifestyle, older long-term care (LTC) residents are more likely to be affected by their environment than their community-dwelling counterparts. They also participate in fewer activities and experience worse mental health than healthy older adults. This study adopts the ICF model to determine the extent to which clustered patterns of LTC environment and activity participation are associated with older residents’ mental health. Method: Data were collected from a stratified equal probability sample of 634 older residents in 155 LTC institutions in Taiwan. Latent profile analysis (LPA) and latent class analysis (LCA) were conducted to explore the profiles of environment and activity participation. Multilevel modeling was performed to elucidate the relationships among environment profiles, activity profiles, and mental health. Results: LPA identified three mutually exclusive environment profiles (Low-, Moderate-, and High-Support Environment) based on the physical, social, and attitudinal environmental domains, consolidated from 12 environmental measures. LCA constructed two distinct activity profiles (Low- and High-Activity Participation) across seven activity domains (outdoor, volunteer-led leisure, spiritual, household chores, interpersonal exchange, social, and sedentary activity) that were factored from 20 activities. Compared to the Low-Support Environment class, older adults in the Moderate- and High-Support Environment classes had better mental health. Older residents in the Moderate- and High-Support Environment classes were more likely to be in the “High Activity” class, which in turn exhibited better mental health. Conclusion: This study advances current knowledge through rigorous methods and study design. The study findings lead to several conclusions. First, this study supports the use of the ICF framework for institutionalized older individuals with functional limitations and demonstrates that both measures of environment and activity participation can be refined from multiple indicators. Second, environmental measures that encompass the physical, social, and attitudinal domains provide a more comprehensive assessment of the place in which an older individual is embedded. Third, simply counting the activities in which an older individual participates or considering a certain type of activity may not capture his or her way of life. Practitioners should not only focus on group or leisure activities within the institutions; rather, more effort should be made to consider residents’ preferences for everyday life and to support their remaining ability by encouraging continuous participation in activities they are still willing and able to perform. Fourth, environment and activity participation are modifiable factors which have great potential to strengthen older LTC residents’ mental health, and activity participation should be considered in the link between environment and mental health. A combination of enhanced physical, social, and attitudinal environments and continued engagement in various activities may optimize older LTC residents’ mental health.
Keywords: activity, environment, mental health, older LTC residents
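The latent profile analysis step, grouping residents by continuous environment scores, is conceptually a finite mixture model. A rough analogue using a Gaussian mixture on simulated physical/social/attitudinal scores, with the number of profiles chosen by BIC; the data are stand-ins, not the Taiwanese survey:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Gaussian-mixture stand-in for latent profile analysis on three environment
# domain scores (physical, social, attitudinal); 634 simulated residents.
rng = np.random.default_rng(3)
low  = rng.normal([2.0, 2.2, 2.1], 0.4, size=(200, 3))
mid  = rng.normal([3.0, 3.1, 3.0], 0.4, size=(250, 3))
high = rng.normal([4.2, 4.0, 4.1], 0.4, size=(184, 3))
env = np.vstack([low, mid, high])

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(env)
    print(f"{k} profiles: BIC = {gm.bic(env):.0f}")

best = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(env)
profile = best.predict(env)     # profile membership per resident
print(np.bincount(profile))     # class sizes
```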
Procedia PDF Downloads 201
110 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
The application of Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The deployment of sensor nodes in open, potentially unattended environments, in addition to resource constraints in terms of processing, storage and power, places stringent limitations on such networks in terms of lifetime (i.e., period of operation) and security. The importance of WSN applications, which can be found in many military and civilian domains, has drawn the attention of many researchers to their security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by the IDS. In other words, not all requirements are implemented and then traced. Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs. Consequently, this has resulted in inadequate requirement management, processing, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 324
109 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports about such events. One of the main reasons for impairment of buried pipelines during earthquakes is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and having light mass), attention to the results of previous earthquakes and comparison with other structures show that the danger of liquefaction for buried pipelines is not high unless governing parameters such as earthquake intensity and loose soil conditions, among other factors, are severe. Recent liquefaction research for buried pipelines includes experimental and theoretical work as well as damage investigations during actual earthquakes. The damage investigations have revealed that the damage ratio of pipelines (number/km) has much larger values in liquefied ground compared with shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected with manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, with a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline 100 meters in length and 0.8 meters in diameter, which is covered by light sandy soil, with a burial depth of 2.5 meters from the surface. Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires information such as geotechnical data, classification of earthquake levels, determination of the effective parameters in the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study on buried pipelines indicate that the effect of liquefaction is a function of pipe diameter, type of soil, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results indicate that although in this form of the analysis the damage is always associated with a certain pipe material, the nominally defined "failures" include failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. At the end, some retrofit suggestions are given in order to decrease the risk of liquefaction damage to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 513
108 Household Socioeconomic Factors Associated with Teenage Pregnancies in Kigali City, Rwanda
Authors: Dieudonne Uwizeye, Reuben Muhayiteto
Abstract:
Teenage pregnancy is a challenging problem for sustainable development due to the restrictions it poses on socioeconomic opportunities for young mothers, their children and families. Being unable to take on appropriate economic and social responsibilities, teen mothers get trapped in poverty and become an economic burden to their family and country. Besides, teenage pregnancy is also a health problem because children born to very young mothers are vulnerable, with a greater risk of illness and death, and teenage mothers are more likely to be exposed to a greater risk of maternal mortality and to other health and psychological problems. In Kigali city, in Rwanda, the teenage pregnancy rate is currently high, and its increase in recent years is worrisome. However, only individual factors influencing teenage pregnancy tend to be the basis of interventions. It is important to understand the household-level socioeconomic factors that are associated with teenage pregnancy to help the government, parents, and other stakeholders appropriately address the problem with sustainable measures. This study analyzed secondary data from the Fifth Rwanda Demographic and Health Survey (RDHS-V 2014-2015) conducted by the National Institute of Statistics of Rwanda (NISR). The aim was to examine household socio-economic factors that are associated with the incidence of teenage pregnancies in Kigali city. In addition to descriptive analysis, Pearson's chi-square and binary logistic regression were used in the analysis. Findings indicate that marital status and age of the household head, number of members in a household, number of rooms used for sleeping, educational level of the household head and household wealth are significantly associated with teenage pregnancy in Rwanda (p < 0.05). It was found that teenagers living with parents, those having parents with higher education and those from richer families are less likely to become pregnant. Age of the household head was pinpointed as a factor in teenage pregnancy, with teenage-headed households being more vulnerable. The findings also revealed that household composition correlates with the probability of teenage pregnancy (p < 0.05), with teenagers from households with fewer members being more vulnerable. Regarding the size of the house, the study suggested that the more rooms available in a household, the fewer incidences of teenage pregnancy are likely to be observed (p < 0.05). However, teenage pregnancy was not significantly associated with physical violence among parents (p = 0.65) or the sex of household heads (p = 0.52), except in teen-headed households, of which females are predominantly the heads. The study concludes that teenage pregnancy remains a serious social, economic and health problem in Rwanda. The study informs government officials, parents and other stakeholders to take interventions and preventive measures through community sex education, and through policies and strategies to foster effective parental guidance, care and control of young girls by meeting their necessary social and financial needs within households.
Keywords: household socio-economic factors, Rwanda, Rwanda demographic and health survey, teenage pregnancy
Procedia PDF Downloads 179
107 COVID Prevention and Working Environmental Risk Prevention and Business Continuity among the SMEs in Selected Districts in Sri Lanka
Authors: Champika Amarasinghe
Abstract:
Introduction: The COVID-19 pandemic hit the Sri Lankan economy badly during the year 2021. More than 65% of the Sri Lankan workforce is engaged in small and medium scale businesses, which undoubtedly had to struggle for their survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the COVID-19 pandemic, the maintenance of healthy working environmental conditions, and business continuity. A cross-sectional study was carried out to assess the OSH status and the adequacy of COVID-19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select the SMEs, which were registered with the small and medium scale development authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status in these industries. Results: According to the findings, more than 90% of the employees in these industries had a moderate awareness of COVID-19 and preventive strategies such as the importance of mask use, hand sanitizing practices, and distance maintenance, but only forty percent of them adhered to the implementation of these practices. Furthermore, only thirty-five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may be the reason for the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that the organization of the working environment for maintaining the distance between two employees was poor due to the inadequacy of space in these entities. More than fifty-five percent of the SMEs had proper ventilation and lighting facilities. More than eighty-five percent of these SMEs had poor electrical safety measures. Furthermore, eighty-two percent of them had not maintained fire safety measures. More than eighty-five percent of them were exposed to high noise levels and chemicals, where they were not using any personal protective equipment, nor were any other engineering controls imposed. Floor conditions were poor, and records of occupational accidents and occupational diseases were not maintained. Conclusions: Based on the findings, proper awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were carried out to overcome these issues, which made a drastic change in the working environments and led to hundred percent implementation of the COVID-19 preventive strategies, which in turn improved worker participation in the businesses, reduced absenteeism, improved business opportunities, and allowed the SMEs to continue their businesses without any interruption during the third episode of COVID-19 in Sri Lanka.
Keywords: working environment, Covid 19, occupational diseases, occupational accidents
Procedia PDF Downloads 88
106 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater
Authors: Barbora Svechova, Monika Licbinska
Abstract:
Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes and development of given materials at a site. The modeling focused on the mutual interaction of groundwater in contact with the rock mass and its subsequent influence on concrete structures. The studied locality is in Slovakia in the area of the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belt of the Tatras, in the center of which the crystalline basement rises to the surface accompanied by its Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological setting. From the hydrogeological point of view, it is an environment with a crack-fracture character. The area is characterized by shallow surface circulation of groundwater without a significant collector structure, and from a chemical point of view, groundwater in the area has been classified as calcium bicarbonate water with a high content of CO2 and SO4 ions. According to the European standard EN 206-1, these are waters with medium aggression towards concrete. Three rock samples were taken from the area. Based on petrographic and mineralogical research, they were evaluated as calcareous shale, micritic limestone and crystalline shale. These three rock samples were placed in demineralized water for one month, and the change in the chemical composition of the water was monitored. During the solution-rock interaction there was an increase in the concentrations of all major ions except nitrates. Concentrations increased after a week, but at the end of the experiment the concentration was lower than the initial value. Another experiment was the interaction of groundwater from the studied locality with a concrete structure. The concrete sample was also left in the water for one month. The results of the experiment confirmed the assumption of a reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO3 on the surface of the sample. Conversely, it was surprising that the concentrations of sulphates, sodium, iron and aluminium increased due to leaching of the concrete. The chemical analyses from these experiments were processed in the PHREEQC program, which calculated the probability of the formation of amorphous forms of minerals. From the results of the chemical analyses and the hydrochemical modeling of water collected in situ and water from the experiments, it was found that: groundwater at the site is unsaturated and shows moderate aggression towards reinforced concrete structures according to EN 206-1, which will affect the homogeneity and integrity of concrete structures; Ca, Na, Fe, HCO3 and SO4 are leached from the rocks in the given area. Unsaturated waters will dissolve everything as soon as they come into contact with the solid matrix. The speed of this process then depends on the physicochemical parameters of the environment (T, ORP, p, n, water retention time in the environment, etc.).
Keywords: geochemical modeling, concrete, dissolution, PHREEQC
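The saturation calculations mentioned above can be illustrated with a back-of-the-envelope saturation index for calcite, SI = log10(IAP/Ksp), which PHREEQC computes rigorously with activity corrections and full speciation. The sketch below ignores activity coefficients and uses a hypothetical water composition, so it gives only a rough indication of over- or undersaturation:

```python
import math

# Simplified calcite saturation index at 25 C, ignoring activity corrections
# and ion pairs (PHREEQC handles those properly). Water composition is hypothetical.
log_ksp_calcite = -8.48     # log K for CaCO3 = Ca2+ + CO3^2- at 25 C
log_k2 = -10.33             # HCO3- = H+ + CO3^2-

ph = 7.4
ca = 2.0e-3                 # mol/L Ca2+
hco3 = 4.5e-3               # mol/L HCO3-

# carbonate from bicarbonate and pH (activities approximated by concentrations)
co3 = hco3 * 10 ** (log_k2 + ph)

si = math.log10(ca * co3) - log_ksp_calcite
print(f"SI(calcite) = {si:+.2f}  ({'oversaturated' if si > 0 else 'undersaturated'})")
```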
Procedia PDF Downloads 198
105 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems
Authors: A. G. Akhundov
Abstract:
Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main reasons for emergency situations to appear is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. The unified environment of a geographic information system (GIS) is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), analysis of situations and selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. The specifics of such systems include multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. We see one of the most interesting possibilities of using the monitoring results in the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, and data from in-line inspection and instrument measurements. The resulting 3D model shall be the basis of an information system providing the means to store and process data from geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating the consequences thereof if they do occur.
Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning
Procedia PDF Downloads 191104 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate prediction estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more computationally intensive numerical response-history analyses.Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
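A minimal sketch of the ground-motion-model idea described above: a Random Forest is trained to predict ln(PGA) from magnitude, distance and Vs30 without any pre-defined functional form. The data here are synthetic, generated from a toy attenuation relation with noise, and stand in for a recorded ground-motion catalogue; they are not the study's dataset or a published model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic catalogue: magnitude, source-to-site distance (km), Vs30 (m/s).
n = 2000
mag = rng.uniform(4.0, 7.5, n)
dist = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(180.0, 760.0, n)

# Toy attenuation relation for ln(PGA) plus noise - a stand-in for recorded data,
# not a published ground-motion model.
ln_pga = 1.1 * mag - 1.6 * np.log(dist + 10.0) - 0.3 * np.log(vs30 / 450.0) \
         + rng.normal(0, 0.5, n)

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.25, random_state=0)

# The tree ensemble learns magnitude scaling and distance attenuation from the data alone
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R2 on held-out events:", round(r2_score(y_te, model.predict(X_te)), 3))
```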
Procedia PDF Downloads 106103 Relationship between the Development of Sepsis, Systemic Inflammatory Response Syndrome and Body Mass Index among Adult Trauma Patients at University Hospital in Cairo
Authors: Mohamed Hendawy Mousa, Warda Youssef Mohamed Morsy
Abstract:
Background: Sepsis is a major cause of mortality and morbidity in trauma patients. Body mass index, as an indicator of nutritional status, has been reported as a predictor of injury pattern and complications among critically ill injured patients. Aim: The aim of this study is to investigate the relationship between body mass index and the development of sepsis and systemic inflammatory response syndrome among adult trauma patients at the emergency hospital - Cairo University. Research design: A descriptive correlational research design was utilized in the current study. Research questions: Q1. What is the body mass index profile of adult trauma patients admitted to the emergency hospital at Cairo University over a period of 6 months? Q2. What is the frequency of systemic inflammatory response syndrome and sepsis among adult trauma patients admitted to the emergency hospital at Cairo University over a period of 6 months? Q3. What is the relationship between the development of sepsis, systemic inflammatory response syndrome and body mass index among adult trauma patients admitted to the emergency hospital at Cairo University over a period of 6 months? Sample: A purposive sample of 52 adult male and female trauma patients with a revised trauma score of 10 to 12. Setting: The Emergency Hospital affiliated to Cairo University. Tools: Four tools were utilized to collect data pertinent to the study: a sociodemographic and medical data tool, a systemic inflammatory response syndrome assessment tool, the Revised Trauma Score tool, and the Sequential Organ Failure Assessment tool. Results: The current study revealed that 61.5% of the studied subjects had a normal body mass index, 25% were overweight, and 13.5% were underweight. 84.6% of the studied subjects had systemic inflammatory response syndrome and 92.3% were suffering from mild sepsis. No statistically significant relationship was found between body mass index and the occurrence of systemic inflammatory response syndrome (χ² = 2.89, P = 0.23). However, Sequential Organ Failure Assessment scores were significantly affected by body mass index: comparing the mean initial and final scores for underweight, normal-weight and obese patients gave t = 7.24 (p = 0.000), t = 16.49 (p = 0.000) and t = 9.80 (p = 0.000), respectively. Conclusion: Underweight trauma patients showed a significantly higher rate of developing sepsis compared to normal-weight and obese patients. Recommendations: Based on the findings of this study, the following are recommended: replication of the study on a larger probability sample from different geographical locations in Egypt; carrying out further studies to assess the other risk factors influencing trauma outcome and the incidence of its complications; and establishment of standardized guidelines for managing underweight traumatized patients with sepsis.Keywords: body mass index, sepsis, systemic inflammatory response syndrome, adult trauma
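For readers unfamiliar with the statistics behind the reported result, the sketch below shows how BMI categories can be cross-tabulated against SIRS status and tested with a chi-square test of independence. The contingency counts are hypothetical and for illustration only; they are not the study's data.

```python
from scipy.stats import chi2_contingency

def bmi_category(weight_kg, height_m):
    """Classify a patient by body mass index (WHO cut-offs)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    return "overweight/obese"

# Hypothetical 2x3 contingency table: SIRS (present/absent) by BMI category
# (underweight, normal, overweight/obese). Counts are illustrative only.
table = [[6, 26, 12],   # SIRS present
         [1,  6,  1]]   # SIRS absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
print("example category:", bmi_category(52.0, 1.70))
```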
Procedia PDF Downloads 251102 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
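To make the deconvolution idea concrete, the sketch below filters a 1D periodic signal with a Gaussian filter, recovers an approximation of the unfiltered field with a few van Cittert iterations, and forms the corresponding SFS stress. It is a one-dimensional illustration under assumed parameters (filter width, number of iterations), not the authors' DDM implementation for turbulent LES.

```python
import numpy as np

def gaussian_filter_spectral(u_hat, k, delta):
    """Apply a Gaussian filter of width delta in spectral space: G(k) = exp(-(k*delta)^2/24)."""
    return u_hat * np.exp(-(k * delta) ** 2 / 24.0)

# 1D periodic test signal standing in for a resolved velocity field
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4 * x) + 0.2 * np.sin(12 * x)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi
delta = 4 * (2 * np.pi / n)            # filter width = 4 grid spacings (assumed FGR-like ratio)

u_bar = np.real(np.fft.ifft(gaussian_filter_spectral(np.fft.fft(u), k, delta)))

# Van Cittert iterations: u_{m+1} = u_m + (u_bar - G*u_m), a standard approximate deconvolution
u_star = u_bar.copy()
for _ in range(5):
    g_u = np.real(np.fft.ifft(gaussian_filter_spectral(np.fft.fft(u_star), k, delta)))
    u_star += u_bar - g_u

# 1D analogue of the SFS stress: tau = filter(u*u) - filter(u)*filter(u),
# with u replaced by its deconvolved approximation u_star
tau = np.real(np.fft.ifft(gaussian_filter_spectral(np.fft.fft(u_star * u_star), k, delta))) - u_bar * u_bar
print("max |u - u*| =", np.max(np.abs(u - u_star)))
```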
Procedia PDF Downloads 76101 Activating Psychological Resources of DUI (Drivers under the Influence of Alcohol) Using the Traffic Psychology Intervention (IFT Course), Germany
Authors: Parichehr Sharifi, Konrad Reschke, Hans-Liudger Dienel
Abstract:
Psychological intervention generally targets changes in attitudes and behavior. Working with DUI offenders is part of traffic psychologists' work. The primary goal in this field is to reduce the probability of re-offending by the delinquent driver. One of these measures in Germany is the IFT course for DUI offenders, designed by the Institute for Therapy Research (IFT). Participants are drivers who have been caught driving under the influence several times, or once with a blood alcohol concentration of 1.6 per mille, and who have completed a medical-psychological assessment (MPU) with the result of a course recommendation. The course covers four sessions of 3.5 hours each (1 hour = 60 min) over a period of 3 to 4 weeks, in a group discussion format. This work analyzes interventions for the rehabilitation of DUI (drunk driving) offenders in groups under the aspect of activating psychological resources. From the aspect of sustainability, such interventions should also have long-term consequences for the maintenance of unproblematic driving behavior in terms of the activation of resources. The work also addresses a selected consistency-theory-based intervention effect, the activation of psychological resources, which so far has only been considered in the psychotherapeutic field and never in the field of traffic psychology. The methodology of this survey comprises one qualitative and three quantitative sub-studies. The four sub-studies examine which measurements can determine the resources and how traffic psychology interventions can strengthen them. The results of the studies have the following implications for traffic psychology research and practice: (1) In the field of traffic psychology interventions for the restoration of driving fitness, aspects of resource activation have been investigated in this work for the first time by qualitative and quantitative methods. (2) Based on the results obtained, resource activation could be confirmed as an effective factor of traffic psychological intervention. (3) Two sub-studies show a range of resources and resource activation options that must be given greater emphasis in traffic psychology interventions: - activation of social resources - improvement of the life skills of participants - reactivation of existing social support options - re-experiencing self-esteem, self-assurance, and acceptance of traffic-related behaviors. (4) In revising the IFT-§70 course, as well as other courses on restoring driving aptitude for DUI offenders, new traffic-specific resource-enabling interventions against alcohol abuse should be developed to further enhance the courses through the motivational, cognitive, and behavioral effects of resource activation. Resource-activating interventions can not only be integrated into behavioral group interventions but can also be applied in psychodynamic (individual psychological) and other contexts of individual traffic psychology. The results are indicative but clearly show that personal resources can be strengthened through traffic psychology interventions. In the research, practice, training, and further education of traffic psychology, the aspect of primary resource activation (Grawe, 1999) therefore always deserves the greatest attention for the rehabilitation of DUI offenders and traffic safety.Keywords: traffic safety, psychological resources, activation of resources, intervention programs for alcohol offenders, empowerment
Procedia PDF Downloads 79100 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle
Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores
Abstract:
This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfacing beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were collected through an Arduino Uno, and a data processing algorithm was developed with the purpose of interpreting the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in control of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was utilized to collect, process, and analyze the signals from the sensors, and its machine learning tools were used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Once an output probability was greater than or equal to 80% for a specific target class, the drone would perform the expected motion. Afterward, each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone responded in real time to predefined command inputs from the machine learning algorithm through the MyoWare sensor interface. The full paper describes in detail the database of the hand gestures, the details of the ANN architecture, and the confusion matrix results.Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino
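The pipeline described above (window features of mean, RMS and standard deviation feeding an ANN with one hidden layer of 10 neurons, and an 80% probability gate before a command is issued) can be illustrated as follows. The study used MATLAB and recorded MyoWare signals; this Python sketch uses synthetic windows and is an assumption-laden stand-in, not the authors' code.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def features(window):
    """Mean, RMS and standard deviation of one EMG window, per channel."""
    return np.concatenate([window.mean(axis=0),
                           np.sqrt((window ** 2).mean(axis=0)),
                           window.std(axis=0)])

# Synthetic stand-in for two-second EMG windows from 3 MyoWare channels
gestures = ["rest", "thumbs_up", "swipe_right", "flex"]
X, y = [], []
for label, scale in zip(gestures, [0.1, 0.4, 0.7, 1.0]):
    for _ in range(60):
        window = rng.normal(0.0, scale, size=(16, 3)) + scale  # 16 samples x 3 channels
        X.append(features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

# One hidden layer of 10 neurons, as in the abstract
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1).fit(X, y)

# Only act on a prediction if its probability is at least 0.8
proba = clf.predict_proba(X[:1])[0]
best = int(np.argmax(proba))
if proba[best] >= 0.8:
    print("send command:", clf.classes_[best])
else:
    print("confidence too low; hold position")
```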
Procedia PDF Downloads 17499 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful to support the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is very crucial, because there are different correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. Especially subgroup analysis methods are developed, extended and used to analyze and find out the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymous patient data are transformed into a special machine language format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were performed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as special characteristic features per patient. For different extended patient groups (100, 300, 500), single-target as well as multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted regarding their dependency on or independence of patient number. Conclusions: The aim and advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as a rule-based representation of the knowledge in the knowledge base. More than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and also conjunctively associated conditions can be found to conclude the goal parameter of interest. In this way, the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for the communication with the clinical experts.Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
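A toy illustration of the subgroup/association-rule idea: each rule "premise → goal parameter" is scored by its support and its confidence (the conditional probability of the goal given the premise), and single-item premises are ranked by confidence. The records, attributes and goal parameter below are hypothetical placeholders, not the hospital data.

```python
from collections import Counter

# Hypothetical, anonymised patient records: a set of attribute=value items
# plus the goal parameter of interest ("complication" yes/no). Illustrative only.
records = [
    ({"lens=foldable", "instrument=A", "age>70"}, "yes"),
    ({"lens=foldable", "instrument=B", "age<=70"}, "no"),
    ({"lens=rigid", "instrument=A", "age>70"}, "yes"),
    ({"lens=rigid", "instrument=B", "age<=70"}, "no"),
    ({"lens=foldable", "instrument=A", "age<=70"}, "yes"),
    ({"lens=rigid", "instrument=A", "age<=70"}, "no"),
]

def rule_stats(premise, target="yes"):
    """Support and confidence of the rule: premise -> goal=target."""
    covered = [goal for items, goal in records if premise <= items]
    if not covered:
        return 0.0, 0.0
    support = len(covered) / len(records)
    confidence = Counter(covered)[target] / len(covered)
    return support, confidence

# Rank single-item premises by confidence (conditional probability of the goal)
premises = {frozenset([item]) for items, _ in records for item in items}
ranked = sorted(premises, key=lambda p: rule_stats(set(p))[1], reverse=True)
for p in ranked:
    s, c = rule_stats(set(p))
    print(f"{set(p)} -> complication=yes  support={s:.2f} confidence={c:.2f}")
```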
Procedia PDF Downloads 25398 Development of a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong
Abstract:
In recent years, climate change has been a major factor increasing rainfall intensity and the frequency of extreme rainfall. Increased rainfall intensity and extreme rainfall frequency raise the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments and other hydraulic works, and cause further disasters. Therefore, the foundation scouring of bridge piers, embankments and spur dikes caused by floods has been a severe problem worldwide. This severe problem has occurred in many East Asian countries, such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the fluid flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using vibration-based Micro-Electro-Mechanical Systems (MEMS) was developed. This vibration-based MEMS sensor was packaged inside a stainless sphere with the proper protection of full-filled resin, and it can measure free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a user-friendly operational system that includes a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and the applicability of sediment transport equations and local scour formulas for bridge piers is developed in this research. The operational system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around bridge piers. The system is designed for easy operation with an integrated interface that allows users to calibrate and verify the numerical models and to display the simulation results alongside the data from the scour monitoring sensors. To forecast the erosion depth of the river bed and at the main bridge pier in the study area, the system also connects to rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results can provide information to river and bridge engineering management units in advance.Keywords: flash flood, river bed degradation, bridge pier scouring, friendly operational system
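One plausible way a vibration-based scour sensor can be read out is by extracting the dominant free-vibration frequency from its accelerometer record and tracking how it shifts between inspections. The sketch below does this with a synthetic decaying oscillation; the sampling rate, signal and interpretation are assumptions for illustration, not the developed sensor's firmware.

```python
import numpy as np

fs = 200.0                        # sampling rate (Hz), hypothetical
t = np.arange(0, 5.0, 1.0 / fs)

# Synthetic free-vibration record: a decaying oscillation plus noise, standing in
# for the signal of a sensor sphere exposed by scour (illustrative only).
f_true = 12.0
signal = np.exp(-0.4 * t) * np.sin(2 * np.pi * f_true * t) \
         + 0.05 * np.random.default_rng(2).normal(size=t.size)

# Dominant frequency from the amplitude spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant free-vibration frequency ~ {dominant:.1f} Hz")

# A shift of this frequency between surveys can be read as a change in the
# embedment/exposure of the sensor, i.e. scouring or deposition at the pier.
```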
Procedia PDF Downloads 19297 Climate Change and Landslide Risk Assessment in Thailand
Authors: Shotiros Protong
Abstract:
The incidence of sudden landslides in Thailand during the past decade has become more frequent and more severe. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rapidly increase during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden and the presence of loose blocks. The regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model supported by ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data can indicate the shear strength and the angle of friction values for soils above given rock types, which leads to the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment for the present-day calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall in millimetres per twenty-four hours is used to assess the rain-triggered landslide hazard in slope stability mapping. The 1954-2012 period is used as the baseline rainfall data for the present-day calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales. To predict the precipitation impact under the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to assess the simulated scenarios of future change between latitudes 16° 26' and 18° 37' north and between longitudes 98° 52' and 103° 05' east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard mapping under present-day climatic conditions from 1954 to 2012, and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, is related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard mapping is compared and shown by areas (km²) for both the present and the future under climate simulation scenarios A2 and B2 in Uttaradit province.Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand
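SINMAP's stability index is built on the infinite-slope idealisation, in which the factor of safety drops as the slope becomes wetter. A minimal sketch of that relation is given below; the cohesion, friction angle, unit weight and slope values are illustrative only and are not the calibrated parameters for the Uttaradit study area.

```python
import math

def infinite_slope_fs(c_kpa, phi_deg, gamma, depth, slope_deg, m):
    """
    Factor of safety of an infinite slope with seepage parallel to the surface.
    c_kpa     effective cohesion (kPa)
    phi_deg   effective friction angle (deg)
    gamma     soil unit weight (kN/m3)
    depth     depth of the potential failure plane (m)
    slope_deg slope angle (deg)
    m         groundwater ratio hw/z (0 = dry, 1 = fully saturated)
    """
    gamma_w = 9.81
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c_kpa + (gamma - m * gamma_w) * depth * math.cos(theta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Illustrative parameters only: the factor of safety falls below 1 as the slope saturates
for m in (0.0, 0.5, 1.0):
    print(f"m = {m:.1f}  FS = {infinite_slope_fs(10.0, 30.0, 18.0, 2.0, 35.0, m):.2f}")
```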
Procedia PDF Downloads 56496 Evaluation of the Risk Factors on the Incidence of Adjacent Segment Degeneration After Anterior Neck Discectomy and Fusion
Authors: Sayyed Mostafa Ahmadi, Neda Raeesi
Abstract:
Background and Objectives: Cervical spondylosis is a common problem that affects the adult spine and is the most common cause of radiculopathy and myelopathy in older patients. Anterior discectomy and fusion is a well-known technique in degenerative cervical disc disease. However, one of its late undesirable complications is adjacent segment degeneration, which affects about 91% of patients within ten years. Many factors can contribute to this complication, but some are still debatable. Discovering these risk factors and eliminating them can improve quality of life. Methods: This is a retrospective cohort study. All patients who underwent anterior discectomy and fusion surgery in the neurosurgery ward of Imam Khomeini Hospital between 2013 and 2016 were evaluated. Their demographic information was collected. All patients were visited and examined for radiculopathy, myelopathy, and muscular force. At the same visit, all patients were asked to obtain anteroposterior and lateral (profile) neck radiographs, as well as a neck MRI (3 Tesla). Preoperative radiographs were used to measure the diameter of the cervical canal (Pavlov ratio) and to evaluate sagittal alignment (Cobb angle). Preoperative MRIs of the patients were reviewed for anterior and posterior longitudinal ligament calcification. Results: In this study, 57 patients were evaluated. The mean age of the patients was 50.63 years, and 49.1% were male. Only 3.5% of patients had anterior and posterior longitudinal ligament calcification. Symptomatic ASD was observed in 26.6%, and the X-rays and MRIs showed evidence of radiological ASD in 80.7% of patients. Among patients who underwent one-level surgery, 20% had symptomatic ASD, but among patients who underwent two-level surgery, the rate of symptomatic ASD was 50%. In other words, the higher the number of levels operated on and fused, the higher the probability of symptomatic ASD (P-value < 0.05). Among patients who underwent surgery at one level, 78% had radiological ASD, and this number was 92% among patients who underwent two-level surgery (P-value > 0.05). Demographic variables such as age, sex, height, weight, and BMI did not have a significant effect on the incidence of radiological ASD (P-value > 0.05), but sex and height were two influential factors for symptomatic ASD (P-value < 0.05). Other related variables such as family history, smoking and exercise also had no significant effect (P-value > 0.05). Radiographic variables such as the Pavlov ratio and sagittal alignment likewise did not affect the incidence of radiological or symptomatic ASD (P-value > 0.05). The number of fused levels and the presence of anterior and posterior longitudinal ligament calcification before surgery also had no statistically significant effect on radiological ASD (P-value > 0.05). In the study of the ability of the neck to move in different directions, none of these variables differed significantly between the groups with radiological or symptomatic ASD and the non-affected group (P-value > 0.05). Conclusion: According to the findings of this study, this is considered a multifactorial disease. The incidence of radiological ASD is much higher than that of symptomatic ASD (80.7% vs. 26.3%), and sex, height and the number of fused levels are the only factors influencing the incidence of symptomatic ASD, while no variable influences radiological ASD.Keywords: risk factors, anterior neck discectomy and fusion, adjacent segment degeneration, complication
Procedia PDF Downloads 6395 Stability in Slopes Related to Expansive Soils
Authors: Ivelise M. Strozberg, Lucas O. Vale, Maria V. V. Morais
Abstract:
Expansive soils are characterized by significant volumetric variations, tending to increase in volume when water is added to their voids and to decrease in volume when this water is removed. The resistance parameters (especially the angle of friction, cohesion and specific weight) of expansive and non-expansive soils from the same field show differences, as found in laboratory tests. This research aims to demonstrate that this variation directly affects the results of factor-of-safety calculations for slope stability. Expansibility due to specific clay minerals such as montmorillonites and vermiculites is the most common form of expansion of soils or rocks, causing expansion pressures. These pressures can become an aggravating problem in regions across the globe and, when not studied beforehand, may present high risks to the enterprise, such as cracks, fissures, movements in structures, failure of retaining walls, drilling of wells, among others. The study provides results based on analyses carried out in the Slide 2018 software belonging to the Rocscience group; the software is a two-dimensional limit equilibrium slope stability program that calculates the factor of safety or probability of failure of surfaces composed of soils or rocks (or both, depending on the situation) through the methods of Bishop simplified, Fellenius and Janbu corrected. This research compares the factors of safety of a homogeneous earthfill dam geometry, analysed for the operation and end-of-construction situations, with a height of approximately 35 meters and slopes of 1.5:1 downstream and 2:1 upstream. The water level is 32.73 m high, and the water table is drawn automatically by the Slide program using the finite element method for the operating situation. Two hypotheses are considered for the materials - the first with soils with expansive characteristics and the second with non-expansive soils. For this purpose, soil samples were collected from the region of São Bento do Una - Pernambuco, Brazil and taken to the soil mechanics laboratory for characterization and determination of the percentage of expansibility. Two types of soil were found in the area: one site with expansive soils (8% expansibility) and another with non-expansive ones. Based on the results, the analysis of the factor-of-safety values indicated, for both the upstream and downstream slopes, that the highest values were obtained in the case with no expansive materials, resulting, for one of the situations, in values of 1.353 (Fellenius), 1.295 (Janbu corrected) and 1.409 (Bishop simplified). There is a considerable drop in the safety factors in the cases where the soils are potentially expansive, resulting in values for the same situation of 0.859 (Fellenius), 0.809 (Janbu corrected) and 0.842 (Bishop simplified) for the higher expansibility (8%). This shows that expansibility is a determinant factor in the loss of soil resistance, governed by the cohesion and the angle of friction.Keywords: dam, slope, software, swelling soil
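The ordinary (Fellenius) method of slices mentioned above sums resisting and driving contributions over the slices of a trial slip circle, FS = Σ(cΔL + W cosα tanφ) / Σ(W sinα). The sketch below evaluates that ratio for one hypothetical set of slices with two assumed parameter sets, illustrating how lower cohesion and friction angle (as can accompany expansive soils) reduce the factor of safety; it is not the Slide 2018 model of the dam in the study.

```python
import math

def fellenius_fs(slices, c_kpa, phi_deg):
    """
    Ordinary (Fellenius) method of slices for a circular slip surface.
    slices: list of (weight_kN_per_m, base_angle_deg, base_length_m)
    Illustrative sketch only - not the Slide model of the 35 m dam in the study.
    """
    phi = math.radians(phi_deg)
    resisting = sum(c_kpa * dl + w * math.cos(math.radians(a)) * math.tan(phi)
                    for w, a, dl in slices)
    driving = sum(w * math.sin(math.radians(a)) for w, a, dl in slices)
    return resisting / driving

# Hypothetical slices (weight, base inclination, base length) for one trial circle
slices = [(120.0, 5.0, 2.1), (260.0, 15.0, 2.2), (340.0, 25.0, 2.3),
          (300.0, 35.0, 2.5), (180.0, 45.0, 2.9)]

# The two parameter sets below are illustrative, not laboratory values from the paper.
print("non-expansive soil  FS =", round(fellenius_fs(slices, c_kpa=25.0, phi_deg=30.0), 3))
print("expansive soil      FS =", round(fellenius_fs(slices, c_kpa=10.0, phi_deg=22.0), 3))
```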
Procedia PDF Downloads 12394 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends 9,409 km, while the system has 16 national and 3 international natural gas processing plants, and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach applied to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. This constitutes a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods
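A minimal Monte Carlo sketch of the cash-flow risk analysis described above: input variables are sampled from assumed distributions, a discounted cash flow is computed for each draw, and risk indicators such as the probability of a negative NPV are read off the resulting empirical distribution. All figures (capex, generation, prices, discount rate) are illustrative placeholders, not the Cuiabá project data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000
years = 15

# Illustrative input distributions (not the study's figures):
capex = rng.normal(1.2e6, 0.15e6, n_sims)                  # installation cost (USD)
energy_mwh = rng.normal(4_000, 600, (n_sims, years))       # yearly PRT generation (MWh)
price = rng.lognormal(np.log(55), 0.25, (n_sims, years))   # energy price (USD/MWh)
opex = 0.04 * capex                                        # yearly O&M as 4% of capex
rate = 0.10                                                # discount rate

discount = 1.0 / (1.0 + rate) ** np.arange(1, years + 1)
cash_flows = energy_mwh * price - opex[:, None]
npv = (cash_flows * discount).sum(axis=1) - capex

print(f"mean NPV            : {npv.mean():,.0f} USD")
print(f"P(NPV < 0)          : {np.mean(npv < 0):.1%}")
print(f"5th-95th percentile : {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f} USD")
```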
Procedia PDF Downloads 11393 Boredom in the Classroom: Sentiment Analysis on Teaching Practices and Related Outcomes
Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís
Abstract:
Students' emotional experiences have been a widely discussed theme among researchers and have proven to play a central role in students' outcomes. Yet, up to now, far too little attention has been paid to teaching practices that relate to students' negative emotions in higher education. The present work aims to examine the relationship between teachers' teaching practices (i.e., students' evaluations of teaching and autonomy support), students' feelings of boredom, and their agentic engagement and motivation in the higher education context. To do so, the present study incorporates one of the most popular tools in natural language processing to address students' evaluations of teaching: sentiment analysis. Whereas most research has focused on the creation of SA models and on assessing students' satisfaction with teachers and courses, to the authors' best knowledge no research before has included results from SA in an explanatory model. A total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) participated in the study. Students were enrolled in bachelor's and master's studies at the Faculty of Education of a public university in Spain. Data were collected using an online questionnaire that students accessed through a QR code and completed during a teaching period when the assessed teacher was not present. To assess students' sentiments towards their teachers' teaching, we asked them the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed with Microsoft's pre-trained model. For this study, we relied on the probability of a student's answer belonging to the negative category. To assess the reliability of the measure, inter-rater agreement between this NLP tool and one of the researchers, who independently coded all answers, was examined. The average pairwise percent agreement and Cohen's kappa were calculated with ReCal2. The agreement reached was 90.8% and Cohen's kappa was .68, both considered satisfactory. To test the hypothesized relations, a structural equation model (SEM) was estimated. The model fit indices displayed a good fit to the data: χ² (134) = 351.129, p < .001, RMSEA = .07, SRMR = .09, TLI = .91, CFI = .92. Specifically, results show that boredom was negatively predicted by autonomy-supportive practices (β = -.47 [-.61, -.33]), whereas for the negative sentiment extracted from the SET, this relation was positive (β = .23 [.16, .30]). In other words, when students' opinions of their instructors' teaching practices were negative, they were more likely to feel bored. Regarding the relations between boredom and student outcomes, results showed a negative predictive value of boredom on students' motivation to study (β = -.46 [-.63, -.29]) and agentic engagement (β = -.24 [-.33, -.15]). Altogether, the results show a promising future for sentiment analysis techniques in the field of education, as they prove the usefulness of this tool when evaluating the relations among teaching practices and student outcomes.Keywords: sentiment analysis, boredom, motivation, agentic engagement
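The inter-rater reliability check reported above can be reproduced in a few lines: observed percent agreement and Cohen's kappa between the NLP model's codings and a human coder. The sketch below uses made-up binary codings (1 = negative sentiment) rather than the study's answers; ReCal2 computes the same quantities.

```python
import numpy as np

def percent_agreement_and_kappa(rater_a, rater_b):
    """Observed percent agreement and Cohen's kappa for two raters on the same items."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    p_o = np.mean(a == b)                                           # observed agreement
    labels = np.union1d(a, b)
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)  # chance agreement
    kappa = (p_o - p_e) / (1.0 - p_e)
    return p_o, kappa

# Illustrative codings (1 = negative sentiment, 0 = not negative) of the same
# open-ended answers by the NLP model and by a human researcher - not real data.
model = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
human = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0]

agreement, kappa = percent_agreement_and_kappa(model, human)
print(f"percent agreement = {agreement:.1%}, Cohen's kappa = {kappa:.2f}")
```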
Procedia PDF Downloads 99