Search results for: 2D hydrodynamic model
948 Hydrogeophysical Investigations And Mapping of Ingress Channels Along The Blesbokspruit Stream In The East Rand Basin Of The Witwatersrand, South Africa
Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola
Abstract:
Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and catapulted the city to becoming the business and wealth capital of the country. But with the gradual depletion of resources, a stoppage in the extraction of underground water from mines, and other factors relating to the survival of the mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy metal residue, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain, and mitigate the adverse environmental impacts of polluted mine water flowing into local streams, affecting local ecosystems and livelihoods downstream. As part of mitigation efforts, the diagnosis and monitoring of groundwater- or surface-water-polluted sites has become important. Geophysical surveys, in particular resistivity and magnetic surveys, were selected as some of the most suitable techniques for the investigation of local ingress points along one of the major streams cutting through the Witwatersrand basin, namely the Blesbokspruit, which is found in the eastern part of the basin. The aim of the surveys was to provide information that could be used to assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered an in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model: a low-resistivity overburden (10 to 200 Ω·m), underlain by a moderate-resistivity weathered layer (>300 Ω·m), which sits on a more resistive crystalline bedrock (>500 Ω·m).
Two locations of potential ingress channels were mapped across the two traverses at the site. The magnetic survey conducted at the site mapped a major NE-SW trending regional lineament with a strong magnetic signature, which was modelled to depths beyond 100 m, with the potential to act as a conduit for dispersion of stream water away from the stream, as it shared a similar orientation with the potential ingress channels mapped using the resistivity method.
Keywords: electrical resistivity, magnetics survey, blesbokspruit, ingress
Procedia PDF Downloads 649
947 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. In one study, even when lumped models were used for flood forecasting, a proper gauge network significantly improved the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data during the monsoon season (November to February) for the period 1975 to 2008. Three different semivariogram models, Spherical, Gaussian, and Exponential, were used, and their performances were also compared in this study. Cross-validation was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that a network of 64 rain gauges with the minimum estimated variance satisfied the proposed method, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study.
The first case considered the removed stations being optimally relocated into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocations of the stations in both cases have shown that the new optimal locations reduce the estimated variance, proving that location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
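The three semivariogram models compared above have standard closed forms. A minimal sketch follows; the nugget, sill, and range values used here are illustrative placeholders, not the study's fitted parameters:

```python
import math

def spherical(h, nugget, sill, a):
    """Spherical model: cubic rise, exactly flat at the sill beyond range a."""
    if h >= a:
        return sill
    return nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)

def exponential(h, nugget, sill, a):
    """Exponential model: reaches ~95% of the sill at the practical range a."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / a))

def gaussian(h, nugget, sill, a):
    """Gaussian model: parabolic near the origin, asymptotic sill."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * (h / a) ** 2))

# Semivariance at the range for each candidate model (illustrative
# nugget/sill/range of 0.1, 1.0, 50.0 - not the study's values).
gamma_at_range = {m.__name__: m(50.0, 0.1, 1.0, 50.0)
                  for m in (spherical, exponential, gaussian)}
```

In practice, each model would be fitted to the empirical semivariogram of the 84-gauge network and the winner chosen by cross-validation error, as described above.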
Procedia PDF Downloads 304
946 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority
Authors: Vahid Asadzadeh, Amin Ataee
Abstract:
Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion for democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity and do not allow for multiplicity as understood in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content-delivery interfaces.
This has led to the fact that in the Clubhouse, the voices of minorities are better heard, and the diversity of political tendencies manifests itself better. The purpose of this article is, first, to show how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, the article will show how the Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve the mentioned goals, this article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then the political economy of popularity in social media and its conflict with democracy is discussed; finally, it explores how the Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.
Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media
Procedia PDF Downloads 147
945 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects
Authors: Rabea Sefrin, Christian Glock, Juergen Schnell
Abstract:
The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In the case of existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of the structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete and the mechanical parameters derived from it are of central importance for recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and thus underestimated. The reasons for this are the small number and large scatter of material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, the load is usually transferred over the areas with higher stiffness and consequently higher compressive strength. Therefore, existing strength variations within a component play only a subordinate role due to rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters, such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength, is analyzed in small-scale experimental tests. The coefficients of variation commonly encountered in practice are adjusted by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other.
From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In a subsequent parameter study, a large number of combinations not yet investigated experimentally is considered. Thus, the influence of individual parameters on the size and character of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainability, and resource-saving point of view but also represents an added value for cultural and social aspects.
Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation
Procedia PDF Downloads 120
944 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind is predominantly from the nor-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, several city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as measures to mitigate wind discomfort related to existing structures. This work covers the efficiency evaluation of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch, aligned with the north direction, driving the wind flow within the auditorium, promoting channelling effects and increasing its speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach combined a quantitative method, using a hot-wire anemometer, with a qualitative analysis (visualizations) using laser technology and a fog machine. Both the numerical and physical approaches were performed for three different velocities (2, 4, and 6 m·s⁻¹) and two different directions (nor-northwest and south), corresponding to the prevailing wind speed and direction of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures.
A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from the nor-northwest. For southern winds, the wind speed in the audience zone was reduced by 60% to 80%. Despite that, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).
Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
Procedia PDF Downloads 232
943 Flow Links Curiosity and Creativity: The Mediating Role of Flow
Authors: Nicola S. Schutte, John M. Malouff
Abstract:
Introduction: Curiosity is a positive emotion and motivational state that consists of the desire to know. Curiosity comprises several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated. The phenomenon of flow may link curiosity and creativity. Flow is characterized by intense concentration and absorption and gives rise to optimal performance. Objective of Study: The objective of the present study was to investigate whether the phenomenon of flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men, mean age 35.33, SD = 9.4) participated. Participants were asked to design a program encouraging residents of a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience as they developed and wrote about their program. Participants rated their experience on the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity, and stress tolerance facets of curiosity, and on the Flow Short Scale. Reliability of the measures, as assessed by Cronbach's alpha, was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description for creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73. The mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow.
Pearson r correlations were as follows: Exploration Curiosity and flow, r = .68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r = .39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r = .39. The associations between the dimensions of curiosity and creativity did not reach significance. Even though the direct relationships between dimensions of curiosity and creativity were not significant, the indirect relationships between dimensions of curiosity and creativity, through the mediating effect of the experience of flow, were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta = .23, 95% CI [.02, .25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta = .14, 95% CI [.04, .29] for the indirect effect; and Stress Tolerance Curiosity with creativity, standardized beta = .13, 95% CI [.02, .27] for the indirect effect. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow. More flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships.
Keywords: creativity, curiosity, flow, motivation
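The PROCESS-style indirect effect reported above is the product of the X→M path (a) and the M→Y path controlling for X (b), with a percentile bootstrap confidence interval. A minimal pure-Python sketch on synthetic data; the variable names, effect sizes, and sample size here are illustrative, not the study's:

```python
import random
import statistics

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def b_path(x, m, y):
    """Slope of M in the two-predictor regression Y ~ X + M,
    solved from the centered normal equations."""
    mx, mm, my = statistics.fmean(x), statistics.fmean(m), statistics.fmean(y)
    sxx = sum((v - mx) ** 2 for v in x)
    smm = sum((v - mm) ** 2 for v in m)
    sxm = sum((v - mx) * (w - mm) for v, w in zip(x, m))
    sxy = sum((v - mx) * (w - my) for v, w in zip(x, y))
    smy = sum((v - mm) * (w - my) for v, w in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (sxx * smy - sxm * sxy) / det

def indirect_effect(x, m, y):
    """a*b: (X -> M slope) times (M -> Y slope controlling for X)."""
    return ols_slope(x, m) * b_path(x, m, y)

# Synthetic data with a genuine curiosity -> flow -> creativity path.
random.seed(1)
n = 300
curiosity = [random.gauss(0, 1) for _ in range(n)]
flow = [c + random.gauss(0, 0.5) for c in curiosity]      # a-path near 1
creativity = [f + random.gauss(0, 0.5) for f in flow]     # b-path near 1

ab = indirect_effect(curiosity, flow, creativity)

# Percentile bootstrap CI for the indirect effect (as PROCESS reports).
boots = []
for _ in range(500):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect_effect([curiosity[i] for i in idx],
                                 [flow[i] for i in idx],
                                 [creativity[i] for i in idx]))
boots.sort()
ci = (boots[12], boots[486])  # approximate 95% interval
```

With real data, standardized variables and bias-corrected bootstrap intervals would bring this sketch closer to the PROCESS output.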
Procedia PDF Downloads 183
942 Practicing Inclusion for Hard of Hearing and Deaf Students in Regular Schools in Ethiopia
Authors: Mesfin Abebe Molla
Abstract:
This research aims to examine the practices of inclusion of hard of hearing and deaf students in regular schools. It also focuses on exploring strategies that allow students who are Hard of Hearing or Deaf (HH-D) to benefit optimally from inclusion. A concurrent mixed-methods research design was used to collect quantitative and qualitative data. The instruments used to gather data for this study were a questionnaire, semi-structured interviews, and observations. A total of 102 HH-D students and 42 primary and high school teachers were selected using a simple random sampling technique and served as participants for the quantitative data. A non-probability sampling technique was also employed to select 14 participants (4 school principals, 6 teachers, and 4 parents of HH-D students), who were interviewed to collect qualitative data. Descriptive and inferential statistical techniques (independent-samples t-test, one-way ANOVA, and multiple regression) were employed to analyze the quantitative data. Qualitative data were analyzed thematically. The findings reported that individual principals, teachers, and parents showed strong commitment and effort toward practicing inclusion of HH-D students effectively; however, most of the core values of inclusion were missing in both schools. Most of the teachers (78.6%) and HH-D students (75.5%) had negative attitudes and considerable reservations about the feasibility of inclusion of HH-D students in both schools. Furthermore, there was a statistically significant difference in attitude toward inclusion between the two schools' teachers and between teachers who had and had not taken additional training in inclusive education and sign language. The study also indicated a statistically significant difference in attitude toward inclusion between hard of hearing and deaf students.
However, the overall contribution of the demographic variables of teachers and HH-D students to their attitude toward inclusion is not statistically significant. The findings also showed that HH-D students did not have access to a modified curriculum that would maximize their abilities and help them learn together with their hearing peers. In addition, there is no clear and adequate direction for the medium of instruction. Poor school organization and management; lack of commitment, financial resources, and collaboration; teachers' inadequate training in Inclusive Education (IE) and sign language; large class sizes; inappropriate assessment procedures; lack of trained deaf adult personnel who can serve as role models for HH-D students; and lack of parent and community involvement were some of the major factors affecting the practice of inclusion of HH-D students. Finally, recommendations are made, based on the findings of the study, to improve the practices of inclusion of HH-D students and to make inclusion of HH-D students an integrated part of Ethiopian education.
Keywords: deaf, hard of hearing, inclusion, regular schools
Procedia PDF Downloads 344
941 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand to increase renewable energy production from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, of which around 140 billion metric tons is generated annually worldwide. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0-40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the concentration of the bioadditive that was most favourable for overall process improvement and yield increase. The batch experiments were set up in 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating at mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data to estimate the gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition.
The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of lignin added to the system; as the lignin concentration in mg increased (0, 200, 300, 400, 500, and 600), the respective rate of gas production in mL/gVS·day decreased (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate was 0.1 mL/gVS·day higher than that of the 200 mg batch. The impact of the yeast-based bioadditive on the production rate was most significant at 400 mg and 500 mg, where the rate was improved by 0.1 mL/gVS·day and 0.2 mL/gVS·day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content will be the next step in this research.
Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
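The modified Gompertz equation used for the non-linear regression expresses cumulative biogas as a function of the production potential P, the maximum production rate Rm, and the lag phase λ. A minimal sketch; the parameter values below are illustrative, not the fitted estimates:

```python
import math

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas B(t) = P * exp(-exp(Rm*e/P * (lam - t) + 1)),
    with production potential P (mL/gVS), maximum rate Rm (mL/gVS.day),
    and lag-phase duration lam (days)."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

# Illustrative parameters (not the fitted values): potential 200 mL/gVS,
# rate 2.2 mL/gVS.day (the order of magnitude reported for the 200-300 mg
# batches), 3-day lag; evaluated over the 56-day retention time.
curve = [modified_gompertz(t, 200.0, 2.2, 3.0) for t in range(57)]
```

Fitting P, Rm, and λ to each batch's cumulative biogas curve by least squares yields the per-concentration rate and lag-phase comparisons reported above.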
Procedia PDF Downloads 92
940 Integrated Approach to Attenuate Insulin Amyloidosis: Synergistic Effects of Peptide and Cysteine Protease Enzymes
Authors: Shilpa Mukundaraj, Nagaraju Shivaiah
Abstract:
Amyloidogenic conditions, driven by protein aggregation into insoluble fibrils, pose significant challenges in diabetes management, particularly through the amyloidogenic LVEALYL sequence in the insulin B-chain. This study explores a dual therapeutic strategy involving cysteine protease enzymes and inhibitory peptides to target insulin amyloidosis. Combining in silico, in vitro, and in vivo methodologies, the research aims to inhibit amyloid formation and degrade preformed fibrils. Inhibitory peptides were designed using structure-guided approaches in Rosetta to specifically target the LVEALYL sequence. Concurrently, cysteine protease enzymes, including papain and ficin, were evaluated for their fibril disassembly potential. In vitro experiments utilizing SDS-PAGE and spectroscopic techniques confirmed dose-dependent degradation of amyloid aggregates by these enzymes, with significant disaggregation observed at higher concentrations. Peptide inhibitors effectively reduced fibril formation, as evidenced by reduced Thioflavin T fluorescence and circular dichroism spectroscopy. Complementary in silico analyses, including molecular docking and dynamic simulations, provided structural insights into enzyme binding interactions with amyloidogenic regions. Key residues involved in substrate recognition and cleavage were identified, with computational findings aligning strongly with experimental data. These insights confirmed the specificity of papain and ficin in targeting insulin fibrils. For translational potential, an in vivo rat model was developed involving subcutaneous insulin amyloid injections to induce localized amyloid deposits. Over six days of enzyme treatment, a marked reduction in amyloid burden was observed through histological and biochemical assays. Furthermore, inflammatory markers were significantly attenuated in treated groups, emphasizing the dual role of enzymes in amyloid clearance and inflammation modulation.
This integrative study highlights the promise of cysteine protease enzymes and inhibitory peptides as complementary therapeutic strategies for managing insulin amyloidosis. By targeting both the formation and persistence of amyloid fibrils, this dual approach offers a novel and effective avenue for amyloidosis treatment.
Keywords: insulin amyloidosis, peptide inhibitors, cysteine protease enzymes, amyloid degradation
Procedia PDF Downloads 0
939 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico
Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba
Abstract:
In the last twenty years, academic and scientific literature has focused on understanding the processes and factors of coastal social-ecological systems' vulnerability and resilience. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework for dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery, and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability of households' capitals diminishing the damage, and exposure sets the thresholds for the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability of households' capitals recovering faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of households adopting adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with absorptive and external response capacity elements absorbed the impact of floods better than households without these elements. At the recovery phase, households with absorptive and external response capacity showed faster recovery of their capitals; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, so the probability of recovering the household finances over a longer time increases.
At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of suffering less damage to physical capital; however, its effect is small. In conclusion, resilience is both an outcome and a core of attributes that interacts with vulnerability along the adaptive cycle and the disaster process phases. Absorptive capacity can diminish the damage experienced from floods; however, when exposure overcomes certain thresholds, both absorptive and external response capacity are not enough. In the same way, absorptive and external response capacity diminish the recovery time of capitals, but the damage sets the thresholds beyond which households are not capable of recovering their capitals.
Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems
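The ordinal logistic (proportional-odds) model described above assigns each household a probability for every ordered outcome level from cumulative logits that share one coefficient vector. A minimal sketch with hypothetical coefficients and thresholds, not the study's estimates:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, betas, thresholds):
    """Proportional-odds model: P(Y <= j) = logistic(theta_j - x.beta)
    for ordered categories j = 1..K-1; category probabilities are the
    successive differences of these cumulative probabilities."""
    eta = sum(b * xi for b, xi in zip(betas, x))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical predictors: exposure, absorptive capacity, external response.
betas = [1.2, -0.8, -0.5]       # exposure raises damage; capacities lower it
thresholds = [-1.0, 0.5, 2.0]   # three cut-points -> four ordered damage levels

low_exposure = ordinal_probs([0.2, 1.0, 1.0], betas, thresholds)
high_exposure = ordinal_probs([2.0, 0.0, 0.0], betas, thresholds)
```

With these illustrative values, higher exposure shifts probability mass toward the worst damage category, while absorptive and external response capacities shift it back, mirroring the qualitative findings above.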
Procedia PDF Downloads 134
938 A New Measurement for Assessing Constructivist Learning Features in Higher Education: Lifelong Learning in Applied Fields (LLAF) Tempus Project
Authors: Dorit Alt, Nirit Raichel
Abstract:
Although university teaching is claimed to have a special task in supporting students to adopt ways of thinking and of producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward the passive acquisition of knowledge from authority sources rather than toward collaborative inquiry activities. This form of instruction is criticized for encouraging students to acquire inert knowledge that can be used in instructional settings at best, but cannot be transferred to complex real-life problem settings. In order to overcome this critical mismatch between current educational goals and instructional methods, the LLAF consortium (comprising 16 members from 8 countries) aims to develop updated instructional practices that put a premium on adaptability to the emerging requirements of present society. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teacher education and to the project's contribution in providing a scale designed to measure the extent to which constructivist activities are efficiently applied in the learning environment. A mixed-methods approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis involving both deductive and inductive category application to students' observations. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction, and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM).
The scale was submitted to 597 undergraduate students. The goodness of fit of the data to the structural model yielded sufficient fit results. This research extends the body of literature by adding a category of in-depth learning, which emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinctive factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed.
Keywords: constructivist learning, higher education, mixed methodology, structural equation modeling
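The abstract reports that the structural model "yielded sufficient fit results" without naming indices. As a minimal illustration, the two approximate-fit indices most commonly reported alongside SEM/CFA chi-square tests can be computed as follows; the chi-square and degree-of-freedom values below are invented placeholders, not the study's estimates.

```python
# Hedged sketch: RMSEA and CFI from chi-square statistics.
# All numeric inputs are hypothetical, chosen only to show the arithmetic.
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the baseline (independence) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base

# Invented example values for a 36-item model with N = 597 respondents
fit_rmsea = rmsea(chi2=1450.0, df=588, n=597)
fit_cfi = cfi(chi2=1450.0, df=588, chi2_base=9000.0, df_base=630)
```

By convention, RMSEA below about .06 and CFI above about .90 are read as adequate fit, which is the kind of threshold a claim of "sufficient fit" implies.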
Procedia PDF Downloads 315
937 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbon from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including the reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. 
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance the oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operation, which leads to better designs, higher oil recovery, and economic return of future wells in the unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
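The two unsupervised steps the abstract names, K-means partitioning of wells and PCA for pattern extraction, can be sketched with numpy alone. The "well features" below are random stand-ins for the real geological/completion database, and the deterministic initialization is a simplification for the sketch, not the authors' procedure.

```python
# Minimal numpy-only sketch of K-means clustering and PCA on a toy well dataset.
import numpy as np

rng = np.random.default_rng(0)
# Toy feature matrix: 60 wells x 4 features (stand-ins for e.g. stage count,
# proppant mass, initial rate, depth); two synthetic groups of wells.
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(5, 1, (30, 4))])

def kmeans(X, k, iters=20):
    # Simple deterministic initialization for this sketch: spread-out rows
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # Assign each well to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)

# PCA via SVD of the centered data: components that emphasize variation
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()   # variance ratio per component
scores = Xc @ Vt[:2].T                # project wells onto first two components
```

For two well-separated groups of wells, the first principal component captures most of the variance and the K-means labels recover the grouping, which is the "easy to explore and visualize" property the abstract refers to.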
Procedia PDF Downloads 285
936 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City
Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao
Abstract:
China is currently experiencing rapid urban rail transit expansion. The purpose of this study is to finely model factors influencing transit ridership at multiple time dimensions within transit stations' pedestrian catchment areas (PCA) in Guangzhou, China. This study was based on multi-source spatial data, including smart card data, high spatial resolution images, points of interest (POIs), real-estate online data, and building height data. Eight multiple linear regression models using the backward stepwise method and Geographic Information System (GIS) were created at station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villa), second-level (e.g., community), and third-level (e.g., urban villages). Finally, the study concluded that: (1) Four factors (CBD dummy, number of feeder bus routes, number of entrances or exits, and the years of station operation) were proved to be positively correlated with transit ridership, but the areas of green land use and water land use were negatively correlated instead. (2) The areas of education land use and the second-level and third-level residential land use were found to be highly connected to the average value of morning peak boarding and evening peak alighting ridership, while the area of commercial land use and the average height of buildings were significantly positively associated with the average value of morning peak alighting and evening peak boarding ridership. (3) The area of the second-level residential land use was rarely correlated with ridership in the other regression models, because private car ownership is still large in Guangzhou: some residents living in the communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times. 
The area of the third-level residential land use, like urban villages, was highly positively correlated with ridership in all models, indicating that residents who live in the third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow on weekends, but was unrelated to it on weekdays. The findings can be useful for station planning, management, and policymaking.
Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-source spatial data, transit ridership
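The backward stepwise regression the study describes can be sketched in numpy: fit the full model, drop the least significant predictor, and repeat until every remaining term is significant. The station features and data below are synthetic stand-ins, not the Guangzhou dataset, and the |t| >= 2 cutoff is a conventional placeholder.

```python
# Illustrative numpy-only backward stepwise selection for a station-level
# ridership regression on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
names = ["CBD", "feeder_bus", "entrances", "years_open", "green_area"]
X = rng.normal(size=(n, 5))
# Only the first four predictors truly drive ridership in this toy example
y = 2 * X[:, 0] + 1.5 * X[:, 1] + X[:, 2] + 0.8 * X[:, 3] + rng.normal(size=n)

def backward_stepwise(X, y, names, t_min=2.0):
    keep = list(range(X.shape[1]))
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        sigma2 = resid @ resid / (len(y) - A.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(A.T @ A)))
        t = np.abs(beta[1:] / se[1:])        # skip the intercept
        worst = int(np.argmin(t))
        if t[worst] >= t_min:                # all remaining terms significant
            break
        keep.pop(worst)                      # drop the weakest predictor
    return [names[i] for i in keep]

selected = backward_stepwise(X, y, names)
```

In the study's setting, eight such models (one per time dimension) would each retain a different subset of land-use and station-attribute predictors.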
Procedia PDF Downloads 142
935 Epigenetic and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas to address some of the gaps in our current knowledge in understanding patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variations across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, which can only explain a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical). 
Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector to study human beings in several contexts (social, cultural, and environmental), and strengthen its essential role in model systems that can be used to investigate and reconstruct various cultural, political, and economic events. We also address all steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as discussing the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essentiality as an interdisciplinary vector and a key material in archeological research.
Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 108
934 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor
Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang
Abstract:
To avoid the cross-talk issue of resistive-random-access-memory-only (RRAM-only) cell arrays, a one-transistor one-resistor (1T1R) architecture with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT) device is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate were synthesized using the sol-gel method. After the BZN solution was completely prepared using the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated on the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. As for the fabrication of the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure, consisting of Al/TiO₂/Au, was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention. 
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, through the different distributions of the internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon can also be well explained by the proposed mechanism model. These results make the 1T1R promising for practical applications in low-power active matrix flat-panel displays.
Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel
Procedia PDF Downloads 354
933 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied, leading to what is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at a large scale. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy", capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 90
932 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing
Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl
Abstract:
Field data collection is considered one of the most difficult tasks due to the difficulty of accessing large zones such as large lakes. Also, it is well known that the cost of obtaining field data is very high. Remote monitoring of lake water quality (WQ) provides an economically feasible approach compared to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique to measure quality parameters across huge areas. Landsat (LS) data provides full free access to frequently acquired and repeated satellite imagery. This enables researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) band surface reflectance (SR) observations to predict water-quality characteristics, limited to turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water-quality indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios, and/or combinations. Field measurements taken in the month of May were used to validate the WQPs obtained from SR data of the L-8 Operational Land Imager (OLI) satellite. 
The findings demonstrate a strong correlation between indicators of WQ and L-8 SR. For TUR, the best validation correlation was derived with the OLI SR bands blue, green, and red, with a coefficient of determination (R²) and root mean square error (RMSE) of 0.96 and 3.1 NTU, respectively. For TSS, two equations were strongly correlated and verified with band ratios and combinations. A logarithm of the ratio of blue and green SR was determined to be the best performing model, with values of R² and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for calculating its value within the study area. A mix of blue, red, shortwave infrared 1 (SWIR1), and panchromatic SR yielded the greatest validation results, with values of R² and RMSE equal to 0.98 and 1.4 mg/l, respectively.
Keywords: remote sensing, landsat 8, nasser lake, water quality
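The best-performing TSS model described above, TSS regressed on the logarithm of the blue/green surface-reflectance ratio, can be sketched in a few lines. The reflectance and TSS values below are fabricated for illustration; in practice the calibration coefficients would come from the April field campaign and be validated against the May measurements.

```python
# Hedged sketch: calibrating TSS against log(blue/green) band-ratio reflectance
# on synthetic data, then computing the R² and RMSE diagnostics the study uses.
import numpy as np

rng = np.random.default_rng(2)
blue = rng.uniform(0.02, 0.10, 40)    # toy Landsat-8 band 2 surface reflectance
green = rng.uniform(0.03, 0.12, 40)   # toy Landsat-8 band 3 surface reflectance
x = np.log(blue / green)
# Synthetic "field" TSS in mg/l, generated from an assumed linear relation
tss_obs = 12.0 + 8.0 * x + rng.normal(0, 0.5, 40)

slope, intercept = np.polyfit(x, tss_obs, 1)   # calibrate the regression model
tss_pred = intercept + slope * x
rmse = np.sqrt(np.mean((tss_pred - tss_obs) ** 2))
r2 = 1 - np.sum((tss_obs - tss_pred) ** 2) / np.sum((tss_obs - tss_obs.mean()) ** 2)
```

The same pattern, swapping in other bands, ratios, or combinations as predictors, covers the TUR and Chl-a models reported in the abstract.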
Procedia PDF Downloads 93
931 The Effectiveness of a Six-Week Yoga Intervention on Body Awareness, Warnings of Relapse, and Emotion Regulation among Incarcerated Females
Authors: James Beauchemin
Abstract:
Introduction: The incarceration of people with mental illness and substance use disorders is a major public health issue, with social, clinical, and economic implications. Yoga participation has been associated with numerous psychological benefits; however, there is a paucity of research examining the impacts of yoga with incarcerated populations. The purpose of this study was to evaluate the effectiveness of a six-week yoga intervention on several mental health-related variables, including emotion regulation, body awareness, and warnings of substance relapse among incarcerated females. Methods: This study utilized a pre-post, three-arm design, with participants assigned to intervention, therapeutic community, or general population groups. A between-groups analysis of covariance (ANCOVA) was conducted to assess intervention effectiveness using the Difficulties in Emotion Regulation Scale (DERS), Scale of Body Connection (SBC), and Warnings of Relapse (AWARE) Questionnaire. Results: ANCOVA results for warnings of relapse (AWARE) revealed significant between-group differences (F(2, 80) = 7.15, p = .001; ηp² = .152), with significant pairwise comparisons between the intervention group and both the therapeutic community (p = .001) and the general population (p = .005) groups. Similarly, significant differences were found for emotion regulation (DERS) (F(2, 83) = 10.521, p < .001; ηp² = .278). Pairwise comparisons indicated a significant difference between the intervention and general population groups (p = .01). Finally, significant differences between the intervention and control groups were found for body awareness (SBC) (F(2, 84) = 3.69, p = .029; ηp² = .081). Between-group differences were clarified via pairwise comparisons, indicating significant differences between the intervention group and both the therapeutic community (p = .028) and general population (p = .020) groups. 
Implications: Study results suggest that yoga may be an effective addition to integrative mental health and substance use treatment for incarcerated women, and they contribute to increasing evidence that holistic interventions may be an important component of treatment for this population. Specifically, given the prevalence of mental health and substance use disorders, findings revealed that the changes in body awareness and emotion regulation resulting from yoga participation may be particularly beneficial for incarcerated populations with substance use challenges. From a systemic perspective, this proactive approach may have long-term implications for both physical and psychological well-being for the incarcerated population as a whole, thereby decreasing the need for traditional treatment. By integrating a more holistic, salutogenic model that emphasizes prevention, interventions like yoga may work to improve the wellness of this population while providing an alternative or complementary treatment option for those with current symptoms.
Keywords: yoga, mental health, incarceration, wellness
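The between-groups ANCOVA reported above compares post-test scores across the three arms while adjusting for a covariate. A minimal numpy-only sketch of that F-test follows; the group sizes, baseline covariate, and intervention effect are synthetic placeholders, not the study's data.

```python
# Hedged sketch of a one-way ANCOVA F-test: three groups, one covariate,
# implemented as a full-vs-reduced ordinary least squares comparison.
import numpy as np

rng = np.random.default_rng(3)
n_per = 30
group = np.repeat([0, 1, 2], n_per)              # 0 = intervention (assumed)
baseline = rng.normal(50, 10, 3 * n_per)         # covariate, e.g. pre-test score
effect = np.array([-8.0, 0.0, 0.0])[group]       # assumed intervention effect
post = 0.6 * baseline + effect + rng.normal(0, 5, 3 * n_per)

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

ones = np.ones_like(post)
dummies = np.column_stack([(group == 1), (group == 2)]).astype(float)
full = np.column_stack([ones, baseline, dummies])   # covariate + group effect
reduced = np.column_stack([ones, baseline])         # covariate only

df1 = 2                                  # group dummies being tested
df2 = len(post) - full.shape[1]          # residual degrees of freedom
F = ((rss(reduced, post) - rss(full, post)) / df1) / (rss(full, post) / df2)
```

A large F relative to the F(df1, df2) critical value corresponds to the significant between-group differences the study reports; pairwise comparisons would then localize which groups differ.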
Procedia PDF Downloads 140
930 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis
Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch
Abstract:
Although distortion of taste perception following a cerebrovascular event may seem to be a frivolous consequence of a classic stroke presentation, altered taste perception places patients at an increased risk for malnutrition, weight loss, and depression, all of which negatively impact the quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions in various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to impact taste; however, specific sweet-to-bitter taste dysgeusia from a territory infarction is an infrequent event; as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a right posterior inferior cerebellar artery aneurysm rupture with resultant infarction, culminating in a ventriculoperitoneal shunt placement. One and a half months after this event, she noticed the gradual onset of an inability to taste sweet, eventually progressing to all sweet food tasting bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid, and normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex is formed in part by the right insular cortex as well as the right anterior operculum, which are primarily involved in the sensory taste modalities. In this model, sweet is localized to the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests the presence of a lesion at this localization. 
Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral and posterior-most aspect of the right insular cortex may have occurred due to diaschisis. Diaschisis has been described as neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to aneurysmal rupture may explain the distal foci of impairment, the gradual onset of dysgeusia is more indicative of diaschisis. The perception of sweet now tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being stimulated rather than sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile. Further investigation is warranted.
Keywords: diaschisis, dysgeusia, stroke, taste
Procedia PDF Downloads 181
929 A Brief Review on the Relationship between Pain and Sociology
Authors: Hanieh Sakha, Nader Nader, Haleh Farzin
Abstract:
Introduction: Throughout history, pain theories have been proposed by biomedicine, especially regarding its diagnosis and treatment aspects. However, the feeling of pain is not only a personal experience but is also affected by social background; therefore, it involves extensive systems of signals. The challenges in the emotional and sentimental dimensions of pain originate from scientific medicine (i.e., the dominant theory, also referred to as the specificity theory); however, this theory accepted some alterations with the emergence of physiology. Then, fifty years after the specificity theory, Von Frey suggested the theory of cutaneous senses (i.e., Muller's concept: the common sensation of four combined major skin receptors leading to a proper sensation). The pain pathway was composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various reasons and qualities. Despite the gate control theory, the biological aspect overcomes the social aspect. Vrancken provided a more extensive definition of pain and identified five approaches: the somatico-technical, dualistic body-oriented, behaviorist, phenomenological, and consciousness approaches. The Western model combined the physical, emotional, and existential aspects of the human body. On the other hand, Kotarba expressed confusion about the basic origins of chronic pain. Freund engaged with the Durkheimian sociological approach to emotions. Lynch provided evidence of the correlation between cardiovascular disease and emotionally life-threatening occurrences. Helman supposed a distinction between private and public pain. Conclusion: Consideration of the emotional aspect of pain could lead to effective emotional and social responses to pain. By contrast, the theory of embodiment is based on the sociological view of health and illness. 
Social epidemiology shows an imbalanced distribution of health, illness, and disability among various social groups. Social support and socio-cultural level can result in several types of pain; for instance, the status of athletes might define their pain experiences. Gender is one of the important contributing factors affecting the type of pain (i.e., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, one aggravated by a lack of awareness about chronic pain management among the general population.
Keywords: pain, sociology, sociological, body
Procedia PDF Downloads 71
928 Immuno-Protective Role of Mucosal Delivery of Lactococcus lactis Expressing Functionally Active JlpA Protein on Campylobacter jejuni Colonization in Chickens
Authors: Ankita Singh, Chandan Gorain, Amirul I. Mallick
Abstract:
Successful adherence to mucosal epithelial cells is the key early step for Campylobacter jejuni (C. jejuni) pathogenesis. A set of surface-exposed colonization proteins (SECPs) are among the major factors involved in host cell adherence and invasion by C. jejuni. Among them, the constitutively expressed surface-exposed lipoprotein adhesin of C. jejuni, JlpA, interacts with intestinal heat shock protein 90 (hsp90α) and contributes to disease progression by triggering a pro-inflammatory response via activation of the NF-κB and p38 MAP kinase pathways. Together with its surface expression, higher sequence conservation, and the predicted predominance of several B-cell epitopes, the JlpA protein retains its potential to become an effective vaccine candidate against a wide range of Campylobacter species, including C. jejuni. Given that chickens are the primary source of C. jejuni and persistent gut colonization remains a major cause of foodborne pathogenesis in humans, the present study explicitly used chickens as a model to test the immuno-protective efficacy of the JlpA protein. Taking into account that the gastrointestinal tract is the focal site of C. jejuni colonization, to extrapolate the benefit of mucosal (intragastric) delivery of JlpA protein, a food-grade, nisin-inducible lactic acid bacterium, Lactococcus lactis (L. lactis), was engineered to express recombinant JlpA protein (rJlpA) on the surface of the bacteria. Following evaluation of optimal surface expression and functionality of the recombinant JlpA protein expressed by recombinant L. lactis (rL. lactis), the immuno-protective role of intragastric administration of live rL. lactis was assessed in commercial broiler chickens. In addition to the significant elevation of antigen-specific mucosal immune responses in the intestine of chickens that received three doses of rL. lactis, marked upregulation of Toll-like receptor 2 (TLR2) gene expression in association with mixed pro-inflammatory responses (both Th1 and Th17 type) was observed. Furthermore, intragastric delivery of rJlpA expressed by rL. lactis, but not the injectable form, resulted in a significant reduction in C. jejuni colonization in chickens, suggesting that mucosal delivery of live rL. lactis expressing JlpA serves as a promising vaccine platform to induce strong immuno-protective responses against C. jejuni in chickens.
Keywords: chickens, lipoprotein adhesin of Campylobacter jejuni, immuno-protection, Lactococcus lactis, mucosal delivery
Procedia PDF Downloads 140
927 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry
Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual basis for learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard to track and compare learning processes over time for both teachers and learners. A qualitative approach typically relies on open-ended responses, which can be time-consuming and resource-intensive. With these concerns, the present research aimed to develop and validate a quantifiable measure for the Map. Specifically, the Map of Dialogical Inquiry reflects the eight different learning processes and perspectives employed during a learner's experience. With a focus on interpersonal and emotional learning processes, the purpose of the present study is to construct and validate a scale to measure the "Relating" aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and liking stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review with the goal of identifying relevant scales in the literature. These scales were used as a basis for item development, guided by the four conceptual dimensions of the "Relating" aspect of learning, resulting in a pool of 47 preliminary items. Then, all items were administered to 200 American participants via an online survey, along with other scales of learning. The dimensionality, reliability, and validity of the "Relating" scale were assessed. 
Data were submitted to a confirmatory factor analysis (CFA), which revealed four distinct components. Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. The CFA also confirmed that the “Relating” scale follows a four-factor model, matching its four distinct components as described in the Map of Dialogical Inquiry. In sum, this research developed a quantitative scale for the “Relating” aspect of the Map of Dialogical Inquiry. By representing learning as numbers, users such as educators and learners can better track, evaluate, and compare learning processes over time in an efficient manner. More broadly, this scale may also be used as a learning tool in lifelong learning.
Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure
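The iterative item-trimming step described above can be sketched in a few lines. This is a hypothetical illustration only: the loadings, item names (`rel_01` etc.), and the 0.40 cutoff are all invented for demonstration, and a real CFA would re-estimate loadings after each removal rather than reuse a fixed table.

```python
# Hypothetical sketch of iterative item removal by factor loading.
# Loadings, item names, and the 0.40 cutoff are invented for illustration;
# a real CFA re-estimates loadings after each removal.

def trim_items(loadings, cutoff=0.40):
    """Repeatedly drop the weakest-loading item until all survive the cutoff."""
    kept = dict(loadings)
    while kept:
        item, weakest = min(kept.items(), key=lambda kv: abs(kv[1]))
        if abs(weakest) >= cutoff:
            break
        del kept[item]  # drop the weakest item, then re-check the rest
    return sorted(kept)

pool = {"rel_01": 0.72, "rel_02": 0.31, "rel_03": 0.55,
        "rel_04": 0.38, "rel_05": 0.66}
print(trim_items(pool))  # rel_02 and rel_04 fall below the 0.40 cutoff
```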
Procedia PDF Downloads 143
926 Aesthetics and Semiotics in Theatre Performance
Authors: Păcurar Diana Istina
Abstract:
Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, understood correctly through the emotions generated in the intimate structure of the spectator, which precede the triggering of the viewer’s perception, and not through the common but unfortunate conflation of the notion of aesthetics with the style in which a theater show is built. The first chapter contains a brief history of the appearance of the word aesthetic, the formulation of definitions for this new term, as well as its connections with the notions of semiotics, in particular with the perception of the message transmitted. From Aristotle and Plato through to Magritte, these interventions should not be interpreted to mean that the two scientific concepts can merge into one discipline. The perception that is the object of everyone’s analysis, the understanding of meaning, the decoding of the messages sent, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theater, more specifically in the theater show. Taking the postmodern approach that aesthetics applies both to the creation of an artifact and to the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized: that is, the analysis of the problems arising in the stages of the creation, presentation, and reception, by the public, of the theater performance.
The aesthetic process is triggered involuntarily, either simultaneously with or before the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty may be perceived, its mechanism of production can be reduced to two steps. The first step presents similarities to Peirce’s model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. The second step is a process of comparison, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics derives from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.
Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified
Procedia PDF Downloads 110
925 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System
Authors: Nareshkumar Harale, B. B. Meshram
Abstract:
The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. The network trust architecture has evolved from trust-untrust to Zero Trust, in which essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of many major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols, and security protocols. It thereby forms the basis for detecting attack classes, applying signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events.
The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show that our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.
Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design
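The association-rule-mining step can be illustrated with a minimal sketch. The event names (`syn`, `port_scan`, `login_fail`) and the support/confidence thresholds below are invented placeholders, not the paper's actual rule set; production systems mine far larger event alphabets with Apriori or FP-Growth rather than the brute-force pair scan shown here.

```python
from itertools import combinations

# Minimal sketch of mining association rules from audit-trail events.
# Event names and thresholds are invented for illustration only.

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n  # subset count
    items = {i for t in transactions for i in t}
    rules = []
    for a, b in combinations(sorted(items), 2):
        pair = {a, b}
        if support(pair) >= min_support:
            conf = support(pair) / support({a})  # confidence of a -> b
            if conf >= min_confidence:
                rules.append((a, b, round(conf, 2)))
    return rules

audit = [{"syn", "port_scan"}, {"syn", "port_scan"},
         {"syn", "port_scan", "login_fail"}, {"syn"}]
print(mine_rules(audit))  # port_scan -> syn holds with confidence 1.0
```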
Procedia PDF Downloads 228
924 The Moderating Role of Test Anxiety in the Relationships Between Self-Efficacy, Engagement, and Academic Achievement in College Math Courses
Authors: Yuqing Zou, Chunrui Zou, Yichong Cao
Abstract:
Previous research has revealed relationships between self-efficacy (SE), engagement, and academic achievement among students in Western countries, but these relationships remain unknown in college math courses among college students in China. In addition, previous research has shown that test anxiety has a direct effect on engagement and academic achievement. However, how test anxiety affects the relationships between SE, engagement, and academic achievement is still unknown. In this study, the authors aimed to explore the mediating roles of behavioral engagement (BE), emotional engagement (EE), and cognitive engagement (CE) in the association between SE and academic achievement, and the moderating role of test anxiety, in college math courses. Our hypotheses were that the association between SE and academic achievement is mediated by engagement and that test anxiety plays a moderating role in the association. To explore the research questions, the authors collected data through self-reported surveys among 147 students at a northwestern university in China. The motivated strategies for learning questionnaire (MSLQ) (Pintrich, 1991), the metacognitive strategies questionnaire (Wolters, 2004), and the engagement versus disaffection with learning scale (Skinner et al., 2008) were used to assess SE, CE, and BE and EE, respectively. R software was used to analyze the data. The main analyses were reliability and validity analysis of the scales, descriptive statistics of the measured variables, correlation analysis, regression analysis, and structural equation modeling (SEM) with moderated mediation analysis to examine the structural relationships between variables simultaneously. The SEM analysis indicated that student SE was positively related to BE, EE, CE, and academic achievement. BE, EE, and CE were all positively associated with academic achievement.
That is, as the authors expected, higher levels of SE led to higher levels of BE, EE, and CE, and greater academic achievement, and higher levels of BE, EE, and CE led to greater academic achievement. In addition, the moderated mediation analysis found that the path from SE to academic achievement in the model was significant, as expected, as was the moderating effect of test anxiety on the SE-achievement association. Specifically, test anxiety was found to moderate the association between SE and BE, the association between SE and CE, and the association between EE and achievement. The authors investigated possible mediating effects of BE, EE, and CE in the associations between SE and academic achievement, and all indirect effects were found to be significant. As for the magnitude of the mediations, behavioral engagement was the most important mediator in the SE-achievement association. This study has implications for college teachers, educators, and students in China regarding ways to promote academic achievement in college math courses, including increasing self-efficacy and engagement and lessening test anxiety toward math.
Keywords: academic engagement, self-efficacy, test anxiety, academic achievement, college math courses, behavioral engagement, cognitive engagement, emotional engagement
Procedia PDF Downloads 94
923 Criteria to Access Justice in Remote Criminal Trial Implementation
Authors: Inga Žukovaitė
Abstract:
This work aims to present postdoctoral research on remote criminal proceedings in court, with a view to streamlining proceedings while ensuring the effective participation of the parties and the court's obligation to administer substantive and procedural justice. The study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the parties' right to effective legal remedies and a fair trial, and only then address the issues of procedural economy, speed, and flexibility/functionality in the application of technologies. To ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research will answer the questions of what conditions (first of all legal, and only then organisational) are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give the party the correct impression that they are heard and that the court is impartial and fair. It also presents the results of empirical research in the courts of Lithuania conducted using the interview method. The research will serve as a basis for developing a theoretical model of remote criminal proceedings in the EU that ensures a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants in proceedings to just and fair criminal proceedings. Moreover, developments in criminal proceedings also keep changing the image of the court itself; the paper therefore lays preconditions for future research on the impact of remote criminal proceedings on trust in courts.
The study aims at laying down the fundamentals for theoretical models of remote hearings in criminal proceedings and at making recommendations for the safeguarding of human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant to the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means. 2. An analysis of the legal regulation and practice concerning the application of the elements of remote criminal proceedings, distinguishing the main legal safeguards for protection of the rights of the accused to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with the defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.
Keywords: remote criminal proceedings, fair trial, right to defence, technology progress
Procedia PDF Downloads 73
922 Prediction of Sound Transmission Through Framed Façade Systems
Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul
Abstract:
With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by mitigating noise transmission from the building envelope exterior to the interior. Framed window, door, and façade systems are the leading choice for modern fenestration construction, offering proven weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both glass and frames. Glass usually covers the majority of the exposed surface and is thus the main path of sound energy transmission. Frames in modern façade systems are becoming slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because much less mass lies across the path, so they often become the limiting factor for the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast variance in size and dimension between frame and glass in the same system, current practice offers no satisfactory theoretical approach or affordable simulation tool for assessing the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is very time-consuming and highly costly; moreover, different labs may provide slightly different results because of variations in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems.
To address this dilemma, this study provides an effective analytical methodology for predicting the acoustic performance of framed façade systems, based on a vast amount of acoustic test results on glass, frames, and whole façade systems consisting of both. Further test results confirm that the current model can accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was developed mainly from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC, or wood.
Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system
Procedia PDF Downloads 61
921 Using Chatbots to Create Situational Content for Coursework
Authors: B. Bricklin Zeff
Abstract:
This research explores the development and application of a specialized chatbot tailored for a nursing English course, with a primary objective of augmenting student engagement through situational content and responsiveness to key expressions and vocabulary. Introducing the chatbot, elucidating its purpose, and outlining its functionality are crucial initial steps in the research study, as they provide a comprehensive foundation for understanding the design and objectives of the specialized chatbot developed for the nursing English course. These elements establish the context for subsequent evaluations and analyses, enabling a nuanced exploration of the chatbot's impact on student engagement and language learning within the nursing education domain. The subsequent exploration of the intricate language model development process underscores the fusion of scientific methodologies and artistic considerations in this application of artificial intelligence (AI). Tailored for educators and curriculum developers in nursing, practical principles extending beyond AI and education are considered. Some insights into leveraging technology for enhanced language learning in specialized fields are addressed, with potential applications of similar chatbots in other professional English courses. The overarching vision is to illuminate how AI can transform language learning, rendering it more interactive and contextually relevant. The presented chatbot is a tangible example, equipping educators with a practical tool to enhance their teaching practices. Methodologies employed in this research encompass surveys and discussions to gather feedback on the chatbot's usability, effectiveness, and potential improvements. The chatbot system was integrated into a nursing English course, facilitating the collection of valuable feedback from participants. 
Significant findings from the study underscore the chatbot's effectiveness in encouraging more verbal practice of the target expressions and vocabulary needed for role-play assessment strategies. This outcome emphasizes the practical implications of integrating AI into language education in specialized fields. This research holds significance for educators and curriculum developers in the nursing field, offering insights into integrating technology for enhanced English language learning. The study's major findings contribute valuable perspectives on the practical impact of the chatbot on student interaction and verbal practice. Ultimately, the research sheds light on the transformative potential of AI in making language learning more interactive and contextually relevant, particularly within specialized domains like nursing.
Keywords: chatbot, nursing, pragmatics, role-play, AI
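The "responsiveness to key expressions and vocabulary" described above can be sketched at its simplest as keyword-triggered situational replies. The phrases and responses below are invented placeholders, not the course content or the language model the study actually built, which would use far richer matching than substring lookup.

```python
# Stripped-down sketch of situational responsiveness: the bot watches for
# key nursing expressions and replies with a scenario-appropriate prompt.
# Keywords and replies are invented placeholders for illustration.

RESPONSES = {
    "pain": "Can you tell me where it hurts and rate the pain from 1 to 10?",
    "medication": "When did you last take your medication?",
    "allergy": "Are you allergic to any medicines or foods?",
}

def reply(utterance):
    lowered = utterance.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return "I see. Could you tell me more about that?"  # generic fallback

print(reply("The patient says she is in pain."))
```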
Procedia PDF Downloads 65
920 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon, and the polygon is obviously included in its own locus. Using these properties in a VD construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is performed via a reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm then constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm over the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges that lie outside the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time, not in logarithmic time as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm is also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
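The event-list machinery a Fortune-style sweepline relies on can be sketched with a priority queue ordered by sweepline coordinate. This skeleton is purely illustrative: the event coordinates are invented, real circle events are computed from triples of sites rather than pushed by hand, and the beach-line maintenance and the constant-time handling of "medium"-vertex events described in the abstract are omitted entirely.

```python
import heapq

# Skeleton of the sweepline event queue: site and circle events are popped
# in order of sweepline coordinate. Coordinates here are invented; a real
# implementation derives circle events from triples of sites and maintains
# a beach line alongside this queue.

events = []  # min-heap keyed on sweepline coordinate
heapq.heappush(events, (1.0, "site", "A"))
heapq.heappush(events, (2.5, "circle", ("A", "B", "C")))
heapq.heappush(events, (0.5, "site", "B"))
heapq.heappush(events, (1.8, "site", "C"))

order = []
while events:
    x, kind, payload = heapq.heappop(events)  # always the leftmost event
    order.append((x, kind))

print(order)
```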
Procedia PDF Downloads 177
919 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. 
The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes
Procedia PDF Downloads 43