Search results for: high-performance cycle model application
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24488


758 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a great deal of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text. The mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters as an estimate of visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading of the text in commercials. Eye movement parameters reflected the difficulties respondents experienced in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of a commercial's text on a percentage scale based on all the markers noted.
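As a minimal sketch of the kind of correlation analysis this abstract describes (the respondent data, and the choice of fixation duration as the eye-movement parameter, are invented for illustration, not taken from the study), the relationship between one psychophysiological parameter and comprehension could be quantified as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-respondent data: mean fixation duration (ms) on an
# ambiguous text region and a comprehension score (0-100).
fixation_ms = [210, 260, 305, 340, 390, 420]
comprehension = [88, 80, 71, 60, 55, 42]

r = pearson_r(fixation_ms, comprehension)
```

In this invented sample the correlation is strongly negative; the actual signs and magnitudes of the interrelations are not given in the abstract.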

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
757 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify 'bottlenecks' in the adoption of agricultural innovation. The decision-making process is a multistage procedure in which an individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment in which an individual learns about the existence of the technology; the 'static' concept of adoption has thus been superseded, and awareness is a precondition to adoption. Taking this condition into account avoids the erroneous evaluations that arise from analyzing a population that is only in part aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers that considers awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adopting and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis showed that high levels of adoption were found among farmers who are younger, better educated, have high intensity of information, run large and highly labor-intensive farms, and perceive the complexity of the adoption process as lower. The use of a logit model makes it possible to appreciate the weight played by labor intensity and by the complexity perceived by the potential adopter in the PA adoption process.
All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of the adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information to reduce the degree of perceived complexity of the adoption process. These implications are particularly important in Europe, where the reform of the Common Agricultural Policy, oriented to innovation, has been announced. In this context, they suggest that measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
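The logit model mentioned above can be sketched as follows. The two predictors, the sample values and the gradient-descent fitting are illustrative assumptions, not the authors' actual specification or data; the toy data simply mirror the reported pattern that high labor intensity and low perceived complexity favor adoption.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, epochs=5000):
    """Fit a logistic (logit) model by stochastic gradient descent.
    X: feature rows, y: 0/1 adoption outcomes among aware farmers."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Hypothetical aware-farmer sample: [labor intensity (0-1),
# perceived complexity (0-1)] -> adopted PA (1) or not (0).
X = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.2], [0.3, 0.8], [0.2, 0.9], [0.4, 0.7]]
y = [1, 1, 1, 0, 0, 0]
w = fit_logit(X, y)

def predict(xi):
    """Predicted probability of adoption for one farmer."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
```

The signs of the fitted weights play the role of the "weight played by" each driver in the abstract.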

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 138
756 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) effect is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, UHI intensity increases, with negative impacts on livability, health and the economy. To reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions regarding buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total building energy consumption, respectively. Those outputs vary with a set of inputs describing urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption) and a set of constraints to be met is defined. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques; instead, we develop an indirect 'black box' optimization algorithm based on a stochastic optimization method known as the Cross-Entropy Method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Given the country's fast population growth and the built area and land availability generated by land reclamation, urban planning decisions are of the utmost importance there; furthermore, its hot and humid climate raises concern about the impact of UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
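A toy version of the CEM loop might look like the following, with a simple quadratic standing in for the expensive, non-analytic UWG/EnergyPlus utility; the one-dimensional design variable, sample sizes and elite fraction are arbitrary illustrative choices.

```python
import random
import statistics

def cross_entropy_minimize(f, mu, sigma, n_samples=50, n_elite=10, iters=40):
    """Minimize a black-box function f(x) with the Cross-Entropy Method:
    sample candidates from a Gaussian, keep the best (elite) fraction,
    and refit the Gaussian to the elites until it concentrates."""
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n_samples)]
        elites = sorted(xs, key=f)[:n_elite]
        mu = statistics.mean(elites)
        sigma = statistics.stdev(elites) + 1e-6  # avoid premature collapse
    return mu

# Toy stand-in for the simulated utility (linear combination of UHI
# intensity and energy use), with a known optimum at x = 3.
utility = lambda x: (x - 3) ** 2

random.seed(0)
best = cross_entropy_minimize(utility, mu=0.0, sigma=5.0)
```

In the real framework each candidate would be a vector of morphology parameters (heights, canyon widths, densities) and each evaluation a UWG/EnergyPlus run, so sample counts matter far more than in this sketch.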

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 131
755 Computerized Scoring System: A Stethoscope to Understand Consumer's Emotion through His or Her Feedback

Authors: Chen Yang, Jun Hu, Ping Li, Lili Xue

Abstract:

Most companies pay careful attention to collecting consumer feedback, which is why a 'feedback' button can be found in all kinds of mobile apps. Yet it is much more challenging to analyze these feedback texts and to catch the true feelings of the consumer who hands out the feedback, whether about a problem or a compliment. This is especially true for Chinese content: in one context a piece of Chinese feedback expresses a positive opinion, while in another context the same feedback may be negative. For example, in Chinese, the feedback 'operating with loudness' can apply to both a refrigerator and a stereo system; for a refrigerator it is negative, while the same feedback is positive for a stereo system. By introducing Bradley, M. and Lang, P.'s Affective Norms for English Text (ANET) theory and Bucci, W.'s Referential Activity (RA) theory, we, usability researchers at Pingan, are able to decipher the feedback and find the hidden feelings behind the content. We take two of ANET's three dimensions, 'valence' and 'dominance', and two of RA's four dimensions, 'concreteness' and 'specificity', to build our own rating system with a scale of 1 to 5 points. This rating system enables us to judge the feeling/emotion behind each piece of feedback, and it works well with both a single word/phrase and a whole paragraph. The rating reflects the strength of the consumer's feeling/emotion while typing the feedback. In our daily work, we first ask the consumer to answer the net promoter score (NPS) question before writing the feedback, so we can determine whether the feedback is positive or negative. Secondly, we code the feedback content against the company's problem list, which contains 200 problematic items. In this way, we can count how many feedback entries left by consumers belong to each typical problem.
Thirdly, we rate each feedback entry with the rating system described above to capture the strength of the feeling/emotion with which the consumer wrote it. We thereby obtain two kinds of data: 1) the portion, i.e., how many feedback entries are ascribed to one problematic item, and 2) the severity, i.e., how strong the negative feeling/emotion is in the feedback on this problem. By crossing these two, plotting the portion on the x-axis and the severity on the y-axis, we can find which typical problems score high on both. The higher a problem's score, the more urgently it should be solved, as more people write stronger negative feelings in feedback about it. Moreover, by using a hidden Markov model to program the rating system, we are able to computerize the scoring and process thousands of feedback entries in a short period of time, which is efficient and accurate enough for industrial purposes.
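The portion-severity cross described above can be sketched in a few lines. The problem items, severity values and the portion×severity product used as a single ranking score are invented for illustration; the abstract plots the two axes rather than prescribing a combined score.

```python
from collections import defaultdict

# Hypothetical coded feedback: (problem item from the company's problem
# list, severity 1-5 from the valence/dominance/concreteness/specificity
# rating described in the abstract).
feedback = [
    ("login fails", 5), ("login fails", 4), ("login fails", 5),
    ("slow loading", 3), ("slow loading", 2),
    ("ugly icon", 1),
]

def prioritize(feedback):
    """Rank problems by portion (x-axis) times mean severity (y-axis)."""
    sev = defaultdict(list)
    for item, s in feedback:
        sev[item].append(s)
    total = len(feedback)
    scores = {
        item: (len(ss) / total) * (sum(ss) / len(ss))  # portion * severity
        for item, ss in sev.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

ranking = prioritize(feedback)
```

Problems at the top of the ranking are the ones many people complain about with strong negative feeling.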

Keywords: computerized scoring system, feeling/emotion of consumer feedback, referential activity, text mining

Procedia PDF Downloads 176
754 Developing Gifted Students’ STEM Career Interest

Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han

Abstract:

To fully explore and develop the potential of gifted students systematically and strategically by providing them with opportunities to receive education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to the provision of learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting by post-secondary institutions, non-government organisations, professional bodies and technology enterprises. Due to growing concern worldwide about low interest among students in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has become an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as 'studying with a STEM professional'. This study examines the effect on gifted students' career interest of participating in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were given opportunities and tasks to experience STEM career topics not included in the school syllabus and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students joining the intervention programme outside the normal school setting. Research methods included a STEM career interest survey and drawing tasks supplemented with writing, administered before and after the programme, as well as interviews before the end of the programme.
The semi-structured interviews focused on students' views regarding STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students' STEM identity and career interest. The changes in gifted students' STEM career interest and its well-recognised significant contributors, for example STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectation, were collectively examined from the pre- and post-survey using a t-test. Thematic analysis was conducted on the interview records to explore how the studying-with-a-STEM-professional intervention can help students understand STEM careers, build a STEM identity, and learn how to think and work like a STEM professional. Results indicated a significant difference in STEM career interest before and after the intervention. The influencing mechanism was also identified from the measurement of the related contributors and the analysis of drawings and interviews. The potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students' STEM career interest should be further unleashed in future research and practice.
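The pre/post comparison could be run as a paired-samples t-test along these lines; the interest scores below are fabricated for illustration and do not come from the study.

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre/post scores (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical STEM career interest scores (1-5) before and after the
# intervention for eight participants.
pre = [2.1, 2.8, 3.0, 2.5, 3.2, 2.9, 2.4, 3.1]
post = [3.0, 3.4, 3.6, 3.1, 3.5, 3.8, 3.0, 3.3]
t = paired_t(pre, post)
```

A t value beyond the critical value for df = n - 1 would correspond to the significant pre/post difference the abstract reports.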

Keywords: gifted students, STEM career, STEM education, STEM professionals

Procedia PDF Downloads 75
753 Hydrogeophysical Investigations and Mapping of Ingress Channels along the Blesbokspruit Stream in the East Rand Basin of the Witwatersrand, South Africa

Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola

Abstract:

Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and catapulted the city to become the business and wealth capital of the country. But with the gradual depletion of resources, the stoppage of underground water extraction from mines and other factors relating to the survival of mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy-metal residue, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain and mitigate the adverse environmental impacts of polluted mine water flowing into local streams and affecting local ecosystems and livelihoods downstream. As part of the mitigation efforts, the diagnosis and monitoring of sites with polluted groundwater or surface water has become important. Geophysical surveys, in particular resistivity and magnetic surveys, were selected as among the most suitable techniques for investigating local ingress points along one of the major streams cutting through the Witwatersrand basin, the Blesbokspruit, found in the eastern part of the basin. The aim of the surveys was to provide information to assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model with a low-resistivity overburden (10 to 200 Ω·m), underlain by a moderate-resistivity weathered layer (>300 Ω·m), which sits on a more resistive crystalline bedrock (>500 Ω·m).
Two potential ingress channels were mapped across the two traverses at the site. The magnetic survey mapped a major NE-SW-trending regional lineament with a strong magnetic signature, modelled to depths beyond 100 m, with the potential to act as a conduit dispersing stream water away from the stream, as it shares a similar orientation with the potential ingress channels mapped using the resistivity method.
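The three-layer interpretation above can be expressed as a small lookup. The depth values are invented, and the cutoffs simply echo the approximate ranges quoted in the abstract (which leave a gap between 200 and 300 Ω·m, here assigned to the weathered layer).

```python
def classify_layer(resistivity_ohm_m):
    """Map a modelled resistivity value to the three-layer interpretation
    reported in the abstract (thresholds are its approximate ranges)."""
    if resistivity_ohm_m <= 200:
        return "conductive overburden"
    elif resistivity_ohm_m <= 500:
        return "weathered layer"
    return "crystalline bedrock"

# Hypothetical (depth m, modelled resistivity Ω·m) pairs from an inversion.
profile = [(5, 40), (25, 350), (80, 900)]
interpretation = [(d, classify_layer(r)) for d, r in profile]
```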

Keywords: electrical resistivity, magnetic survey, Blesbokspruit, ingress

Procedia PDF Downloads 63
752 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, owing to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and to support flood modelling and prediction. One study showed that, even using lumped models for flood forecasting, a proper gauge network can significantly improve the results. The existing rainfall network in Johor must therefore be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical (variance-reduction) method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of rain gauges. A rain gauge network's structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November-February) for the period 1975-2008. Three semivariogram models, Spherical, Gaussian and Exponential, were used, and their performance was compared. Cross-validation was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. The proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the estimated variance; the second case explored relocating all 84 existing stations to new locations to determine the optimal positions. The relocations in both cases reduced the estimated variance, proving that location plays an important role in determining the optimal network.
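The three candidate semivariogram models can be written down directly. The parameterisation below uses the common "practical range" convention (the factor 3 in the exponents), which is one of several in use and is an assumption here, not necessarily the exact form used in the study; nugget terms are omitted for brevity.

```python
import math

def spherical(h, sill, rng):
    """Spherical semivariogram: rises to the sill at the range, flat beyond."""
    if h >= rng:
        return sill
    return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def exponential(h, sill, rng):
    """Exponential semivariogram: reaches ~95% of the sill at the range."""
    return sill * (1 - math.exp(-3 * h / rng))

def gaussian(h, sill, rng):
    """Gaussian semivariogram: parabolic near the origin (very smooth fields)."""
    return sill * (1 - math.exp(-3 * (h / rng) ** 2))
```

Cross-validation, as in the study, would fit each model to the empirical semivariogram and keep the one with the smallest prediction error, here the exponential.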

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 302
751 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority

Authors: Vahid Asadzadeh, Amin Ataee

Abstract:

Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, which has gradually waned. One of the lesser-known concerns is how to make the voices of different minorities more likely to be heard. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of trends. In fact, the recommender systems and algorithms used in social media are designed to promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity and do not allow the multiplicity found in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content delivery interfaces.
This has meant that in the Clubhouse the voices of minorities are better heard and the diversity of political tendencies manifests itself better. The purpose of this article is, first, to show how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, the article shows how the Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve these goals, the article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then the political economy of popularity in social media and its conflict with democracy is discussed; finally, it explores how the Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general.

Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media

Procedia PDF Downloads 145
750 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects

Authors: Rabea Sefrin, Christian Glock, Juergen Schnell

Abstract:

The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of these measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of an existing structure's concrete and the mechanical parameters derived from it are of central importance for recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and is thus underestimated. The reasons for this are the small number, and the large scatter in material properties, of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, load is usually transferred through the areas with higher stiffness and consequently higher compressive strength. Strength variations within a component therefore play only a subordinate role, due to rearrangement effects. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength is analyzed in small-scale experimental tests. The coefficients of variation commonly encountered in practice are realized by dividing the test specimens into several layers consisting of different concretes, monolithically connected to each other.
From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the results. In a subsequent parameter study, a large number of combinations not yet investigated experimentally is considered. Thus, the influence of individual parameters on the size and character of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a larger number of existing structures can be maintained without structural measures. The preservation of existing structures is decisive not only from an economic, sustainability and resource-saving point of view but also represents added value in cultural and social terms.
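As a trivial illustration of the scatter statistics at the root of the problem (the core strengths below are invented, and this is not the paper's calculation model), the coefficient of variation of a small drill-core sample is computed as:

```python
import math

def cov_percent(strengths):
    """Coefficient of variation (%) of drill-core compressive strengths,
    using the sample standard deviation (n - 1)."""
    n = len(strengths)
    mean = sum(strengths) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in strengths) / (n - 1))
    return 100 * sd / mean

# Hypothetical core results (N/mm²) from one member: a small sample with
# wide scatter, the situation that leads to underestimated design values.
cores = [22.5, 30.1, 26.4, 35.2, 24.8]
v = cov_percent(cores)
```

A high coefficient of variation from few cores is exactly what drags the conventional design value down, which is what accounting for rearrangement effects is meant to counter.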

Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation

Procedia PDF Downloads 118
749 Flow Links Curiosity and Creativity: The Mediating Role of Flow

Authors: Nicola S. Schutte, John M. Malouff

Abstract:

Introduction: Curiosity is a positive emotion and motivational state consisting of the desire to know. It comprises several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated; the phenomenon of flow, characterized by intense concentration and absorption and giving rise to optimal performance, may link the two. Objective of Study: The objective of the present study was to investigate whether flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men; mean age 35.33, SD = 9.4) participated. Participants were asked to design a program encouraging residents of a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience while developing and writing about the program, on the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity and stress tolerance facets of curiosity, and on the Flow Short Scale. Reliability of the measures, as assessed by Cronbach's alpha, was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description for creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73; the mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow.
Pearson r correlations were as follows: Exploration Curiosity and flow, r = .68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r = .39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r = .39. The direct associations between the dimensions of curiosity and creativity did not reach significance. Even so, the indirect relationships between the dimensions of curiosity and creativity, through the mediating effect of flow, were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta = .23, 95% CI [.02, .25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta = .14, 95% CI [.04, .29]; and Stress Tolerance Curiosity with creativity, standardized beta = .13, 95% CI [.02, .27]. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow, and more flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships.
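A regression-based sketch of the indirect (a×b) effect with a percentile-bootstrap confidence interval, the quantity PROCESS reports, is shown below on synthetic data. The generating coefficients are arbitrary, and the simple residualization-based estimator stands in for PROCESS's actual procedure.

```python
import random

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slope(y, m, x):
    """Slope of y on m controlling for x (Frisch-Waugh residualization)."""
    rm = [mi - slope(x, m) * xi for mi, xi in zip(m, x)]
    ry = [yi - slope(x, y) * xi for yi, xi in zip(y, x)]
    return slope(rm, ry)

def indirect_effect(x, m, y):
    a = slope(x, m)             # curiosity -> flow
    b = partial_slope(y, m, x)  # flow -> creativity, controlling curiosity
    return a * b

# Synthetic data patterned on the design (n = 57): curiosity x drives
# flow m, which drives creativity y; coefficients are invented.
random.seed(1)
n = 57
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.6 * xi + random.gauss(0, 1) for xi in x]
y = [0.4 * mi + random.gauss(0, 1) for mi in m]

ab = indirect_effect(x, m, y)

# Percentile bootstrap CI for the indirect effect.
boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect_effect([x[i] for i in idx],
                                [m[i] for i in idx],
                                [y[i] for i in idx]))
boot.sort()
ci = (boot[25], boot[974])
```

A CI excluding zero is the criterion by which the study's three indirect effects were judged significant.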

Keywords: creativity, curiosity, flow, motivation

Procedia PDF Downloads 183
748 Practicing Inclusion for Hard of Hearing and Deaf Students in Regular Schools in Ethiopia

Authors: Mesfin Abebe Molla

Abstract:

This research examines the practices of inclusion of hard of hearing and deaf students in regular schools. It also explores strategies through which students who are hard of hearing or deaf (HH-D) can benefit optimally from inclusion. A concurrent mixed-methods research design was used to collect quantitative and qualitative data. The instruments were a questionnaire, semi-structured interviews, and observations. A total of 102 HH-D students and 42 primary and high school teachers were selected by simple random sampling as participants for the quantitative data. Non-probability sampling was also employed to select 14 participants (4 school principals, 6 teachers and 4 parents of HH-D students), who were interviewed to collect qualitative data. Descriptive and inferential statistical techniques (independent-samples t-test, one-way ANOVA and multiple regression) were employed to analyze the quantitative data; qualitative data were analyzed thematically. The findings reported strong commitment and effort by individual principals, teachers and parents toward practicing the inclusion of HH-D students effectively; however, most of the core values of inclusion were missing in both schools. Most of the teachers (78.6%) and HH-D students (75.5%) had negative attitudes and considerable reservations about the feasibility of inclusion of HH-D students in both schools. Furthermore, there were statistically significant differences in attitude toward inclusion between the two schools' teachers, and between teachers who had and had not taken additional training on IE and sign language. The study also indicated a statistically significant difference in attitude toward inclusion between hard of hearing and deaf students.
However, the overall contribution of teachers' and HH-D students' demographic variables to their attitudes toward inclusion was not statistically significant. The findings also showed that HH-D students did not have access to a modified curriculum that would maximize their abilities and help them learn together with their hearing peers. In addition, there was no clear and adequate direction on the medium of instruction. Poor school organization and management; lack of commitment, financial resources, and collaboration; teachers' inadequate training in inclusive education (IE) and sign language; large class sizes; inappropriate assessment procedures; a lack of trained deaf adult personnel who could serve as role models for HH-D students; and limited involvement of parents and community members were among the major factors hindering the practice of inclusion for HH-D students. Finally, recommendations are made, based on the findings, to improve the practice of inclusion of HH-D students and to make it an integral part of Ethiopian education.
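The quantitative strand compares attitude scores between groups using independent-samples t-tests. A minimal sketch of such a comparison, using Welch's t-test on hypothetical Likert-scale attitude scores (all values are illustrative, not the study's data):

```python
import math
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's independent-samples t statistic and degrees of freedom."""
    ma, mb = mean(group_a), mean(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances
    na, nb = len(group_a), len(group_b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mean attitude scores (1 to 5 Likert) for teachers at two schools
school_1 = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.0]
school_2 = [3.0, 3.4, 2.9, 3.6, 3.1, 2.8, 3.3]
t, df = welch_t(school_1, school_2)
```

A large negative t here would indicate that school 1's teachers score lower than school 2's; the p-value would then be read from the t distribution with `df` degrees of freedom.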

Keywords: deaf, hard of hearing, inclusion, regular schools

Procedia PDF Downloads 343
747 A New Measurement for Assessing Constructivist Learning Features in Higher Education: Lifelong Learning in Applied Fields (LLAF) Tempus Project

Authors: Dorit Alt, Nirit Raichel

Abstract:

Although university teaching is claimed to have a special task of supporting students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' learning habits are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. This form of instruction is criticized for encouraging students to acquire inert knowledge that can be used in instructional settings at best, but cannot be transferred to complex, real-life problem settings. To overcome this critical gap between current educational goals and instructional methods, the LLAF consortium (comprising 16 members from 8 countries) aims to develop updated instructional practices that put a premium on adaptability to the emerging requirements of present-day society. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teacher education and to the project's contribution in providing a scale designed to measure the extent to which constructivist activities are efficiently applied in the learning environment. A mixed-methods approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis of students' observations, involving both deductive and inductive category application. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction, and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM).
The scale was administered to 597 undergraduate students. The goodness of fit of the data to the structural model yielded sufficient fit results. This research extends the body of literature by adding a category of in-depth learning, which emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinct factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed.
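Goodness of fit in SEM is typically summarized by converting the model chi-square into indices such as RMSEA and CFI. A sketch of those computations, using the study's sample size (N = 597) but hypothetical chi-square values and degrees of freedom (all chi-square figures are assumptions, not the study's results):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation; values < .06 suggest good fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative Fit Index relative to the independence (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

# Hypothetical chi-square results for a multi-factor scale model, N = 597
n = 597
fit_rmsea = rmsea(chi2=1050.0, df=521, n=n)
fit_cfi = cfi(chi2=1050.0, df=521, chi2_null=9800.0, df_null=561)
```

Conventional cutoffs (RMSEA below about .06, CFI above about .90) are what "sufficient fit" usually refers to in this literature.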

Keywords: constructivist learning, higher education, mixed methodology, structural equation modeling

Procedia PDF Downloads 315
746 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership, across multiple time dimensions, within transit stations' pedestrian catchment areas (PCAs) in Guangzhou, China. The study is based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), online real-estate data, and building height data. Eight multiple linear regression models were built at the station level using the backward stepwise method and a Geographic Information System (GIS). Following the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villas), second-level (e.g., communities), and third-level (e.g., urban villages). The study concluded that: (1) Four factors (a CBD dummy, the number of feeder bus routes, the number of entrances or exits, and the years of station operation) were positively correlated with transit ridership, whereas the areas of green and water land use were negatively correlated. (2) The areas of education land use and of second- and third-level residential land use were strongly associated with the average of morning-peak boarding and evening-peak alighting ridership, while the area of commercial land use and the average building height were significantly positively associated with the average of morning-peak alighting and evening-peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models. Private car ownership remains high in Guangzhou: some residents living in the communities around stations commute by transit at peak times, but others are far more willing to drive their own cars at non-peak times.
The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents of third-level residential land use are the main passenger source of the Guangzhou Metro. (4) Land-use diversity was found to have a significant impact on passenger flow on weekends, but not on weekdays. The findings can be useful for station planning, management, and policymaking.
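Backward stepwise selection of the kind used for these station-level models can be sketched as repeatedly dropping the predictor whose removal degrades the fit the least. A simplified pure-Python illustration with invented station data (the predictor names and values are hypothetical, and a relative SSE-increase threshold stands in for the usual p-value criterion):

```python
def ols_sse(xcols, y):
    """Fit OLS with an intercept via the normal equations; return the
    sum of squared errors (SSE) of the fitted model."""
    n, k = len(y), len(xcols)
    X = [[1.0] + [col[i] for col in xcols] for i in range(n)]
    p = k + 1
    # Normal equations A b = c, with A = X'X and c = X'y
    A = [[sum(X[i][r] * X[i][s] for i in range(n)) for s in range(p)]
         for r in range(p)]
    c = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for s in range(col, p):
                A[r][s] -= f * A[col][s]
            c[r] -= f * c[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][s] * b[s] for s in range(r + 1, p))) / A[r][r]
    return sum((y[i] - sum(b[j] * X[i][j] for j in range(p))) ** 2
               for i in range(n))

def backward_stepwise(predictors, y, tol=0.05):
    """Repeatedly drop the predictor whose removal increases SSE the least,
    as long as the relative SSE increase stays below `tol` (a crude
    stand-in for the usual p-value based criterion)."""
    names = list(predictors)
    while len(names) > 1:
        sse_full = ols_sse([predictors[m] for m in names], y)
        best = min(names, key=lambda m: ols_sse(
            [predictors[x] for x in names if x != m], y))
        sse_red = ols_sse([predictors[m] for m in names if m != best], y)
        if sse_full <= 0 or (sse_red - sse_full) / sse_full > tol:
            break
        names.remove(best)
    return names

# Invented station-level data: ridership vs. three hypothetical factors
predictors = {
    "feeder_bus": [2, 5, 3, 8, 6, 4, 7, 9],
    "entrances":  [1, 3, 2, 4, 3, 2, 4, 5],
    "green_area": [5, 1, 4, 2, 3, 6, 1, 2],
}
ridership = [12, 31, 19, 46, 35, 24, 42, 55]
kept = backward_stepwise(predictors, ridership)
```

In practice the study's models would be fitted in statistical software with proper significance tests; this sketch only conveys the elimination loop.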

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-source spatial data, transit ridership

Procedia PDF Downloads 142
745 Epigenetic and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or alterations in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing gaps in our current understanding of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress, paving the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history; by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, bones have great implications for other fields such as medicine and science. Bones can also hold within them the secrets of the future, acting as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors, of which genetics alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical).
Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role as a model system for investigating and reconstructing various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone material (modern and ancient), and discuss the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will aid a better comprehension of bone biology and highlight its essentiality as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 108
744 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor, one-resistor (1T1R) architecture is successfully demonstrated, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT). The OTFTs were fabricated on a glass substrate. Aluminum (Al) was deposited as the gate electrode via a radio-frequency (RF) magnetron sputtering system. Barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate were synthesized using the sol-gel method. Once the BZN solution was fully prepared by the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited by RF magnetron sputtering and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT as a simple metal/insulator/metal stack consisting of Al/TiO₂/Au: Au was first deposited as the bottom electrode of the RRAM device by RF magnetron sputtering, the TiO₂ layer was then sputtered onto the Au electrode, and Al was finally deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operating current of 0.5 μA, and reliable data retention.
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, through the different distributions of internal oxygen vacancies in the RRAM and 1T1R devices; this phenomenon is well explained by the proposed mechanism model. These results make the 1T1R configuration promising for practical application in low-power active-matrix flat-panel displays.

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 354
743 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous process of expansion, resulting in a reduction of its density and temperature. By extrapolating back from its current state, the universe at its earliest times has been studied under what is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment; its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating further back from this early state reaches a singularity that cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from such a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, contradicting big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allowed a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. The evidence of quantum fluctuations at this stage also provides a means for studying the types of imperfections the universe would have begun with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the moment of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims to address the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy" that is capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguished from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, the origin of the base energy must also be identified to establish a complete picture. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its subsequent evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 89
742 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing

Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl

Abstract:

Field data collection is considered one of the most difficult tasks, owing to the difficulty of accessing large zones such as large lakes, and it is well known that obtaining field data is very expensive. Remote monitoring of lake water quality (WQ) provides an economically feasible alternative to field data collection, and researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique for measuring quality parameters across huge areas. Landsat (LS) data provides full free access to frequently acquired, repeating satellite images, enabling researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) band surface reflectance (SR) observations to predict the water quality characteristics turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro was used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water quality indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios, and/or combinations. Field measurements taken in May were used to validate the WQPs obtained from the SR data of the L-8 Operational Land Imager (OLI) satellite.
The findings demonstrate a strong correlation between the WQ indicators and L-8 SR. For TUR, the best validation correlation was derived with the blue, green, and red OLI SR bands, with a coefficient of determination (R²) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations were strongly correlated and verified with band ratios and combinations; the logarithm of the ratio of blue to green SR was determined to be the best-performing model, with R² and RMSE values of 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were examined for calculating its value within the study area; a combination of blue, red, shortwave infrared 1 (SWIR1), and panchromatic SR yielded the best validation results, with R² and RMSE values of 0.98 and 1.4 mg/l, respectively.
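The best-performing TSS model regresses field-measured TSS on the logarithm of the blue/green SR ratio. A minimal sketch of fitting such a band-ratio model (the reflectance and TSS values below are synthetic, constructed to lie exactly on a line purely for illustration):

```python
import math

def fit_band_ratio_model(blue, green, tss):
    """Least-squares fit of TSS = a + b * ln(blue/green) surface reflectance."""
    x = [math.log(bb / gg) for bb, gg in zip(blue, green)]
    n = len(x)
    mx, my = sum(x) / n, sum(tss) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, tss))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope

# Synthetic L-8 OLI surface reflectance samples and matching TSS (mg/l)
blue  = [0.045, 0.060, 0.052, 0.070, 0.048]
green = [0.050, 0.048, 0.050, 0.046, 0.055]
tss   = [5 + 10 * math.log(bb / gg) for bb, gg in zip(blue, green)]  # exact toy data
a, b = fit_band_ratio_model(blue, green, tss)
```

With real data the fit would not be exact; R² and RMSE computed on a held-out month (as the study does with May field measurements) indicate how well the April-derived model generalizes.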

Keywords: remote sensing, landsat 8, nasser lake, water quality

Procedia PDF Downloads 93
741 The Effectiveness of a Six-Week Yoga Intervention on Body Awareness, Warnings of Relapse, and Emotion Regulation among Incarcerated Females

Authors: James Beauchemin

Abstract:

Introduction: The incarceration of people with mental illness and substance use disorders is a major public health issue, with social, clinical, and economic implications. Yoga participation has been associated with numerous psychological benefits; however, there is a paucity of research examining the impacts of yoga with incarcerated populations. The purpose of this study was to evaluate the effectiveness of a six-week yoga intervention on several mental health-related variables, including emotion regulation, body awareness, and warnings of substance relapse, among incarcerated females. Methods: This study utilized a pre-post, three-arm design, with participants assigned to intervention, therapeutic community, or general population groups. Between-groups analyses of covariance (ANCOVA) were conducted to assess intervention effectiveness using the Difficulties in Emotion Regulation Scale (DERS), the Scale of Body Connection (SBC), and the Warnings of Relapse (AWARE) Questionnaire. Results: The ANCOVA for warnings of relapse (AWARE) revealed significant between-group differences (F(2, 80) = 7.15, p = .001, ηp² = .152), with significant pairwise comparisons between the intervention group and both the therapeutic community (p = .001) and general population (p = .005) groups. Similarly, significant differences were found for emotion regulation (DERS; F(2, 83) = 10.521, p < .001, ηp² = .278); pairwise comparisons indicated a significant difference between the intervention and general population groups (p = .01). Finally, significant differences were found for body awareness (SBC; F(2, 84) = 3.69, p = .029, ηp² = .081); pairwise comparisons indicated significant differences between the intervention group and both the therapeutic community (p = .028) and general population (p = .020) groups.
Implications: The results suggest that yoga may be an effective addition to integrative mental health and substance use treatment for incarcerated women, and they contribute to the growing evidence that holistic interventions may be an important component of treatment for this population. Specifically, given the prevalence of mental health and substance use disorders, the findings suggest that the changes in body awareness and emotion regulation resulting from yoga participation may be particularly beneficial for incarcerated people with substance use challenges. From a systemic perspective, this proactive approach may have long-term implications for the physical and psychological well-being of the incarcerated population as a whole, thereby decreasing the need for traditional treatment. By integrating a more holistic, salutogenic model that emphasizes prevention, interventions like yoga may improve the wellness of this population while providing an alternative or complementary treatment option for those with current symptoms.
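The ANCOVAs above test for group differences in an outcome while adjusting for a covariate such as a baseline score. A compact sketch of the classical one-covariate ANCOVA F statistic, using invented baseline (x) and post-intervention AWARE-style scores (y) for three arms (all numbers are hypothetical, not the study's data):

```python
def ancova_f(groups):
    """One-way ANCOVA F statistic for a group effect on y, adjusting for
    covariate x. `groups` is a list of (x_scores, y_scores) pairs."""
    def ss(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        syy = sum((y - my) ** 2 for y in ys)
        return sxx, sxy, syy

    k = len(groups)
    n = sum(len(x) for x, _ in groups)
    # Pooled within-group sums of squares (common slope across groups)
    wxx = sum(ss(x, y)[0] for x, y in groups)
    wxy = sum(ss(x, y)[1] for x, y in groups)
    wyy = sum(ss(x, y)[2] for x, y in groups)
    # Total sums of squares, ignoring group membership
    all_x = [v for x, _ in groups for v in x]
    all_y = [v for _, y in groups for v in y]
    txx, txy, tyy = ss(all_x, all_y)
    sse_within = wyy - wxy ** 2 / wxx   # covariate + group model
    sse_total = tyy - txy ** 2 / txx    # covariate-only model
    return ((sse_total - sse_within) / (k - 1)) / (sse_within / (n - k - 1))

# Hypothetical (baseline, post) scores for the three arms
yoga      = ([3.1, 2.8, 3.5, 3.0], [2.0, 1.8, 2.4, 1.9])
community = ([3.0, 3.2, 2.9, 3.4], [2.9, 3.1, 2.7, 3.3])
general   = ([2.9, 3.1, 3.3, 3.0], [3.0, 3.2, 3.4, 3.1])
f = ancova_f([yoga, community, general])
```

Because the group model nests the covariate-only model, the F statistic is never negative; a large F, compared against F(k−1, n−k−1), is what drives the significant results reported above.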

Keywords: yoga, mental health, incarceration, wellness

Procedia PDF Downloads 138
740 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis

Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch

Abstract:

Although distortion of taste perception following a cerebrovascular event may seem a frivolous consequence next to a classic stroke presentation, altered taste perception places patients at increased risk of malnutrition, weight loss, and depression, all of which negatively impact quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions in various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to impact taste; however, specific sweet-to-bitter dysgeusia from a posterior inferior cerebellar artery territory infarction is an infrequent event, and as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a ruptured right posterior inferior cerebellar artery aneurysm with resultant infarction, culminating in ventriculoperitoneal shunt placement. One and a half months after this event, she noticed a gradual loss of the ability to taste sweet, progressing until all sweet food tasted bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid but normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex comprises, in part, the right insular cortex and the right anterior operculum, which are primarily involved in the sensory modalities of taste. In this model, sweet is localized to the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests a lesion in this location.
Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral, posterior-most aspect of the right insular cortex may have occurred through diaschisis, which describes neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to aneurysmal rupture could explain the distal foci of impairment, the gradual onset of dysgeusia is more indicative of diaschisis. The perception of sweet foods as tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being stimulated rather than the sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile; further investigation is warranted.

Keywords: diaschisis, dysgeusia, stroke, taste

Procedia PDF Downloads 180
739 A Brief Review on the Relationship between Pain and Sociology

Authors: Hanieh Sakha, Nader Nader, Haleh Farzin

Abstract:

Introduction: Throughout history, theories of pain have been shaped by biomedicine, especially with regard to diagnosis and treatment. Yet the feeling of pain is not only a personal experience; it is affected by social background and thus involves extensive systems of signals. The challenges to the emotional and sentimental dimensions of pain originate from scientific medicine, whose dominant account is referred to as the specificity theory; this theory nonetheless underwent some alterations with the emergence of physiology. Fifty years after the specificity theory, Von Frey suggested the theory of cutaneous senses (i.e., Muller's concept: the combined common sensation of four major skin receptor types leading to a proper sensation). The pain pathway was held to be composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various causes and qualities. Despite the gate control theory, the biological aspect has overshadowed the social one. Vrancken provided a more extensive definition of pain, identifying five approaches: somatico-technical, dualistic body-oriented, behaviorist, phenomenological, and consciousness approaches. The Western model combined the physical, emotional, and existential aspects of the human body. Kotarba, on the other hand, expressed confusion about the basic origins of chronic pain. Freund demonstrated and argued with the Durkheimian sociological approach to emotions. Lynch provided evidence of a correlation between cardiovascular disease and emotionally life-threatening occurrences. Helman proposed a distinction between private and public pain. Conclusion: Consideration of the emotional aspect of pain could lead to effective emotional and social responses to pain. By contrast, the theory of embodiment is based on the sociological view of health and illness.
Social epidemiology shows an unequal distribution of health, illness, and disability among various social groups. Social support and socio-cultural position can result in different types of pain; the status of athletes, for example, may define their pain experiences. Gender is one of the important contributing factors affecting the type of pain (i.e., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, exacerbated by a lack of awareness about chronic pain management among the general population.

Keywords: pain, sociology, sociological, body

Procedia PDF Downloads 70
738 Immuno-Protective Role of Mucosal Delivery of Lactococcus lactis Expressing Functionally Active JlpA Protein on Campylobacter jejuni Colonization in Chickens

Authors: Ankita Singh, Chandan Gorain, Amirul I. Mallick

Abstract:

Successful adherence to mucosal epithelial cells is the key early step in Campylobacter jejuni (C. jejuni) pathogenesis. A set of surface-exposed colonization proteins (SECPs) are among the major factors involved in host cell adherence and invasion by C. jejuni. Among them, JlpA, a constitutively expressed surface-exposed lipoprotein adhesin of C. jejuni, interacts with intestinal heat shock protein 90 (hsp90α) and contributes to disease progression by triggering a pro-inflammatory response via activation of the NF-κB and p38 MAP kinase pathways. Together with its surface expression, high sequence conservation, and the predicted predominance of several B cell epitopes, JlpA retains the potential to become an effective vaccine candidate against a wide range of Campylobacter species, including C. jejuni. Given that chickens are the primary source of C. jejuni and that persistent gut colonization remains a major cause of foodborne disease in humans, the present study used chickens as a model to test the immunoprotective efficacy of JlpA. Because the gastrointestinal tract is the focal site of C. jejuni colonization, and to exploit the benefits of mucosal (intragastric) delivery of JlpA, a food-grade, nisin-inducible lactic acid bacterium, Lactococcus lactis (L. lactis), was engineered to express recombinant JlpA (rJlpA) on the bacterial surface. Following evaluation of the optimal surface expression and functionality of the recombinant JlpA protein expressed by recombinant L. lactis (rL. lactis), the immunoprotective role of intragastric administration of live rL. lactis was assessed in commercial broiler chickens. In addition to a significant elevation of antigen-specific mucosal immune responses in the intestines of chickens that received three doses of rL.
lactis, marked upregulation of Toll-like receptor 2 (TLR2) gene expression was observed, in association with mixed pro-inflammatory responses (of both Th1 and Th17 type). Furthermore, intragastric delivery of rJlpA expressed by rL. lactis, but not the injectable form, resulted in a significant reduction in C. jejuni colonization in chickens, suggesting that mucosal delivery of live rL. lactis expressing JlpA is a promising vaccine platform for inducing strong immunoprotective responses against C. jejuni in chickens.

Keywords: chickens, lipoprotein adhesin of Campylobacter jejuni, immuno-protection, Lactococcus lactis, mucosal delivery

Procedia PDF Downloads 139
737 Quantifying Processes of Relating Skills in Learning: The Map of Dialogical Inquiry

Authors: Eunice Gan Ghee Wu, Marcus Goh Tian Xi, Alicia Chua Si Wen, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual basis for learning processes. According to the Map, dialogical inquiry motivates complex thinking, dialogue, reflection, and learner agency. For instance, classrooms that incorporated dialogical inquiry enabled learners to construct more meaning in their learning, to engage in self-reflection, and to challenge their ideas with different perspectives. While the Map contributes to the psychology of learning, its qualitative approach makes it hard for both teachers and learners to track and compare learning processes over time: qualitative approaches typically rely on open-ended responses, which can be time-consuming and resource-intensive to collect and analyze. With these concerns in mind, the present research aimed to develop and validate a quantifiable measure for the Map. The Map of Dialogical Inquiry reflects eight different learning processes and perspectives employed during a learner's experience. Focusing on interpersonal and emotional learning processes, the purpose of the present study was to construct and validate a scale measuring the "Relating" aspect of learning. According to the Map, the Relating aspect of learning contains four conceptual components: using intuition and empathy, seeking personal meaning, building relationships and meaning with others, and an affinity for stories and metaphors. All components have been shown to benefit learning in past research. This research began with a literature review aimed at identifying relevant existing scales. These scales were used as a basis for item development, guided by the four conceptual dimensions of the "Relating" aspect, resulting in a pool of 47 preliminary items. All items were then administered to 200 American participants via an online survey, along with other scales of learning, and the dimensionality, reliability, and validity of the "Relating" scale were assessed.
Data were submitted to a confirmatory factor analysis (CFA). Items with lower factor loadings were removed in an iterative manner, resulting in 34 items in the final scale. The CFA supported a four-factor model for the “Relating” scale, corresponding to the four distinct components described in the Map of Dialogical Inquiry. In sum, this research developed a quantitative scale for the “Relating” aspect of the Map of Dialogical Inquiry. By representing learning processes numerically, users such as educators and learners can track, evaluate, and compare learning over time in an efficient manner. More broadly, the scale may also be used as a learning tool in lifelong learning.
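The iterative item-removal step described above can be sketched in code. This is a generic illustration, not the authors' procedure: the item names and loadings below are hypothetical, and in the actual study the loadings would be re-estimated by CFA after each removal.

```python
# Sketch of iterative item pruning: the weakest item is dropped until
# every remaining item loads at or above the cutoff. Loadings here are
# hypothetical; in a real CFA they would be re-estimated after each drop.

def prune_items(loadings, cutoff=0.40):
    """Iteratively remove the weakest item until all loadings >= cutoff."""
    items = dict(loadings)
    removed = []
    while items:
        weakest = min(items, key=items.get)
        if items[weakest] >= cutoff:
            break  # every remaining item loads acceptably
        removed.append(weakest)
        del items[weakest]
        # A real CFA would be re-fitted here, updating the loadings
        # of the surviving items before the next pass.
    return items, removed

# Hypothetical loadings for five candidate "Relating" items.
example = {"R01": 0.72, "R02": 0.35, "R03": 0.81, "R04": 0.28, "R05": 0.55}
kept, dropped = prune_items(example)
print(sorted(kept))  # items retained
print(dropped)       # items removed, weakest first
```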

Keywords: lifelong learning, scale development, dialogical inquiry, relating, social and emotional learning, socio-affective intuition, empathy, narrative identity, perspective taking, self-disclosure

Procedia PDF Downloads 142
736 Aesthetics and Semiotics in Theatre Performance

Authors: Păcurar Diana Istina

Abstract:

Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, correctly understood through the emotions generated in the intimate structure of the spectator, which precede the triggering of the viewer’s perception, and not through the unfortunately common conflation of the notion of aesthetics with the style in which a theatre show is built. The first chapter contains a brief history of the appearance of the word aesthetics, the formulation of definitions for this new term, and its connections with the notions of semiotics, in particular with the perception of the transmitted message. The interventions of thinkers from Aristotle and Plato through to Magritte should not be interpreted as meaning that the two scientific concepts can merge into a single discipline. Perception as the object of everyone’s analysis, the understanding of meaning, the decoding of the messages sent, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theatre, more specifically in the theatre show. Taking the postmodern view that aesthetics applies both to the creation of an artifact and to its reception, the intervention of these elements in the theatrical system must be emphasized – that is, the analysis of the problems arising in the stages of creation, presentation, and reception, by the public, of the theatre performance. 
The aesthetic process is triggered involuntarily, simultaneously with, or even before, the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, the mechanism of its production can be reduced to two steps. The first step presents similarities to Peirce’s model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. In the second step, a process of comparison follows, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics derives from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.

Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified

Procedia PDF Downloads 109
735 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate or effective. Network trust architecture has accordingly evolved from trust/untrust models to Zero Trust, in which essential security capabilities are deployed so as to provide policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cybersecurity threats such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become both crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols, and security protocols. It thereby forms the basis for detecting attack classes, applying signature-based matching for known cyberattacks and data mining based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. 
The unsupervised learning algorithm applied to network audit data trails results in detection of unknown intrusions. Association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detection of, and protection from, new network anomalies.
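The rule-mining step can be illustrated with a minimal sketch. This is a generic pairwise support/confidence miner, not the authors' implementation; the audit records, feature names, and thresholds below are hypothetical.

```python
# Minimal pairwise association-rule miner over audit-trail records.
# Rules clearing the support/confidence thresholds could then be pushed
# to a firewall as new blocking policies, as described above.
from itertools import combinations
from collections import Counter

def mine_pair_rules(records, min_support=0.4, min_confidence=0.8):
    """Return rules (antecedent, consequent, support, confidence)."""
    n = len(records)
    item_count = Counter()
    pair_count = Counter()
    for rec in records:
        items = set(rec)
        item_count.update(items)
        pair_count.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, cnt in pair_count.items():
        if cnt / n < min_support:
            continue  # pair too rare to generalize from
        a, b = tuple(pair)
        for ante, cons in ((a, b), (b, a)):
            conf = cnt / item_count[ante]
            if conf >= min_confidence:
                rules.append((ante, cons, cnt / n, conf))
    return rules

# Hypothetical audit records: each is the feature set of one connection.
audit = [
    {"port_scan", "many_syn", "short_conn"},
    {"port_scan", "many_syn"},
    {"port_scan", "many_syn", "long_conn"},
    {"normal_http"},
    {"normal_http", "short_conn"},
]
rules = mine_pair_rules(audit)
for ante, cons, sup, conf in rules:
    print(f"{ante} -> {cons} (support={sup:.2f}, confidence={conf:.2f})")
```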

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 227
734 The Moderating Role of Test Anxiety in the Relationships Between Self-Efficacy, Engagement, and Academic Achievement in College Math Courses

Authors: Yuqing Zou, Chunrui Zou, Yichong Cao

Abstract:

Previous research has revealed relationships between self-efficacy (SE), engagement, and academic achievement among students in Western countries, but these relationships remain unexamined in college math courses among college students in China. In addition, previous research has shown that test anxiety has a direct effect on engagement and academic achievement; however, how test anxiety affects the relationships between SE, engagement, and academic achievement is still unknown. In this study, the authors aimed to explore the mediating roles of behavioral engagement (BE), emotional engagement (EE), and cognitive engagement (CE) in the association between SE and academic achievement, and the moderating role of test anxiety, in college math courses. Our hypotheses were that the association between SE and academic achievement is mediated by engagement and that test anxiety plays a moderating role in the association. To explore these research questions, the authors collected data through self-reported surveys among 147 students at a northwestern university in China. The motivated strategies for learning questionnaire (MSLQ) (Pintrich, 1991), the metacognitive strategies questionnaire (Wolters, 2004), and the engagement versus disaffection with learning scale (Skinner et al., 2008) were used to assess SE, CE, and BE and EE, respectively. R software was used to analyze the data. The main analyses were reliability and validity analysis of the scales, descriptive statistics of the measured variables, correlation analysis, regression analysis, and structural equation modeling (SEM) with moderated mediation analysis to examine the structural relationships among the variables simultaneously. The SEM analysis indicated that student SE was positively related to BE, EE, CE, and academic achievement. BE, EE, and CE were all positively associated with academic achievement. 
That is, as the authors expected, higher levels of SE led to higher levels of BE, EE, and CE, and greater academic achievement, and higher levels of BE, EE, and CE led to greater academic achievement. In addition, the moderated mediation analysis found that the path from SE to academic achievement in the model was significant, as expected, as was the moderating effect of test anxiety on the SE-achievement association. Specifically, test anxiety was found to moderate the association between SE and BE, the association between SE and CE, and the association between EE and achievement. The authors investigated possible mediating effects of BE, EE, and CE in the association between SE and academic achievement, and all indirect effects were found to be significant. As for the magnitude of the mediations, behavioral engagement was the most important mediator in the SE-achievement association. This study has implications for college teachers, educators, and students in China regarding ways to promote academic achievement in college math courses, including increasing self-efficacy and engagement and lessening test anxiety toward math.
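The moderation logic above can be illustrated with a small regression sketch: a moderator enters the model as an interaction term, so a nonzero interaction coefficient means the SE slope depends on test anxiety. The data and coefficients below are constructed for illustration only; the study's own estimates come from SEM-based moderated mediation, not plain OLS.

```python
# Sketch of moderation as an interaction term in OLS, fitted via the
# normal equations. Data are constructed (noise-free) so the known
# coefficients are recovered exactly.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Constructed data: engagement = 1 + 2*SE - 0.5*anxiety - 0.3*SE*anxiety
rows, y = [], []
for se in (1.0, 2.0, 3.0, 4.0):
    for anx in (0.0, 1.0, 2.0):
        rows.append([1.0, se, anx, se * anx])  # intercept, SE, anxiety, interaction
        y.append(1 + 2 * se - 0.5 * anx - 0.3 * se * anx)
b0, b_se, b_anx, b_inter = ols(rows, y)
print(round(b_inter, 3))  # negative: anxiety weakens the SE-engagement slope
```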

Keywords: academic engagement, self-efficacy, test anxiety, academic achievement, college math courses, behavioral engagement, cognitive engagement, emotional engagement

Procedia PDF Downloads 93
733 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by requiring mitigation of noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, offering proven weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames. Glass usually covers the majority of the exposed surface and is therefore the main path of sound energy transmission. Frames in modern façade systems, meanwhile, have become slimmer for aesthetic reasons and contribute only a small percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because much less mass lies along those paths, and so they often become the limiting factor in the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast difference in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is time-consuming and highly costly, and different laboratories may report slightly different results because of variations in test chambers, sample mounting, and test operation, which significantly constrains the early-phase design of framed façade systems. 
To address this dilemma, this study provides an effective analytical methodology for predicting the acoustic performance of framed façade systems, based on a large body of acoustic test results for glass, for frames, and for whole façade systems consisting of both. Further test results validate that the current model can accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was developed mainly from façade systems with aluminum frames, it can be easily extended to systems with frames of other materials such as steel, PVC or wood.
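One common building block for such predictions is the area-weighted combination of element transmission losses: each element's sound transmission loss (TL) is converted to a transmission coefficient, weighted by its exposed area, and the composite TL recovered. The sketch below uses this standard relation with illustrative TL values and areas, not the paper's test data.

```python
# Composite sound transmission loss of a multi-element assembly:
# tau = 10^(-TL/10) per element, area-weighted, then TL = -10*log10(tau).
import math

def composite_tl(elements):
    """elements: iterable of (area_m2, TL_dB); returns composite TL in dB."""
    total_area = sum(a for a, _ in elements)
    tau = sum(a * 10 ** (-tl / 10) for a, tl in elements) / total_area
    return -10 * math.log10(tau)

# Hypothetical façade: 8 m2 of glass at TL 38 dB, 1 m2 of frame at TL 30 dB.
tl = composite_tl([(8.0, 38.0), (1.0, 30.0)])
print(f"composite TL = {tl:.1f} dB")
```

Even though the frame is only 1/9 of the area here, it drags the composite TL about 2 dB below that of the glass alone, which is why slim frames can limit whole-system performance.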

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 61
732 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons’ sides – and postprocessing – constructing each polygon’s locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, the interior of the polygon lies on one side of each segment, and the polygon is obviously included in its own locus. Using these properties in a VD construction algorithm is a resource for reducing computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution again follows a reduction scheme. Preprocessing constructs the set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm then constructs the VD for this set of oriented sites using the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which lie outside the polygons. 
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm has been fully implemented and tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune’s algorithm in 1986, no full-scale implementation of that algorithm for an arbitrary set of segment sites has been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.
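The loci being partitioned can be made concrete with a brute-force sketch: label each query point with the polygon whose boundary is nearest. This O(n)-per-query check is exactly what the sweepline algorithm avoids; the two square sites below are hypothetical.

```python
# Brute-force nearest-polygon labelling, illustrating the loci of a
# Voronoi diagram of polygonal sites (not the paper's sweepline method).

def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def poly_dist(p, poly):
    """Distance from p to the boundary of polygon poly (vertex list)."""
    return min(seg_dist(p, poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly)))

def nearest_site(p, sites):
    """Name of the site whose boundary is closest to point p."""
    return min(sites, key=lambda name: poly_dist(p, sites[name]))

# Two hypothetical disjoint square sites.
sites = {
    "A": [(0, 0), (2, 0), (2, 2), (0, 2)],
    "B": [(6, 0), (8, 0), (8, 2), (6, 2)],
}
print(nearest_site((3.0, 1.0), sites))  # falls in A's locus
print(nearest_site((5.5, 1.0), sites))  # falls in B's locus
```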

Keywords: voronoi diagram, sweepline, polygon sites, fortunes' algorithm, segment sites

Procedia PDF Downloads 177
731 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. 
The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
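The evaluation metrics named above are easy to make concrete. The sketch below implements MSE and R-squared from their definitions; the actual-versus-predicted cost overruns are illustrative values, not the study's data.

```python
# MSE penalises squared prediction error; R-squared compares model error
# against a mean-only baseline (1.0 is perfect, 0.0 is no better than
# predicting the mean).

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical cost overruns (in $k): actual vs. model prediction.
actual = [10.0, 20.0, 30.0, 40.0]
predicted = [12.0, 18.0, 33.0, 38.0]
m = mse(actual, predicted)
r2 = r_squared(actual, predicted)
print(m, round(r2, 3))
```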

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 39
730 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado

Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha

Abstract:

The metabolism of streams is an indicator of ecosystem disturbance due to the influence of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe, and nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are estimated in situ with a spherical densitometer, and stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out, in October and December 2015 and February 2016 (the next will be in April, June and August 2016); results from the first two periods are already available. The mean water temperatures in the streams were 20.0 ± 0.8 °C (Oct) and 20.7 ± 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 ± 3.5 µS/cm; BR: 5.5 ± 0.7 µS/cm; CA: 33 ± 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). 
The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES) and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 ± 6 L/s (ES), 11.4 ± 3 L/s (BR) and 2.4 ± 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). GPP varied from 0-0.4 g/m²·d (in Oct and Dec), and respiration (R) varied from 0.9-22.7 g/m²·d (Oct) and from 0.9-7 g/m²·d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of metabolism in pristine streams can help define natural reference conditions of trophic state.
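The one-station mass balance behind such modeling attributes the change in DO between successive readings to net metabolism (GPP - R) after removing atmospheric reaeration: dDO/dt = GPP - R + k2*(DOsat - DO). The sketch below rearranges this for the net metabolic flux per interval; the DO values, k2, and saturation concentration are illustrative, not the field data.

```python
# One-station stream metabolism: net flux = dDO/dt - k2*(DOsat - DO).
# A negative net flux means respiration exceeds production plus
# reaeration, i.e. heterotrophic conditions.

def metabolism_flux(do_series, do_sat, k2, dt_days):
    """Net metabolic flux (GPP - R, mg/L per day) for each interval."""
    fluxes = []
    for i in range(len(do_series) - 1):
        d_do = (do_series[i + 1] - do_series[i]) / dt_days
        reaeration = k2 * (do_sat - do_series[i])
        fluxes.append(d_do - reaeration)
    return fluxes

dt = 10 / (60 * 24)      # 10-minute logging interval, in days
do_sat, k2 = 8.0, 20.0   # assumed saturation DO (mg/L) and reaeration (1/day)

# A water column held at saturation shows zero net metabolism...
flat = metabolism_flux([8.0, 8.0, 8.0], do_sat, k2, dt)
print(flat)
# ...while a night-time DO decline below saturation implies respiration
# outweighing reaeration: a negative (heterotrophic) net flux.
night = metabolism_flux([7.0, 6.9, 6.8], do_sat, k2, dt)
print(all(f < 0 for f in night))
```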

Keywords: low-order streams, metabolism, net primary production, trophic state

Procedia PDF Downloads 258
729 An Energy Integration Study While Utilizing Heat of Flue Gas: Sponge Iron Process

Authors: Venkata Ramanaiah, Shabina Khanam

Abstract:

Enormous potential for saving energy is available in coal-based sponge iron plants, as these are associated with a high percentage of energy wastage per unit of sponge iron produced. In the present paper, an energy integration option is proposed for a coal-based sponge iron plant of 100 tonnes per day production capacity, operated in India using the SL/RN (Stelco-Lurgi/Republic Steel-National Lead) process. The plant consists of a rotary kiln, rotary cooler, dust settling chamber, after burning chamber, evaporating cooler, electrostatic precipitator (ESP), wet scrapper and chimney as its important equipment. The proposed option applies principles of process integration: it preheats kiln inlet streams such as the kiln feed and slinger coal up to 170°C using the waste gas exiting the ESP, and cools the kiln outlet stream from 1020°C to 110°C using kiln air. The working areas in the plant where energy is being lost and can be conserved are identified. Detailed material and energy balances are carried out around the sponge iron plant, and a modified model is developed to find the coal requirement of the proposed option, based on hot utility, heats of reaction, kiln feed and air preheating, radiation losses, dolomite decomposition, the heat required to vaporize the coal volatiles, etc. As coal is used both as utility and as process stream, an iterative approach is used in the solution methodology to compute coal consumption. Further, the water consumption, operating cost, capital investment, waste gas generation, profit, and payback period of the modification are computed, and operational aspects of the proposed design are discussed. To recover and integrate the waste heat available in the plant, three gas-solid heat exchangers and four insulated ducts, each with one FD fan, are installed additionally. Thus, the proposed option requires a total capital investment of $0.84 million. 
Preheating of the kiln feed, slinger coal and kiln air streams reduces coal consumption by 24.63%, which in turn reduces waste gas generation by 25.2% in comparison to the existing process. Moreover, a 96% reduction in water consumption is also observed, which is an added advantage of the modification. Consequently, the total profit is found to be $2.06 million/year with a payback period of only 4.97 months. The energy efficient factor (EEF), the percentage of the maximum energy that can be saved through the design, is found to be 56.7%. Results of the proposed option are also compared with the literature and found to be in good agreement.
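The payback arithmetic in these figures can be reproduced directly: payback in months is capital investment divided by annual profit, times twelve. The small difference from the reported 4.97 months comes from rounding in the quoted investment and profit figures.

```python
# Simple payback period from the investment and profit quoted above.
investment = 0.84e6     # total capital investment, $
annual_profit = 2.06e6  # total profit, $/year

payback_months = investment / annual_profit * 12
print(f"payback = {payback_months:.2f} months")  # ~4.9 months
```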

Keywords: coal consumption, energy conservation, process integration, sponge iron plant

Procedia PDF Downloads 144