Search results for: concurrent validity
41 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To address this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. Classification is improved by Singular Value Decomposition since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thus reducing unwanted information such as noise. Singular Value Decomposition makes it possible to define a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. In this way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are applied in the identification process. In F2 the angle is weighted, since the top singular vectors dominate the contribution to the formation of a target signal, whereas F1 simply uses the unweighted angle.
In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so, to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted, with the aim of eventually using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
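The subspace-angle scheme the abstract describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the 95% energy threshold, the matrix layout (one column per range profile), and the use of only the smallest principal angle (the unweighted, F1-style metric) are assumptions made for the example.

```python
import numpy as np

def signal_subspace(profiles, energy=0.95):
    """Left singular vectors spanning the signal subspace that retains
    the given fraction of the total energy; the remaining directions
    form the noise subspace and are discarded."""
    U, s, _ = np.linalg.svd(profiles, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(frac, energy)) + 1
    return U[:, :k]

def subspace_angle(U1, U2):
    """Smallest principal angle between two orthonormal bases: the
    cosines of the principal angles are the singular values of U1^T U2."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return float(np.arccos(np.clip(s[0], -1.0, 1.0)))

def identify(test_profiles, database, energy=0.95):
    """Return the target whose training subspace minimizes the angle
    to the test subspace (the unweighted, F1-style criterion)."""
    Ut = signal_subspace(test_profiles, energy)
    angles = {name: subspace_angle(Ut, signal_subspace(train, energy))
              for name, train in database.items()}
    return min(angles, key=angles.get)
```

Weighting each principal angle by the corresponding singular values, as in the F2 metric, would be a small extension of `subspace_angle`.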
Procedia PDF Downloads 354
40 The Relationships between Sustainable Supply Chain Management Practices, Digital Transformation, and Enterprise Performance in Vietnam
Authors: Thi Phuong Pham
Abstract:
This paper explores the intricate relationships between Sustainable Supply Chain Management (SSCM) practices, digital transformation (DT), and enterprise performance within the context of Vietnam. Over the past two decades, there has been a paradigm shift in supply chain management, with sustainability gaining prominence due to increasing concerns about climate change, labor practices, and the environmental impact of business operations. In the ever-evolving realm of global business, the intersecting dynamics of sustainability and DT have become pivotal catalysts for organizational success. This research investigates how integrating SSCM with DT can enhance enterprise performance, a subject of significant relevance as Vietnam undergoes rapid economic growth and digital transformation. The objectives of this research are twofold: (1) to examine the effects of SSCM practices on enterprise performance in three critical aspects: economic, environmental, and social performance in Vietnam; and (2) to explore the mediating role of DT in this relationship. By analyzing these dynamics, the study aims to provide valuable insights for policymakers and the academic community regarding the potential benefits of aligning SSCM principles with digital technologies. To achieve these objectives, the research employs a robust mixed-method approach. It begins with a comprehensive literature review to establish a theoretical framework that underpins the empirical analysis. Data collection was conducted through a structured survey targeting Vietnamese enterprises, with the survey instrument designed to measure SSCM practices, DT, and enterprise performance using a five-point Likert scale. The reliability and validity of the survey were ensured by pre-testing with industry practitioners and refining the questionnaire based on their feedback.
For data analysis, structural equation modeling (SEM) was employed to quantify the direct effects of SSCM on enterprise performance, while mediation analysis using the PROCESS macro 4.0 in SPSS was conducted to assess the mediating role of DT. The findings reveal that SSCM practices positively influence enterprise performance by enhancing operational efficiency, reducing costs, and improving sustainability metrics. Furthermore, DT acts as a significant mediator, amplifying the positive impacts of SSCM practices through improved data management, enhanced communication, and more agile supply chain processes. These results underscore the critical role of DT in maximizing the benefits of SSCM practices, particularly in a developing economy like Vietnam. This research contributes to the existing body of knowledge by highlighting the synergistic effects of SSCM and DT on enterprise performance. It offers practical implications for businesses seeking to enhance their sustainability and digital capabilities, providing a roadmap for integrating these two pivotal aspects to achieve competitive advantage. The study's insights can also inform governmental policies designed to foster sustainable economic growth and digital innovation in Vietnam.
Keywords: sustainable supply chain management, digital transformation, enterprise performance, Vietnam
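The mediation logic described above (direct effects plus an indirect effect through a mediator) can be illustrated with a product-of-coefficients estimator and a percentile bootstrap, which is the general idea behind PROCESS-style simple mediation. This is a generic sketch on simulated data, not the study's actual model or dataset; the variable roles and path values below are invented for illustration.

```python
import numpy as np

def ols_coefs(X, y):
    """OLS coefficients for y regressed on X (intercept prepended)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    """Simple mediation x -> m -> y: the a-path (x on m) times the
    b-path (m on y, controlling for x)."""
    a = ols_coefs(x.reshape(-1, 1), m)[1]
    b = ols_coefs(np.column_stack([x, m]), y)[2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect
    effect, the resampling scheme popularized by the PROCESS macro."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        est.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

On simulated data the point estimate lands near the product of the true a- and b-paths, and a confidence interval excluding zero is the usual evidence for mediation.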
Procedia PDF Downloads 23
39 The Relationship Between Teachers’ Attachment Insecurity and Their Classroom Management Efficacy
Authors: Amber Hatch, Eric Wright, Feihong Wang
Abstract:
Research suggests that attachment in close relationships affects one’s emotional processes, mindfulness, conflict-management behaviors, and interpersonal interactions. Attachment insecurity is often associated with maladaptive social interactions and suboptimal relationship qualities. Past studies have considered how the nature of emotion regulation and mindfulness in teachers may be related to student or classroom outcomes, but no research has examined how the relationship between such internal experiences and classroom management outcomes may also be related to teachers’ attachment insecurity. This study examined the interrelationships between teachers’ attachment insecurity, mindfulness tendencies, emotion regulation abilities, and classroom management efficacy as indexed by students’ classroom behavior and teachers’ response effectiveness. Teachers’ attachment insecurity was evaluated using the global ECRS-SF, which measures both attachment anxiety and avoidance. The present study includes a convenience sample of 357 American elementary school teachers who responded to a survey regarding their classroom management efficacy, attachment in/security, dispositional mindfulness, emotion regulation strategies, and difficulties in emotion regulation, primarily assessed via pre-existing instruments. Good construct validity was demonstrated for all scales used in the survey. Sample demographics, including gender (94% female), race (92% White), age (M = 41.9 yrs.), years of teaching experience (M = 15.2 yrs.), and education level, were similar to the population from which the sample was drawn (i.e., American elementary school teachers), although White women were slightly overrepresented. Correlational results suggest that teacher attachment insecurity is associated with poorer classroom management efficacy as indexed by students’ disruptive behavior and teachers’ response effectiveness.
Attachment anxiety was a much stronger predictor of adverse student behaviors and ineffective teacher responses to those behaviors than attachment avoidance. Mindfulness, emotion regulation abilities, and years of teaching experience predicted positive classroom management outcomes. Attachment insecurity and mindfulness were more strongly related to frequent adverse student behaviors, while emotion regulation abilities were more strongly related to teachers’ response effectiveness. Teaching experience was negatively related to attachment insecurity and positively related to mindfulness and emotion regulation abilities. Although the data were cross-sectional, path analyses revealed that attachment insecurity is directly related to classroom management efficacy, and that this relationship is further mediated by emotion regulation and mindfulness in teachers through two routes. The first route suggests double mediation, by teachers’ emotion regulation and then teacher mindfulness, in the relationship between teacher attachment insecurity and classroom management efficacy. In the second route, mindfulness alone mediated the relationship between attachment insecurity and classroom management efficacy, resulting in improved model fit statistics. However, this indirect effect is much smaller than the double mediation route through emotion regulation and mindfulness. Given the significant prediction of teachers’ classroom management efficacy by attachment insecurity, mindfulness, and emotion regulation, both directly and indirectly, the authors recommend improving teachers’ classroom management efficacy via a three-pronged approach aimed at enhancing teachers’ secure attachment and supporting their learning of adaptive emotion regulation strategies and mindfulness techniques.
Keywords: classroom management efficacy, student behavior, teacher attachment, teacher emotion regulation, teacher mindfulness
Procedia PDF Downloads 85
38 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv
Authors: Md. Samsuddin Ansari, Ashish Arora
Abstract:
Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen stems from its ability to persist in host tissues. Cases of multidrug-resistant (MDR) mycobacteria increase every day, a phenomenon associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in regulating the phenolic acid stress response in various microorganisms, including Mycobacterium tuberculosis H37Rv. Recent research has shown that PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein, and we are exploring its biophysical and structural characterization. The rv0047 gene was amplified by PCR using primers containing EcoRI and HindIII restriction enzyme sites, cloned into the pET-NH6 vector, and overexpressed in E. coli DH5α and BL21 (λDE3) cells, followed by purification on a Ni2+-NTA column and size exclusion chromatography. Differential scanning calorimetry was used to assess thermal stability: the protein has a transition temperature (Tm) of 55.29 ºC and an enthalpy change (ΔH) of 6.92 kcal/mol. Circular dichroism was used to study the secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques such as circular dichroism, fluorescence, and absorbance measurements over a wide pH range (pH 2.0 to pH 12.0).
At low and high pH, the protein shows drastic changes in its secondary and tertiary structure. EMSA studies showed specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined the CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by over-expressing the Rv0047c gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using the Swiss Model and Modeller, with validity checked by the Ramachandran plot. We performed molecular docking of Rv0047c with dnaA using PatchDock, followed by refinement with FireDock. Through this, it is possible to identify the binding hot-spots of the receptor molecule with the ligand, the nature of the interface itself, and the conformational change undergone by the protein. We are using X-ray crystallography to unravel the structure of Rv0047c. Overall, the studies show that Rv0047c may play a role in transcription regulation, provide an insight into the activity of Rv0047c across the pH range of the subcellular environment, and help in understanding protein-protein interactions, a novel target for killing dormant bacteria and a potential strategy for tuberculosis control.
Keywords: Mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction
Procedia PDF Downloads 121
37 Revenge: Dramaturgy and the Tragedy of Jihad
Authors: Myriam Benraad
Abstract:
On 5 July 2016, just days before the bloody terrorist attack on the Promenade des Anglais in Nice, the Al-Hayat media centre, one of the official propaganda branches of the Islamic State, broadcast a French nasheed which paid tribute to the Paris and Brussels attacks of November 2015 and March 2016. Entitled 'My Revenge', the terrorist anthem was of rare vehemence. It mentioned, in sequence, 'huddled bodies', in a reference to the civilian casualties of Western air strikes in the Iraqi-Syrian zone, 'explosive belts', 'sharp knives', 'large-calibre weapons', as well as 'localised targets'. France was accused of bearing responsibility for the wave of attacks on its territory since the Charlie Hebdo massacre of January 2015 due to its 'ruthless war' against the Muslim world. Evoking an 'old aggression' and the 'crimes and spoliations' of which France has made itself guilty, the jihadist hymn depicted the rebirth of the caliphate as 'laudable revenge'. The notion of revenge has always been central to contemporary jihadism, understood both as a revolutionary ideology and a global militant movement. In recent years, the attacks carried out in Europe and elsewhere in the world have, for the most part, been claimed in its name. Whoever says jihad says drama, yet few studies, if any, have looked at its dramatic and emotional elements, most notably its tragic vengefulness. This seems all the more astonishing given that jihad is filled with drama; it could even be seen as a drama in its own right. The jihadists perform a script and take on roles inspired by their respective group’s culture (norms, values, beliefs, and symbols). The militants stage and perform this script for a designated audience, whether partisan, sympathising or hostile towards them and their cause. This research paper examines the dramaturgy of jihadism and, in particular, the genre that best characterises its violence: revenge tragedy.
Theoretically, the research relies on the tools of social movement theory and the sociology of emotions. Methodologically, it draws on dramaturgical analysis and a combination of qualitative and quantitative tools to attain valuable observations of a number of developments, trends, and patterns. The choice has been made to focus mainly, though not exclusively, on the attacks which have taken place since 2001 in the European Union, and in particular on member states that have been significantly hit by jihadist terrorism. The research looks at a number of representative longitudinal samples, identifying continuities and discontinuities, similarities, but also substantial differences. The preliminary findings tend to establish the relevance and validity of this approach in helping make better sense of sensitisation, mobilisation, and survival dynamics within jihadist groups, and of motivations among individuals who have embraced violence. They also illustrate its pertinence for counterterrorism policymakers and practitioners. Through drama, jihadist groups ensure the unceasing regeneration of their militant cause as well as their legitimation among their partisans. Without drama, and without the spectacular ideological staging of reality, they would not be able to maintain their attraction potential and power of persuasion.
Keywords: jihadism, dramaturgy, revenge, tragedy
Procedia PDF Downloads 135
36 The Contemporary Format of E-Learning in Teaching Foreign Languages
Authors: Nataliya G. Olkhovik
Abstract:
Nowadays, the system of Russian higher medical education has undertaken initiatives that focus on the resources of e-learning in teaching foreign languages. Obviously, face-to-face communication in foreign languages has many advantages in terms of effectiveness in comparison with the potential of e-learning. Thus, we have faced the necessity of strengthening the capacity of e-learning via the integration of active methods into the process of teaching foreign languages, such as the project activity of students. Successful project activity of students should involve the following components: monitoring, control, methods of organizing the student’s activity in foreign languages, stimulating their interest in the chosen project, approaches to self-assessment, and methods of raising their self-esteem. Contemporary methodology treats the project as a specific method that activates the potential of a student’s cognitive function, emotional reaction, ability to work in a team, commitment, skills of cooperation and, consequently, their readiness to verbalize ideas, thoughts and attitudes. Verbal activity in a foreign language is a complex conception that consolidates both cognitive (involving speech) capacity and individual traits and attitudes such as initiative, empathy, devotion, responsibility, etc. In organizing project activity by means of e-learning within the ‘Foreign Language’ discipline, we have to take all the characteristics mentioned above into consideration and work out an effective way to implement it in teaching practice to boost its educational potential. We have integrated into the Moodle e-platform a project activity module consisting of the following blocks of tasks that lead students to research, cooperate, strive for leadership, pursue the goal and finally verbalize their intentions.
Firstly, we introduce the project by activating students’ self-activity with the tasks of the ‘Preparation of the project’ phase: choose the topic and justify it; identify the problematic situation and its components; set the goals; create your team, choose the leader, distribute the roles in your team; make a written report justifying the validity of your choices. Secondly, in the ‘Planning the project’ phase, we ask students to present an analysis of the problem in terms of reasons, ways and methods of solution and to define the structure of their project (here students may choose oral or written presentation by submitting a request in the e-platform, whereas the teacher decides which form of presentation to prefer). Thirdly, the students have to design the visual aids and speech samples (functional phrases, introductory words, keywords, synonyms, opposites, attributive constructions) and then, after checking, discussing and correcting with a teacher via Moodle, present them in front of the audience. And finally, we introduce the phase of self-reflection, which aims to awaken the inner desire of students to improve their verbal activity in a foreign language. As a result, by implementing project activity in the e-platform, we try to widen the framework of a traditional foreign language lesson by tapping the potential of students’ personal traits and attitudes.
Keywords: active methods, e-learning, improving verbal activity in foreign languages, personal traits and attitudes
Procedia PDF Downloads 105
35 Discourse Functions of Rhetorical Devices in Selected Roman Catholic Bishops' Pastoral Letters in the Ecclesiastical Province of Onitsha, Nigeria
Authors: Virginia Chika Okafor
Abstract:
The pastoral letter, an open letter addressed by a bishop to members of his diocese for the purpose of promoting faith and good Christian living, constitutes a persuasive religious discourse characterized by numerous rhetorical devices. Previous studies on Christian religious language have concentrated mainly on sermons, liturgy, prayers, theology, scriptures, hymns, and songs, to the exclusion of the persuasive power of pastoral letters. This study, therefore, examined major rhetorical devices in selected Roman Catholic bishops’ Lenten pastoral letters in the Ecclesiastical Province of Onitsha, with a view to determining their persuasive discourse functions. Aristotelian rhetoric was adopted as the framework because of its emphasis on persuasion through three main rhetorical appeals: logos, pathos, and ethos. Data were drawn from 10 pastoral letters of five Roman Catholic bishops in five dioceses (two letters from each) out of the seven in the Ecclesiastical Province of Onitsha. The five dioceses (the Onitsha archdiocese and the Nnewi, Awka, Enugu, and Awgu dioceses) were chosen because pastoral letters are regularly published there. The 10 pastoral letters were published between 2000 and 2010 and range between 20 and 104 pages. They were selected, through purposive sampling, based on consistency in publication and rhetorical content. Data were subjected to discourse analysis. Three categories of rhetorical devices were identified: those relating to logos (logical devices), those relating to pathos (pathetical devices), and those relating to ethos (ethical devices). The major logical devices deployed were: testimonial reference, functioning as authority to validate messages; logical arguments appealing to the rationality of the audience; nominalization and passivation objectifying the validity of ideas; and modals of obligation/necessity appealing to the audience’s sense of responsibility and moral duty.
Prominent among the pathetical devices deployed were: the use of the Igbo language to express solidarity with the audience; the inclusive pronoun (we) to create a feeling of belonging, collectivism and oneness with them; prayers to inspire them; and positive emotion-laden words referring to the Roman Catholic Church (RCC) to keep the audience emotionally attached to it. Finally, the major ethical devices deployed were: the use of the first-person singular pronoun (I) and imperatives to invoke the authority of the bishops’ office; Latinisms to show learnedness; greetings and appreciation to express goodwill; and exemplary Biblical characters as models of faith, repentance, and love. The rhetorical devices were used in relation to the bishops’ messages of faith, repentance, love and loyalty to the Roman Catholic Church. Roman Catholic bishops’ pastoral letters in the Ecclesiastical Province of Onitsha are thus characterized by logos-, pathos-, and ethos-related rhetorical devices designed to persuade the audience to live according to the bishops’ messages of faith, love, repentance, and loyalty to the Roman Catholic Church. These rhetorical devices establish the pastoral letters as a significant form of persuasive religious discourse.
Keywords: Ecclesiastical Province of Onitsha, pastoral letters, persuasive discourse functions, rhetorical devices, Roman Catholic bishops
Procedia PDF Downloads 438
34 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review
Authors: Zihan Geng, L. Quentin Dixon
Abstract:
It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies have also reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children’s executive function and attention control. The studies included in the review were published between 2010 and 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. These terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the children’s group. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles, with a total of 3367 participants, were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data was collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly in the U.S., Canada, and the UK. A variety of languages, such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese, were involved. In relation to cognitive outcomes, the studies examined children’s overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention.
The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the cooperation of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition, measured by conflict tasks, but not better response inhibition, measured by delay tasks. Further, bilingual children showed better inhibitory control on bivalent displays, which resemble the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children’s attention systems associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend heavily on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should consistently report the validity of the cognitive measures.
Keywords: attention, bilingual advantage, children, executive function
Procedia PDF Downloads 185
33 Becoming Vegan: The Theory of Planned Behavior and the Moderating Effect of Gender
Authors: Estela Díaz
Abstract:
This article aims to make three contributions. First, it builds on the ethical decision-making literature by exploring factors that influence the intention to adopt veganism. Second, it studies the superiority of extended models of the Theory of Planned Behavior (TPB) for understanding the process involved in forming the intention to adopt veganism. Third, it analyzes the moderating effect of gender on TPB, given that attitudes and behavior towards animals are gender-sensitive. No study, to our knowledge, has examined these questions. Veganism is not a diet but a political and moral stand that excludes, for moral reasons, the use of animals. Although there is growing interest in studying veganism, it continues to be overlooked in empirical research, especially within the domain of social psychology. TPB has been widely used to study a broad range of human behaviors, including moral issues. Nonetheless, TPB has rarely been applied to examine ethical decisions about animals and, even less, veganism. Hence, the validity of TPB in predicting the intention to adopt veganism remains unanswered. A total of 476 non-vegan Spanish university students (55.6% female; mean age 23.26 years, SD = 6.1) responded to an online or pencil-and-paper self-report questionnaire based on previous studies. The extended TPB models incorporated two background factors: ‘general attitudes towards humanlike attributes ascribed to animals’ (AHA) (capacity for reason/emotions/suffering, moral consideration, and affect towards animals); and ‘general attitudes towards 11 uses of animals’ (AUA). SPSS 22 and SmartPLS 3.0 were used for statistical analyses. This study constructed a second-order reflective-formative model and took the multi-group analysis (MGA) approach to study gender effects. Six models of TPB (the standard model and five competing models) were tested. No a priori hypotheses were formulated. The results gave partial support to TPB.
Attitudes (ATTV) (β = .207, p < .001), subjective norms (SNV) (β = .323, p < .001), and perceived behavioral control (PBC) (β = .149, p < .001) had significant direct effects on intentions (INTV). This model accounted for 27.9% of the variance in intention (R2Adj = .275) and had a small predictive relevance (Q2 = .261). However, findings from this study reveal that, contrary to what TPB generally proposes, the effect of the background factors on intentions was not fully mediated by the proximal constructs of intentions. For instance, in the final model (Model #6), both factors had significant multiple indirect effects on INTV (β = .074, 95% CI = .030, .126 [AHA:INTV]; β = .101, 95% CI = .055, .155 [AUA:INTV]) and significant direct effects on INTV (β = .175, p < .001 [AHA:INTV]; β = .100, p = .003 [AUA:INTV]). Furthermore, the addition of direct paths from background factors to intentions improved the explained variance in intention (R2 = .324; R2Adj = .317) and the predictive relevance (Q2 = .300) over the base model. This supports the existing literature on the superiority of enhanced TPB models for predicting ethical issues, which suggests that moral behavior may add additional complexity to decision-making. Regarding gender effects, MGA showed that gender only moderated the influence of AHA on ATTV (e.g., βWomen−βMen = .296, p < .001 [Model #6]). However, other observed gender differences (e.g., the explained variance of the model for intentions was always higher for men than for women; for instance, R2Women = .298 and R2Men = .394 [Model #6]) deserve further consideration, especially for developing more effective communication strategies.
Keywords: veganism, Theory of Planned Behavior, background factors, gender moderation
Procedia PDF Downloads 347
32 School Students’ Career Guidance in the Context of Inclusive Education in Kazakhstan: Experience and Perspectives
Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip
Abstract:
The article presents the main results of the study conducted within the grant project «Organizational and methodological foundations for ensuring the inclusiveness of school students’ career guidance» (2022-2024). The main aim of the project is to address the absence of developed mechanisms coordinating the activities of all stakeholders in preparing school students for a conscious career choice, taking into account their individual opportunities and special educational needs. To achieve this aim, according to the implementation plan, an analysis of foreign and national literature on the problem was conducted, the state of school students’ career guidance and socialization in the context of inclusive education was studied, and international experience on the issue was explored. The authors’ analysis of the national literature has shown an annual increase in the number of students with special educational needs, as well as rapidly changing labour market demands influencing their professional self-determination in modern society. Participants from five regions of the country, including students, their parents, general secondary school administrators and educators, as well as employers, took part in the study; the regions were selected with geographical location in mind: south, north, west, centre, and the cities of republican significance. To ensure the validity of the study’s results, the triangulation method was utilised, including both qualitative and quantitative methods. The data were analysed independently and compared with each other. Ethical principles were observed during all stages of the study.
The characteristics of the career guidance system in the modern school, the role and involvement of stakeholders in that system, educators’ opinions on school students’ preparedness for career choice, and the factors impeding the effectiveness of career guidance in schools were examined. The study revealed the problem of stakeholders’ disunity and inconsistency, which causes systemic labour market distortions, the growth of low-skilled labour, and unemployment, including among people with special educational needs. Another issue identified by the researchers was educators’ insufficient readiness to prepare students for career choice in the context of inclusive education. To study cutting-edge experience in organizing career guidance for young people and to develop mechanisms coordinating the actions of all stakeholders in preparing students for career choice, the career guidance institutions of France, Japan, and Germany were explored. Based on the study results and international experience, a systemic contemporary model of school students’ professional self-determination, considering their individual opportunities and special educational needs, has been developed. The main principles of this model are consistency, accessibility, inclusiveness, openness, coherence, and continuity. Perspectives for the development of students’ career guidance in the context of inclusive education have been suggested.
Keywords: career guidance, inclusive education, model of school students’ professional self-determination, psychological and pedagogical support, special educational needs
31 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential pace of today’s knowledge-intensive world increasingly confronts organizations with formidable challenges. Organizations are thus introduced to a new lexicon of descriptors belonging to a new paradigm of who, what and how knowledge should be managed at individual and organizational levels. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that little research exists on measuring knowledge sharing through a multi-layered structure of ideas with, at its foundation, philosophical assumptions to support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question arises from the observation that, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge on who, what and how it should be done. The main premise of this research is the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge, where learning becomes the norm. The theoretical constructs were derived from the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management.
While it is recognised that it might be somewhat problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs were identified through a comprehensive review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa that relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of items and correct the wording of issues. Through analysis of the collected surveys, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated using Cronbach’s α for the two sections of the instrument, at organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, the organization realised the importance of consolidating its knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
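The reliability check described above, Cronbach's alpha over an item-score matrix, can be sketched in a few lines. The scale, item count, and responses below are hypothetical stand-ins, not the study's instrument or data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))

# Hypothetical 5-point Likert responses to a 4-item knowledge-sharing scale,
# driven by a shared latent trait so the items correlate
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
responses = np.clip(np.round(3 + trait + rng.normal(scale=0.7, size=(200, 4))), 1, 5)
alpha = cronbach_alpha(responses)
```

Values of alpha above roughly 0.7 are conventionally read as acceptable internal consistency for a scale section.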
30 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and intertrial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a pouch in a vest between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration time constant τ, in addition to relative horizontal force (F₀), theoretical maximal velocity (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time when velocity had dropped 0.4 m/s below peak; 3. the time when velocity had dropped 0.6 m/s below peak; and 4. the time when the integrated distance from the GPS/GNSS signal reached 40 m.
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed utilizing intraclass correlation coefficients (ICC). For goodness-of-fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the -0.4 and -0.6 m/s velocity decay methods, while the 40 m end had the highest RSS values. For inter-trial reliability, the end of sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should set end of sprint detection either at the time peak velocity is reached or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics, although more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
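The model parameters above come from the mono-exponential speed-time function commonly used for GPS/GNSS sprint data, v(t) = MSS·(1 − exp(−t/τ)). The sketch below fits that model and derives the FVP metrics on simulated data; the sampling rate, noise level, parameter values, and the per-kilogram, air-resistance-free simplification are all illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def speed_model(t, mss, tau):
    """Mono-exponential sprint model: v(t) = MSS * (1 - exp(-t / tau))."""
    return mss * (1.0 - np.exp(-t / tau))

# Hypothetical 10 Hz velocity trace: MSS = 8 m/s, tau = 1.2 s, GPS-like noise
rng = np.random.default_rng(2)
t = np.arange(0.0, 6.0, 0.1)
v = speed_model(t, 8.0, 1.2) + rng.normal(scale=0.05, size=t.size)

# Fit tau by grid search; for each candidate tau the best MSS is closed-form
best = (np.inf, 0.0, 0.0)
for tau_c in np.linspace(0.5, 2.5, 401):
    g = 1.0 - np.exp(-t / tau_c)
    mss_c = float(np.dot(v, g) / np.dot(g, g))    # least-squares scale factor
    rss_c = float(np.sum((v - mss_c * g) ** 2))   # residual sum of squares
    if rss_c < best[0]:
        best = (rss_c, mss_c, tau_c)

rss, mss, tau = best
f0 = mss / tau        # relative F0 (per kg), air resistance neglected
v0 = mss              # V0 equals MSS in this simplified model
pmax = f0 * v0 / 4.0  # apex of the linear force-velocity relationship
```

The RSS computed here is the same goodness-of-fit criterion used in the study to rank the end-detection methods: a later, noisier sprint end inflates the residuals of this fit.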
29 Self-Selected Intensity and Discounting Rates of Exercise in Comparison with Food and Money in Healthy Adults
Authors: Tamam Albelwi, Robert Rogers, Hans-Peter Kubis
Abstract:
Background: Exercise is widely acknowledged as a highly important health behavior that reduces risks related to lifestyle diseases like type 2 diabetes and cardiovascular disease. However, exercise adherence is low in high-risk groups, and a sedentary lifestyle is more the norm than the exception. Expressed reasons for exercise participation are often based on delayed outcomes related to health threats and benefits, but also on enjoyment. That exercise can be perceived as rewarding is well established in the animal literature, but the evidence in humans is sparse. Additionally, how stably a reward is perceived across time delays is an important question influencing decision-making (in favor of or against a behavior). For the modality of exercise, this has not been examined before. We therefore investigated the discounting of pre-established self-selected exercise compared with the established rewards of food and money, using a computer-based discounting paradigm. We hypothesized that exercise would be discounted like an established reward (food and money), but that its discounting rate would be similar to that of a consumable reward like food. Additionally, we expected that individual characteristics like preferred intensity, physical activity and body characteristics would be associated with discount rates. Methods: 71 participants took part in four sessions. The sessions were designed to let participants select their preferred exercise intensity on a treadmill. Participants were asked to adjust their speed to optimize pleasantness over an exercise period of up to 30 minutes; heart rate and pleasantness ratings were measured. In further sessions, the established exercise intensity was modified and tested for perceptual validity. In the last exercise session, ratings of perceived exertion were measured at the preferred intensity level.
Furthermore, participants filled in questionnaires related to physical activity, mood, craving, and impulsivity, and answered choice questions on a bespoke computer task to establish discounting rates for their preferred exercise (kex), their favorite food (kfood) and a value-matched amount of money (kmoney). Results: Participants’ self-selected preferred speed was 5.5±2.24 km/h, at a heart rate of 120.7±23.5 bpm and a perceived exertion rating of 10.13±2.06. This shows that participants preferred a light exercise intensity with low to moderate cardiovascular strain based on perceived pleasantness. Computer assessment of discounting rates revealed that exercise was quickly discounted like a consumable reward, with no significant difference between kfood and kex (kfood = 0.322±0.263; kex = 0.223±0.203). However, kmoney (kmoney = 0.080±0.02) was significantly lower than the rates for exercise and food. Moreover, significant associations were found between preferred speed and kex (r = -0.302) and between physical activity levels and preferred speed (r = 0.324). The outcomes show that participants perceived and discounted self-selected exercise like an established reward, and that it was discounted more like a consumable reward. Moreover, exercise discounting was quicker in individuals who preferred lower speeds and were less physically active. This may indicate that in a choice conflict between exercise and food, the delay of exercise (because of distance) might disadvantage exercise as the chosen behavior, particularly in sedentary people. Conclusion: Exercise can be perceived as a reward and is discounted quickly in time, like food. A pleasant exercise experience is connected to low to moderate cardiovascular and perceptual strain.
Keywords: delay discounting, exercise, temporal discounting, time perspective
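Discount rates such as kex and kfood are conventionally obtained by fitting Mazur's hyperbolic model, V = A / (1 + kD), to indifference points from choice tasks. The sketch below recovers k from synthetic indifference points; the delays and the assumed k value are illustrative, not values from the study:

```python
import numpy as np

def hyperbolic_value(amount, delay, k):
    """Mazur's hyperbolic discounting: subjective value V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

# Hypothetical indifference points (fraction of the full reward) at delays in
# days, generated under an assumed discount rate k = 0.2 per day
delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0])
true_k = 0.2
indifference = hyperbolic_value(1.0, delays, true_k)

# Fit k by grid search minimising squared error over candidate rates
ks = np.linspace(0.001, 1.0, 2000)
errors = [np.sum((hyperbolic_value(1.0, delays, k) - indifference) ** 2) for k in ks]
k_hat = float(ks[int(np.argmin(errors))])
```

A larger fitted k means steeper devaluation with delay; the study's finding that kex and kfood exceed kmoney corresponds to exercise and food losing subjective value faster than money.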
28 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals
Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen
Abstract:
Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included related competencies, and courses about new technologies are even offered to health professionals. However, nurses still limit their role to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning with a group of graduate nurses through official training within a Master’s Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in Computer Engineering. Two students had obtained a nursing specialty through residency and two through equivalent recognition by an exceptional route. Except for the engineer, no subject had previously received training in this regard. All students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and had at least one tutorial. The maximum was three tutorials, totalling one hour. The material available for consultation included an example document drawn up according to the IEEE Standard on an issue not related to care.
The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the document’s characteristics and the degree of comprehension shown in elaborating the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10, with a range between 6 and 10. The selected topics rarely coincided among students. Examples of the care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of constructing technological tools that will impact the care of people, families and communities.
Keywords: care, learning, nursing, technology
27 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP
Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang
Abstract:
Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software package developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variable data generated by ClimateAP for the normal climate period through comparison with currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim 1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using adjusted R-squared and root mean squared error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and the Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAP v3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim 1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases, with 3,745 unique points.
ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of the variance in monthly maximum temperature, minimum temperature, average temperature, and precipitation, respectively. It outperforms WorldClim on 37 biologically relevant variables, with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher adjusted R-squared values across all landforms for precipitation. ClimateAP’s temperature data yields lower adjusted R-squared values than the gridded data in high-elevation, rugged, and mountainous areas, but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is thus validated against observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.
Keywords: climate data validation, data quality, Asia Pacific climate, climatic niche modeling, random forest models, tree species
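The two station-versus-prediction metrics used in this comparison, RMSE and the adjusted R-squared of a univariate regression, can be computed in a few lines. The station values and the two candidate prediction sources below are simulated stand-ins, not ClimateAP or WorldClim output:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error between station observations and predictions."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def adjusted_r2(obs, pred, n_predictors=1):
    """Adjusted R-squared of the univariate regression obs ~ pred."""
    slope, intercept = np.polyfit(pred, obs, 1)
    resid = obs - (slope * pred + intercept)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    n = obs.size
    return float(1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1))

# Hypothetical January mean temperatures at 100 stations, with two candidate
# climate surfaces of different error magnitude (finer vs. coarser product)
rng = np.random.default_rng(3)
obs = rng.uniform(-10, 25, size=100)
pred_fine = obs + rng.normal(scale=0.5, size=100)
pred_coarse = obs + rng.normal(scale=2.0, size=100)
```

A better climate surface shows up as a lower RMSE and a higher adjusted R-squared against the same set of stations, which is exactly how the 37-variable comparison above was scored.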
26 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era
Authors: Loha Hashimy, Isabella Castillo
Abstract:
In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be overstated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance of and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed to a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted: academic and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs and ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a range of drivers steering the HD maps landscape, from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of AVs and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors.
A nuanced PESTEL analysis further enriched our understanding, shedding light on the political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified, notable among them barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint-accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions.
Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers
25 Force Sensor for Robotic Graspers in Minimally Invasive Surgery
Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy
Abstract:
Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world during the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of the surgeon, providing 3D vision, motion scaling and hand-eye coordination, decreasing tremor, and reducing x-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site and perceive tactile information, because the robots are controlled remotely. The literature survey identified the lack of force feedback as the riskiest limitation of the existing technology. Without perception of the tool-tissue contact force, the surgeon might apply an excessive force, causing tissue laceration, or an insufficient force, causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time, in situ. The design of a tactile sensor is subject to a set of requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic force. In this study, a planar optical fiber-based sensor was proposed for mounting on a surgical grasper. It was developed based on the light intensity modulation principle. The deflectable part of the sensor was modeled as an Euler-Bernoulli beam resting on rigid substrates. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. An external force on the sensor caused deflection of the beam and the optical fiber simultaneously; the micro-bending of the optical fiber would consequently result in light power loss. The sensor was simulated and studied using finite element methods. A laser beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber. The output power was measured using a photodetector.
The voltage from the photodetector was calibrated to the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle, and the sensor demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable; it is postulated that this phenomenon is attributable to the critical angle of the optical fiber required for total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible, and easy-to-use solution, suitable for miniaturization and integration at the tip of robotic graspers. The geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. The design is intrinsically scalable and meets all the design requirements; therefore, it has significant potential for industrialization and mass production.
Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor
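The load-deflection relationship that drives the fiber micro-bending can be sketched with Euler-Bernoulli beam theory. The fixed-fixed boundary condition, the beam dimensions, and the steel material below are illustrative assumptions, since the abstract does not state them:

```python
def midspan_deflection(force_n: float, length_m: float, width_m: float,
                       thickness_m: float, youngs_pa: float) -> float:
    """Midspan deflection of a fixed-fixed rectangular Euler-Bernoulli beam
    under a central point load: delta = F * L^3 / (192 * E * I)."""
    inertia = width_m * thickness_m ** 3 / 12.0  # second moment of a rectangular section
    return force_n * length_m ** 3 / (192.0 * youngs_pa * inertia)

# Hypothetical steel beam: 10 mm span, 2 mm wide, 0.5 mm thick, E = 200 GPa
delta = midspan_deflection(1.0, 10e-3, 2e-3, 0.5e-3, 200e9)  # deflection at 1 N
```

Under these assumed values a 1 N grasp force deflects the beam by roughly 1.25 µm, a displacement of the order that would micro-bend the fiber and modulate the transmitted light power; the deflection scales linearly with force, consistent with the reported linearity of the calibration.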
24 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies
Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König
Abstract:
Online-based research has recently gained increasing attention from various fields of the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems. Furthermore, implementation of online studies still suffers from complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, built-in Google Translate functionality ensures automatic translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online, and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can be analyzed online within our framework using the integrated IPython notebook.
The framework was designed such that studies can be used interchangeably between researchers. This supports not only the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted with it. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. We conclude that our new framework constitutes a methodological innovation with wide-ranging advantages for online research, by which new insights can be revealed on the basis of massive data collection.
Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition
Procedia PDF Downloads 257
23 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter
Authors: Wesley Preßler, Lucie Schmidt
Abstract:
Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process, and ultimately to make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions of Working, Living, Housing, and Caring. To improve the reliability and relevance of the maturity assessment, the factors digital literacy, interpretive patterns, and technology acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Competence Framework (DigComp) as a basis. Interpretive patterns of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. These are not mere assessments, prejudices, or stereotyped perceptions, but collective patterns, rules, attributions of meaning, and the cultural repertoire that gives rise to these opinions and attitudes. 
Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform their neighborhood. This involves examining people's attitudes, beliefs, and values regarding technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance, the willingness of individuals to adopt and use new technologies, is another crucial factor for successful digital transformation. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used, complementing the interpretive patterns, to measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns, and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure a sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.
Keywords: digital transformation, interdisciplinary, maturity model, neighborhood
Procedia PDF Downloads 77
22 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to apply it only at a conceptual level. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study investigates the risk factors impacting the implementation of the audit risk model and proposes a complementary risk-based instrument, key risk indicators (KRIs), to support well-founded risk judgments and mitigate the heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. It uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit effort needed to mitigate a heightened RMM. Data were collected through two independent samples involving 767 participants and analyzed using exploratory and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, show high predictive power in determining the audit effort needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and for decision-making on the adoption of CAATTs. This study contributes to the existing literature by taking a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessment. 
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the scale was developed in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk, but additional risk factors may influence the assessment of the RMM; future research could therefore investigate other risk segments, such as operational and financial risks, to broaden the generalizability of our results. Third, although the sample size in this study fits accepted scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce equivocal results and suggest an extended risk model.
Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability
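Since the seventeen KRI items themselves are not published in the abstract, the following sketch only illustrates the general shape of such an instrument: Likert-rated items aggregated into a weighted composite that maps to an audit-effort recommendation. The two component names, the weights, and the thresholds are invented for illustration and are not the authors' validated scale.

```python
from statistics import mean

def kri_composite(inherent_items, control_items, w_inherent=0.6, w_control=0.4):
    """Hypothetical composite of two KRI components rated on a 1-5 Likert scale.

    inherent_items / control_items: lists of item ratings for the two
    (assumed) risk components. Weights and cut-offs are illustrative only.
    """
    score = w_inherent * mean(inherent_items) + w_control * mean(control_items)
    # Map the composite score to a coarse audit-effort recommendation.
    if score >= 4.0:
        effort = "extended substantive procedures"
    elif score >= 2.5:
        effort = "standard procedures plus targeted testing"
    else:
        effort = "standard procedures"
    return round(score, 2), effort
```

A validated instrument would instead derive the weights from the factor loadings obtained in the confirmatory factor analysis.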
Procedia PDF Downloads 77
21 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by machine learning, precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefit for cohorts of patients. As the majority of machine learning algorithms are heuristic, their outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, together with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much of the available information as possible. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. Current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, natural language processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually taken when machine learning algorithms process differential genomic or molecular data to find biomarkers. 
Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. The approach deciphers the biological meaning of the input data down to metabolic and physiologic mechanisms, based on a compiler whose grammars issue from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is the high accuracy of the diagnosis. Expressed as a multiple-condition diagnosis, constituted by main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to set a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
20 Developing and Testing a Questionnaire of Music Memorization and Practice
Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos
Abstract:
Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While the memorization and practice strategies of professionals have been studied, little research has examined how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. The survey was developed for a cross-cultural research project aiming to examine how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire development involved a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each identified aspect of practice and memorization, based on the literature and personal experience, and adapted from existing questionnaires. Item development took into consideration the varying levels of cognitive and social development of the target populations, as well as the diverse target learning environments. Items were initially grouped such that each group reflected a single underlying construct or behavior. The questionnaire comprised three sections: a demographics section, a section on practice (29 items), and a section on memorization (40 items). Next, the response process was considered: a 5-point Likert scale ranging from 'always' to 'never' was selected, with a verbal label and an image assigned to each response option, following established guidance on questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. 
Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were utilized to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach's alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement in instrumental practice and memorization of young musicians of different levels and instruments, across different learning and cultural environments. Interaction analysis of the cognitive interviews undertaken with these participants, however, revealed that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was therefore revised accordingly. The low Cronbach's alpha scores of many item groups indicated another issue with the original questionnaire: its low internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions. It can be seen to provide an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
Keywords: cross-cultural, memorization, practice, questionnaire, young musicians
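The internal-consistency check used in the pilot can be sketched generically; this is a textbook implementation of Cronbach's alpha, not the project's own analysis code:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for one item group.

    item_scores: list of rows, one per participant, each row holding that
    participant's scores on the k items of the group.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    k = len(item_scores[0])
    item_columns = list(zip(*item_scores))          # transpose to per-item columns
    sum_item_vars = sum(variance(col) for col in item_columns)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)
```

Perfectly consistent items yield an alpha of 1, while inconsistently answered items can drive alpha toward zero or even below it, the pattern the pilot observed in many item groups.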
Procedia PDF Downloads 123
19 Widely Diversified Macroeconomies in the Super-Long Run Cast Doubt on the Path-Independent Equilibrium Growth Model
Authors: Ichiro Takahashi
Abstract:
One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges that assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. The finding gives a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. On this view, the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of capital stock, an important supply-side factor, is no longer independent of business-cycle phenomena. This paper attempts to answer this question using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock in full-employment economies. 
(1) The simulated behaviors of key macroeconomic variables such as output, employment, and real wages showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments of the capital stock: a quantity adjustment and a relative-cost adjustment. The first is obvious and assumed by many business-cycle theorists. For the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were accompanied by hysteresis in real wages, interest rates, and investment. In particular, a sequence of simulation runs over a super-long horizon generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability
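The path-dependence mechanism described in (2) can be illustrated with a deliberately minimal toy model (this is not the Takahashi and Okada model; every parameter below is invented for illustration): a transient demand shock depresses the real wage, the wage recovers only slowly, and because capital deepening is attractive only while labour is expensive, the economy settles at a permanently lower capital stock.

```python
def simulate(shock_periods, T=300):
    """Toy hysteresis model: a transient demand shock leaves the
    capital stock permanently lower. All parameters are illustrative."""
    K, w = 50.0, 1.0
    for t in range(T):
        employed = 0.5 if t in shock_periods else 1.0   # shock halves employment
        u = 1.0 - employed
        # The wage falls quickly under unemployment but recovers only
        # slowly toward its full-employment level of 1.2.
        w = max(0.2, w - 0.10 * u + 0.005 * (1.0 - u) * (1.2 - w))
        # Capital deepening is attractive only when labour is expensive.
        invest = 1.0 if w > 0.9 else 0.2
        K = 0.99 * K + invest                            # 1% depreciation
    return K, w
```

With these parameters, a shock-free run converges near the high quasi-steady state of the capital stock, while a run shocked for the first 100 periods remains far below it 200 periods after the shock has ended, even though both runs share identical parameters: the final state depends on the history, not just the fundamentals.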
Procedia PDF Downloads 210
18 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents
Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady
Abstract:
Background: Lumbar pars pathology is a common cause of pain in the growing spine. It is seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health because of its resistance to traditional management. There is currently no consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that incorporates MRI findings, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury based on patient age, sport participation, and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the 'early defect' CT classification. The group previously referred to as a 'progressive stage' defect on CT can be split into 2A and 2B categories: 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal-stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until they become symptom-free. 
At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation focuses on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, CT is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (types 2B and 3) cannot heal; they are not braced, and rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with healing potential, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.
Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol
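The five-category scheme can be encoded compactly. In the sketch below, the CT stage labels ('normal', 'early', 'progressive', 'terminal') are shorthand for the CT classifications named in the abstract, not established clinical terminology, and the code is a summary of the proposed rules rather than a clinical tool:

```python
def classify_pars_lesion(ct_stage, hsc_on_mri):
    """Encode the proposed CT+MRI pars classification.

    ct_stage: 'normal', 'early', 'progressive', or 'terminal'
              (our shorthand for the CT-based stages).
    hsc_on_mri: True if high signal change is seen on T2 MRI.
    Returns the lesion type, or None if no lesion is detected.
    """
    if ct_stage == "normal":
        return "0" if hsc_on_mri else None      # stress reaction vs. no lesion
    if ct_stage == "early":
        return "1"
    if ct_stage == "progressive":
        return "2A" if hsc_on_mri else "2B"     # 2A retains healing potential
    if ct_stage == "terminal":
        return "3"                              # pseudarthrosis; no HSC expected
    raise ValueError(f"unknown CT stage: {ct_stage!r}")

# Per the protocol, types with healing potential are braced for 12 weeks;
# chronic defects (2B, 3) follow traditional rehabilitation instead.
BRACED_TYPES = {"0", "1", "2A"}
```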
Procedia PDF Downloads 153
17 About the State of Students’ Career Guidance in the Conditions of Inclusive Education in the Republic of Kazakhstan
Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip
Abstract:
Over the years of independence, Kazakhstan has not only ratified international documents regulating children's right to inclusive education but also developed its own inclusive educational policy. Along with this, the state pays particular attention to high school students' preparedness for professional self-determination. However, a number of problems in this field have been revealed, such as the lack of systemic mechanisms coordinating stakeholders' actions in preparing schoolchildren for a conscious choice of an in-demand profession that matches their individual capabilities and special educational needs (SEN). Analysis of the current situation indicates that school graduates' adaptation to the labor market does not meet the existing demands of society. According to the Ministry of Labor and Social Protection of the Population of the Republic of Kazakhstan, about 70% of Kazakhstani school graduates find it difficult to choose a profession, 87% of schoolchildren make their career choice under the influence of parents and school teachers, and 90% of schoolchildren and their parents have no idea about the professions most in demand on the market. The results of a study conducted by Korlan Syzdykova in 2016 indicated the urgent need of Kazakhstani school graduates for extensive information about in-demand professions and for professional assistance in choosing a profession in accordance with their individual skills, abilities, and preferences. A survey conducted by the Information and Analytical Center among heads of colleges in 2020 showed that, despite significant steps in creating conditions for students with SEN, such students face challenges in their studies because of the poor career guidance provided to them in schools. A study conducted by the Center for Inclusive Education of the National Academy of Education named after Y. 
Altynsarin in the state's general education schools in 2021 demonstrated the lack of career guidance and of pedagogical and psychological support for children with SEN. To investigate these issues, a further study was conducted to examine the state of students' career guidance and socialization, taking their SEN into account. The hypothesis of the study was that, to prepare school graduates for a conscious career choice, school teachers and specialists need to develop their competencies in the early identification of students' interests, inclinations, and SEN, and to ensure the necessary support for them. Five regions of the country, selected by geographical location, were involved in the study. A triangulation approach was utilized to ensure the credibility and validity of the research findings, combining theoretical methods (analysis of existing statistical data, legal documents, and results of previous research) with empirical ones (a school survey for students and interviews with parents, teachers, and representatives of school administrations). The data were analyzed independently and compared with each other. The survey included questions related to the provision of pedagogical support for school students in making their career choice. Ethical principles were observed in developing the methodology and in collecting, analyzing, and distributing the results. Based on the results, methodological recommendations on students' career guidance were developed for school teachers and specialists, taking into account students' individual capabilities and SEN.
Keywords: career guidance, children with special educational needs, inclusive education, Kazakhstan
Procedia PDF Downloads 172
16 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum
Authors: Divya Palaniappan
Abstract:
Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Preschool assessment tools often lack psychometric properties and have been developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and fraught with observer bias. The World of Discovery™ curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft, and a counting corner, so that children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can each grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of interdependent developmental areas (physical, cognitive, language, social, and emotional development), which for the first time offers a multi-domain approach. The assessment tool is a 5-point rating scale measuring these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner activity, the child's perceptual reasoning, pre-math abilities, hand-eye co-ordination, and fine motor skills can be observed and evaluated. 
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities, in real time and objectively. A pilot study of the tool was conducted with a sample of 100 children in the age group of 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has also been observed among children in a study conducted by Sibley and Etnier. In the children studied, 'comprehension' ability was found to be greater than 'reasoning' and pre-math abilities, consistent with the preoperational stage of Piaget's theory of cognitive development. The average scores of various parameters obtained through the tool corroborate the psychological theories on child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
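The norming step can be sketched as follows; the one-standard-deviation interpretation band is an illustrative convention, not a cut-off reported in the study:

```python
from statistics import mean, stdev

def build_norms(scores):
    """Derive norms from observed pilot scores: mean and standard deviation."""
    return mean(scores), stdev(scores)

def interpret(raw, mu, sigma):
    """Place a child's raw score relative to the normative band.

    Uses a z-score with an illustrative one-standard-deviation band;
    the study's actual cut-offs are not specified in the abstract.
    """
    z = (raw - mu) / sigma
    if z > 1.0:
        return "above expected range"
    if z < -1.0:
        return "below expected range"
    return "within expected range"
```

In practice, norms would be computed separately per developmental aspect and age band, since the tool rates cognitive, language, physical, social, and emotional development on separate 5-point scales.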
Procedia PDF Downloads 362
15 Explanation of the Main Components of the Unsustainability of Cooperative Institutions in Cooperative Management Projects to Combat Desertification in South Khorasan Province
Authors: Yaser Ghasemi Aryan, Firoozeh Moghiminejad, Mohammadreza Shahraki
Abstract:
Background: The cooperative institution is considered the first and most essential pillar of strengthening social capital, whose sustainability is the main guarantee of the survival and continued participation of local communities in natural resource management projects. The village development group and the microcredit fund are two important social and economic institutions in the implementation of the international project for the Restoration of Degraded Forest Lands (RFLDL) in Sarayan City, South Khorasan Province, where the participation of beneficiaries has yielded positive lessons and more effective projects to deal with desertification. However, the low activity or liquidation of some of these institutions has become an important challenge and concern for the project's executive experts. The current research was carried out with the aim of explaining the main components of the instability of these institutions. Materials and Methods: This research is descriptive-analytical in method and applied in purpose; information was collected through documentary and survey methods. The statistical population of the research included all members of the village development groups and microcredit funds in the target villages of the RFLDL project in Sarayan City, and the statistical sample was selected on the basis of Cochran's formula, checked against the Krejcie and Morgan table. After the validity of the questionnaire was confirmed by expert opinion, its reliability was calculated as 0.83, which shows the appropriate reliability of the researcher-made questionnaire. Data analysis was done using SPSS software. Results: The obstacles to the stability of social and economic networks were classified and prioritized into five groups: socio-cultural, economic, administrative, educational-promotional, and policy-management factors. 
Based on this, within the socio-cultural factors, the highest-priority items were 'not paying attention to the structural characteristics and composition of groups', 'lack of commitment and moral responsibility in some members of the group', and 'lack of a clear pattern for the preservation and survival of groups'. Within the administrative factors, they were 'irregularity in holding group meetings' and 'irregular attendance of members at meetings'; within the economic factors, 'small financial capital of the fund', 'the low amount of the fund's loans', and 'the fund's inability to conclude contracts and attract capital from other sources'; within the educational-promotional factors, 'job training not coinciding with the granting of loans to create jobs' and 'insufficient training for the effective use of loans and job creation'; and within the policy-management factors, 'failure to provide government facilities to support the funds'. Conclusion: In general, the results of this research show that policy-management factors and social factors, especially the structure and composition of social and economic institutions, are the most important obstacles to their sustainability. It is therefore suggested that cooperative institutions be formed on the basis of network analysis studies in order to achieve an appropriate composition of members.
Keywords: cooperative institution, social capital, network analysis, participation, Sarayan
Procedia PDF Downloads 55
14 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion
Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty
Abstract:
Reinforcing dental composites with micro-sized fibres can significantly improve their physio-mechanical properties. The short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre reinforced composites, which exhibit anisotropic properties. Short-fibre-reinforced dental composites are therefore gaining popularity among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photopolymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries and ultimately inducing failure of the restoration. The traditional methods of evaluating polymerisation shrinkage, using strain gauges, density-based measurements, dilatometers, or the bonded-disk technique, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, embedding an FBG array into the system can map the real-time shrinkage strain at different regions of the composite. Real-time monitoring of shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have been shown to reliably measure polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very few studies exist to establish the validity of FBG-based sensors for evaluating volumetric shrinkage of composites reinforced with randomly oriented fibres.
The present study aims to fill this research gap and is focussed on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% of silane-treated BaAlSiO₂ particulate fillers, or by adding 5% of micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% of silane-treated BaAlSiO₂ particulate fillers into the resin. For measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with the traditional method of evaluating volumetric shrinkage using density-based measurements. Degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres will directly correlate with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach can help assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly oriented fibre-reinforced dental composites as well.
Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors
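The strain read-out behind the FBG measurement above is a wavelength-shift conversion. A minimal sketch, assuming a standard 1550 nm Bragg wavelength and the commonly cited effective photo-elastic coefficient of silica fibre (p_e ≈ 0.22); neither value is reported in the abstract, and temperature is assumed constant (cross-sensitivity is ignored):

```python
def fbg_strain(wavelength_shift_nm, bragg_wavelength_nm=1550.0, pe=0.22):
    """Convert a Bragg-wavelength shift to axial strain at constant temperature.

    Relation: delta_lambda / lambda_B = (1 - pe) * strain,
    where pe ~ 0.22 is the effective photo-elastic coefficient of silica.
    Returns strain in microstrain (µε); shrinkage gives a negative value.
    """
    strain = wavelength_shift_nm / (bragg_wavelength_nm * (1.0 - pe))
    return strain * 1e6  # µε

# A polymerisation shrinkage that shifts the Bragg peak by -1.2 nm:
print(round(fbg_strain(-1.2), 1))  # → -992.6 µε
```

An array of such gratings, each with a distinct Bragg wavelength, is what lets one interrogator map strain at several locations simultaneously.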
Procedia PDF Downloads 154
13 Application of Satellite Remote Sensing in Support of Water Exploration in the Arab Region
Authors: Eman Ghoneim
Abstract:
The Arabian deserts include some of the driest areas on Earth. Yet their landforms preserve a record of past wet climates. During humid phases, the desert was green and contained permanent rivers, inland deltas and lakes. Some of their water would have seeped down and replenished the groundwater aquifers. When the wet periods came to an end, several thousand years ago, the entire region transformed into an extended band of desert and its original fluvial surface was totally covered by windblown sand. In this work, radar and thermal infrared images were used to reveal numerous hidden surface/subsurface features. Radar's long wavelength has the unique ability to penetrate surface dry sands and uncover buried subsurface terrain. Thermal infrared has also proven capable of spotting cooler, moist areas, particularly in hot dry surfaces. Integrating Radarsat images and GIS revealed several previously unknown paleoriver and lake basins in the region. One of these systems, known as the Kufrah, is the largest river basin yet identified in the Eastern Sahara. This river basin, which straddles the border between Egypt and Libya, flowed north parallel to the adjacent Nile River, with an extensive drainage area of 235,500 km² and a massive valley width of 30 km in some parts. This river most probably served as a spillway for an overflow from Megalake Chad to the Mediterranean Sea and thus may have acted as a natural water corridor used by human ancestors to migrate northward across the Sahara. The Gilf-Kebir is another large paleoriver system, located just east of Kufrah, that emanates from the Gilf Plateau in Egypt. Both river systems terminate in vast inland deltas at the southern margin of the Great Sand Sea. The trends of their distributary channels indicate that both rivers drained to a topographic depression that was periodically occupied by a massive lake. During dry climates, the lake dried up and was roofed by sand deposits, which today form the Great Sand Sea.
The enormity of the lake basin explains why continuous extraction of groundwater in this area is possible. A similar lake basin, delimited by former shorelines, was detected by radar space data just across the border of Sudan. This lake, called the Northern Darfur Megalake, covers a massive area of 30,750 km². These former lakes and rivers could potentially hold vast reservoirs of groundwater, oil and natural gas at depth. Like radar data, thermal infrared images have proven useful in detecting potential locations of subsurface water accumulation in desert regions. Analysis of both ASTER and daily MODIS thermal channels reveals several subsurface cool, moist patches in the sandy desert of the Arabian Peninsula. The analysis indicated that such evaporative cooling anomalies resulted from the subsurface transmission of monsoonal rainfall from the mountains to the adjacent plain. Drilling a number of wells in several locations proved the presence of productive water aquifers, confirming the validity of the data used and the approaches adopted for water exploration in dry regions.
Keywords: Radarsat, SRTM, MODIS, thermal infrared, near-surface water, ancient rivers, desert, Sahara, Arabian Peninsula
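The evaporative-cooling anomalies described above amount to finding pixels significantly cooler than their surroundings in a land-surface-temperature raster. The sketch below is a crude statistical proxy for that idea, not the authors' actual processing chain: the array, the 2-sigma threshold and the synthetic temperatures are all illustrative assumptions.

```python
import numpy as np

def cool_anomalies(lst, sigma=2.0):
    """Flag pixels significantly cooler than the scene mean in a
    land-surface-temperature (LST) array, as a simple proxy for the
    evaporative-cooling patches seen in ASTER/MODIS thermal channels."""
    mean, std = lst.mean(), lst.std()
    return lst < mean - sigma * std

# Synthetic 5x5 LST grid (kelvin) with one markedly cooler patch:
lst = np.full((5, 5), 320.0)
lst[2, 2] = 300.0          # a cool, possibly moist, pixel
mask = cool_anomalies(lst)
print(mask.sum(), bool(mask[2, 2]))  # → 1 True
```

In practice such a mask would be computed per acquisition date and intersected across many dates, since a persistent cool anomaly is a stronger hint of near-surface moisture than a single-day one.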
Procedia PDF Downloads 247
12 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. Yet the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and by accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section and one multiple concentric annular cross section reactor configurations were studied. The theoretical head loss was higher than that observed in the laboratory model in some tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors to be integrated into new or existing water treatment plants.
Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
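One of the simple conceptual models that such head-loss comparisons typically start from is Darcy-Weisbach with the hydraulic diameter of the annulus, D_h = D_outer − D_inner. The sketch below illustrates that textbook approach only; the abstract does not say which equations the authors tested, and the flow rate, geometry, smooth-wall Blasius correlation and laminar/turbulent switch at Re = 2300 are all illustrative assumptions.

```python
import math

def annulus_head_loss(q, d_outer, d_inner, length, nu=1.0e-6, g=9.81):
    """Darcy-Weisbach friction head loss for flow in a concentric annulus,
    using the hydraulic diameter D_h = D_outer - D_inner.

    q in m^3/s, diameters and length in m, nu = kinematic viscosity (m^2/s).
    Laminar (Re < 2300): f = 64/Re; turbulent: Blasius f = 0.316 Re^-0.25
    (smooth walls). Minor/accessory losses are ignored.
    """
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)   # annular flow area
    v = q / area                                       # mean velocity
    dh = d_outer - d_inner                             # hydraulic diameter
    re = v * dh / nu                                   # Reynolds number
    f = 64.0 / re if re < 2300.0 else 0.316 * re**-0.25
    return f * (length / dh) * v**2 / (2.0 * g)        # head loss, m

# e.g. 0.5 L/s through a 50/25 mm annulus, 1 m long (turbulent regime):
hl = annulus_head_loss(0.0005, 0.050, 0.025, 1.0)
print(f"{hl:.4f} m")
```

Note that treating all annular sections of a multiple-annulus reactor with one such formula embeds exactly the uniform-velocity assumption that the CFD results above show to be inaccurate, which is why the measured and theoretical values can disagree in either direction.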
Procedia PDF Downloads 218