Search results for: unified process model

1079 The Rise and Effects of Social Movement on Ethnic Relations in Malaysia: The Bersih Movement as a Case Study

Authors: Nur Rafeeda Daut

Abstract:

The significance of this paper is to provide an insight into the role of social movements in building stronger ethnic relations in Malaysia. In particular, it focuses on how the BERSIH movement has been able to bring together the different ethnic groups in Malaysia to resist the present political administration, which is seen to manipulate the electoral process and suppress the basic freedom of expression of Malaysians. Attention is given to how and why this group emerged and to its mobilisation strategies. Malaysia, a multi-ethnic and multi-religious society, gained its independence from the British in 1957. Like many other new nations, it faces the challenges of nation building and governance. From economic issues to racial and religious tension, Malaysia is experiencing high levels of corruption and income disparity among the different ethnic groups. The political parties in Malaysia are also divided along ethnic lines. BERSIH, which translates as ‘clean’, is a movement that seeks to reform the current electoral system in Malaysia to ensure equality, justice, and free and fair elections. It was originally formed in 2007 as a joint committee that comprised leaders from political parties, civil society groups and NGOs. In April 2010, the coalition developed into an entirely civil society movement unaffiliated with any political party. BERSIH claims that the electoral roll in Malaysia has been marred by fraud and other irregularities. In 2015, the BERSIH movement organised its biggest rally in Malaysia, alongside 38 other rallies held internationally. Supporters of BERSIH who participated in the demonstration came from all the different ethnic groups in Malaysia. In this paper, two social movement theories, resource mobilization theory and political opportunity structure, are used to explain the emergence and mobilization of the BERSIH movement in Malaysia. Based on these two theories, corruption, which is believed to have contributed to the income disparity among Malaysians, has generated the development of this movement. The rise of re-islamisation values propagated by certain groups in Malaysia and the shift in political leadership have also created political opportunities for this movement to emerge. In line with political opportunity structure theory, the BERSIH movement will continue to create more opportunities for the empowerment of civil society and the unity of ethnic relations in Malaysia. Comparison is made of the degree of ethnic unity in the country before and after BERSIH was formed. This includes analysing the level of re-islamisation values and also the level of corruption in relation to economic income under the premiership of the former Prime Minister Mahathir and the present Prime Minister Najib Razak. The country has never seen an uprising like BERSIH, in which ethnic groups that over the years have been divided by ethnic-based political parties and economic disparity joined together with a common goal of equality and fair elections. As such, the BERSIH movement is a unique case that illustrates the change in the political landscape, ethnic relations and civil society in Malaysia.

Keywords: ethnic relations, Malaysia, political opportunity structure, resource mobilization theory and social movement

Procedia PDF Downloads 342
1078 User Experience Evaluation on the Usage of Commuter Line Train Ticket Vending Machine

Authors: Faishal Muhammad, Erlinda Muslim, Nadia Faradilla, Sayidul Fikri

Abstract:

To deal with the increasing demand for mass transportation, PT. Kereta Commuter Jabodetabek (KCJ) implements the Commuter Vending Machine (C-VIM) as the solution. Against that background, the C-VIM is implemented as a substitute for the conventional ticket windows, with the purpose of making the transaction process more efficient and introducing self-service technology to commuter line users. However, this implementation causes problems and long queues when users are not accustomed to using the machine. The objective of this research is to evaluate the user experience of the commuter vending machine. The goal is to analyze the existing user experience problems and to achieve a better user experience design. The evaluation is done by giving task scenarios according to the features offered by the machine. The features are daily insured ticket sales, ticket refund, and multi-trip card top-up. Twenty people, separated into two groups of respondents, were involved in this research, each group consisting of 5 males and 5 females. The two groups comprised experienced and inexperienced users, in order to test whether there is a significant difference between them in the measurements. The user experience is measured by both quantitative and qualitative measurements. The quantitative measurement includes user performance metrics such as task success, time on task, error, efficiency, and learnability. The qualitative measurement includes the system usability scale questionnaire (SUS), the questionnaire for user interface satisfaction (QUIS), and retrospective think aloud (RTA). The usability performance metrics show that 4 out of 5 indicators are significantly different between the two groups. This shows that the inexperienced group has problems when using the C-VIM. Conventional ticket windows also show better usability performance metrics compared to the C-VIM. From the data processing, the experienced group gives a SUS score of 62, with an acceptability scale of 'marginal low', a grade scale of 'D', and an adjective rating of 'good', while the inexperienced group gives a SUS score of 51, with an acceptability scale of 'marginal low', a grade scale of 'F', and an adjective rating of 'ok'. This shows that both groups give a low score on the system usability scale. The QUIS score of the experienced group is 69.18 and that of the inexperienced group is 64.20. Both average QUIS scores are below 70, which indicates a problem with the user interface. RTA was done to obtain user experience issues when using the C-VIM through interview protocols. The issues obtained were then sorted using the Pareto concept and diagram. The solution proposed in this research is an interface redesign using an activity relationship chart. This method resulted in a better interface with an average SUS score of 72.25, with an acceptability scale of 'acceptable', a grade scale of 'B', and an adjective rating of 'excellent'. The time-on-task indicator of the performance metrics also shows significantly better times with the new interface design. The results of this study show that the C-VIM does not yet provide good performance and user experience.
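
A minimal sketch of the standard SUS scoring rule the study relies on (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is scaled by 2.5); the respondent values below are hypothetical and are not the study's data:

```python
# Standard System Usability Scale (SUS) scoring, as used in the evaluation.
# Example responses are hypothetical, not taken from the study.

def sus_score(responses):
    """Compute one respondent's SUS score (0-100).

    responses: list of ten integers in 1..5, item 1 first.
    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response); the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondents from one group
group = [
    [4, 2, 4, 2, 3, 2, 4, 2, 4, 3],
    [3, 3, 4, 2, 4, 3, 3, 2, 3, 2],
]
scores = [sus_score(r) for r in group]
print("Individual SUS scores:", scores)
print("Group mean SUS:", sum(scores) / len(scores))
```

Mean scores in the low 50s to 60s, as reported for the C-VIM, sit in the marginal acceptability band, consistent with the interface problems described above.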

Keywords: activity relationship chart, commuter line vending machine, system usability scale, usability performance metrics, user experience evaluation

Procedia PDF Downloads 256
1077 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
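
The abstract does not show the authors' network, but a minimal sketch of the kind of 1D convolutional classifier it describes may help: a tri-color intensity trace is mapped to one of N candidate protein identities. The channel count matches the three tags; the trace length, layer sizes, and class count are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the authors' model): a 1D CNN mapping a tri-color
# intensity trace (3 channels x 256 time points, both assumed here) to one
# of N candidate protein identities.
import torch
import torch.nn as nn

class ProteinIDCNN(nn.Module):
    def __init__(self, n_proteins: int = 100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over time -> one vector per trace
        )
        self.classifier = nn.Linear(128, n_proteins)

    def forward(self, x):                       # x: (batch, 3, trace_len)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)               # unnormalized class scores

model = ProteinIDCNN(n_proteins=100)
dummy_traces = torch.randn(8, 3, 256)           # hypothetical batch of traces
logits = model(dummy_traces)
print(logits.shape)                             # torch.Size([8, 100])
```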

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 84
1076 Developing Computational Thinking in Early Childhood Education

Authors: Kalliopi Kanaki, Michael Kalogiannakis

Abstract:

Nowadays, in the digital era, the early acquisition of basic programming skills and knowledge is encouraged, as it facilitates students’ exposure to computational thinking and empowers their creativity, problem-solving skills, and cognitive development. More and more researchers and educators investigate the introduction of computational thinking in K-12, since it is expected to be a fundamental skill for everyone by the middle of the 21st century, just like reading, writing and arithmetic are at the moment. In this paper, doctoral research in progress is presented, which investigates the infusion of computational thinking into the science curriculum in early childhood education. The whole attempt aims to develop young children’s computational thinking by introducing them to the fundamental concepts of object-oriented programming in an enjoyable, yet educational framework. The backbone of the research is the digital environment PhysGramming (an abbreviation of Physical Science Programming), which provides children the opportunity to create their own digital games, turning them from passive consumers into active creators of technology. PhysGramming deploys an innovative hybrid schema of visual and text-based programming techniques, with emphasis on object-orientation. Through PhysGramming, young students are familiarized with basic object-oriented programming concepts, such as classes, objects, and attributes, while at the same time getting a view of object-oriented programming syntax. Nevertheless, the most noteworthy feature of PhysGramming is that children create their own digital games within the context of physical science courses, in a way that provides familiarization with the basic principles of object-oriented programming and computational thinking, even though no specific reference is made to these principles. Attuned to the ethical guidelines of educational research, interventions were conducted in two second-grade classes. The interventions were designed with respect to the thematic units of the curriculum of physical science courses, as a part of the learning activities of the class. PhysGramming was integrated into the classroom after short introductory sessions. During the interventions, 6-7-year-old children worked in pairs on computers and created their own digital games (group games, matching games, and puzzles). The authors participated in these interventions as observers in order to achieve a realistic evaluation of the proposed educational framework concerning its applicability in the classroom and its educational and pedagogical perspectives. To better examine whether the objectives of the research were met, the investigation focused on six criteria: the educational value of PhysGramming, its engaging and enjoyable characteristics, its child-friendliness, its appropriateness for its proposed purpose, its ability to monitor the user’s progress, and its individualizing features. In this paper, the functionality of PhysGramming and the philosophy of its integration in the classroom are both described in detail. Information about the implemented interventions and the results obtained is also provided. Finally, several limitations of the research that deserve attention are noted.
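
PhysGramming's own code is not shown in the abstract, but a minimal sketch of the class/object/attribute ideas it introduces, framed here as the kind of physical-science matching game a pupil might build, can illustrate what the children encounter:

```python
# Illustrative sketch only (not PhysGramming's actual implementation):
# the class/object/attribute ideas behind a pupil's matching game.

class GameCard:
    """A card in a matching game about physical science."""

    def __init__(self, name, picture, category):
        self.name = name          # attribute: what the card shows
        self.picture = picture    # attribute: image file used on the card
        self.category = category  # attribute: e.g. "solid", "liquid", "gas"

    def matches(self, other):
        """Two cards match when they belong to the same category."""
        return self.category == other.category

# Objects are concrete instances of the class
ice = GameCard("ice cube", "ice.png", "solid")
juice = GameCard("orange juice", "juice.png", "liquid")
rock = GameCard("rock", "rock.png", "solid")

print(ice.matches(rock))   # True  - both are solids
print(ice.matches(juice))  # False - different categories
```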

Keywords: computational thinking, early childhood education, object-oriented programming, physical science courses

Procedia PDF Downloads 116
1075 Servant Leadership and Organisational Climate in South African Private Schools: A Qualitative Study

Authors: Christo Swart, Lidia Pottas, David Maree

Abstract:

Background: It is beyond dispute that the South African educational system finds itself in a profound crisis and that traditional school leadership styles are outdated and hinder quality education. New thinking is mandatory to improve the status quo, and school leadership has an immense role to play in improving the current situation. It is believed that the servant leadership paradigm, when practiced by school leadership, may have a significant influence on the school environment in its totality. This study investigates the private school segment in search of constructive answers to assist with the educational crisis in South Africa. It is assumed that where school leadership can foster a supportive and empowering environment for teachers to constructively engage in their teaching and learning activities, many of the challenges facing the school system may be overcome in a productive manner. Aim: The aim of this study is fourfold: to outline the constructs of servant leadership that are perceived by teachers of private schools as priorities to enhance a successful school environment; to describe the constructs of organizational climate that are observed by teachers of private schools as priorities to enhance a successful school environment; to investigate whether the participants perceived a link between the constructs of servant leadership and organizational climate; and to consider the process to be followed to introduce the constructs of servant leadership (SL) and organizational climate (OC) to the school system in general, as perceived by participants. Method: This study utilized a qualitative approach to explore the mediation between school leadership and the organizational climate in private schools in the search for workable answers. The participants were purposefully selected for the study. Focus group interviews were held with participants from primary and secondary schools, and a focus group discussion was conducted with principals of both primary and secondary schools. The interview data were transcribed and analyzed, and identical patterns of coded data were grouped together under emerging themes. Findings: It was found that the practice of servant leadership by school leadership indeed mediates a constructive and positive school climate. The constructs of empowerment, accountability, humility and courage – interlinking with one another – are the prominent servant leadership concepts perceived by teachers of private schools as priorities for school leadership to enhance a successful school environment. It was confirmed that the groupings of training and development, communication, trust and work environment are perceived by teachers of private schools as prominent features of organizational climate, as practiced by school leadership, that augment a successful school environment. It can be concluded that the participants perceived several links between the constructs of servant leadership and organizational climate that encourage a constructive school environment, and that there is a definite positive consideration and motivation that the two concepts be introduced to the school system in general. It is recommended that school leadership mentors and guides teachers to take ownership of the constructs of servant leadership as well as organizational climate, and that public schools be researched and consider implementing the two paradigms. The study suggests that aspirant teachers be exposed to leadership as well as organizational paradigms during their studies at university.

Keywords: empowering environment for teachers and learners, new thinking required, organizational climate, school leadership, servant leadership

Procedia PDF Downloads 218
1074 The Intensity of Root and Soil Respiration Is Significantly Determined by the Organic Matter and Moisture Content of the Soil

Authors: Zsolt Kotroczó, Katalin Juhos, Áron Béni, Gábor Várbíró, Tamás Kocsis, István Fekete

Abstract:

Soil organic matter plays an extremely important role in the functioning and regulation processes of ecosystems. It follows that the C content of organic matter in soil is one of the most important indicators of soil fertility. Part of the carbon stored in it is returned to the atmosphere during soil respiration. Climate change and inappropriate land use can accelerate these processes. Our work aimed to determine how soil CO₂ emissions change over ten years as a result of organic matter manipulation treatments. With the help of these treatments, we were able to examine not only the effects of different organic matter inputs but also the effects of the different microclimates that develop as a result of the treatments. We carried out our investigations in the area of the Síkfőkút DIRT (Detritus Input and Removal Treatment) Project. The research area is located in the southern, hilly landscape of the Bükk Mountains, northeast of Eger (Hungary). GPS coordinates of the project: 47°55′34′′ N and 20°26′29′′ E, altitude 320-340 m. The soil of the area is Luvisol. The 27-hectare protected forest area is now under the supervision of the Bükki National Park. The experimental plots in Síkfőkút were established in 2000. We established six litter manipulation treatments, each with three 7×7 m replicate plots, under complete canopy cover. Two treatments added detritus (Double Wood, DW, and Double Litter, DL); in three treatments detritus inputs were removed (No Litter, NL; No Roots, NR; No Inputs, NI); the sixth treatment was the Control (Co). After the establishment of the plots, the NR and NI treatments showed the highest CO₂ emissions during the drier periods. In the first few years, the effect of this process was evident because, due to the lack of living vegetation, evapotranspiration on the NR and NI plots was much lower, and transpiration practically ceased on these plots. In the wetter periods, the NL and NI treatments showed the lowest soil respiration values, which were significantly lower compared to the Co, DW, and DL treatments. Due to the lower organic matter content and the lack of surface litter cover, the water storage capacity of these soils was significantly limited; therefore, we measured the lowest average moisture content among the treatments after ten years. Soil respiration is significantly influenced by temperature. Furthermore, the supply of nutrients to the soil microorganisms is also a determining factor, which in this case is influenced by the litter production dictated by the treatments. In dry soils with a moisture content of less than 20% in the initial period, soil respiration in the litter removal treatments showed a strong correlation with soil moisture (r=0.74). In very dry soils, a small increase in moisture does not cause a significant increase in soil respiration, while it does in a slightly higher moisture range. In wet soils, temperature is the main regulating factor; above a certain moisture limit, water displaces soil air from the soil pores, which inhibits aerobic decomposition processes, and so heterotrophic soil respiration also declines.

Keywords: soil biology, organic matter, nutrition, DIRT, soil respiration

Procedia PDF Downloads 70
1073 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows

Authors: D. M. Lewis, L. Frisby, U. Stead

Abstract:

Objective: Many patients are receiving invasive treatments in acute care or are dying in hospital without having had comprehensive goals-of-care conversations. Some of these treatments may not align with the patient’s wishes, may be futile, and may cause unnecessary suffering. While many staff may recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals-of-care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier to having these conversations may be a lack of incorporation into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions about goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals-of-care questions. They were asked to pose these questions to a patient or family member during their shift and then to document the conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient’s chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences throughout the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role model these conversations. Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 goals-of-care questions were asked of patients or their families, and 50 newly documented goals-of-care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate the questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations to be meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.

Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations

Procedia PDF Downloads 99
1072 Technology in Commercial Law Enforcement: Tanzania, Canada, and Singapore Comparatively

Authors: Katarina Revocati Mteule

Abstract:

The background of this research arises from global demands for fair business opportunities. As one of the responses to these demands, nations embarked on reforms of commercial laws. In the 1990s, Tanzania pursued economic transformation through liberalization to attract more investment, which included reform of commercial law enforcement. This research scrutinizes the effectiveness of the reforms in Tanzania in comparison with Canada and Singapore, and the role of technology. The methodology used is doctrinal legal research combined with international comparative legal research. It involves comparative analysis of library, online, and internet resources as well as case law and statutory law. Tanzania, Canada and Singapore are sampled as comparators based on their distinct levels of economic development. The criteria of analysis include the nature of the reforms, the type of technology, the technological infrastructure and the technical competence of human resources in each country. As the world progresses towards reforms in commercial laws, improvements in law, policy, and regulatory frameworks are paramount. Specifically, commercial laws are essential in contract enforcement and dispute resolution, and how they cope with modern technologies is a concern. Harnessing the best technology is necessary to cope with modernity in world business. In line with this, Tanzania is improving its business environment, including law enforcement mechanisms that are supportive of investment. Reforms such as specialized commercial law enforcement, coupled with alternative dispute resolution mechanisms such as arbitration, mediation, and reconciliation, are emphasized. Court technology is one of the reform tools given high priority. This research evaluates the progress and the effectiveness of the reforms in commercial law towards a business-friendly environment in Tanzania in comparison with Canada and Singapore. The experience of Tanzania is compared with that of Canada and Singapore to see what each country can improve to enhance quick and fair enforcement of commercial law. The research proposes necessary global standards of procedure, and corresponding provisions in national laws, to offer a business-friendly environment and the use of appropriate technology. Solutions are proposed for tackling the challenges of delays in enforcing commercial law, such as case management, funding, legal and procedural hindrances, laxity among staff, and abuse of court process among litigants, all in line with modern technology. The research finds that proper use of technology has managed to reduce case backlogs and the time taken to resolve a commercial dispute, to increase court integrity by minimizing the human contact in commercial law enforcement that may lead to solicitation of favors, and to save parties’ time through online services. Among the three countries, each one faces a distinct challenge due to the level of poverty and remoteness from online services. How solutions are found in one country is a lesson to another. To conclude, this paper suggests solutions for improving commercial law enforcement mechanisms in line with modern technology. The call for technological transformation is essential for the enforcement of commercial laws.

Keywords: commercial law, enforcement, technology

Procedia PDF Downloads 56
1071 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module

Authors: Helen Potkin

Abstract:

This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and the feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us in reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed. Drawing on students’ reflections on their experience of co-creation alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of the interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The findings of the project could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom, and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible.

Keywords: co-creation, collaboration, learning, participation, risk

Procedia PDF Downloads 119
1070 Kansei Engineering Applied to the Design of Rural Primary Education Classrooms: Design-Based Learning Case

Authors: Jimena Alarcon, Andrea Llorens, Gabriel Hernandez, Maritza Palma, Lucia Navarrete

Abstract:

The research is funded by the Government of Chile and is focused on defining a design for rural primary classrooms that stimulates creativity. The relevance of the study lies in its capacity to define adequate educational spaces for the implementation of the design-based learning (DBL) methodology. This methodology promotes creativity and teamwork, generating a meaningful learning experience for students based on the appreciation of their environment and the generation of projects that contribute positively to their communities; it is also an inquiry-based form of learning, based on the integration of design thinking and the design process into the classroom. The main goal of the study is to define the design characteristics of rural primary school classrooms associated with the implementation of the DBL methodology. Along with the change in learning strategies, it is necessary to change the educational spaces in which they develop. The hypothesis is that a change in the space and equipment of the classrooms, based on the emotions of the students, will support better learning results under the new methodology. In this case, the pedagogical dynamics require an important interaction between the participants, as well as an environment favorable to creativity. Methodologies from Kansei engineering are used to identify the emotional variables associated with their definition. The study was conducted with 50 students between 6 and 10 years old (average age of seven years), 48% boys and 52% girls. Virtual three-dimensional scale models and semantic differential tables are used. To define the semantic differential, self-applied surveys were carried out. Each survey consists of eight separate questions in two groups: group A, aimed at finding desirable emotions, and group B, related to emotions experienced. Both question groups have a maximum of three alternatives to answer. Data were tabulated with IBM SPSS Statistics version 19. Terms referring to emotions were grouped into the twenty concepts with the highest presence in the surveys. To select the values obtained from the semantic differential, the expected frequency of the chi-square test (χ²) calculated for the classroom space is taken as the lower limit. All terms above the expected N cut-off point are included in the tables used in the surveys to find a relation between emotion and space. The statistical contrast (chi-square) indicates that the observed frequencies are not random. The most representative terms depend on the variable under study: a) the definition of textures and color of vertical surfaces is associated with emotions such as tranquility, attention, concentration, and creativity; and b) the distribution of the equipment in the rooms is associated with emotions such as happiness, distraction, creativity, and freedom. The main findings are linked to the generation of classrooms according to diverse DBL team dynamics. Kansei engineering is an appropriate methodology to identify the emotions that students want to feel in the classroom space.
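
A brief sketch of the term-selection step described above, assuming a chi-square goodness-of-fit comparison of observed term frequencies against a uniform expectation; the counts and term names below are hypothetical, not the study's data:

```python
# Sketch of the term-selection step: emotion terms whose observed frequency
# exceeds the expected frequency under a uniform distribution are kept, and a
# chi-square goodness-of-fit test checks that the overall pattern is not random.
from scipy.stats import chisquare

term_counts = {
    "tranquility": 18, "attention": 15, "concentration": 14, "creativity": 22,
    "happiness": 19, "freedom": 12, "distraction": 9, "boredom": 4,
}
observed = list(term_counts.values())
expected_per_term = sum(observed) / len(observed)   # uniform expectation

selected = [t for t, n in term_counts.items() if n > expected_per_term]
print("Expected frequency per term:", expected_per_term)
print("Terms above the expected cut-off:", selected)

stat, p_value = chisquare(observed)   # equal expected frequencies by default
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```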

Keywords: creativity, design-based learning, education spaces, emotions

Procedia PDF Downloads 140
1069 Childhood Adversity and Delinquency in Youth: Self-Esteem and Depression as Mediators

Authors: Yuhui Liu, Lydia Speyer, Jasmin Wertz, Ingrid Obsuth

Abstract:

Childhood adversities refer to situations where a child's basic needs for safety and support are compromised, leading to substantial disruptions in their emotional, cognitive, social, or neurobiological development. Given the prevalence of adversities (8%-39%), their impact on developmental outcomes is challenging to completely avoid. Delinquency is an important consequence of childhood adversities, given its potential to cause violence and other forms of victimisation, affecting victims, delinquents, their families, and the whole of society. Studying mediators helps explain the link between childhood adversity and delinquency, which aids in designing effective intervention programs that target explanatory variables to disrupt the path and mitigate the effects of childhood adversities on delinquency. The Dimensional Model of Adversity and Psychopathology suggests that threat-based adversities influence outcomes through emotion processing, while deprivation-based adversities do so through cognitive mechanisms. Thus, it is essential to consider a wide range of threat-based and deprivation-based adversities, their co-occurrence, and their associations with delinquency through cognitive and emotional mechanisms. This study employs the Millennium Cohort Study, which tracks the development of approximately 19,000 individuals born across England, Scotland, Wales and Northern Ireland and constitutes a nationally representative sample. Parallel mediation models compare the mediating roles of self-esteem (cognitive) and depression (affective) in the associations between childhood adversities and delinquency. Eleven types of childhood adversities were assessed both individually and through latent class analysis, considering adversity experiences from birth to early adolescence. This approach aimed to capture how threat-based, deprivation-based, or combined threat- and deprivation-based adversities are associated with delinquency. Eight latent classes were identified: three classes (low adversity, especially direct and indirect violence; low childhood and moderate adolescent adversities; and persistent poverty with declining bullying victimisation) were negatively associated with delinquency. In contrast, three classes (high parental alcohol misuse; overall high adversities, especially regarding household instability; and high adversity) were positively associated with delinquency. When mediators were included, all classes showed a significant association with delinquency through depression, but not through self-esteem. Among the eleven single adversities, seven were positively associated with delinquency, with five linked through depression and none through self-esteem. The results imply the importance of affective variables, not just for threat-based but also for deprivation-based adversities. Academically, this suggests exploring other mechanisms linking adversities and delinquency, since some adversities are linked through neither depression nor self-esteem. Clinically, intervention programs should focus on affective variables like depression to mitigate the effects of childhood adversities on delinquency.
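
A minimal sketch of the parallel mediation logic applied above (adversity to self-esteem and depression, then to delinquency), using ordinary least squares and bootstrapped indirect effects on simulated data; the variable names and effect sizes are illustrative assumptions, not Millennium Cohort Study values:

```python
# Parallel mediation sketch: indirect effects of adversity on delinquency
# through two mediators, estimated by product-of-coefficients with a
# percentile bootstrap. All data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
adversity = rng.normal(size=n)
self_esteem = -0.2 * adversity + rng.normal(size=n)
depression = 0.4 * adversity + rng.normal(size=n)
delinquency = (0.1 * adversity - 0.1 * self_esteem
               + 0.3 * depression + rng.normal(size=n))
df = pd.DataFrame(dict(adversity=adversity, self_esteem=self_esteem,
                       depression=depression, delinquency=delinquency))

def indirect_effects(data):
    # a-paths: adversity -> each mediator
    a1 = smf.ols("self_esteem ~ adversity", data).fit().params["adversity"]
    a2 = smf.ols("depression ~ adversity", data).fit().params["adversity"]
    # b-paths: mediators -> delinquency, controlling for adversity
    m = smf.ols("delinquency ~ adversity + self_esteem + depression", data).fit()
    return a1 * m.params["self_esteem"], a2 * m.params["depression"]

ie_se, ie_dep = indirect_effects(df)
boot = np.array([indirect_effects(df.sample(frac=1, replace=True))
                 for _ in range(500)])
ci = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"Indirect via self-esteem: {ie_se:.3f}, 95% CI [{ci[0,0]:.3f}, {ci[1,0]:.3f}]")
print(f"Indirect via depression:  {ie_dep:.3f}, 95% CI [{ci[0,1]:.3f}, {ci[1,1]:.3f}]")
```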

Keywords: childhood adversity, delinquency, depression, self-esteem

Procedia PDF Downloads 27
1068 The Touch Sensation: Ageing and Gender Influences

Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani

Abstract:

A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensation, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the first contact between the finger and the object; during this contact, an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the finger's mechanical properties, together with its surface topography, play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the finger pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 years have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force. The main assessed parameter is the adhesive force F_ad. For the second phase, first, an innovative approach is proposed to characterize the dynamic mechanical properties of the finger. A contactless indentation test inspired by the techniques used in ophthalmology has been used. The test principle is to apply an air blast to the finger and measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the skin's free return without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints have been analyzed by laser probe defocusing. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness has been realized by applying a continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men and women groups reveals that the adhesive force is higher for women. The results for the mechanical properties show a Young's modulus that is higher for women and that also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
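
A brief sketch of the multi-scale roughness quantification described above, assuming a continuous wavelet transform of a synthetic profile with the arithmetic mean of the absolute coefficients taken at each scale (SMA); the wavelet choice, scale range, and profile are assumptions, not the study's settings:

```python
# Multi-scale roughness sketch: decompose a surface profile with a continuous
# wavelet transform and take the arithmetic mean of the absolute coefficients
# at each scale (SMA). Profile and parameters are synthetic/illustrative.
import numpy as np
import pywt

# Synthetic fingerprint-like profile: coarse ridges plus fine texture
x = np.linspace(0, 5.0, 2000)                      # 5 mm scan, arbitrary sampling
profile = (20 * np.sin(2 * np.pi * x / 0.5)
           + 2 * np.random.default_rng(1).normal(size=x.size))

scales = np.arange(1, 64)
coeffs, _ = pywt.cwt(profile, scales, "mexh")      # coeffs: (n_scales, n_points)

sma = np.mean(np.abs(coeffs), axis=1)              # one roughness value per scale

for s, value in list(zip(scales, sma))[::16]:
    print(f"scale {s:3d}: SMA = {value:.2f}")
```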

Keywords: ageing, finger, gender, touch

Procedia PDF Downloads 262
1067 Historical Memory and Social Representation of Violence in Latin American Cinema: A Cultural Criminology Approach

Authors: Maylen Villamanan Alba

Abstract:

Latin America is marked by its history: conquest, colonialism, and slavery left deep footprints in most Latin American countries. The past century has also been affected by wars, military dictatorships, and political violence, which profoundly influenced Latin American popular culture. Consequently, reminiscences of historical crimes are frequently present in daily life, media, public opinion, and the arts. This legacy is remembered in novels, paintings, songs, and films. In fact, Latin American cinema has a trend of fiction films that strive for verisimilitude with reality. These films about historical violence are narrated through fictional characters, but their stories are based on real historical contexts. Therefore, cultural criminology has considered film a significant field for understanding social representations of violence related to historical crimes. The aim of the present contribution is to analyze the legacy of the past and historical memory in social representations of violence in Latin American cinema as a critical approach to historical crimes. This qualitative research is based on content analysis. The sample consists of seven multi-award-winning films from the International Festival of New Latin American Cinema of Havana. The films selected are Kamchatka, Argentina (2002); Carandiru, Brazil (2003); Enlightened by Fire, Argentina (2005); Post-mortem, Chile (2010); No, Chile (2012); Wakolda, Argentina (2013); and The Clan, Argentina (2015). Cultural criminology highlights that cinema shapes the meanings of social practices such as historical crimes. Critical criminology offers a critical theory framework for interpreting Latin American cinema. This analysis reveals historical conditions deeply associated with power relationships, policy, and inequality issues. As indicated by this theory, violence is characterized as a structural process based on social asymmetries. These social asymmetries cut across social scopes, including institutional and personal dimensions. Thus, state institutions are depicted through the personal stories of characters involved in human conflicts. Intimacy and social background are linked in characters who simultaneously perform roles such as soldiers, policemen, professionals or inmates and are at the same time depicted as human beings with family, gender, racial, ideological or generational issues. Social representations of violence related to the legacy of the past portray historical crimes perpetrated against Latin American citizens. Thereby, they have contributed to political positions, social behaviors, and public opinion. The legacy of these historical crimes suggests a path that should never be taken again; the legacy of the past is a reminder, a warning, and a historic lesson for Latin American people. Social representations of violence are permeated by historical memory as denunciation under a critical approach.

Keywords: Latin American cinema, historical memory, social representation, violence

Procedia PDF Downloads 145
1066 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite finding such failures, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively in order to find a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of the bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thus demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.
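
The bidirectional multi-fidelity KWIK algorithm itself is not specified in the abstract, but a minimal single-fidelity sketch of the adversarial grid-world setup it builds on, with a tabular Q-learning adversary learning to intercept a goal-seeking agent (i.e., to induce failures), may clarify the framing; all sizes and hyperparameters are illustrative assumptions:

```python
# Single-fidelity sketch of the adversarial grid world (not the authors'
# bidirectional multi-fidelity KWIK algorithm): a tabular Q-learning adversary
# learns to intercept an agent that walks greedily toward its goal.
import random

SIZE, GOAL, EPISODES = 5, (4, 4), 3000
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ALPHA, GAMMA = 0.1, 0.95
Q = {}  # Q[(agent_pos, adversary_pos, action_index)] -> value

def clamp(v):
    return min(max(v, 0), SIZE - 1)

def step(pos, move):
    return (clamp(pos[0] + move[0]), clamp(pos[1] + move[1]))

def agent_move(pos):
    # The system under test: an agent walking greedily toward its goal.
    dx = (GOAL[0] > pos[0]) - (GOAL[0] < pos[0])
    dy = (GOAL[1] > pos[1]) - (GOAL[1] < pos[1])
    return step(pos, (dx, dy))

interceptions = 0
for ep in range(EPISODES):
    agent, adv = (0, 0), (4, 0)
    eps = max(0.05, 1.0 - ep / EPISODES)        # decaying exploration
    for _ in range(20):
        if random.random() < eps:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((agent, adv, i), 0.0))
        adv_next, agent_next = step(adv, ACTIONS[a]), agent_move(agent)
        reward = 1.0 if adv_next == agent_next else -0.01   # interception = induced failure
        best_next = max(Q.get((agent_next, adv_next, i), 0.0) for i in range(len(ACTIONS)))
        old = Q.get((agent, adv, a), 0.0)
        Q[(agent, adv, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
        agent, adv = agent_next, adv_next
        if reward == 1.0:
            interceptions += 1
            break
        if agent == GOAL:
            break

print(f"Adversary intercepted the agent in {interceptions} of {EPISODES} episodes")
```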

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 151
1065 Harnessing Sunlight for Clean Water: Scalable Approach for Silver-Loaded Titanium Dioxide Nanoparticles

Authors: Satam Alotibi, Muhammad J. Al-Zahrani, Fahd K. Al-Naqidan, Turki S. Hussein, Moteb Alotaibi, Mohammed Alyami, Mahdy M. Elmahdy, Abdellah Kaiba, Fatehia S. Alhakami, Talal F. Qahtan

Abstract:

Water pollution is a critical global challenge that demands scalable and effective solutions for water decontamination. In this research, we present a strategy for harnessing solar energy to synthesize silver (Ag) clusters on stable titanium dioxide (TiO₂) nanoparticles dispersed in water, without the need for traditional stabilization agents. These Ag-loaded TiO₂ nanoparticles exhibit exceptional photocatalytic activity, surpassing that of pristine TiO₂ nanoparticles and offering a promising solution for highly efficient water decontamination under sunlight irradiation. To the best of our knowledge, we have developed a unique method to stabilize TiO₂ P25 nanoparticles in water without the use of stabilization agents. This allows us to create an ideal platform for the solar-driven synthesis of Ag clusters. Under sunlight irradiation, the stable dispersion of TiO₂ P25 nanoparticles acts as a highly efficient photocatalyst, generating electron-hole pairs. The photogenerated electrons effectively reduce silver ions derived from a silver precursor, resulting in the formation of Ag clusters. The Ag clusters loaded on TiO₂ P25 nanoparticles exhibit remarkable photocatalytic activity for water decontamination under sunlight irradiation. Acting as active sites, these Ag clusters facilitate the generation of reactive oxygen species (ROS) upon exposure to sunlight. These ROS play a pivotal role in rapidly degrading organic pollutants, enabling efficient water decontamination. To confirm the success of our approach, we characterized the synthesized Ag-loaded TiO₂ P25 nanoparticles using analytical techniques such as transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and spectroscopic methods. These characterizations confirm the successful synthesis of Ag clusters on stable TiO₂ P25 nanoparticles without traditional stabilization agents. Comparative studies were conducted to evaluate the photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles relative to pristine TiO₂ P25 nanoparticles. The Ag-loaded TiO₂ P25 nanoparticles exhibit significantly enhanced photocatalytic activity, benefiting from the synergistic effect between the Ag clusters and the TiO₂ nanoparticles, which promotes ROS generation for efficient water decontamination. Our scalable strategy for synthesizing Ag clusters on stable TiO₂ P25 nanoparticles without stabilization agents presents a promising route to highly efficient water decontamination under sunlight irradiation. The use of commercially available TiO₂ P25 nanoparticles streamlines the synthesis process and enables practical scalability. The outstanding photocatalytic performance of Ag-loaded TiO₂ P25 nanoparticles opens up new avenues for their application in large-scale water treatment and remediation processes, addressing the urgent need for sustainable water decontamination solutions.

Keywords: water pollution, solar energy, silver clusters, TiO₂ nanoparticles, photocatalytic activity

Procedia PDF Downloads 66
1064 No-Par Shares Working in European LLCs

Authors: Agnieszka P. Regiec

Abstract:

Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For several years, European countries' limited liability company (LLC) regulations have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; thus, the American legal system is chosen as a point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA will be taken into consideration. The analysis of share capital is important for the development of scholarship not only because the capital structure of the corporation has a significant impact on shareholders' rights, but also because it reflects the relationships between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows a wide range of possible outcomes stemming from the reforms to be presented. The doctrinal (dogmatic) method was applied. The analysis was based on statutes, secondary sources and judicial awards. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC – the Unternehmergesellschaft, which does not require a minimum share capital – was introduced. The minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25 000 to 10 000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée (S.A.S.), the simplified joint-stock company, was also removed – there is no minimum share capital required by statute. The company has, however, to indicate a share capital, without the legislator imposing a minimum value for said capital. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of autonomy for shareholders. It, however, preserved shares with nominal value. In Finland, the reform of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Despite the fact that the statute allows shares without face value, it still requires a minimum share capital in the amount of 2 500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced. The proposal provides, among other things, for a reduction of the minimum capital to 1 PLN or a complete elimination of the minimum share capital, allowing no-par shares to be issued. In conclusion: American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as the American ones; and the existence of share capital in Poland is crucial.

Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test

Procedia PDF Downloads 181
1063 Preschoolers’ Selective Trust in Moral Promises

Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu

Abstract:

Trust is a critical foundation of social interaction and development, playing a significant role in children's physical and mental well-being as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children's trust judgments. According to Mayer et al.'s model of trust, these characteristics, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1-β = 0.85, calculated using G*Power 3.1. This study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (within-subjects factor: moral vs. immoral promises) and the fulfilment of promises (within-subjects factor: kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children's trust judgments. Experiment 2 utilized single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, 5.5-6.5-year-old children are more likely than 3.5-4.5-year-old children to trust both promisors who keep moral promises and those who break immoral promises. Moreover, preschoolers are more likely to make accurate trust judgments towards promisors who kept moral promises compared to those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgments about moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises. Promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
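
A short sketch of the kind of a priori sample-size calculation reported (w = 0.30, α = 0.05, 1-β = 0.85 in G*Power 3.1), reproduced here with statsmodels; the degrees of freedom (n_bins = 2, i.e., df = 1) are an assumption, as the abstract does not state them:

```python
# A priori sample-size calculation for a chi-square goodness-of-fit test,
# comparable to the reported G*Power 3.1 computation. df is assumed to be 1.
from statsmodels.stats.power import GofChisquarePower

analysis = GofChisquarePower()
n_required = analysis.solve_power(effect_size=0.30, alpha=0.05, power=0.85,
                                  nobs=None, n_bins=2)
print(f"Required sample size: {n_required:.1f}")   # close to the ~100 reported
```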

Keywords: promise, trust, moral judgement, preschoolers

Procedia PDF Downloads 48
1062 The Beneficial Effects of Inhibition of Hepatic Adaptor Protein Phosphotyrosine Interacting with PH Domain and Leucine Zipper 2 on Glucose and Cholesterol Homeostasis

Authors: Xi Chen, King-Yip Cheng

Abstract:

Hypercholesterolemia, characterized by high low-density lipoprotein cholesterol (LDL-C), raises cardiovascular events in patients with type 2 diabetes (T2D). Although several drugs, such as statins and PCSK9 inhibitors, are available for the treatment of hypercholesterolemia, they exert detrimental effects on glucose metabolism and hence increase the risk of T2D. On the other hand, the drugs used to treat T2D have minimal effect on improving the lipid profile. Therefore, there is an urgent need to develop treatments that can simultaneously improve glucose and lipid homeostasis. Adaptor protein phosphotyrosine interacting with PH domain and leucine zipper 2 (APPL2) causes insulin resistance in the liver and skeletal muscle via inhibiting insulin and adiponectin actions in animal models. Single-nucleotide polymorphisms in the APPL2 gene have been associated with LDL-C, non-alcoholic fatty liver disease, and coronary artery disease in humans. The aim of this project is to investigate whether an APPL2 antisense oligonucleotide (ASO) can alleviate dietary-induced T2D and hypercholesterolemia. A high-fat diet (HFD) was used to induce obesity and insulin resistance in mice. GalNAc-conjugated APPL2 ASO (GalNAc-APPL2-ASO) was used to selectively silence hepatic APPL2 expression in C57BL/6J mice. Glucose, lipid, and energy metabolism were monitored. Immunoblotting and quantitative PCR analysis showed that GalNAc-APPL2-ASO treatment reduced APPL2 expression in the liver but not in other tissues, such as adipose tissue, kidney, muscle, and heart. The glucose tolerance test and insulin sensitivity test revealed that GalNAc-APPL2-ASO progressively improved glucose tolerance and insulin sensitivity. Blood chemistry analysis revealed that the mice treated with GalNAc-APPL2-ASO had significantly lower circulating levels of total cholesterol and LDL cholesterol, whereas there was no difference in circulating levels of high-density lipoprotein (HDL) cholesterol, triglyceride, or free fatty acid between the mice treated with GalNAc-APPL2-ASO and GalNAc-Control-ASO. No obvious effect on food intake, body weight, or liver injury markers was found after GalNAc-APPL2-ASO treatment, supporting its tolerability and safety. We showed that selectively silencing hepatic APPL2 alleviated insulin resistance and hypercholesterolemia and improved energy metabolism in a dietary-induced obese mouse model, indicating APPL2 as a therapeutic target for metabolic diseases.

Keywords: APPL2, antisense oligonucleotide, hypercholesterolemia, type 2 diabetes

Procedia PDF Downloads 63
1060 Monitoring of Disease Vector Mosquitoes in Areas of Influence of Energy Projects in the Amazon (Amapá State), Brazil

Authors: Ribeiro Tiago Magalhães

Abstract:

Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by dimensioning the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue and leishmaniasis. Methodology: The present study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. The capture sites were selected in order to prioritize areas with possible occurrence of the species considered of greatest importance for public health and areas of contact between the wild environment and humans. Sampling efforts aimed to identify the local vector fauna and to relate it to the transmission of diseases. In this way, three phases of collection were established, covering the periods of greatest hematophagic activity. Sampling was carried out using Shannon and CDC light traps and by means of specimen collection with the hold method. This procedure was carried out during the morning (between 08:00 and 11:00), the afternoon-twilight period (between 15:30 and 18:30) and the night (between 18:30 and 22:00). In the specific methodology of capture with the CDC equipment, the delimited times were from 18:00 until 06:00 the following day. Results: A total of 32 species of mosquitoes were identified, and 2,962 specimens were taxonomically subdivided into three families (Culicidae, Psychodidae and Simuliidae) and genera including Psorophora, Sabethes, Simulium, Uranotaenia and Wyeomyia, besides those represented by the family Psychodidae, which, due to morphological complexities, allows safe identification (without the method of diaphanization and slide mounting for microscopy) only at the taxonomic level of subfamily (Phlebotominae). Conclusion: The nine monitoring campaigns carried out provided the basis for the design of the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP, in order to point out, among the points established for sampling, which would represent the greatest possibility of disease acquisition according to the group of mosquitoes identified. However, what should mainly be considered are the future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment, in which its larvae develop until adulthood. From the moment the water surface expands into new environments during the formation of the reservoir, a modification in the process of development and hatching of the eggs deposited in the substrate can occur, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to the water.

Keywords: Amazon, hydroelectric power plants

Procedia PDF Downloads 191
1060 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, based on the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire size in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the level of the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, the properties of this regularization technique have been investigated and shown to be capable of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and the flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a similar feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, in this case the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of the properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales that sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while some others depend on the numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually about a grid length. The single rising bubble and the Rayleigh-Taylor instability, in particular, are studied to examine the performance of the observable method. A pseudo-spectral method, which does not introduce numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as their positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
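As a rough illustration of the filtering idea described above (not the authors' actual two-phase formulation), the sketch below applies a Helmholtz-type low-pass filter to the convective velocity of a one-dimensional inviscid Burgers analogue, computes derivatives pseudo-spectrally, and advances in time with a TVD third-order Runge-Kutta step; the model equation, the filter form, and all parameter values are illustrative assumptions.

```python
import numpy as np

# 1D periodic grid and pseudo-spectral wavenumbers
N, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

alpha = L / N            # observable/filter scale of about one grid length (assumption)
dt, nsteps = 1e-3, 2000

def helmholtz_filter(u):
    """Low-pass filtered velocity u_bar = (1 - alpha^2 d^2/dx^2)^(-1) u, applied in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + (alpha * k) ** 2)))

def rhs(u):
    """Inviscid convective term -u_bar * du/dx, with the filtered velocity doing the advecting."""
    du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return -helmholtz_filter(u) * du_dx

def tvd_rk3_step(u, dt):
    """Shu-Osher TVD third-order Runge-Kutta time step."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

u = np.sin(x)            # smooth initial condition that would otherwise steepen into a shock
for _ in range(nsteps):
    u = tvd_rk3_step(u, dt)
print("max|u| after integration:", np.abs(u).max())
```

In the full solver described in the abstract, an analogous filter would act on the convective terms of the incompressible Euler equations, with the observable scale tied to the grid spacing.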

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 496
1059 Diversity and Inclusion in Focus: Cultivating a Sense of Belonging in Higher Education

Authors: Naziema Jappie

Abstract:

South Africa is a diverse nation, but one with many challenges. The fundamental changes in the political, economic and educational domains in South Africa in the late 1990s affected the South African community profoundly. In higher education, experiences of discrimination and bias are detrimental to the sense of belonging of staff and students. It is therefore important to cultivate an appreciation of diversity and inclusion. To bridge common understandings with the reality of racial inequality, we must understand the ways in which senior and executive leadership at universities think about social justice issues relating to diversity and inclusion and contextualize these within the current post-democracy landscape. Progress on social justice issues and initiatives in South African higher education has been slow. The focus is to highlight how and to what extent initiatives or practices around campus diversity and inclusion have been considered and made part of the mainstream intellectual and academic conversations in South Africa. This involves an examination of the social and epistemological conditions of possibility for meaningful research and curriculum practices, staff and student recruitment, and student access and success in addressing the challenges posed by social diversity on campuses. Methodology: In this study, university senior and executive leadership were interviewed about their perceptions and advancement of social justice, and the buffering effects of diverse and inclusive peer interactions and institutional commitment on the relationship between discrimination-bias and sense of belonging for staff and students at the institutions were examined. The paper further explores diversity and inclusion initiatives at the three institutions using a Critical Race Theory approach, in conjunction with a literature review on social justice with a special focus on diversity and inclusion. Findings: This paper draws on research findings that demonstrate the need to address social justice issues of diversity and inclusion in the South African higher education context, so that university leaders can live out their experiences and values as they work to develop students into accountable and responsible citizens. Documents were selected for review with the intent of illustrating how diversity and inclusion work being done across an institution can shape the experiences of previously disadvantaged persons at these institutions. The research has highlighted the need for institutional leaders to embody their own mission and vision as they frame social justice issues for the campus community. Finally, the paper provides recommendations to institutions for strengthening high-level diversity and inclusion programs and initiatives among staff, students and administrators. The conclusion stresses the importance of addressing the historical and current policies and practices that either facilitate or negate the goals of social justice, encouraging these privileged institutions to create internal committees or task forces that focus on racial and ethnic disparities in the institution.

Keywords: diversity, higher education, inclusion, social justice

Procedia PDF Downloads 119
1058 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout

Authors: Diamant Irene, Shklarnik Batya

Abstract:

In addition to its widespread effects on health and economic issues, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the trend of working from home, completely or partially, as part of social distancing. In fact, these changes accelerated an existing tendency towards work flexibility already underway before the pandemic. Technology and advanced means of communication led to a re-assessment of the “place of work” as a physical space in which work takes place. Today workers can remotely carry out meetings, manage projects, and work in groups, and different research studies point to the fact that this type of work has no adverse effect on productivity. However, from the worker’s perspective, despite the numerous advantages associated with working from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and erosion of the home-work boundary, all risk factors relating to quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll’s Theory of Conservation of Resources, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout. Based on the conceptualization of Place Attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by Place Attachment to the workplace would suffer more from burnout when working from home. We also assumed that extroverted individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference, a need for separation between different life domains, would suffer more from burnout, especially among fully remote workers relative to partially remote workers. A total of 194 workers aged 19-53 from different sectors, of whom 111 worked fully from home and 83 worked partially from home, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.

Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19

Procedia PDF Downloads 188
1057 Vieira Da Silva's Tiles at Universidade Federal Rural Do Rio de Janeiro: A Conservation and Restoration Project

Authors: Adriana Anselmo Oliveira

Abstract:

The present project showcases a tile work by the Franco-Portuguese artist Maria Helena Vieira da Silva (1908-1992): a set of 8 panels composed of figurative and geometric tiles, with additional tiles framing nearby doors and windows, in a study room at the Universidade Federal Rural do Rio de Janeiro (UFRRJ). This one-of-a-kind tile set was designed and made by Vieira da Silva between 1942 and 1943, during the artist's six-year exile in Brazil. Over the years, several tiles were lost, which led to their replacement in the nineties; however, these replacements do not do justice to the original work of art. In 2007, a project was initiated to fully repair and maintain the set. Three panels were removed and restored, but the project was halted, and to this day the three restored panels remain in boxes. In 2016, a new restoration project was submitted by the Faculdade de Belas Artes da Universidade de Lisboa in collaboration with the Fundação Árpád Szenes-Vieira da Silva. There are many varied opinions on restoring and conserving older works of art; however, we have the moral duty to safeguard the original materials used by the artist along with the artist's original vision, and also to care for the future generations of students who will use the space in which the tile work is inserted. Many tiles have been replaced by white tiles, by tiles with a divergent colour palette and technique, and, in a few cases, by tiles placed in the wrong position or orientation. These factors make it increasingly difficult to maintain the artist's original vision and destroy any chance of coherence within the artwork itself. The conservation technician cannot make new images to fill the empty spaces or mark the remaining images with their own creative input. With reliable photographic documentation that provides the necessary vision to proceed with an accurate reconstruction, we have the obligation to return the work of art to its true form, as in its current state it is impossible to maintain its original character. Using the information we have, we must find a way to differentiate the original tiles from the reconstructions in order to recreate and reclaim the original message of the artist. The objective of this project is to understand the significance of tiles in Vieira da Silva's art, as well as the influence they had on the artist's pictorial language, since the definition of colour in tile work is vastly different from the painting process, as the materials change when they are combined. Another primary goal is to understand what the previous interventions achieved besides increasing the artwork's durability. The main objective is to submit a proposal that can salvage the artist's visual intention and preserve it for posterity. In summary, this proposal goes further than the usual conservation interventions, as it intends to recreate the original artistic worth, prioritising the aesthetics and keeping its soul alive.

Keywords: Vieira da Silva, tiles, conservation, restoration

Procedia PDF Downloads 151
1056 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes which have taken place across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures in order to further reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. When an earthquake occurs near the fault lines, it can be categorized as a near-field earthquake. In contrast, a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious structural damage, which poses a high risk to the public's safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in the current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in the catastrophic damage of structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to find, and aid in properly defining, the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
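For readers unfamiliar with linear time history analysis, the sketch below integrates a single-degree-of-freedom stand-in for the prototype model with the Newmark average-acceleration method and reports the peak relative displacement (a proxy for story drift) and peak absolute acceleration; the mass, stiffness, damping ratio, and ground motion are illustrative assumptions and do not represent the SAP2000 model or the FEMA P695 records used in the study.

```python
import numpy as np

def newmark_sdof(ag, dt, m, k, zeta, gamma=0.5, beta=0.25):
    """Linear time-history response of an SDOF oscillator to ground acceleration ag,
    using Newmark's average-acceleration method in incremental form."""
    c = 2.0 * zeta * np.sqrt(k * m)                      # viscous damping coefficient
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * ag                                          # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_hat = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    ca = m / (beta * dt) + gamma / beta * c
    cb = m / (2.0 * beta) + dt * (gamma / (2.0 * beta) - 1.0) * c
    for i in range(n - 1):
        dp = (p[i + 1] - p[i]) + ca * v[i] + cb * a[i]
        du = dp / k_hat
        dv = gamma / (beta * dt) * du - gamma / beta * v[i] + dt * (1.0 - gamma / (2.0 * beta)) * a[i]
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2.0 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, a + ag                                     # relative displacement, absolute acceleration

# Illustrative pulse-like ground motion (assumed), standing in for a scaled near-field record
dt = 0.01
t = np.arange(0.0, 20.0, dt)
ag = 0.4 * 9.81 * np.sin(2.0 * np.pi * 1.0 * t) * np.exp(-0.3 * t)
u, a_abs = newmark_sdof(ag, dt, m=1.0e5, k=4.0e6, zeta=0.05)
print("peak drift proxy [m]:", np.abs(u).max(), " peak absolute acceleration [m/s^2]:", np.abs(a_abs).max())
```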

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 145
1055 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs due to oxide growth, where high temperatures result in the swelling of the surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, not only through wear on machinery but also through loss of customer satisfaction, poor surface finish, loss of material, and loss of profit. Even though blistering is a highly prevalent issue, there is still much that is not known or understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and a different mobility in the oxide phases, so that oxide morphologies are specific to the alloy chemistry. Therefore, blister regimes can be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermo-gravimetric analyser (TGA), with an air flow of 10 litres min-1, at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process. During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in the reducing atmosphere of the blister; however, the atmosphere within the blister is highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently between 35-40 µm for all heating regimes, which supports the theory that the blister inflates and the oxide then subsequently grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, it was ascertained that the stresses produced by oxide growth increase with increasing oxide thickness; therefore, in Mode 1 the incubation time for blister formation is shortened by increasing temperature. In Mode 2, an increase in temperature results in an oxide with high ductility and high porosity. The high oxide ductility and/or porosity accommodates the intrinsic stresses from oxide growth. Thus Mode 2 is the inverse of Mode 1, and the incubation time increases with temperature. A new phenomenon was also reported, whereby blisters formed exclusively during cooling from elevated temperatures above the Mode 2 range.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 142
1054 Keeping Education Non-Confessional While Teaching Children about Religion

Authors: Tünde Puskás, Anita Andersson

Abstract:

This study is part of a research project about whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how a Swedish preschool with a religious profile balances between keeping the education non-confessional and, at the same time, teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas that are of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise at the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and the common-sense understandings of what is included in what is understood as Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to be able to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church. The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. What we can also conclude from the analysis is that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them indirectly to teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.

Keywords: non-confessional education, preschool, religion, tradition

Procedia PDF Downloads 158
1053 Making Sense of C. G. Jung’s Red Book and Black Books: Masonic Rites and Trauma

Authors: Lynn Brunet

Abstract:

In 2019 the author published a book-length study examining Jung's Red Book. This study consisted of a close reading of each of the chapters in Liber Novus, focussing on the fantasies themselves and Jung's accompanying paintings. It found that the plots, settings, characters and symbolism in each of these fantasies are not entirely original but remarkably similar to those found in some of the higher degrees of Continental Freemasonry. Jung was the grandson of his namesake, C.G. Jung (1794–1864), who was a Freemason and one-time Grand Master of the Swiss Masonic Lodge. The study found that the majority of Jung's fantasies are very similar to those of the Ancient and Accepted Scottish Rite, practiced in Switzerland during the time of Jung's childhood. It argues that the fantasies appear to be memories of a series of terrifying initiatory ordeals conducted using spurious versions of the Masonic rites. Spurious Freemasonry is a term that Masons use for the 'irregular' or illegitimate use of rituals that are not sanctioned by the Order. Since the 1980s there have been multiple reports of ritual trauma, confirmed by psychologists, counsellors, social workers, and forensic scientists, amongst a wide variety of organizations, cults and religious groups. The abusive use of Masonic rites features frequently in these reports. This initial study allows a reading of The Red Book that makes sense of the obscure references, bizarre scenarios and intense emotional trauma described by Jung throughout Liber Novus. It suggests that Jung appears to have undergone a cruel initiatory process as a child. The author is currently examining the extra material found in Jung's Black Books, and the results are confirming the original discoveries and demonstrating a number of aspects not covered in the first publication. These include the complex layering of ancient gods and belief systems in answer to Jung's question, 'In which underworld am I?' The new study demonstrates that the majority of these ancient systems and their gods are discussed in a handbook for the Scottish Rite, Morals and Dogma by Albert Pike, but that the way they are presented by Philemon and his soul is intended to confuse him rather than clarify their purpose. It also examines Jung's soul's question, 'I am not a human being. What am I then?' Further themes that emerge from the Black Books include his struggle with vanity and whether he should continue creating his 'holy book', and a comparison between Jung's 'mystery plays' and examples from the Theatre of the Absurd. Overall, it demonstrates that Jung's experience, while inexplicable in his own time, is now known to be the secret and abusive practice of initiation of the young found in a range of cults and religious groups in many first-world countries. This paper will present a brief outline of the original study and then examine the themes that have emerged from the extra material found in the Black Books.

Keywords: C. G. Jung, the red book, the black books, masonic themes, trauma and dissociation, initiation rites, secret societies

Procedia PDF Downloads 127
1052 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from a wider, diffraction-limited area of the laser's waist that might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. This requires a grating whose parameters are perfectly matched to the given incident light for effective light coupling. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that allows manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression with the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using the overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
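To make the phase-matching condition concrete, the short sketch below estimates the surface plasmon wavevector on a flat metal-air interface and the grating periods (the fundamental and its overtones) that couple a plane wave at a given incidence angle into that mode; the wavelength, incidence angle, and the approximate gold permittivity are assumptions for illustration, and the flat-interface dispersion ignores the taper geometry treated in the paper.

```python
import numpy as np

wavelength = 633e-9                 # assumed excitation wavelength [m]
theta = np.deg2rad(45.0)            # assumed angle of incidence
eps_metal = -11.6 + 1.2j            # approximate permittivity of gold near 633 nm (assumption)
eps_dielectric = 1.0                # air

k0 = 2.0 * np.pi / wavelength
# Surface plasmon dispersion on a flat metal-dielectric interface
k_spp = k0 * np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))

# Phase matching: Re(k_spp) = k0*sin(theta) + m*2*pi/period, solved for the period of order m
for m in (1, 2, 3):
    period = m * 2.0 * np.pi / (k_spp.real - k0 * np.sin(theta))
    print(f"order m = {m}: grating period = {period * 1e9:.0f} nm")
```

The higher-order periods are integer multiples of the fundamental one, which is why the overtones mentioned in the abstract allow coarser, easier-to-fabricate gratings that still satisfy phase matching.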

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 281
1051 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

Major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (head and neck intensity-modulated radiation therapy) using a novel comparison method: organ-of-interest gamma analysis. This method provides more sensitive detection of changes in specific organs. Five HN IMRT patients who were replanned because of tumour shrinkage and patient weight loss that critically affected parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the rationale for replanning given by the radiation oncologists. This comparison showed that registration of the 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing the suitability of the treatment plan.
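As an illustration of the organ-of-interest idea described above, the following sketch computes a brute-force 2D gamma index with 3%/3 mm criteria (dose difference normalised to the reference maximum) restricted to a binary organ mask, and reports the pass-rate inside that mask; the normalisation choice, the search window, and the toy images are assumptions for the sketch, not the authors' physics-based EPID prediction model.

```python
import numpy as np

def gamma_pass_rate(ref, ev, mask, spacing_mm, dd=0.03, dta_mm=3.0):
    """Brute-force 2D gamma pass-rate evaluated only inside a binary organ mask.
    ref, ev: reference and evaluated dose images; spacing_mm: pixel size in mm."""
    norm = dd * ref.max()                          # global dose-difference criterion (assumption)
    r = int(np.ceil(2.0 * dta_mm / spacing_mm))    # search window of roughly twice the DTA
    ny, nx = ref.shape
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if not mask[i, j]:
                continue
            total += 1
            i0, i1 = max(0, i - r), min(ny, i + r + 1)
            j0, j1 = max(0, j - r), min(nx, j + r + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            dist2 = ((ii - i) ** 2 + (jj - j) ** 2) * spacing_mm ** 2
            dose2 = (ev[i0:i1, j0:j1] - ref[i, j]) ** 2
            gamma2 = dose2 / norm ** 2 + dist2 / dta_mm ** 2
            passed += gamma2.min() <= 1.0
    return 100.0 * passed / max(total, 1)

# Toy example: a smooth "predicted transit" image and a perturbed one, with a hypothetical
# projected parotid mask; all values are illustrative.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
ref = 100.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 800.0)
ev = 1.02 * ref + rng.normal(0.0, 1.0, ref.shape)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(f"organ-of-interest gamma pass-rate: {gamma_pass_rate(ref, ev, mask, spacing_mm=1.0):.1f}%")
```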

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 213
1050 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. In addition, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers seeking the best method for detecting normal signals from abnormal ones. The data come from both genders, the recording time varies from several seconds to several minutes, and all records are labeled normal or abnormal. Because of the limited accuracy and duration of the ECG recordings, and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to the experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, which are widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUCs of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of the individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its recording time is limited and some of its information is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
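To illustrate the HRV feature pipeline sketched above, the minimal example below detects R peaks with a simple threshold-and-distance peak finder (a stand-in for the Pan-Tompkins detection and Kalman denoising used in the paper), builds the R-R interval series, and computes a few linear statistics together with Poincaré (return-map) descriptors as nonlinear features; the synthetic trace and the detection thresholds are assumptions, and the resulting feature vector is the kind of input that would be fed to classifiers such as the MLP and SVM mentioned in the abstract.

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_features(ecg, fs):
    """Extract R-R intervals and simple linear/nonlinear HRV features from an ECG trace."""
    # Crude R-peak detection: amplitude threshold plus a refractory distance of 0.3 s
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 98), distance=int(0.3 * fs))
    rr = np.diff(peaks) / fs * 1000.0                 # R-R intervals in milliseconds
    sdnn = np.std(rr, ddof=1)                         # linear: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))        # linear: short-term variability
    # Poincare descriptors from the return map of RR[n] versus RR[n+1] (nonlinear features)
    sd1 = np.std(np.diff(rr), ddof=1) / np.sqrt(2.0)
    sd2 = np.sqrt(max(2.0 * sdnn ** 2 - sd1 ** 2, 0.0))
    return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}

# Synthetic ECG-like trace: narrow spikes at jittered beat times plus noise (illustrative only)
fs, duration = 250, 60.0
t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(0)
beat_times = np.cumsum(0.8 + 0.05 * rng.standard_normal(70))
ecg = sum(np.exp(-((t - bt) ** 2) / (2.0 * 0.01 ** 2)) for bt in beat_times if bt < duration)
ecg = ecg + 0.02 * rng.standard_normal(len(t))
print(hrv_features(ecg, fs))
```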

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 259