Search results for: equestrian jumping tasks
67 Convergence of Strategic Tasks of Business Tourism and Hotel Industry Development: The Case of Georgia
Authors: Nana Katsitadze, Tamar Atanelishvili, Mariam Kutateladze, Alexandre Tushishvili
Abstract:
In the modern world, tourism has emerged as one of the most powerful economic sectors, and due to its high economic performance, it is attractive to countries with various levels of economic development. The purpose of the present paper, dedicated to discussing the current problems of tourism development, is to find ways which will contribute to bringing more benefits to the country from the sector. Georgia has been successfully developing leisure tourism for the last ten years, and at the next stage of development, business tourism gains particular importance for Georgia as a means of mitigating the negative socio-economic effects caused by the seasonality of tourism and as a high-spending tourism market. Therefore, the object of the paper is to study the factors that contribute to the development of business tourism. The paper uses research methods such as system analysis, synthesis, and analogy, as well as historical, comparative, economic, and statistical methods of analysis. The information base for the research is made up of statistics on the functioning of the tourism markets of Georgia and foreign countries, as well as official data provided by international organizations in the field of tourism. Based on the experience of business tourism around the world, and by identifying the successful start of business tourism development in Georgia and its causal factors, a business tourism development model for Georgia has been developed. The model might be useful as methodological material for developing a business tourism development concept for countries with limited financial resources but rich tourism resources, like Georgia. At the initial stage of development (in the absence of convention centers), the suggested concept of business tourism development involves organizing small and medium-sized meetings both in large cities and in the regions by using high-class hotel infrastructure and event management services. Relocation of small meetings to the regions encourages inclusive development of the sector by increasing the awareness of these regions as tourist sites, as well as increasing employment and sales of other tourism or consumer products. Business tourism increases the number of hotel visitors in the off-season period and improves hotel performance indicators, which enhances the attractiveness of investing in the hotel business. According to the present concept of business tourism development, at the initial stage, development of business tourism is based on the existing markets, including the internal market, neighboring markets, and the markets of geographically nearby countries; at the next stage, the concept involves attracting tourists from other, relatively distant target markets. As a result, by gaining experience in business tourism, enhancing professionalism, increasing awareness, and stimulating infrastructure development, the country will prepare the basis to move to a higher stage of tourism development. In addition, experience has shown that, for attracting large customers, the peculiarities of the field require activation of state policy and active use of the state's marketing mechanisms and tools.
Keywords: hotel industry development, MICE model, MICE strategy, MICE tourism in Georgia
66 The Influence of Leadership Styles on Organizational Performance and Innovation: Empirical Study in Information Technology Sector in Spain
Authors: Richard Mababu Mukiur
Abstract:
Leadership is an important driver that plays a key role in the success and development of organizations, particularly in the current context of digital transformation, high competitiveness, and globalization. Leaders are persons who hold a dominant and privileged position within an organization, field, or sector of activity and are able to manage, motivate, and exercise a high degree of influence over others in order to achieve institutional goals. They achieve commitment and engagement of others to embrace change and to make good decisions. Leadership studies in higher education institutions have examined how effective leaders run their organizations, and have sought approaches which fit best in the organizational context for its better management, transformation, and improvement. Moreover, recent studies have highlighted the impact of leadership styles on organizational performance and innovation capacities, since some styles give better results than others. Effective leadership is part of a learning process that takes place through day-to-day tasks, responsibilities, and experiences that influence organizational performance, innovation, and the engagement of employees. The adoption of appropriate leadership styles can improve organizational results and encourage the learning process, team skills and performance, and employees' motivation and engagement. In the case of the Information Technology sector, leadership styles are particularly crucial since this sector is leading relevant changes and transformations in the knowledge society. In this context, the main objective of this study is to analyze managers' leadership styles and their relation to organizational performance and innovation, as possibly mediated by the organizational learning process and demographic variables. It was therefore hypothesized that transformational and transactional leadership would be the main styles adopted in the Information Technology sector and would influence organizational performance and innovation capacity. A sample of 540 participants from the Information Technology sector was recruited in order to achieve the objective of this study. The Multifactor Leadership Questionnaire was administered as the principal instrument, together with an innovation scale and the Learning Organization Questionnaire. Correlations and multiple regression analysis were used as the main techniques of data analysis. The findings indicate that leadership styles have a relevant impact on organizational performance and innovation capacity. Transformational and transactional leadership are the predominant styles in the Information Technology sector. Effective leadership styles tend to be characterized by the capacity for generating and sharing knowledge that improves organizational performance and innovation capacity. Managers are adopting and adapting their leadership styles to respond to the new organizational, social, and cultural challenges and realities of contemporary society. Managers who encourage innovation, foster the learning process, and share experience are valuable to the organization, since they contribute to its development and transformation. Learning process capacity and demographic variables (age, gender, and job tenure) mediate the relationship between leadership styles, innovation capacity, and organizational performance. Transformational and transactional leadership tend to enhance organizational performance due to their significant impact on team-building, employees' engagement, and satisfaction. Some practical implications and future lines of research are proposed.
Keywords: leadership styles, transformational leadership, organisational performance, organisational innovation
65 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology involves the assembly of electrode/separator/electrode by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are only used for the electrode process. These processes are suited to large-scale production of batteries and perfectly adapted to plenty of application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped, and efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new design of lithium-ion battery: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example, organic solar cells (harvesting function), printed sensors (autonomous sensors), or RFID (communication function), on a common substrate, to produce fully integrated, thin, and flexible new devices. To fulfil those specifications, a high-resolution printing process has been selected: aerosol jet printing. In order to fit the parameters of this process, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process, in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles has been formulated and printed, and flash sintering was then applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed. For anodes, LTO and graphite have shown to be good candidates for the fully printed battery. The electrochemical performance of those materials has been evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. First, we will synthesize and formulate new specific nanomaterials based on metal oxides. Then a fully printed device will be produced, and its electrochemical performance will be evaluated.
Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
64 The Positive Effects of Top-Sharing: A Case Study
Authors: Maike Andresen, Georg Dochtmann
Abstract:
Due to political, social, and societal changes in labor organization, top-sharing, defined as job-sharing in leading positions, is becoming more important in HRM. German companies are looking for practical and economically meaningful solutions that allow them to enduringly increase the ratio of women in management, not only because of a recently implemented quota. Furthermore, supporting employees in achieving work-life balance is perceived as an important goal of sustainable HRM to gain competitive advantage. Top-sharing is seen as suitable for reaching both goals. To evaluate the determinants of effective top-sharing, a case study of a newly implemented top-sharing tandem in a large German enterprise was conducted over a period of 15 months. In this company, a full leadership position was split into two 60% part-time positions held by an experienced female leader in her late career and a female colleague who took over her first leadership position (mid-career). We assumed a person-person fit, in terms of a match of the top-sharing partners' personality profiles (Big Five) and their leadership motivations, to be an important prerequisite for effective collaboration between them. We evaluated the person-person fit variables once, before the tandem started to work. Both leaders were expected to learn from each other (mentoring, competency development). On an operational level, they were supposed to lead the same employees together in an effective manner (leader-member exchange), presupposing effective cooperation between both (handing over information). To observe developments over time, these processes were evaluated three times over the span of the project. Top-sharing and the underlying processes were expected to positively influence the tandem's performance, which was evaluated twice, at the beginning and the end of the project, to assess its development over time as well. The evaluation of personality and basic motives suggests that both executives can form a successful top-sharing tandem. The competency evaluations (supervisor as well as self-assessment) increased over the time span. Although the top-sharing tandem worked on equal terms, they implemented classical rather than peer mentoring, due to the different career ambitions of the tandem partners; thus, opportunities were not used completely. Team-member exchange scores confirmed the good cooperation between the top-sharers. Although the employees did not evaluate the leader-member exchange between themselves and the two leaders of the tandem homogeneously, the top-sharing tandem itself did not have the impression that the employees' task performance depended on which tandem partner was responsible for the task. Furthermore, top-sharing did not negatively influence the performance of either leader. During qualitative interviews with the top-sharers and their team, we found that the top-sharers could focus more easily on their tasks. The results suggest positive outcomes of top-sharing (e.g., competency improvement, learning from each other through mentoring). Top-sharing does not hamper performance. Thus, further research and practical implementations are suggested. As part-time jobs are still more often chosen by women to increase their work-life and work-family balance, top-sharing may be a suitable solution for increasing the ratio of women in leadership positions, as well as for sustainably increasing the work-life balance of executives.
Keywords: mentoring, part-time leadership, top-sharing, work-life-balance
63 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction
Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz
Abstract:
Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, framing floors, framing stairs, and making landings in building construction. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses, and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling industries. To our knowledge, how shattering, cutting, and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study by a newly developed nanoparticle monitor and conventional particle counters. This study was conducted in a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings. Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot-wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:
62 Implementation of Project-Based Learning with Peer Assessment in Large Classes under Consideration of Faculty's Scarce Resources
Authors: Margit Kastner
Abstract:
To overcome the negative consequences associated with large class sizes, and to support students in developing the necessary competences (e.g., critical thinking, problem-solving, or teamwork skills), a marketing course was redesigned by implementing project-based learning with peer assessment (PBL&PA). This means that students can voluntarily take advantage of this supplementary offer and explore (in addition to attending the lecture where clicker questions are asked) a real-world problem, find a solution, and assess the results of peers while working in small collaborative groups. In order to handle this with little additional effort, the process is technically supported by the university's e-learning system, in such a way that students upload their solution in the form of an assignment, which is then automatically distributed to peer groups who have to assess the work of three other groups. Finally, students' work is graded automatically, considering both the students' contribution to the project and the conformity of their peer assessments. The purpose of this study is to evaluate students' perception of PBL&PA, using an online questionnaire to collect the data. More specifically, it aims to discover students' motivations for (not) working on a project, and the benefits and problems students encounter. In addition to the survey, students' performance was analyzed by comparing the final grades of those who participated in PBL&PA with those who did not. Among the 260 students who filled out the questionnaire, 47% participated in PBL&PA. Besides extrinsic motivations (bonus credits), students' participation was often motivated by learning and social benefits. Reasons for not working on a project were connected to students' organization and management of their studies (e.g., time constraints, no/wrong information) and teamwork concerns (e.g., missing engagement of peers, prior negative experiences). In addition, high workload and insufficient extrinsic motivation (bonus credits) were mentioned. With regard to the benefits and problems students encountered during the project, students provided more positive than negative comments. The positive aspects most often stated were learning and social benefits, while negative ones were mainly attached to the technical implementation. Interestingly, bonus credits were hardly ever named as a positive aspect, meaning that intrinsic motivations became more important while working on the project. Team aspects generated mixed feelings. In addition, students who voluntarily participated in PBL&PA were, in general, more active and utilized further course offers such as clicker questions. Examining students' performance in the final exam revealed that students who did not participate in any of the offered active-learning tasks performed poorest in the exam, while students who used all activities performed best. In conclusion, the goals of the implementation were met in terms of students' perceived benefits and the positive impact on students' exam performance. Since the comparison of the automatic grading with faculty grading showed valid results, it is possible to rely only on automatic grading in the future. That way, the additional workload for faculty will stay within limits. Thus, the implementation of project-based learning with peer assessment can be recommended for large classes.
Keywords: automated grading, large classes, peer assessment, project-based learning
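A minimal sketch of how such an automatic grade might combine the two components named above (peer-rated project quality and conformity of a group's own assessments); the weights and the conformity measure are illustrative assumptions, not the course's actual formula:

```python
from statistics import mean

def automatic_grade(peer_ratings, given_ratings, consensus_ratings,
                    w_project=0.7, w_conformity=0.3):
    """Combine project quality (mean rating received from peer groups, 0-100)
    with assessment conformity (closeness of the ratings a group gave to the
    consensus ratings of the same assignments)."""
    project_score = mean(peer_ratings)
    # Conformity: 100 minus the mean absolute deviation from consensus
    deviations = [abs(g - c) for g, c in zip(given_ratings, consensus_ratings)]
    conformity_score = max(0.0, 100.0 - mean(deviations))
    return w_project * project_score + w_conformity * conformity_score

# A group rated 82/75/88 by three peer groups, whose own three assessments
# deviated from the consensus by 4, 10, and 2 points.
print(automatic_grade([82, 75, 88], [70, 60, 90], [74, 50, 88]))
```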
61 Gender, Agency, and Health: An Exploratory Study Using an Ethnographic Material for Illustrative Reasons
Authors: S. Gustafsson
Abstract:
The aim of this paper is to explore the connection between gender, agency, and health on personal and social levels over time. The use of gender as an analytical tool for health research has been shown to be useful for exploring thoughts and ideas that are taken for granted and that have relevance for health. The paper highlights the following three issues. There are multiple forms of femininity and masculinity. Agency and social structure are closely related, and are referred to in this paper as 'gender agency'. Gender is illuminated as a product of history, but also treated as a social factor and a producer of history. As a prominent social factor in the process of shaping living conditions, gender is highlighted as being significant for understanding health. To make health explicit as a dynamic and complex concept, and not merely the opposite of disease, requires a broader alliance with feminist theory and a post-Bourdieusian framework. A personal story, included with other ethnographic material about women's networking in rural Sweden, is used as an empirical illustration. Ethnographic material was chosen for its ability to illustrate historical, local, and cultural ways of doing gendered and capitalized health. New concepts characterize ethnography, exemplified in this study by 'processes of transformation'. The semi-structured interviews followed an interview guide drafted with reference to the background theory of gender. The interviews lasted about an hour and were recorded and transcribed verbatim. The transcribed interviews and the author's field notes formed the basis for the writing up of this paper. Initially, the participants' interests in weaving, sewing, and various handicrafts became obvious foci for networking activities, and seemed at first to shape compliance with patriarchy, which generally does the opposite of promoting health. However, a significant event disrupted the stability of this phenomenon. What was permissible for the women began to crack, and new spaces opened up. By exploiting these new spaces, the participants found opportunities to try out alternatives to emphasized femininity. Over time, they began combining feminized activities with degrees of masculinity, as leadership became part of the activities. In response to this, masculine enactment was gradually transformed and became increasingly gender-neutral. As the tasks became more gender-neutral, the activities assumed a more formal character, and the women stretched the limits of their capacity by enacting gender agency, a process the participants referred to as 'personal growth' and described as health-promoting. What was described in terms of 'personal growth' can be interpreted as the effect of a raised status. Participation in women's networking strengthened the participants' structural position. More specifically, it was the gender-neutral position that was rewarded. To clarify the connection between gender, agency, and health on personal and social levels over time, the concept of 'processes of transformation' is used. This concept is suggested as a dynamic equivalent to habitus. Health is thus seen as resulting from situational access to social recognition, prestige, capital assets and, not least, meanings of gender.
Keywords: a cross-gender bodily hexis, gender agency, gender as analytical tool, processes of transformation
60 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee V. Subban, Somashekara H. S., Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word-decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III, in the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years, respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content-validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial, and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and phonological memory measures were scored for accuracy of response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. Results: The mean score comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent grade-wise post-hoc comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks, except for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III (p ≥ 0.05). The Pearson correlations revealed highly significant positive correlations (p = 0.000) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%; there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Amongst the phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
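The hierarchical regression reported above, entering syllable awareness first, then RAN, then pseudoword repetition, could be run along these lines with statsmodels; the file and column names are hypothetical stand-ins for the study's measures:

```python
import pandas as pd
import statsmodels.api as sm

# One row per child; column names are assumptions, not the study's dataset
df = pd.read_csv("kannada_reading.csv")

steps = [["syllable_awareness"],
         ["syllable_awareness", "ran_speed"],
         ["syllable_awareness", "ran_speed", "pseudoword_repetition"]]

prev_r2 = 0.0
for predictors in steps:
    X = sm.add_constant(df[predictors])          # intercept + predictors
    model = sm.OLS(df["pseudoword_reading"], X).fit()
    print(predictors, "R2 =", round(model.rsquared, 3),
          "delta R2 =", round(model.rsquared - prev_r2, 3))
    prev_r2 = model.rsquared
```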
59 The Relations between Language Diversity and Similarity and Adults' Collaborative Creative Problem Solving
Authors: Z. M. T. Lim, W. Q. Yow
Abstract:
Diversity in individual problem-solving approaches, culture, and nationality has been shown to have positive effects on collaborative creative processes in organizational and scholastic settings. For example, diverse graduate and organizational teams consisting of members with both structured and unstructured problem-solving styles were found to produce more creative ideas on a collaborative idea-generation task than teams that consisted solely of members with either structured or unstructured problem-solving styles. However, being different may not always benefit the collaborative creative process. In particular, speaking different languages may hinder mutual engagement through impaired communication and thus collaboration. Instead, sharing similar languages may have facilitative effects on mutual engagement in collaborative tasks. However, no studies have explored the relations between language diversity and adults' collaborative creative problem solving. Sixty-four Singaporean English-speaking bilingual undergraduates were paired into similar or dissimilar language pairs based on the second language they spoke (e.g., for similar language pairs, both participants spoke English-Mandarin; for dissimilar language pairs, one participant spoke English-Mandarin and the other spoke English-Korean). Each participant completed Raven's Progressive Matrices individually. Next, they worked in pairs to complete a collaborative divergent thinking task, where they used mind-mapping techniques to brainstorm ideas on a given problem together (e.g., how to keep insects out of the house). Lastly, the pairs worked on a collaborative insight problem-solving task (the Triangle of Coins puzzle), where they needed to flip a triangle of ten coins around by moving only three coins. Pairs who had prior knowledge of the Triangle of Coins puzzle were asked to complete an equivalent matchstick task instead, where they needed to make seven squares by moving only two matchsticks in a given array of matchsticks. Results showed that, after controlling for intelligence, similar language pairs completed the collaborative insight problem-solving task faster than dissimilar language pairs. Intelligence also moderated these relations: among adults of lower intelligence, similar language pairs solved the insight problem-solving task faster than dissimilar language pairs, whereas these differences in speed were not found in adults with higher intelligence. No differences were found in the number of ideas generated in the collaborative divergent thinking task between similar-language and dissimilar-language pairs. Overall, sharing similar languages seems to enrich collaborative creative processes, and these effects were especially pertinent to pairs with lower intelligence. This provides guidelines for the formation of groups based on shared languages in collaborative creative processes. However, the positive effects of shared languages appear to be limited to the insight problem-solving task and not the divergent thinking task. This could be due to the facilitative effects of other factors of diversity found in previous literature. Background diversity, for example, may have a larger facilitative effect on the divergent thinking task than on the insight problem-solving task, due to the varied experiences individuals bring to the task. In conclusion, this study contributes to the understanding of the effects of language diversity in collaborative creative processes and challenges the generally positive effects that diversity has on these processes.
Keywords: bilingualism, diversity, creativity, collaboration
58 The Effects of Aging on Visuomotor Behaviors in Reaching
Authors: Mengjiao Fan, Thomson W. L. Wong
Abstract:
It is unavoidable that older adults will have to deal with aging-related motor problems, and aging is highly likely to affect motor learning and control as well. For example, older adults may suffer from poor motor function and quality of life due to age-related eye changes. These adverse changes in vision result in impairment of movement automaticity. Reaching is a fundamental component of various complex movements and is therefore useful for exploring changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching, with online visual feedback either provided or blocked (simulated visual deficiency), were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor, whereas in subgroup 2, visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaches to a target by controlling the mouse cursor on the computer screen. In all 20 trials, the start position was in the center of the screen, and the target appeared at a position randomly selected by the tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging significantly affects the performance of reaching tasks in both visual feedback conditions. In both age groups, blocking online visual feedback of the cursor in reaching resulted in longer hand movement time (p < .001), longer reaching distance away from the target center (p < .001), and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, when online visual feedback of the cursor was provided, older adults showed a longer fixation dwell time on the target throughout reaching than young adults (p < .001), although the effect was not significant in the blocked visual feedback condition (p = .215). Therefore, the results suggested that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults. Differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors were found among the older adults, which imply that blocking of visual feedback may act as a stimulus that induces extra perceptual load during movement execution, and that age-related visual degeneration might further deteriorate the situation. This indeed provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) to enhance visuomotor adaptation in the aging population, improving movement automaticity by facilitating compensation for visual degeneration.
Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration
57 Hydration Evaluation in a Working Population in Greece
Authors: Aikaterini-Melpomeni Papadopoulou, Kyriaki Apergi, Margarita-Vasiliki Panagopoulou, Olga Malisova
Abstract:
Introduction: Adequate hydration is a vital factor that enhances concentration, memory, and decision-making abilities throughout the workday. Various factors may affect hydration status in workplace settings, and many variables, such as age, gender, and activity level, affect hydration needs. Employees frequently overlook their hydration needs amid busy schedules and demanding tasks, leading to dehydration that can negatively affect cognitive function, productivity, and overall well-being. In addition, dietary habits, including fluid intake and food choices, can either support or hinder optimal hydration. However, the factors that affect hydration balance among workers in Greece have not been adequately studied. Objective: This study aims to evaluate the hydration status of the working population in Greece and investigate the various factors that impact hydration status in workplace settings, considering demographic, dietary, and occupational influences in a Greek sample of employees from diverse working environments. Materials & Methods: The study included 212 participants (46.2% women) from the working population in Greece. Water intake from both solid and liquid foods was recorded using a semi-quantified drinking-frequency questionnaire, and the validated Water Balance Questionnaire was used to evaluate hydration status. The calculation of water from solid and liquid foods was based on data from the USDA National Nutrient Database. Water balance was calculated by subtracting the total fluid loss from the total fluid intake of the body. Furthermore, the questionnaire included additional questions on drinking habits and work-related factors. Volunteers answered questions in different categories, such as a) demographic and socio-economic characteristics, b) work-style characteristics, c) health, d) physical activity, e) food and fluid intake, f) fluid excretion, and g) trends in fluid and water intake. Individual and multivariate regression analyses were performed to assess the relationships between demographic and work-related factors and hydration balance. Results: The analysis showed that demographic factors like gender, age, and BMI, as well as certain work-related factors, had a weak and statistically non-significant effect on hydration balance. However, the use of a bottle or water container during work hours (b = 944.93, p < 0.001) and engaging in intense physical activity outside of work (b = -226.28, p < 0.001) were found to have a significant impact. Additionally, the consumption of beverages other than water (b = -416.14, p = 0.059) could negatively impact hydration balance. On average, the sample consumed 3410 ml of water daily, with men consuming approximately 440 ml/day more water (3470 ml/day) than women (3030 ml/day), a difference that was also statistically significant. Finally, the water balance, defined as the difference between water intake and water excretion, was found to be negative on average for the entire sample. Conclusions: This study is among the first to explore hydration status within the Greek working population. The findings indicate that awareness of adequate hydration and individual actions, such as using a water bottle during work, may influence hydration balance.
Keywords: hydration, working population, water balance, workplace behavior
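A minimal sketch of the balance calculation described above (total intake, from fluids plus food moisture, minus total losses); the numbers are illustrative, not the study's data:

```python
def water_balance_ml(intake_fluids_ml: float, intake_food_ml: float,
                     losses_ml: float) -> float:
    """Water balance = total water intake minus total water loss.
    A negative value means excretion exceeded intake for the day."""
    return intake_fluids_ml + intake_food_ml - losses_ml

# Illustrative day: 2600 ml from drinks, 810 ml from food, 3600 ml of
# estimated losses -> a balance of -190 ml (net deficit).
print(water_balance_ml(2600, 810, 3600))
```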
56 The System-Dynamic Model of Sustainable Development Based on the Energy Flow Analysis Approach
Authors: Inese Trusina, Elita Jermolajeva, Viktors Gopejenko, Viktor Abramov
Abstract:
Global challenges require a transition from the existing linear economic model to a model that considers nature as a life-support system on the path to social well-being, within the frame of the ecological economics paradigm. The objective of the article is to present the results of an analysis of socio-economic systems in the context of sustainable development, using a method that analyzes changes in system power (energy flows) together with Kaldor's structural model of GDP. In accordance with the principles of life's development and the ecological concept, the tasks of sustainable development of open, non-equilibrium, stable socio-economic systems were formalized using the energy flow analysis method. The methodology for monitoring sustainable development and the level of life was considered during the research of interactions in the system 'human - society - nature', using the theory of a unified system of space-time measurements. Based on the results of the analysis, time series of energy consumption and an economic structural model were formulated for the level, degree, and tendencies of sustainable development of the system, and the conditions of growth, degrowth, and stationarity were formalized. During the research, the authors calculated and used a system of universal indicators of sustainable development in an invariant coordinate system, expressed in energy units. In order to design the future state of socio-economic systems, a concept was formulated, and the first models of energy flows in systems were created using the tools of system dynamics. In the context of the proposed approach and methods, universal sustainable development indicators were calculated as models of development for the USA and China. The calculations used data from the World Bank database for the period from 1960 to 2019. Main results: 1) In accordance with the proposed approach, the heterogeneous energy resources of countries were reduced to universal power units, summarized, and expressed as a unified number. 2) The values of universal indicators of the level of life were obtained and compared with generally accepted similar indicators. 3) The system of indicators, in accordance with the requirements of sustainable development, can be considered a basis for monitoring development trends. This work can make a significant contribution to overcoming the difficulties of forming socio-economic policy, which are largely due to the lack of information that allows one to form an idea of the course and trends of socio-economic processes. The existing methods for monitoring change do not fully meet this requirement, since indicators have different units of measurement from different areas and, as a rule, are the reaction of socio-economic systems to actions already taken and, moreover, with a time shift. Currently, the inconsistency of measures of heterogeneous social, economic, environmental, and other systems is the reason that social systems are managed in isolation from the general laws of living systems, which can ultimately lead to a systemic crisis.
Keywords: sustainability, system dynamic, power, energy flows, development
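Result 1) rests on reducing heterogeneous energy resources to universal power units. A minimal sketch of such a reduction, using standard physical conversion factors; the function and the example figure are illustrative assumptions, not the authors' model:

```python
# Standard conversion factors to joules
TOE_J = 41.868e9            # 1 tonne of oil equivalent
KWH_J = 3.6e6               # 1 kilowatt-hour
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def average_power_watts(annual_toe: float = 0.0, annual_kwh: float = 0.0) -> float:
    """Reduce heterogeneous annual energy flows to one power figure (W),
    in the spirit of the universal indicators described above."""
    total_joules = annual_toe * TOE_J + annual_kwh * KWH_J
    return total_joules / SECONDS_PER_YEAR

# Illustrative: 2,000 million toe of primary energy per year corresponds
# to a continuous power of roughly 2.65 TW.
print(f"{average_power_watts(annual_toe=2.0e9) / 1e12:.2f} TW")
```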
55 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GANs) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can hopefully identify a suspect. This reduces subjectivity in decision-making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals, from Interpol or other law enforcement agencies, are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
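A minimal sketch of the evaluation step named above, scoring a generated composite against its ground-truth photo with SSIM and PSNR as implemented in scikit-image (the random arrays are stand-ins for real images):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def composite_scores(generated: np.ndarray, truth: np.ndarray):
    """SSIM and PSNR between a generated composite and the ground-truth
    photo (uint8 RGB arrays of identical shape; scikit-image >= 0.19)."""
    ssim = structural_similarity(generated, truth, channel_axis=-1)
    psnr = peak_signal_noise_ratio(truth, generated)
    return ssim, psnr

# Stand-in images: a random 'photo' and a slightly perturbed 'composite'
truth = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
noise = np.random.randint(-10, 11, truth.shape)
generated = np.clip(truth.astype(int) + noise, 0, 255).astype(np.uint8)
print(composite_scores(generated, truth))
```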
54 An Exploration of Special Education Teachers’ Practices in a Preschool Intellectual Disability Centre in Saudi Arabia
Authors: Faris Algahtani
Abstract:
Background: In Saudi Arabia, it is essential to know what practices are employed and considered effective by special education teachers working with preschool children with intellectual disabilities, as a prerequisite for identifying areas for improvement. Preschool provision for these children is expanding through a network of Intellectual Disability Centres (IDCs) while, in primary schools, a policy of inclusion is pursued and, in mainstream preschools, pilots have been aimed at enhancing learning in readiness for primary schooling. This potentially widens the attainment gap between preschool children with and without intellectual disabilities, and influences the scope for improvement. Goal: The aim of the study was to explore special education teachers' practices, and their perceptions of those practices, for preschool children with intellectual disabilities in Saudi Arabia. Method: A qualitative interpretive approach was adopted in order to gain a detailed understanding of how special education teachers in an IDC operate in the classroom. Fifteen semi-structured interviews were conducted with experienced and qualified teachers. Data were analysed using thematic analysis, based on themes identified from the literature review together with new themes emerging from the data. Findings: American methods strongly influenced teaching practices, in particular TEACCH (Treatment and Education of Autistic and related Communication-handicapped Children), which emphasises structure, schedules, and specific methods of teaching tasks and skills; and ABA (Applied Behaviour Analysis), which aims to improve behaviours and skills by concentrating on the detailed breakdown and teaching of task components and rewarding desired behaviours with positive reinforcement. The Islamic concept of education strongly influenced which teaching techniques were used and considered effective, and how they were applied. Tensions were identified between the Islamic approach to disability, which accepts differences between human beings as created by Allah in order for people to learn to help and love each other, and the continuing stigmatisation of disability in many Arabic cultures, which means that parents who bring their children to an IDC often hope and expect that their children will be 'cured'. Teaching methods were geared to reducing behavioural problems and social deficits rather than to developing the potential of the individual child, with some teachers recognising the child's need for greater freedom. Relationships with parents could in many instances be improved. Teachers considered both initial teacher education and professional development to be inadequate for their needs and the needs of the children they teach. This can be partly attributed to the separation of the training and development of special education teachers from that of general teachers. Conclusion: Based on the findings, teachers' practices could be improved by the inclusion of general teaching strategies, parent-teacher relationships, and practical teaching experience in both initial teacher education and professional development. Coaching and mentoring support from carefully chosen special education teachers could assist the process, as could the presence of a second teacher or teaching assistant in the classroom.
Keywords: special education, intellectual disabilities, early intervention, early childhood
53 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue
Abstract:
Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was performed on chest computed tomography (CT) scans of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: lesions are located mainly on the dorsal side of the periphery of the lungs, focused in the lower lobes and often close to the pleura. Other features include grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs entirely, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT in influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, with bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly confluent, prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Targeting the two major infectious diseases currently circulating in the world, COVID-19 and influenza, chest CT scans of patients with the two infectious diseases are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs) for images, solving the problem of the difficult training of deep CNN models. Many visual tasks can achieve excellent results through fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai is based on PyTorch, packaging best practices for deep learning strategies and finding the best way to handle diagnostic issues. Based on the one-cycle approach of the Fastai library, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
Keywords: COVID-19, Fastai, influenza, transfer network
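The workflow described above (a pre-trained ResNet fine-tuned with Fastai's one-cycle policy) can be sketched with the current fastai v2 API; the folder layout and hyperparameters are illustrative assumptions, and earlier fastai versions used cnn_learner rather than vision_learner:

```python
from fastai.vision.all import *

# Assumed layout: covid_ct/ with 'covid19' and 'influenza' subfolders,
# labels taken from the parent folder name
path = Path("covid_ct")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))

# Pre-trained ResNet as feature extractor, trained with the one-cycle policy
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fit_one_cycle(5, 3e-3)

learn.show_results()  # inspect predictions on validation CT slices
```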
52 A Flipped Learning Experience in an Introductory Course of Information and Communication Technology in Two Bachelor's Degrees: Combining the Best of Online and Face-to-Face Teaching
Authors: Begona del Pino, Beatriz Prieto, Alberto Prieto
Abstract:
Two opposite approaches to teaching can be considered: in-class learning (teacher-oriented) versus virtual learning (student-oriented). The best-known example of the latter is Massive Open Online Courses (MOOCs). Both methodologies have pros and cons. Nowadays, there is an increasing trend towards combining both of them. Blended learning is considered a valuable tool for improving learning, since it combines student-centred interactive e-learning and face-to-face instruction. The aim of this contribution is to exchange and share the experience and research results of a blended-learning project that took place in the University of Granada (Spain). The research objective was to show how combining the didactic resources of a MOOC with in-class teaching, interacting directly with students, can substantially improve academic results as well as student acceptance. The proposed methodology is based on the use of flipped-learning techniques applied to the subject 'Fundamentals of Computer Science' in the first year of two degrees: Telecommunications Engineering and Industrial Electronics. In this proposal, students acquire the theoretical knowledge at home through a MOOC platform, where they watch video lectures, take self-evaluation tests, and use other academic multimedia online resources. Afterwards, they have to attend in-class teaching, where they carry out other activities in order to interact with teachers and the rest of the students (discussing the videos, resolving doubts, doing practical exercises, etc.), trying to overcome the disadvantages of self-regulated learning. The results are obtained through the grades of the students and their assessment of the blended experience, based on an opinion survey conducted at the end of the course. The major findings of the study are the following: the percentage of students passing the subject has grown from 53% (average from 2011 to 2014, using the traditional learning methodology) to 76% (average from 2015 to 2018, using the blended methodology). The average grade has improved from 5.20±1.99 to 6.38±1.66. The results of the opinion survey indicate that most students preferred the blended methodology to traditional approaches and positively valued both courses. In fact, 69% of students felt 'quite' or 'very' satisfied with the classroom activities; 65% of students preferred the flipped-classroom methodology to traditional in-class lectures; and, finally, 79% said they were 'quite' or 'very' satisfied with the course in general. The main conclusions of the experience are the improvement in academic results, as well as the highly satisfactory assessments obtained in the opinion surveys. The results confirm the huge potential of combining MOOCs in formal undergraduate studies with on-campus learning activities. Nevertheless, the results in terms of students' participation and follow-up have a wide margin for improvement. The method is highly demanding for both students and teachers. As a recommendation, students must perform the assigned tasks with perseverance, every week, in order to take advantage of the face-to-face classes. This perseverance is precisely what needs to be promoted among students, because it clearly brings about an improvement in learning.
Keywords: blended learning, educational paradigm, flipped classroom, flipped learning technologies, lessons learned, massive online open course, MOOC, teacher roles through technology
Procedia PDF Downloads 18051 Effectiveness of Participatory Ergonomic Education on Pain Due to Work Related Musculoskeletal Disorders in Food Processing Industrial Workers
Authors: Salima Bijapuri, Shweta Bhatbolan, Sejalben Patel
Abstract:
Ergonomics concerns the fitting of the environment and the equipment to the worker. Ergonomic principles can be employed in different dimensions of the industrial sector. Participation of all the stakeholders is the key to the formulation of a multifaceted and comprehensive approach to lessen the burden of occupational hazards. Taking responsibility for one’s own work activities, by acquiring sufficient knowledge and the potential to influence practices and outcomes, is the basis of participatory ergonomics and even hastens the identification of workplace hazards. The study aimed to assess the effectiveness of participatory ergonomics in the management of work-related musculoskeletal disorders (WRMSDs). Method: A mega kitchen was identified in a twin city of Karnataka, India. Consent was obtained, and workers were screened using observational methods. Kitchen work was structured to include different tasks, which included preparation, cooking, distributing, and serving food, packing food to be delivered to schools, dishwashing, cleaning and maintenance of kitchen and equipment, and receiving and storing raw material. A total of 100 workers attended the education session on participatory ergonomics and its role in implementing correct ergonomic practices, thus preventing WRMSDs. Demographic details and baseline data on related musculoskeletal pain and discomfort were collected pre- and post-study using the Nordic pain questionnaire and VAS score. Monthly visits were made, and the education sessions were reiterated on each visit, reminding and correcting each worker and solving problems as they arose. After 9 months and a total of 4 such education sessions, the post-education data were collected. The software SPSS 20 was used to analyse the collected data. Results: Depending on availability and feasibility, the majority of workers (78%) participated in the intervention workshops, which were arranged four times. The average age of the participants was 39 years. Female participants made up 79.49% of the sample and males 20.51%. The Nordic Musculoskeletal Questionnaire (NMQ) showed that knee pain was the most commonly reported complaint (62%) over the last 12 months, with a mean VAS of 6.27, followed by low back pain. Post-intervention, the mean VAS score was reduced significantly to 2.38. The comparison of pre-post scores was made using the Wilcoxon matched-pairs test. Upon enquiry, it was found that the participants had learned the importance of applying ergonomics at their workplace, which in turn enabled them to handle problems arising at work on their own and with self-confidence. Conclusion: Participatory ergonomics proved effective with the mega kitchen workers, and it is a feasible and practical approach. The study site had the advantage of sophisticated, ergonomically designed workstations; what was most needed was the education and practical knowledge to use them. There was a significant reduction in VAS scores with the implementation of changes in working style, and the knowledge of ergonomics helped to decrease physical load and improve musculoskeletal health.Keywords: ergonomic awareness session, mega kitchen, participatory ergonomics, work related musculoskeletal disorders
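The pre/post comparison can be reproduced with standard tools. A minimal sketch of the Wilcoxon matched-pairs test used in the study, with hypothetical individual VAS scores (only the group means, 6.27 and 2.38, are reported in the abstract):

```python
# Wilcoxon matched-pairs test on pre/post VAS pain scores, as used in the study.
# The individual scores below are illustrative, not the study's raw data.
from scipy.stats import wilcoxon

vas_pre  = [7, 6, 8, 5, 6, 7, 6, 5]   # hypothetical pre-intervention VAS scores
vas_post = [3, 2, 3, 1, 2, 3, 2, 3]   # hypothetical post-intervention VAS scores

stat, p_value = wilcoxon(vas_pre, vas_post)
print(f"W = {stat}, p = {p_value:.4f}")  # small p -> significant reduction in pain
```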
Procedia PDF Downloads 13850 Multi-Modality Brain Stimulation: A Treatment Protocol for Tinnitus
Authors: Prajakta Patil, Yash Huzurbazar, Abhijeet Shinde
Abstract:
Aim: To develop a treatment protocol for the management of tinnitus through multi-modality brain stimulation. Methodology: The present study included 33 adults with unilateral (31 subjects) or bilateral (2 subjects) chronic tinnitus, with or without hearing loss, independent of etiology. The treatment protocol included 5 consecutive sessions with a follow-up of 6 months. Each session was divided into 3 parts: • Pre-treatment: a) Informed consent b) Pitch and loudness matching. • Treatment: Bimanual paper-pen task with tinnitus masking for 30 minutes. • Post-treatment: a) Pitch and loudness matching b) Directive counseling and obtaining feedback. The paper-pen task was performed bimanually and involved carrying out two different writing activities in different contexts. The level of difficulty of the activities was increased in successive sessions. Narrowband noise at the same frequency as the tinnitus was simultaneously presented in the affected ear at 10 dB SL (relative to the tinnitus) for 30 minutes. Result: Tinnitus perception disappeared in 4 subjects, while in the remaining subjects it was reduced to an intensity at which it no longer troubled them, without causing residual facilitation. Across subjects, tinnitus intensity decreased by 45 dB on average, and in a few subjects by more than 45 dB. The approach resulted in statistically significant reductions in Tinnitus Functional Index and Tinnitus Handicap Inventory scores. The results are consistent with the pre- and post-treatment Tinnitus Handicap Inventory scores, which dropped from 90% to 0%. Discussion: Brain mapping (qEEG) studies report multiple parallel, overlapping neural subnetworks in the non-auditory areas of the brain that exhibit the abnormal, constant, and spontaneous neural activity involved in tinnitus perception, with each subnetwork and area reflecting a specific aspect of the tinnitus percept. The paper-pen task and directive counseling were designed and delivered, respectively, in a way assumed to induce normal, rhythmically constant, and premeditated neural activity and to mask the abnormal, constant, and spontaneous neural activity in the above-mentioned subnetworks and non-auditory areas. Counseling was focused on breaking the vicious cycle causing and maintaining the presence of tinnitus. Diverting auditory attention alone is insufficient to reduce the perception of tinnitus. Conscious awareness of tinnitus can be suppressed when individuals engage in cognitively demanding tasks of a non-auditory nature, such as the paper-pen task used in the present study. Carrying out this task requires selective, divided, sustained, simultaneous, and split attention acting cumulatively. The bimanual paper-pen task represents a top-down activity that draws on the brain’s ability to selectively attend to the writing activity as the relevant stimulus and to ignore the tinnitus as the irrelevant stimulus. Conclusion: The study suggests that this novel treatment approach is cost-effective, time-saving, and efficient in eliminating tinnitus or reducing its intensity to a negligible level, thereby eliminating negative reactions towards it.Keywords: multi-modality brain stimulation, neural subnetworks, non-auditory areas, paper-pen task, top-down activity
Procedia PDF Downloads 14749 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education
Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke
Abstract:
In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor, which interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate the ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, as well as their love of learning and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discourse on the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester, students engaged in reflective report writing, sharing their experiences and evaluating the intervention’s effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning. Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and towards improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage, and attitudes towards learning among undergraduate and graduate university students. The findings of this study aim to promote profound contemplation, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.Keywords: deep work, flow, higher education, lifelong learning, love of learning
Procedia PDF Downloads 6848 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such network systems can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect damage information and recover the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network’s capability to recover from disasters is resilience. To enhance highway network resilience, numerous studies have developed repair scheduling methods for the prioritization of bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can be commenced as soon as the damage data of some bridges that are crucial to emergency response are obtained. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur: the blockage of highways due to repair activities can affect inspection routes, which in turn have an impact on emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome these deficiencies, this study develops a routing and scheduling model for EBIR that accounts for real-time inspection-repair interactions to maximize highway network resilience. A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should be matched: network resilience no longer improves once a unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large, complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management.
With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and make decisions on proper inspection and repair strategies to improve highway network resilience.Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty
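The hybrid genetic algorithm itself is only described at a high level. As an illustrative sketch of the core idea, a heuristic solution (repair the most critical bridges first) can seed the initial population of a permutation GA; the bridge data and resilience-based fitness below are simplified placeholders, not the paper's model:

```python
# Sketch of a hybrid GA for ordering bridge-repair tasks: a heuristic individual
# (sort by criticality) seeds the population to accelerate evolution.
import random

bridges = {f"B{i}": random.uniform(0.1, 1.0) for i in range(10)}  # id -> criticality

def fitness(order):
    # Placeholder resilience proxy: reward repairing critical bridges early.
    return sum(bridges[b] / (t + 1) for t, b in enumerate(order))

def order_crossover(p1, p2):
    # Keep a slice of parent 1, fill the rest in parent 2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    return [g if g is not None else rest.pop(0) for g in child]

def mutate(order, rate=0.1):
    if random.random() < rate:               # occasional swap mutation
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

ids = list(bridges)
heuristic = sorted(ids, key=bridges.get, reverse=True)   # heuristic seed
population = [heuristic] + [random.sample(ids, len(ids)) for _ in range(29)]

for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                             # truncation selection
    children = [mutate(order_crossover(*random.sample(parents, 2)))
                for _ in range(20)]
    population = parents + children

print("best repair order:", max(population, key=fitness))
```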
Procedia PDF Downloads 10947 Fully Autonomous Vertical Farm to Increase Crop Production
Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek
Abstract:
New technologies in agriculture are opening new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap over traditional agricultural techniques. In particular, the indoor farming sector will be the one that benefits most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities are based both on hardware development, such as automatic tools to perform different operations on soil and plants, and on research into the extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project of a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized, and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots, with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests to verify the real effectiveness of the systems created and to evaluate any weaknesses, so as to proceed with targeted development. To these aims, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepens the knowledge of these types of crops and guarantees the traceability of every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.Keywords: automation, vertical farming, robot, artificial intelligence, vision, control
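The abstract gives no control logic, but the decision-support layer it describes could, in the simplest reading, map per-plant sensor readings to robot tasks. A purely illustrative sketch, with hypothetical thresholds and task names:

```python
# Hypothetical rule layer for a vertical-farm living lab: map per-plant sensor
# readings to robot tasks. Thresholds and task names are illustrative only.
from dataclasses import dataclass

@dataclass
class PlantState:
    plant_id: str
    soil_moisture: float    # fraction, 0..1
    canopy_height_cm: float

def plan_tasks(state: PlantState) -> list[str]:
    tasks = []
    if state.soil_moisture < 0.30:        # assumed irrigation threshold
        tasks.append(f"irrigate {state.plant_id}")
    if state.canopy_height_cm > 25.0:     # assumed harvest-readiness threshold
        tasks.append(f"schedule-harvest {state.plant_id}")
    return tasks

print(plan_tasks(PlantState("row3-pot7", soil_moisture=0.22, canopy_height_cm=27.5)))
```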
Procedia PDF Downloads 3946 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals
Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen
Abstract:
Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have a growing number and complexity of technological devices, and the population consumes and demands services derived from technology. Nursing education plans have also included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended-learning teaching with a group of graduate nurses through official training within a Master’s Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. Assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one had a degree in Computer Engineering. Two students had a nursing specialty degree obtained through residency and two through equivalent exceptional recognition. Except for the engineer, no subject had previously received training in this regard. All students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and attended at least one tutorial. The maximum was three tutorials, totalling one hour. Among the material available for consultation was an example document drawn up based on the IEEE Standard, on an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension shown in elaborating the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality, and correctness of the proposed solution, and the validity of the application for clinical practice in care. The average grade was 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among students. Examples of care areas selected are care plans, family and community health, delivery care, administration, and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence, and the duty to participate in the process of construction of technological tools that are going to impact the care of people, families, and communities.Keywords: care, learning, nursing, technology
Procedia PDF Downloads 13645 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand of Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare the predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way through the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights. We show the tools we use, including data injection, streaming data processing, machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
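Two of the shared experiences, labeling run-to-failure data and evaluating with inter-dependent records, can be made concrete. A minimal sketch on a turbofan-style table (column names assumed), labeling each cycle as 'fails within w cycles' and splitting by engine unit so records from one engine never straddle train and test:

```python
# Sketch: label run-to-failure data for failure prediction and split by unit so
# correlated records from one engine never appear in both train and test sets.
# Column names (unit, cycle) are assumed to match a turbofan-style dataset.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({                     # toy stand-in for the engine dataset
    "unit":  [1, 1, 1, 1, 2, 2, 2],
    "cycle": [1, 2, 3, 4, 1, 2, 3],
})

# Remaining useful life = last observed cycle of the unit minus current cycle.
df["rul"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]
w = 2                                        # prediction horizon (assumed)
df["label"] = (df["rul"] <= w).astype(int)   # 1 = fails within w cycles

splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["unit"]))
print("train units:", df.loc[train_idx, "unit"].unique(),
      "test units:", df.loc[test_idx, "unit"].unique())
```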
Procedia PDF Downloads 38644 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for capturing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g. physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users’ context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations capture the current state of a universe of discourse at a given point in time while neglecting transitions between states. The AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g. user activities) help cognitive, rule-based agents to reason and make decisions in order to help users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A certain situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent has an objective to achieve, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user’s context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device-, service-, and user-profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.Keywords: ambient intelligence, machine learning, semantic web, software agents
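The abstract does not specify the estimator; one simple reading of 'computing the probability of matching rules' is a Beta-Bernoulli update of each rule's confidence from observed user reactions, starting from an expert-defined prior. A minimal sketch under that assumption, not the paper's actual method:

```python
# Hypothetical Beta-Bernoulli update of a rule's confidence: each time the rule
# fires, the user's reaction (accepted / overridden) updates its probability,
# letting expert-defined priors adapt to observed context history.

class RuleConfidence:
    def __init__(self, prior_success=1, prior_failure=1):
        self.alpha = prior_success   # pseudo-counts from expert knowledge
        self.beta = prior_failure

    def observe(self, accepted: bool):
        if accepted:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def probability(self):
        return self.alpha / (self.alpha + self.beta)

rule = RuleConfidence(prior_success=8, prior_failure=2)  # expert prior: 0.8
for accepted in [True, False, True, True]:               # observed user actions
    rule.observe(accepted)
print(f"updated confidence: {rule.probability:.2f}")     # adapt the model if it drifts
```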
Procedia PDF Downloads 28143 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (Central Processing Unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (Elliptic-Curve Cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (Direct Memory Access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and fewer cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
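The claimed gains from the reduced instruction count and CPI follow the standard processor performance equation T = IC × CPI / f. A small worked sketch with hypothetical numbers (the abstract reports no concrete instruction counts or CPI values):

```python
# Illustrative use of T = IC * CPI / f to compare architectures on a crypto kernel.
# Instruction counts, CPI values, and clock rate are hypothetical, not measured.
f_hz = 100e6                          # assumed 100 MHz clock for both designs

ic_64,  cpi_64  = 4000, 1.2           # hypothetical 64-bit baseline
ic_128, cpi_128 = 2200, 1.1           # wider datapath -> fewer instructions

t_64  = ic_64  * cpi_64  / f_hz       # execution time, seconds
t_128 = ic_128 * cpi_128 / f_hz
print(f"speedup: {t_64 / t_128:.2f}x")  # ~1.98x for these assumed numbers
```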
Procedia PDF Downloads 7042 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well-developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focusses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, this may require expert input to feature-match objects. The third stage, radiological mapping, similarly characterizes the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to produce the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, cost-effective, and safer.Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
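The final stage reduces to a specific-activity computation once the object mass (stage 2) and localized activity (stages 3-4) are fused. A minimal sketch; the category thresholds here are placeholders, not regulatory limits, since actual waste boundaries depend on the radionuclide mix:

```python
# Sketch of the final waste-classification step: fuse the estimated object mass
# (stage 2) with the localized activity (stages 3-4) into Bq/kg and bin it.
# Threshold values below are assumed placeholders, not regulatory limits.
def classify_waste(activity_bq: float, mass_kg: float) -> str:
    specific_activity = activity_bq / mass_kg      # Bq/kg
    if specific_activity < 1e4:                    # assumed LLW boundary
        return f"LLW ({specific_activity:.0f} Bq/kg)"
    if specific_activity < 1e9:                    # assumed ILW boundary
        return f"ILW ({specific_activity:.0f} Bq/kg)"
    return f"HLW ({specific_activity:.0f} Bq/kg)"

print(classify_waste(activity_bq=3.2e6, mass_kg=12.5))  # e.g. a detected pipe section
```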
Procedia PDF Downloads 20041 A Randomised Simulation Study to Assess the Impact of a Focussed Crew Resource Management Course on UK Medical Students
Authors: S. MacDougall-Davis, S. Wysling, R. Willmore
Abstract:
Background: The application of good non-technical skills (NTS), also known as crew resource management (CRM), is central to the delivery of safe, effective healthcare. The authors have been running remote trauma courses for over 10 years, primarily focussing on developing participants’ CRM in time-critical, high-stress clinical situations. The course has undergone an iterative process of refinement over the past 10 years. We employ a number of experiential learning techniques for improving CRM, including small group workshops, military command tasks, high-fidelity simulations with reflective debriefs, and a ‘flipped classroom’, where participants are asked to create their own simulations and assess and debrief their colleagues’ CRM. We created a randomised simulation study to assess the impact of our course on UK medical students’ CRM at both an individual and a team level. Methods: Sixteen students took part. Four clinical scenarios were devised, designed to be of similar urgency and complexity. Professional moulage effects and experienced clinical actors were used to increase fidelity and to further simulate high-stress environments. Participants were block randomised into teams of 4; each team was randomly assigned to one pre-course simulation. They then undertook our 5-day remote trauma CRM course. Post-course, students were re-randomised into four new teams; each was randomly assigned to a post-course simulation. All simulations were videoed. The footage was reviewed by two independent CRM-trained assessors, who were blinded to the before/after status of the simulations. Assessors used the internationally validated Team Emergency Assessment Measure (TEAM) to evaluate key areas of team performance, as well as a global outcome rating. Prior to the study, assessors had scored two unrelated scenarios using the same assessment tool, demonstrating 89% concordance. Participants also completed pre- and post-course questionnaires. Likert scales were used to rate individuals’ perceived NTS ability and their confidence to work in a team in time-critical, high-stress situations. Results: Following participation in the course, a significant improvement in CRM was observed in all areas of team performance. Furthermore, the global outcome rating for team performance was markedly improved (40-70%; mean 55%), thus demonstrating an impact at Level 4 of Kirkpatrick’s hierarchy. At an individual level, participants’ self-perceived CRM improved markedly after the course (35-70% absolute improvement; mean 55%), as did their confidence to work in a team in high-stress situations. Conclusion: Our study demonstrates that with a short, cost-effective course, using easily reproducible teaching sessions, it is possible to significantly improve participants’ CRM skills, both at an individual and, perhaps more importantly, at a team level. The successful functioning of multi-disciplinary teams is vital in a healthcare setting, particularly in high-stress, time-critical situations. Good CRM is of paramount importance in these scenarios. The authors believe that these concepts should be introduced from the earliest stages of medical education, thus promoting a culture of effective CRM and embedding an early appreciation of the importance of these skills in enabling safe and effective healthcare.Keywords: crew resource management, non-technical skills, training, simulation
Procedia PDF Downloads 13440 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening
Authors: Partha Saha, Uttam Kumar Banerjee
Abstract:
Maternal and Child Healthcare (MCH) has always been regarded as one of the important issues globally. Reduction of maternal and child mortality rates and increased healthcare service coverage were declared among the targets of the Millennium Development Goals until 2015 and thereafter as an important component of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not matched the expected levels. Progress in both maternal and child mortality rates has been monitored by several researchers. Each of these studies found that fewer than 26% of low-income and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG4. As of 2011, the average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9%, respectively, whereas minimum annual rates of 4.4% and 5.5% are needed to achieve the targets. In spite of proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge discovery based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is essential; in most LMICs, resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource plans for combating issues like high inequity and low coverage in MCH. This system would help healthcare managers accomplish four tasks: a) comprehending region-wise conditions of variables related to MCH, b) identifying relationships among variables, c) segmenting regions based on variable status, and d) finding segment-wise key influential variables that have a major impact on healthcare indicators. The system development process was divided into three phases: i) identifying contemporary issues related to MCH services and policy making; ii) developing the system; and iii) verifying and validating the system. More than 90 variables under three categories, namely a) educational, social, and economic parameters; b) MCH interventions; and c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed under the system. The first module has been designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the previously mentioned three categories, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research. Data from 601 districts of India have been used to inspect the effectiveness of the developed modules. The system has been developed by incorporating different statistical and data mining techniques on a web platform. Policy makers would be able to generate different scenarios from the system before drawing any inference, aided by its interactive capability.Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries
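The fourth module's segmentation can be prototyped with standard clustering over standardized district indicators. A minimal sketch; the indicator names and values are invented stand-ins for the 90-plus variables in the system:

```python
# Sketch of district segmentation (module 4): cluster regions on standardized
# MCH-related indicators. Indicator names and values are illustrative only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

districts = pd.DataFrame({
    "female_literacy":        [45.0, 72.0, 60.0, 88.0, 52.0],  # percent
    "anc_coverage":           [38.0, 70.0, 55.0, 91.0, 44.0],  # antenatal care %
    "institutional_delivery": [30.0, 65.0, 50.0, 93.0, 41.0],  # percent
})

X = StandardScaler().fit_transform(districts)                  # scale before clustering
districts["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(districts)  # segment labels drive segment-wise analysis in module 5
```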
Procedia PDF Downloads 25839 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessity given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio recordings and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, with the number of hidden layers increased during training to five. A refinement step, consisting of updating the weight matrices and bias terms via Stochastic Gradient Descent (SGD) training, was also performed. The objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three lattice-based confidence scores (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
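The retraining loop generalizes beyond ASR. A minimal self-training sketch in which a small MLP and a probability threshold stand in for the acoustic DNN and the lattice-based confidence scores; the data here are random placeholders:

```python
# Self-training sketch mirroring the paper's loop: train a seed model, decode
# unlabeled data, keep only high-confidence outputs, retrain. The MLP and the
# probability threshold stand in for the acoustic DNN and lattice confidences.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)  # labeled set
X_unlab = rng.normal(size=(500, 8))                                # unlabeled set

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_lab, y_lab)                      # (a) seed model

proba = model.predict_proba(X_unlab)         # (b) decode unlabeled data
confident = proba.max(axis=1) > 0.9          # (c) confidence-based selection
X_new = np.vstack([X_lab, X_unlab[confident]])
y_new = np.concatenate([y_lab, proba[confident].argmax(axis=1)])

model.fit(X_new, y_new)                      # retrain the seed model
print(f"added {confident.sum()} pseudo-labeled examples")
```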
Procedia PDF Downloads 33938 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, and can therefore be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there won't be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit the precision of floating point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
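The weighted-difference change detection from part 1 is straightforward to sketch. A minimal numpy version, assuming grayscale frames and a uniform 3×3 kernel (the protocol tunes the kernel weights to the image type):

```python
# Sketch of the protocol's change detection: pixels whose kernel-weighted
# difference from the previous frame falls below a threshold are reused, so
# only changed pixels need transmitting. Kernel and threshold are assumed.
import numpy as np
from scipy.ndimage import convolve

def changed_mask(prev: np.ndarray, curr: np.ndarray, thresh: float = 8.0) -> np.ndarray:
    kernel = np.full((3, 3), 1 / 9.0)        # uniform weights (tunable per image type)
    diff = convolve(np.abs(curr.astype(float) - prev.astype(float)), kernel)
    return diff > thresh                     # True where pixels are considered changed

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200                         # simulate a moving object in the frame
mask = changed_mask(prev, curr)
print(f"transmit {mask.sum()} of {mask.size} pixels")  # rest reused from prev frame
```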
Procedia PDF Downloads 74