Search results for: computer aided teaching
215 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism
Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn
Abstract:
Introduction: Rehabilitation of children with autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with Autistic Spectrum Disorders (ASD). Existing rehabilitation approaches in the treatment of children with autism improve their medico-social and socio-psychological adjustment. Experience treating different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the need for a complex intensive approach to this condition and for wider implementation of the Kozyavkin method in the treatment of children with ASD. Methods: 19 children aged from 3 to 14 years were examined. They were diagnosed with autism (F84.0) and comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent rehabilitation at ICR for two weeks using the INRS approach. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization, and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games, and other methods for improving the motivation and social integration of the child. Efficacy was estimated using parent questionnaires administered twice: at the onset of the INRS rehabilitation course and two weeks afterward. A standardized tool, the Autism Treatment Evaluation Checklist (ATEC), was used to assess the efficacy of rehabilitation of autistic children at ICR. This scale was selected because any rehabilitation approach for a child with autism can be assessed with it. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, revealing severe communication, speech, socialization, and behavioral impairments in the examined children. After the end of the rehabilitation course, the mean score was 56.5±6.7, indicating positive dynamics in comparison with the onset of rehabilitation. Overall, the psychoemotional state improved in 90% of cases. The most significant changes occurred in speech (16.5 before and 14.5 after treatment), socialization (15.1 before and 12.5 after), and behavior (20.1 before and 17.4 after). Conclusion: A reduction of autistic symptoms was noted as a result of the INRS rehabilitation course. In particular, improvements in speech were observed (children began to pronounce new syllables and words), signs of destructiveness decreased somewhat, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect of this work is a further, deeper examination of INRS according to evidence-based medicine standards and an assessment of its usefulness in the treatment of autism and ASD.
Keywords: intensive neurophysiological rehabilitation system (INRS), International Clinic of Rehabilitation, ASD, rehabilitation
Procedia PDF Downloads 169
214 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours
Authors: Fikret Yalcinkaya, Hamza Unsal
Abstract:
To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviors they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF), and Hindmarsh-Rose (HR), all based on systems of first-order differential equations, are discussed and compared. First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the MATLAB environment. Then, by selecting appropriate parameters, the models were visually examined in MATLAB with the aim of demonstrating which model can reproduce well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator, and integrator. As a result, the Izhikevich model was shown to produce regular spiking, continuous bursting, intrinsically bursting, thalamo-cortical, low-threshold spiking, and resonator behaviours. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular firing, adaptive firing, initially bursting firing, regular bursting firing, delayed firing, delayed regular bursting firing, transient firing, and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting, and chaotic. From these results, the Izhikevich cell model may be preferred for its ability to reflect the true behavior of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviour of the Hindmarsh-Rose neuron model, like that of some chaotic systems, is thought to be usable in many scientific and engineering applications such as physics, secure communication, and signal processing.
Keywords: Izhikevich, adaptive exponential integrate-and-fire, Hindmarsh-Rose, biological neuron behaviours, spiking neuron models
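To make the comparison concrete, below is a minimal Python sketch of the first of the three models (the study itself worked in MATLAB); the regular-spiking parameter set is the standard one for the Izhikevich model, while the step size and input current are illustrative choices.

```python
import numpy as np

# Izhikevich model: v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u),
# with reset: if v >= 30 mV then v <- c, u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # standard regular-spiking parameters
dt, T, I = 0.25, 1000.0, 10.0        # step (ms), duration (ms), input current

v, u = -65.0, b * -65.0
spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)   # forward Euler
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike detected: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms (tonic spiking expected)")
```

Swapping in other (a, b, c, d) quadruples reproduces the other firing classes the abstract lists, which is precisely why the model is attractive for large-scale simulations.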
Procedia PDF Downloads 183
213 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops
Authors: Eric Bates
Abstract:
Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is a Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first-year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take their first step into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first-year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a sense of group coherence. Two separate modules – a Research module and a Communications module – delivered by the researcher were linked across two semesters. Students were allocated into random groups, and each group was given a topic to be researched. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output from this was a document that formed part of the submission for the Research module. In the second semester, the groups then had to make a presentation of their findings in which each student spoke for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practise their presentation skills. These presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.
Keywords: first year experience, student retention, group work, presentation workshops
Procedia PDF Downloads 229
212 Examining Employee Social Intrapreneurial Behaviour (ESIB) in Kuwait: Pilot Study
Authors: Ardita Malaj, Ahmad R. Alsaber, Bedour Alboloushi, Anwaar Alkandari
Abstract:
Organizations worldwide, particularly in Kuwait, are concerned with implementing a progressive workplace culture and fostering social innovation behaviours. The main aim of this research is to examine and establish a thorough comprehension of the relationship between an innovative organizational culture, employee intrapreneurial behaviour, authentic leadership, employee job satisfaction, and employee job commitment in the manufacturing sector of Kuwait, a developed economy. The literature review analyses the core concepts and their related areas by scrutinizing their definitions, dimensions, and importance, and this examination of relevant research uncovered major gaps in understanding. This study examines the reliability and validity of a newly developed questionnaire in order to establish its suitability for a large-scale investigation. A preliminary investigation was carried out with a sample of 36 respondents selected randomly from a pool of 223. SPSS was utilized to calculate percentages for the participants' demographic characteristics, assess the reliability of the measurements, evaluate internal consistency, validate all scales, and determine Pearson's correlations. The study's results indicated that the majority of participants were male (66.7%), aged between 35 and 44 (38.9%), and possessed a bachelor's degree (58.3%). Approximately 94.4% of the participants were employed full-time. 72.2% of the participants are employed in the electrical, computer, and ICT sector, whilst 8.3% work in the metal industry. Of all the departments, the human resource department had the highest level of engagement, making up 13.9% of the total. Most participants (36.1%) possessed intermediate or advanced levels of experience, whilst 21% were classified as entry-level. Furthermore, 8.3% of individuals were categorized as first-level management, 22.2% as middle management, and 16.7% as executive or senior management. Around 19.4% of the participants have over a decade of professional experience. The Pearson's correlation coefficients for all five components range from 0.4009 to 0.7183. The results indicate that all elements of the questionnaire were effectively verified, with Cronbach's alpha predominantly exceeding 0.6, the criterion commonly accepted by researchers. Therefore, the work on the larger scope of testing and analysis could continue.
Keywords: pilot study, ESIB, innovative organizational culture, Kuwait, validation
Procedia PDF Downloads 34
211 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance Based Design for near Field Earthquakes
Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi
Abstract:
Non-linear dynamic time history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. Commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation, and 'pinching' under cyclic load reversals in the inelastic range of behavior. In this scenario, pushover or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis; pushover analysis thus offers a practical and efficient alternative for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry-infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures that implement non-linear static (pushover) analysis in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance-based design of masonry-infilled R/C frames for near-field earthquake ground motions.
Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes
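For readers unfamiliar with the capacity spectrum procedure, the standard ATC-40-style conversion of a pushover curve into spectral coordinates is sketched below; this is the generic formulation of the method, not a formula taken from the paper.

```latex
% Pushover curve (base shear V vs. roof displacement) -> capacity spectrum
S_a = \frac{V / W}{\alpha_1},
\qquad
S_d = \frac{\Delta_{\mathrm{roof}}}{\Gamma_1 \, \phi_{1,\mathrm{roof}}}
% W: seismic weight, alpha_1: first-mode mass coefficient,
% Gamma_1: first-mode participation factor,
% phi_{1,roof}: roof ordinate of the first mode shape
```

Seismic demand is then read at the intersection of this capacity spectrum with the appropriately damped response spectrum.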
Procedia PDF Downloads 405
210 Predicting Personality and Psychological Distress Using Natural Language Processing
Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi
Abstract:
Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one's personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop interview (semi-structured) and open-ended questions for FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions and self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions measuring the FFM personality constructs and psychological distress, in order to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted with fifty-nine native Korean adults to acquire personality-related text data from interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals' FFM of personality and level of psychological distress (phase 2).
Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality
Procedia PDF Downloads 79
209 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education
Authors: Eva Šmelová, Alena Berčíková
Abstract:
The issue of school maturity and the readiness of a child for a successful start of compulsory education is one of the long-term monitored areas, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest a lack of a broader overview of indicators of the current level of children's school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic), focusing on children's maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The present study follows directly from this previous research, and its objective is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: During the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool – a record sheet with 11 items, the social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year. The degree of achievement and the intensity of the skills were assessed for each child using an assessment scale. In the research, the authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education). The effect of these independent variables was monitored using 11 dependent variables, represented by the results achieved in the selected social skills. Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc. Statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA: StatSoft STATISTICA CR, Cz (a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.
Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills
Procedia PDF Downloads 251
208 Seismic Impact and Design on Buried Pipelines
Authors: T. Schmitt, J. Rosin, C. Butenweg
Abstract:
Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety but in particular for maintaining the supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems. After the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper addresses specific issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches, and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper considers the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides, and seismic soil compaction. The presented model can also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of several parameters such as the incoming wave angle, wave velocity, soil depth, and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated affecting the pipeline pointwise, independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems, in which expansion bends are arranged at regular intervals to accommodate movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Kármán flexibility factors, as well as the stress intensification factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
Keywords: buried pipeline, earthquake, seismic impact, transient displacement
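As background to the normative strain criteria mentioned above, a widely used first approximation (attributed to Newmark) relates peak ground motion to pipe strain for a wave travelling along the pipe axis; the expressions below are this generic approximation, not results from the paper's finite element model.

```latex
% Newmark-type estimates for a seismic wave travelling along the pipe axis
\varepsilon_a = \frac{v_{\max}}{c},
\qquad
\kappa = \frac{a_{\max}}{c^{2}}
% epsilon_a: axial ground strain transferred to the pipe (upper bound),
% kappa: bending curvature,
% v_max: peak ground velocity, a_max: peak ground acceleration,
% c: apparent propagation velocity of the wave along the pipeline
```

Soil springs of the kind used in the paper's model reduce these upper-bound strains wherever the soil-pipe bond slips.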
Procedia PDF Downloads 188
207 An Analysis of Gamification in the Post-Secondary Classroom
Authors: F. Saccucci
Abstract:
Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and tied to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as of a previous study in which in-class testing, using paired t-tests, showed that gamification significantly improved the students' understanding of subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member, and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly selected into groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; it was therefore an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the students' view, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that gamification should be attempted in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. The follow-up study also hopes to determine at what incremental level of time invested in classroom gamification no material incremental benefits accrue to the student, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process added to their curricular knowledge.
Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary
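Since the earlier study cited above relied on paired t-tests, a minimal sketch of that computation in Python, with invented pre/post quiz scores, may be useful.

```python
from scipy import stats

# Hypothetical pre/post quiz scores for the same students (paired design)
pre  = [55, 62, 48, 70, 66, 59, 73, 51]
post = [63, 68, 55, 74, 72, 66, 80, 58]

# Paired (dependent-samples) t-test: did scores change after the game?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant gain
```

The pairing matters: each student serves as their own control, so the test compares within-student gains rather than two independent groups.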
Procedia PDF Downloads 212
206 Consumer Knowledge and Behavior in the Aspect of Food Waste
Authors: Katarzyna Neffe-Skocinska, Marzena Tomaszewska, Beata Bilska, Dorota Zielinska, Monika Trzaskowska, Anna Lepecka, Danuta Kolozyn-Krajewska
Abstract:
The aim of the study was to assess Polish consumer behavior towards food waste, including knowledge of the information on food labels. The survey was carried out using the CAPI (computer-assisted personal interview) method, which involves interviewing the respondent using mobile devices. The research group was a sample representative for Poland in terms of demographic variables: gender, age, and place of residence. A total of 1,115 respondents participated in the study (51.1% were women and 48.9% were men). The questionnaire included questions on five thematic aspects: 1. General knowledge and sources of information on the phenomenon of food waste; 2. Consumption of food after the date of minimum durability; 3. The meaning of the phrase 'best before ...'; 4. Indication of the difference between the meanings of the phrases 'best before ...' and 'use by'; 5. Indication of products marked with the phrase 'best before ...'. It was found that every second surveyed Pole had encountered the topic of food waste (54.8%). Among the respondents, the most popular sources of information related to the research topic were television (89.4%), radio (26%), and the Internet (24%). Over a third of respondents declared that they consume food after the date of minimum durability. Only every tenth (9.8%) respondent pays no attention to the expiry date and type of consumed products (durable and perishable products). Only 39.8% of respondents correctly answered the question: How do you understand the phrase 'best before ...'? In the opinion of 42.8% of respondents, the phrases 'best before ...' and 'use by' mean the same thing, while 36% of them think differently; in addition, more than one-fifth of respondents could not answer the question. Among products carrying the 'best before ...' indication, more than 40% of the respondents chose both perishable products, e.g., yoghurts, and durable ones, e.g., groats. A slightly lower percentage of indications was recorded for flour (35.1%), sausage (32.8%), canned corn (31.8%), and eggs (25.0%). Based on this assessment of the behavior of Polish consumers towards the phenomenon of food waste, it can be concluded that respondents have elementary knowledge of the subject studied. Noteworthy is the appropriate conduct of most respondents in terms of observing the shelf life and dates of minimum durability of food products. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'social and economic development of Poland in the conditions of globalizing markets' – GOSPOSTRATEG – acronym PROM.
Keywords: food waste, shelf life, dates of durability, consumer knowledge and behavior
Procedia PDF Downloads 175
205 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure companies experience when their employees encounter globalization and decentralization pressures while VT performance is being monitored. Organizational performance is regularly limited by misalignment between the behavioral capabilities of a team's dispersed competences and knowledge capabilities, by the way trust issues interplay with and influence these VT dimensions, and by the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. A TMS can be understood as a composition of both a structural component residing in individual knowledge and a set of communication processes among individuals. Individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, the team composition may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174
204 Effects of Ubiquitous 360° Learning Environment on Clinical Histotechnology Competence
Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen
Abstract:
Rapid technological development and digitalization have also affected higher education. During the last twenty years, a multitude of electronic and mobile learning (e-learning, m-learning) platforms have been developed and have become prevalent in many universities and in all fields of education. Ubiquitous learning (u-learning) is not as widely known or used. Ubiquitous learning environments (ULEs) are the new era of computer-assisted learning. They are based on ubiquitous technology and computing that fuses the learner seamlessly into the learning process by using sensing technology such as tags, badges, or barcodes and smart devices like smartphones and tablets. A ULE combines real-life learning situations with virtual aspects and can be used flexibly at any time and in any place. The aim of this study was to assess the effects of a ubiquitous 360° learning environment on higher education students' clinical histotechnology competence. A quasi-experimental study design was used. 57 students in a biomedical laboratory science degree program were assigned voluntarily to an experimental group (n=29) and a control group (n=28). The experimental group studied via the ubiquitous 360° learning environment and the control group via a traditional web-based learning environment (WLE) in an 8-week educational intervention. The ubiquitous 360° learning environment (ULE) combined an authentic learning environment (a histotechnology laboratory), a digital environment (a virtual laboratory), a virtual microscope, multimedia learning content, interactive communication tools, an electronic library, and quick response barcodes placed in the authentic laboratory. The web-based learning environment contained equal content and components, with the exception of the use of a mobile device, interactive communication tools, and quick response barcodes. Competence in clinical histotechnology was assessed using a knowledge test and a self-report developed for this study. Data were collected electronically before and after the clinical histotechnology course and analysed using descriptive statistics. Differences within groups were identified using the Wilcoxon test and differences between groups using the Mann-Whitney U-test. Statistically significant differences were identified within both groups (p<0.001): competence scores in the post-test were higher than in the pre-test in both groups. Differences between groups were very small and not statistically significant. In this study, the learning environment was developed based on 360° technology and successfully implemented in a higher education context, and students' competence increased when the ubiquitous learning environment was used. In the future, ULEs can be used as learning management systems for any learning situation in the health sciences. More studies are needed to show differences between ULEs and WLEs.
Keywords: competence, higher education, histotechnology, ubiquitous learning, u-learning, 360°
Procedia PDF Downloads 286
203 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)
Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro
Abstract:
In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, late filing of reports, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information – internally and externally – in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where much of its promised benefit is available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is on the brink of an Internet revolution that will redefine the 'business reporting' paradigm, and notes that XBRL is already being deployed and used across the world. XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent: it will work on any current or recent operating system, on any computer, and interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now via the adoption of XBRL, which is changing the face of worldwide business reporting.
Keywords: XBRL, financial reporting, internet, internal and external reports
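To make the tagging idea concrete, the sketch below builds and queries a deliberately simplified XBRL-style instance using Python's standard library; the namespace and element names are illustrative, not taken from any official taxonomy.

```python
import xml.etree.ElementTree as ET

# A simplified XBRL-style instance: self-describing tags around discrete
# business facts. Namespace and element names are invented for illustration.
doc = """
<xbrl xmlns:acme="http://example.com/taxonomy">
  <acme:Revenue contextRef="FY2015" unitRef="USD" decimals="0">5000000</acme:Revenue>
  <acme:NetIncome contextRef="FY2015" unitRef="USD" decimals="0">750000</acme:NetIncome>
</xbrl>
"""

root = ET.fromstring(doc)
ns = {"acme": "http://example.com/taxonomy"}

# Because every fact is tagged, a consumer can extract a single figure
# without parsing or printing the surrounding report:
revenue = root.find("acme:Revenue", ns)
print(revenue.text, revenue.attrib["contextRef"])  # -> 5000000 FY2015
```

This is exactly the property the paper highlights: once tagged, facts become individually addressable by any software on any platform.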
Procedia PDF Downloads 288
202 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction
Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun
Abstract:
Usability has become a basic requirement of a product from the consumer's perspective; a product that fails this requirement ends up not being used by the customer. Identifying usability issues from the analysis of quantitative and qualitative data collected in usability testing and evaluation activities aids the process of product design, yet the lack of studies and research on analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. Analyzing qualitative text data has become feasible with the rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools. Therefore, this research aims to study the applicability of text processing algorithms to the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, which integrates the text processing algorithm, includes mapping comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of comment-vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: centroid comments of one cluster emphasized button positions, while centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons were designed as a single button, the participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
Keywords: usability, qualitative data, text-processing algorithm, natural language processing
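A minimal sketch of the comment-clustering step, assuming scikit-learn and toy comments standing in for the LG headset survey data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy usability comments standing in for the headset survey text data
comments = [
    "volume button is hard to reach",
    "music control button position feels awkward",
    "confused which button changes the volume",
    "single button mixes up play and volume functions",
]

# Map comments into a TF-IDF vector space, then cluster the vectors
vectors = TfidfVectorizer().fit_transform(comments)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

print(km.labels_)  # cluster id per comment; comments nearest each cluster
                   # centre play the role of the 'centroid comments' above
```

In the study proper, the clusters would then be cross-checked against the subject and product physical feature labels, as described in the procedure.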
Procedia PDF Downloads 285
201 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed truly productive and efficient methods for the investigation of the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching special types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 255
200 Parental Education on Early Childhood Development Using Mobile App and Website in China
Authors: Margo O'Sullivan, Xuefeng Chen, Qi Zhao, J. Jiang, Ning Fu
Abstract:
Early childhood development, or ECD, is about the 'whole child' – the physical, social and emotional, cognitive thinking, and language progression of each young individual. Overwhelming evidence is now available to support investment in early childhood development internationally: attendance at ECD leads to improved learning outcomes, improved completion and reduced dropout rates, and, most notably, Nobel Laureate Professor Heckman's finding that for every dollar invested, there is an economic return of up to 17%. Notably, ECD has been included in the 2015-2030 Sustainable Development Goals. The Government of China (GoC) has embraced this research, and in 2010 the State Council announced a focus on ECD, setting a target to provide access to ECD for 85% of 3-6 year olds by 2020; to date, progress has surpassed expectations and reached 70.4%. The GoC is also increasingly focusing on the even more critical 0-3 age group, when the plasticity of the brain is at its peak and neurons form connections as fast as 1,000 per second. Key to ECD are the parents and caregivers of young children, with parental education critical to fully exploiting the significant potential of children's early years. In China, with its vast numbers – one in seven pre-school age children in the world lives in China – the Ministry of Education (MoE) and the National Centre for Education Technology explored how best to provide parental education and key child-development knowledge to parents and caregivers. In response, the MoE and UNICEF created a resource for parenting information that began with a website in 2012, followed by the piloting of a kiosk service in 2013 for parents in remote areas without access to the internet, and then a mobile phone application in 2014. The resource includes 269 ECD messages and 200 micro-videos covering critical issues of early childhood development from birth to age 6 years: daily care, nutrition and feeding, disease prevention, immunization, development and education, and safety and protection. To date, there have been 397,599 unique views on the website, and data for the mobile app are currently being analysed (links: http://yuer.cbern.gov.cn/; app: https://appsto.re/cn/OiKPZ.i). This paper will explore the development of this resource, its use by parents and the public, efforts to assess its effectiveness in improving parenting and child development, and plans to roll out an updated version to all parents in 2016.
Keywords: early childhood development, mobile apps for education, parental education, China
Procedia PDF Downloads 227
199 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to elemental mapping of agricultural fields.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
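The map-making step amounts to pairing each computed elemental value with its GPS fix; a schematic Python sketch (the calibration constants and field readings are invented) shows the kind of record such software might write out for ArcGIS.

```python
import csv

# Hypothetical linear calibration: carbon weight % from net gamma peak counts
SLOPE, INTERCEPT = 2.1e-5, 0.05   # invented constants for illustration only

# (latitude, longitude, net carbon peak counts) logged during a scanning pass
readings = [(32.601, -85.352, 41000), (32.602, -85.351, 38500)]

with open("carbon_map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["lat", "lon", "carbon_pct"])  # columns a GIS can ingest
    for lat, lon, counts in readings:
        writer.writerow([lat, lon, round(SLOPE * counts + INTERCEPT, 3)])
```

A table of this shape imports directly into ArcGIS as point data for interpolation into the distribution maps described above.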
Procedia PDF Downloads 134
198 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The topic of the presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, and is called the 'hot part.' The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators); its accepted designation is the 'cold part.' Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the 'cold part' with high accuracy in a given frequency range, but only on the condition that the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of the engine with its 'hot part.' Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) prevent calculated results from being applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of the engine with its 'hot part' based on a combination of computational and experimental studies. The presented methodology includes several parts. The first is a finite element simulation of the 'cold part' of the exhaust system (taking into account the acoustic radiation impedance of the outlet pipe into open space), with the result in the form of the input impedance of the 'cold part.' The second is a finite element simulation of the 'hot part' (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the 'hot part.' The third part of the technique consists of mathematical processing of the results according to a proposed formula for summing the convergent series of multiple reflections of the acoustic signal between the 'cold part' and the 'hot part.' This is followed by a set of tests on an engine stand, with two high-temperature pressure sensors measuring pulsations in the nozzle between the 'hot part' and the 'cold part' of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the 'incident' and 'reflected' waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
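The separation of 'incident' and 'reflected' waves from two pressure sensors follows the standard two-sensor plane-wave decomposition; below is a sketch, assuming plane waves of the form p(x) = A e^{-jkx} + B e^{+jkx} and a known sensor spacing (the speed of sound and pressure values are illustrative, not measured data).

```python
import numpy as np

def decompose(p1, p2, freq, spacing, c=520.0):
    """Split two complex pressure spectra at one frequency into incident (A)
    and reflected (B) plane-wave amplitudes. p1 is at x=0, p2 at x=spacing.
    c: speed of sound in the hot exhaust gas, m/s (illustrative value)."""
    k = 2 * np.pi * freq / c                   # wavenumber
    e_p, e_m = np.exp(1j * k * spacing), np.exp(-1j * k * spacing)
    # Solve p1 = A + B, p2 = A*e_m + B*e_p for A and B.
    # Note: singular when spacing is a multiple of half a wavelength.
    A = (p1 * e_p - p2) / (e_p - e_m)          # incident wave amplitude
    B = (p2 - p1 * e_m) / (e_p - e_m)          # reflected wave amplitude
    return A, B

A, B = decompose(p1=1.0 + 0.2j, p2=0.6 - 0.4j, freq=100.0, spacing=0.05)
print(abs(B) / abs(A))  # reflection coefficient magnitude at this frequency
```

Repeating this per frequency bin over the measured spectra yields the source reflection data from which the engine-side impedance follows.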
Procedia PDF Downloads 59
197 Cricket Injury Surveillance by Mobile Application Technology on Smartphones
Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders
Abstract:
The demands on cricketers are increasing, with more matches being played in a shorter period of time and at a greater intensity. A ten-year report on injury incidence for Australian elite cricketers between the 2000-2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009-10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player-hours, reported [2]. Traditionally, injury surveillance has relied on the use of paper-based forms or complex computer software [3,4]. This makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for the Android platform, the world's most widely used smartphone platform. It was built with the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment), with C++ as the programming language on the Qt framework, which provides cross-platform capabilities that will allow this app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphical user interface) were developed using Justinmind Prototyper Pro Edition (ver. 6.1.0). CricPredict enables injury and training status to be recorded conveniently and immediately. When an injury is reported, automated follow-up questions cover the site of injury, nature of injury, mechanism of injury, initial treatment, referral, and action taken after injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can also be recorded by either the player or the coach as the number of balls bowled or played in a day. This is helpful in formulating injury rates and time lost due to injuries. All data are stored on a secure, password-protected data server. Outcomes and Significance: Use of CricPredict offers a simple, user-friendly tool for the coaching and medical staff associated with teams to predict, record, and report injuries. This system will assist teams in capturing injury data with ease, thus allowing a better understanding of injuries associated with cricket and potentially optimizing the performance of cricketers.
Keywords: injury, cricket, surveillance, smartphones, mobile
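The workload and injury records captured by the app feed directly into incidence figures of the kind quoted above; a one-line sketch of that arithmetic, with invented season totals:

```python
# Injury incidence per 100 player-hours = injuries / exposure-hours * 100
injuries, player_hours = 12, 414.0               # invented season totals
incidence = injuries / player_hours * 100
print(f"{incidence:.1f} injuries per 100 player-hours")  # -> 2.9
```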
Procedia PDF Downloads 459
196 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report progress and the current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. Standard rest and stress radionuclide doses of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin
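The flow reserve used throughout is the simple ratio defined in the abstract:

```latex
\mathrm{CFR} = \frac{\mathrm{MBF}_{\text{stress}}}{\mathrm{MBF}_{\text{rest}}}
% e.g. an assumed rest MBF of 0.9 ml/min/g combined with the reported normal
% mean stress MBF of 2.49 ml/min/g would give CFR of about 2.8
% (the rest value here is illustrative, not taken from the study)
```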
Procedia PDF Downloads 152
195 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environmental changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). It is therefore worth developing robust techniques to deal with the problems caused by local scattering environments. Regarding the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created; projection matrices related to this matrix are then generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. With the proposed GSC-based beamformer, the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance beamforming performance, a signal subspace projection matrix is also introduced into the proposed beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques. Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
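A minimal NumPy sketch of the standard GSC decomposition referenced above (quiescent weight vector, signal blocking matrix, adaptive weights) may help fix notation. The robust direction-estimation and subspace-projection steps proposed in the paper are not reproduced here; this is only the textbook GSC under a presumed steering vector.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Presumed steering vector of a uniform linear array (half-wavelength spacing)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_sensors))

def gsc_weights(a, R):
    """Generalized sidelobe canceller: w = w_q - B w_a.
    a: presumed steering vector; R: sample covariance of the array snapshots."""
    w_q = a / (a.conj() @ a)                    # quiescent weight vector
    # Signal blocking matrix: orthonormal basis of the complement of a (B^H a = 0)
    _, _, Vh = np.linalg.svd(a[None, :].conj())
    B = Vh[1:].conj().T
    # Adaptive weights minimizing the output interference-plus-noise power
    w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
    return w_q - B @ w_a
```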
Procedia PDF Downloads 113
194 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN consists of a set of sensors deployed randomly underwater that communicate with each other using acoustic links. RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs over optical links when an AUV is in range. This reduces the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node determines its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches have been proposed, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer-vision-based navigation. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that uses the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model in our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature. Keywords: clustering, deep learning, network backbone, parallel computing
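The hop-coordinate navigation idea can be summarized in a short Python sketch: hop distances to the surface landmarks are computed by breadth-first search and then used for greedy hop-by-hop movement. The magnetometer/IGRF deep-learning stage is omitted, and the graph and function names are illustrative only.

```python
from collections import deque

def hop_coordinates(adj, landmarks):
    """Each node's coordinate vector = its hop distances to the surface landmarks."""
    coords = {v: [] for v in adj}
    for lm in landmarks:
        dist, q = {lm: 0}, deque([lm])
        while q:                          # BFS from the landmark
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v in adj:
            coords[v].append(dist.get(v, float("inf")))
    return coords

def greedy_route(adj, coords, start, target):
    """Move hop by hop through the neighbor closest to the target in hop-coordinate space."""
    path, current = [start], start
    while current != target:
        nxt = min(adj[current],
                  key=lambda v: sum(abs(a - b) for a, b in zip(coords[v], coords[target])))
        if nxt in path:                   # simple loop guard for this sketch
            break
        path.append(nxt)
        current = nxt
    return path
```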
Procedia PDF Downloads 99
193 Investigation of the Relationship between Digital Game Playing, Internet Addiction and Perceived Stress Levels in University Students
Authors: Sevim Ugur, Cemile Kutmec Yilmaz, Omer Us, Sevdenur Koksaldi
Abstract:
Aim: This study aims to investigate the effect of digital game playing and Internet addiction on perceived stress levels in university students. Method: The descriptive study was conducted through face-to-face interviews with a total of 364 university students studying at Aksaray University between November 15 and December 30, 2017. The research data were collected using a personal information form, a questionnaire on digital game playing characteristics, the Internet Addiction Scale, and the Perceived Stress Scale. In the evaluation of the data, the Mann-Whitney U test was used for two-group comparisons of the non-normally distributed sample, the Kruskal-Wallis H-test for comparisons of more than two groups, and the Spearman correlation test to determine the relationship between Internet addiction and perceived stress level. Results: The mean age of the students who participated in the study was 20.13 ± 1.7 years; 67.6% were female, 35.7% were sophomores, and 62.1% had an income of 500 TL or less. It was found that 83.5% of the students use the Internet every day and 70.6% use it for 5 hours or less per day. Of the students, 12.4% prefer digital games to spending time outdoors, 8% play a game as their first activity in leisure time, 12.4% play all day, 15.7% feel anger when prevented from playing, 14.8% prefer playing games to get away from their problems, 23.4% have had their school achievement negatively affected by gaming, and 8% argue with family members over the time spent gaming. Students who play computer games for long periods reported back pain (30.8%), headache (28.6%), insomnia (26.9%), dryness and pain in the eyes (26.6%), pain in the wrist (21.2%), excessive tension and anger (16.2%), humpback (12.9%), vision loss (9.6%), and pain in the wrist and fingers (7.4%). In our study, the students' mean Internet Addiction Scale score was 45.47 ± 16.1 and their mean Perceived Stress Scale score was 28.56 ± 2.7. A significant negative correlation was found between the total Internet Addiction Scale score and the total Perceived Stress Scale score (r = -0.110, p = 0.037). Conclusion: Internet addiction and perceived stress of the students were at moderate levels, and there was a negative correlation between Internet addiction and perceived stress levels. Internet addiction was found to be associated with students' perceived stress levels, and students were found to have health problems such as back pain, dryness in the eyes, pain, insomnia, headache, and humpback. Therefore, it is recommended to inform students about coping methods other than spending time on the Internet to deal with the stress they perceive. Keywords: digital game, internet addiction, student, stress level
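As a methodological aside, the Spearman test reported above can be reproduced in a few lines of Python. The score vectors below are hypothetical stand-ins, since the study's raw data are not public.

```python
from scipy import stats

# Hypothetical per-student scale scores, for illustration only
internet_addiction = [52, 38, 61, 45, 30, 58, 41, 49]
perceived_stress = [27, 31, 25, 29, 33, 26, 30, 28]

rho, p_value = stats.spearmanr(internet_addiction, perceived_stress)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")  # a negative rho mirrors the reported r = -0.110
```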
Procedia PDF Downloads 288
192 Digitalization, Economic Growth and Financial Sector Development in Africa
Authors: Abdul Ganiyu Iddrisu
Abstract:
Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to accessing finance, such as physical distance, minimum balance requirements, and low income flows, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies. Keywords: digitalization, economic growth, financial sector development, Africa
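A sketch of the kind of panel estimation described above, assuming hypothetical column names and using only a fixed-effects estimator with an interaction term to recover the conditional and net effects; the paper's random-effects and Hausman-Taylor estimations are not shown.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical country-year panel; the paper's actual dataset is not public
df = pd.read_csv("africa_panel.csv").set_index(["country", "year"])
df["digital_x_findev"] = df["digitalization"] * df["fin_dev"]

# Fixed-effects (within) growth regression with an interaction term
res = PanelOLS(df["gdp_growth"],
               df[["digitalization", "fin_dev", "digital_x_findev"]],
               entity_effects=True).fit(cov_type="clustered", cluster_entity=True)
print(res.summary)

# Net effect of digitalization, evaluated at mean financial sector development
b = res.params
net_effect = b["digitalization"] + b["digital_x_findev"] * df["fin_dev"].mean()
print(f"Net effect at mean financial development: {net_effect:.4f}")
```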
Procedia PDF Downloads 103
191 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role in stabilizing slopes against erosion processes such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion, and often cross possible shear planes; hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while the roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Shrubs and trees therefore have a higher potential for stabilizing slopes with deep soil layers than forbs and grasses, and research has consequently focused mainly on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurement, such as shear-box tests, pullout tests, and single-root tensile strength measurement, can only provide a detailed picture of the vertical effects of roots on slope stabilization; the assessment of horizontal root effects has largely been limited to computer modeling. Here, a method for measuring in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows rectangular grass sods (max. 30x60 cm) to be fixed at their short ends, with a 30x30 cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60 cm grass sods were cut out (max. depth 20 cm) and pulled apart while the horizontal tensile strength was measured across the 30 cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability. Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
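A worked example of how a cumulative horizontal tensile strength value might be derived from the traction-machine force trace. The peak-force-over-cross-section formula and all numbers below are our illustrative assumptions, not the authors' published procedure.

```python
import numpy as np

def horizontal_root_tensile_strength(force_n, width_m=0.30, depth_m=0.20):
    """Illustrative cumulative horizontal tensile strength (kPa):
    peak recorded force divided by the cross-section of the measurement zone
    (30 cm sod width x sampled soil depth)."""
    peak = np.max(force_n)              # N
    area = width_m * depth_m            # m^2
    return peak / area / 1000.0         # kPa

# Hypothetical force trace (N) recorded while the sod is pulled apart
trace = np.array([0, 120, 310, 540, 880, 1235, 990, 400])
print(f"{horizontal_root_tensile_strength(trace):.1f} kPa")
```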
Procedia PDF Downloads 168
190 Analysis of Metamaterial Permeability on the Performance of Loosely Coupled Coils
Authors: Icaro V. Soares, Guilherme L. F. Brandao, Ursula D. C. Resende, Glaucio L. Siqueira
Abstract:
Electrical energy can be transmitted wirelessly through resonantly coupled coils that operate in the near-field region. Because the field in this region is evanescent, the efficiency of Resonant Wireless Power Transfer (RWPT) systems decreases with the inverse cube of the distance between the transmitter and receiver coils. Commercially available RWPT systems are therefore restricted to short- and mid-range applications in which the distance between coils is less than or equal to the coil size. An alternative to overcome this limitation is to apply metamaterial structures to enhance the coupling between coils, thus reducing the field decay over the distance between them. Metamaterials can be conceived as composite materials with a periodic or non-periodic structure whose unconventional electromagnetic behaviour is due to their unit cell disposition and chemical composition. This new kind of material has been used in frequency selective surfaces, invisibility cloaks, leaky-wave antennas, and other applications. For RWPT, however, it is mainly applied as superlenses: lenses that can overcome the optical diffraction limit and are made of left-handed media, that is, media with negative magnetic permeability and negative electric permittivity. As RWPT systems usually operate at wavelengths of hundreds of meters, the metamaterial unit cell size is much smaller than the wavelength. In this case, the electric and magnetic fields are decoupled; therefore, the double-negative condition for superlenses is not required, and a negative magnetic permeability is enough to produce an artificial magnetic medium. In this work, the influence of the magnetic permeability of a metamaterial slab inserted between two loosely coupled coils is studied in order to find the condition that leads to maximum transmission efficiency. The metamaterial used is formed by a subwavelength unit cell consisting of a capacitor-loaded split ring with an inner spiral, designed and optimized using the software Computer Simulation Technology. The unit cell permeability is experimentally characterized by the ratio of the transmission parameters between the coils measured with and without the metamaterial slab. Early measurement results show that the transmission coefficient at the resonant frequency after inclusion of the metamaterial is about three times higher than with the two coils alone, confirming the enhancement that this structure brings to RWPT systems. Keywords: electromagnetic lens, loosely coupled coils, magnetic permeability, metamaterials, resonant wireless power transfer, subwavelength unit cells
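The experimental characterization described above, the ratio of transmission parameters measured with and without the slab, reduces to a simple computation. A hedged Python sketch with hypothetical sweep data:

```python
import numpy as np

def enhancement_ratio(freq, s21_with, s21_without, f_res):
    """Transmission enhancement: |S21| with the slab over |S21| without it,
    evaluated at the resonant frequency (the abstract reports a factor of about 3)."""
    i = np.argmin(np.abs(freq - f_res))
    return np.abs(s21_with[i]) / np.abs(s21_without[i])

# Hypothetical VNA sweep around an assumed 13.56 MHz resonance
freq = np.linspace(10e6, 17e6, 701)
s21_without = 0.05 * np.exp(-((freq - 13.56e6) / 0.8e6) ** 2)
s21_with = 0.15 * np.exp(-((freq - 13.56e6) / 0.8e6) ** 2)
print(f"enhancement: {enhancement_ratio(freq, s21_with, s21_without, 13.56e6):.1f}x")
```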
Procedia PDF Downloads 146
189 Psychophysiological Adaptive Automation Based on Fuzzy Controller
Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno
Abstract:
Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, working memory demand during the learning process changes when a student is learning a new subject or topic, or managing and/or fulfilling a specific task goal. A sudden increase in working memory demand modifies the student's level of attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system adapts the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student's previous knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task's difficulty level and the student's stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables, and adjusts the task difficulty level accordingly. Engagement and stress are treated as internal variables of the hypermedia system, which selects among three different types of instructional material. This work assesses the feasibility of a fuzzy controller that tracks a student's physiological responses and adjusts the learning content and pace accordingly. Following an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the system's instrumentation to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students' health. This work has potential application in the instructional design of virtual reality environments for training and education. Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation
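A minimal sketch of such a fuzzy controller using the scikit-fuzzy control API, assuming normalized 0-100 scales and illustrative membership functions and rules; the paper's actual rule base and instrumentation are not public.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universe of discourse: normalized 0-100 scales (our assumption)
load = ctrl.Antecedent(np.arange(0, 101, 1), 'cognitive_load')
stress = ctrl.Antecedent(np.arange(0, 101, 1), 'stress')
difficulty = ctrl.Consequent(np.arange(0, 101, 1), 'difficulty')

# Triangular low/medium/high memberships for every variable
for var in (load, stress, difficulty):
    var['low'] = fuzz.trimf(var.universe, [0, 0, 50])
    var['medium'] = fuzz.trimf(var.universe, [25, 50, 75])
    var['high'] = fuzz.trimf(var.universe, [50, 100, 100])

# Linguistic rules: back off when the student is overloaded or stressed
rules = [
    ctrl.Rule(load['high'] | stress['high'], difficulty['low']),
    ctrl.Rule(load['medium'] & stress['medium'], difficulty['medium']),
    ctrl.Rule(load['low'] & stress['low'], difficulty['high']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['cognitive_load'] = 72   # e.g. a scaled EEG-derived load index
sim.input['stress'] = 55
sim.compute()
print(f"next task difficulty: {sim.output['difficulty']:.1f}")
```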
Procedia PDF Downloads 82
188 Post-Traumatic Stress Disorder and Problem Alcohol Use in Women: Systematic Analysis
Authors: Neringa Bagdonaite
Abstract:
Study Aims: The current study aimed to systematically analyse research conducted on female post-traumatic stress disorder (PTSD) and alcohol abuse and to critically review the results against theoretical models, answering the following questions: (I) What is the reciprocal relationship between PTSD and problem alcohol use among females? (II) What are the moderating/mediating factors of this relationship? Methods: The computer bibliographic databases Ebsco, Scopus, Springer, Web of Science, Medline, and Science Direct were used to search for scientific articles. The sample for systematic analysis consisted of peer-reviewed, English-language articles addressing mixed-gender and female PTSD and alcohol abuse, published from January 2012 to May 2017. Results: A total of 1011 articles related to the searched keywords were found in the scientific databases, of which 29 met the selection criteria and were analysed. The results of longitudinal studies indicate that (I) exposure to trauma, especially interpersonal trauma, in childhood is linked with an increased risk of revictimization in later life and with problem alcohol use; (II) revictimization in adolescence or adulthood, rather than victimization in childhood, has a greater impact on the onset and progression of problematic alcohol use in adulthood. Cross-sectional and epidemiological studies also support a significant relationship between female PTSD and problem alcohol use. Regarding the negative impact of alcohol use on PTSD symptoms, results remain controversial: some evidence suggests that alcohol does not exacerbate symptoms of PTSD over time, while other evidence argues that problem alcohol use worsens PTSD symptoms and is linked to the chronicity of both disorders, especially among women with previous alcohol use problems. Analysis of the moderating/mediating factors of PTSD and problem alcohol use revealed that stronger motives/expectancies for alcohol use, specifically distress-coping motives, significantly moderate the relationship between PTSD and problematic alcohol use, whereas negative affective states mediate the relationship between PTSD symptoms and alcohol use, but only among women who have already developed alcohol use problems. Conclusions: The experience of interpersonal trauma, especially in childhood, and its recurrence over the lifetime are linked with PTSD symptoms and problem drinking among women. Moreover, problem alcohol use can be both a cause and a consequence of trauma and PTSD, and when alcohol is used for coping it increases the likelihood of chronicity of both disorders. To treat both disorders effectively, it is worthwhile to take into account this dynamic interplay of women's PTSD symptoms and problem drinking. Keywords: female, trauma, post-traumatic stress disorder, problem alcohol use, systematic analysis
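The moderation effect referred to above is conventionally tested with an interaction term in a regression model. A hedged sketch with hypothetical variable names, not the analytic code of any study in the review:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset; column names are illustrative only
df = pd.read_csv("ptsd_alcohol.csv")

# Moderation test: PTSD symptoms x distress-coping motives
model = smf.ols("problem_drinking ~ ptsd_symptoms * coping_motives", data=df).fit()
print(model.summary())
# A significant ptsd_symptoms:coping_motives coefficient would indicate that
# coping motives moderate the PTSD -> problem drinking relationship.
```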
Procedia PDF Downloads 184
187 Morphological Process of Villi Detachment Assessed by Computer-Assisted 3D Reconstruction of Intestinal Crypt from Serial Ultrathin Sections of Rat Duodenum Mucosa
Authors: Lise P. Labéjof, Ivna Mororó, Raquel G. Bastos, Maria Isabel G. Severo, Arno H. de Oliveira
Abstract:
This work presents an alternative mode of intestinal mucosa renewal that may allow a better understanding of the total loss of villi after irradiation. A morphological method of 3D reconstruction was tested using micrographs of serial sections of rat duodenum. Hundreds of sections of each duodenum specimen were placed on glass slides and examined under a light microscope. Those containing the detachment, approximately a dozen, were chosen for observation under a transmission electron microscope (TEM). Each of these sections was glued onto a block of Epon resin and recut into about a hundred 60 nm-thick sections. Ribbons of these ultrathin sections were distributed on a series of copper grids in the same order of appearance as during microtomy. They were then stained with solutions of uranyl and lead salts and observed under the TEM. The sections were photographed, and the electron micrographs showing signs of cell detachment were transferred into two software packages: ImageJ, to align the cellular structures, and Reconstruct, to perform the 3D reconstruction. Epithelial cells exhibiting all the signs of programmed cell death were detected at the villus-crypt junction. Their nuclei were irregular in shape, with chromatin condensed in clumps; their cytoplasm was darker than that of neighboring cells and contained many swollen mitochondria. In some places in the sections, intercellular spaces were enlarged by the presence of shrunken cells displaying irregularly shaped plasma membranes, as if the cell interdigitations had drawn apart from each other. The three-dimensional reconstruction of the crypts allowed us to observe a gradual loss of intercellular contacts of crypt cells in the longitudinal plane of the duodenal mucosa. In the transverse direction, there was a gradual increase in the intercellular space, as if these cells were moving away from one another. This observation suggests that the gradual moving apart of the cells at the villus-crypt junction is the beginning of mucosal detachment. Thus, the shrinking of cells due to apoptosis is the way they detach from the mucosa, and progressively the villi as well. These results agree with our initial hypothesis and demonstrate that the villi become detached from the mucosa at the villus-crypt junction by the programmed cell death process. This type of loss of entire villi helps explain the rapid denudation of the intestinal mucosa after irradiation. Keywords: 3D reconstruction, transmission electron microscopy, ionizing radiation, rat small intestine, apoptosis
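The ImageJ alignment step described above is, in essence, a slice-to-slice registration before stacking into a volume. A generic, translation-only sketch using scikit-image, offered as an illustration rather than the authors' actual workflow:

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_stack(sections):
    """Register each serial-section micrograph to the previous slice by
    phase cross-correlation, then stack the aligned slices into a 3D volume."""
    aligned = [sections[0].astype(float)]
    for img in sections[1:]:
        offset, _, _ = phase_cross_correlation(aligned[-1], img.astype(float))
        aligned.append(shift(img.astype(float), offset))
    return np.stack(aligned)   # (n_sections, height, width), ready for 3D rendering
```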
Procedia PDF Downloads 379
186 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
Technology adoption models are extensively used in the literature to explore the drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Studies from recent years have suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries, a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach encoded in the auditing standards. Since these audit risk factors are not part of the existing models used to explain technology adoption, those models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature that exists as a result of using generic technology adoption models. Following a pretest, and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits, and it proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines for assessing the audit risk components, other CPA firms do not follow a formal, validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to audit quality; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs capabilities and their contribution to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by examining it from a risk-based auditing approach and suggests a new model for CAATTs adoption that incorporates the influencing audit risk factors auditors should examine when considering adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs for audit quality. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers, and regulators, and may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits. Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models
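For context, the audit risk elements mentioned above are conventionally related through the audit risk model AR = IR x CR x DR. A small worked example, included only as background to the risk-based approach and not as part of the paper's qualitative method:

```python
def planned_detection_risk(audit_risk, inherent_risk, control_risk):
    """Classic audit risk model, AR = IR x CR x DR, solved for detection risk.
    Illustrates the risk inputs a CAATTs adoption model could draw on."""
    return audit_risk / (inherent_risk * control_risk)

# Example: target AR of 5% with high inherent (0.8) and moderate control (0.5) risk
dr = planned_detection_risk(0.05, 0.8, 0.5)
print(f"DR = {dr:.3f}")  # a lower DR calls for more substantive testing, where CAATTs help
```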
Procedia PDF Downloads 69