Search results for: computer course unit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4388

218 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as of a previous study in which in-class testing, analysed with a paired t-test, showed that gamification significantly improved students' understanding of subject material. Classroom gamification can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required low preparation time for the faculty member, and consumed approximately 20 minutes of class time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with, and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; it was therefore a low-risk activity through which students could self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that gamification should be used in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of respondents believed the twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to engage in the process. The follow-up study also hopes to determine the incremental level of time invested in classroom gamification beyond which no material incremental benefit accrues to the student, and whether any correlation exists between respondents preferring not to have the activity at the end of the semester and students not believing that the gamification process increased their curricular knowledge.

Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary

Procedia PDF Downloads 185
217 Consumer Knowledge and Behavior in the Aspect of Food Waste

Authors: Katarzyna Neffe-Skocinska, Marzena Tomaszewska, Beata Bilska, Dorota Zielinska, Monika Trzaskowska, Anna Lepecka, Danuta Kolozyn-Krajewska

Abstract:

The aim of the study was to assess Polish consumer behavior towards food waste, including knowledge of information on food labels. The survey was carried out using the CAPI (computer-assisted personal interview) method, which involves interviewing respondents using mobile devices. The research group was a representative sample for Poland with respect to demographic variables: gender, age, and place of residence. A total of 1,115 respondents participated in the study (51.1% women and 48.9% men). The questionnaire included questions on five thematic aspects: 1. General knowledge and sources of information on the phenomenon of food waste; 2. Consumption of food after the date of minimum durability; 3. The meaning of the phrase 'best before ...'; 4. The difference between the meaning of the phrases 'best before ...' and 'use by'; 5. Identification of products marked with the phrase 'best before ...'. It was found that every second surveyed Pole had encountered the topic of food waste (54.8%). Among the respondents, the most popular sources of information related to the research topic were television (89.4%), radio (26%), and the Internet (24%). Over a third of respondents declared that they consume food after the date of minimum durability. Only every tenth respondent (9.8%) does not pay attention to the expiry date and type of the products consumed (durable and perishable products). Only 39.8% of respondents answered the question 'How do you understand the phrase "best before ..."?' correctly. In the opinion of 42.8% of respondents, the statements 'best before ...' and 'use by' mean the same thing, while 36% of them think differently; in addition, more than one-fifth of respondents could not answer the question. When asked to identify products carrying the 'best before ...' label, more than 40% of the respondents chose both perishable products (e.g., yoghurts) and durable products (e.g., groats). Slightly lower percentages of indications were recorded for flour (35.1%), sausage (32.8%), canned corn (31.8%), and eggs (25.0%). Based on this assessment of the behavior of Polish consumers towards the phenomenon of food waste, it can be concluded that respondents have elementary knowledge of the study subject. Noteworthy is the good conduct of most respondents in terms of compliance with shelf life and dates of minimum durability of food products. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'Social and economic development of Poland in the conditions of globalizing markets' (GOSPOSTRATEG), acronym PROM.

Keywords: food waste, shelf life, dates of durability, consumer knowledge and behavior

Procedia PDF Downloads 147
216 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure companies experience when their employees encounter globalization and decentralization pressures, and the resulting need to monitor VT performance. Organizations are regularly limited by misalignment between the behavioral capabilities of a team's dispersed competences and knowledge capabilities, by the way trust issues interplay with and influence these VT dimensions, and by the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills, and knowledge efficiently enough to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. TMS can be understood as a composition of a structural component residing in individuals' knowledge and a set of communication processes among individuals. Individual knowledge is shared while being retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 141
215 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, as in other parts of the world, the debate on how to enhance teachers' use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as 'Formative Assessment' and, in more recent times, as 'Assessment for Learning (AfL)'. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning, owing to the variability of classroom assessments, which are hindered by teachers' characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an Additional Language'. As part of the TEAL project, a tool called 'Reading and Vocabulary assessment for English as an Additional Language (RVEAL)', a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. This study therefore aims to provide qualitative evidence for understanding beliefs about the diagnostic language assessment (DLA) among EAL teachers in primary and secondary schools in Victoria, Australia. To realise this goal, the study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers' beliefs about diagnostic assessment in a large-scale assessment program? A semi-structured interview method was used to collect data from at least 15 professional teachers selected through purposeful sampling. The findings from the resulting thematic analysis provide an understanding of teachers' beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as highlighting teacher cognition as an important factor in the pedagogic processes of language assessment. This will, hopefully, help test developers and testing organisations align the outcomes of this study with their test development processes in order to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 172
214 Effects of Ubiquitous 360° Learning Environment on Clinical Histotechnology Competence

Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen

Abstract:

Rapid technological development and digitalization have also affected higher education. During the last twenty years, numerous electronic and mobile learning (e-learning, m-learning) platforms have been developed and have become prevalent in many universities and in all fields of education. Ubiquitous learning (u-learning) is not as widely known or used. Ubiquitous learning environments (ULE) are the new era of computer-assisted learning. They are based on ubiquitous technology and computing that fuse the learner seamlessly into the learning process by using sensing technology such as tags, badges, or barcodes and smart devices like smartphones and tablets. A ULE combines real-life learning situations with virtual aspects and can be used flexibly at any time and in any place. The aim of this study was to assess the effects of a ubiquitous 360° learning environment on higher education students' clinical histotechnology competence. A quasi-experimental study design was used. Fifty-seven students in a biomedical laboratory science degree program were assigned voluntarily to an experimental group (n=29) and a control group (n=28). The experimental group studied via the ubiquitous 360° learning environment and the control group via a traditional web-based learning environment (WLE) in an 8-week educational intervention. The ubiquitous 360° learning environment (ULE) combined an authentic learning environment (a histotechnology laboratory), a digital environment (a virtual laboratory), a virtual microscope, multimedia learning content, interactive communication tools, an electronic library, and quick-response barcodes placed in the authentic laboratory. The web-based learning environment contained equivalent content and components, with the exception of the use of mobile devices, interactive communication tools, and quick-response barcodes. Competence in clinical histotechnology was assessed using a knowledge test and a self-report developed for this study. Data were collected electronically before and after the clinical histotechnology course and analysed using descriptive statistics. Differences within groups were identified using the Wilcoxon test and differences between groups using the Mann-Whitney U-test. Statistically significant differences were identified within both groups (p<0.001): competence scores in the post-test were higher than in the pre-test in both groups. Differences between groups were very small and not statistically significant. In this study, the learning environment was developed on the basis of 360° technology and successfully implemented in a higher education context, and students' competence increased when the ubiquitous learning environment was used. In the future, the ULE can be used as a learning management system for any learning situation in the health sciences. More studies are needed to show differences between ULE and WLE.
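
The statistical comparisons named above (Wilcoxon for within-group pre/post change, Mann-Whitney U for between-group differences) can be reproduced on any paired score data; the minimal sketch below uses invented placeholder scores, not the study's measurements, and assumes a simple numeric competence scale.

```python
# Hedged sketch of the reported test choices: Wilcoxon signed-rank within each group
# (pre vs. post) and Mann-Whitney U between groups (post-test scores). Data invented.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(1)
pre_ule = rng.integers(10, 20, 29)            # ULE group pre-test scores (assumed scale)
post_ule = pre_ule + rng.integers(1, 6, 29)   # competence rises after the course
pre_wle = rng.integers(10, 20, 28)            # WLE (control) group
post_wle = pre_wle + rng.integers(1, 6, 28)

print("ULE pre vs post:", wilcoxon(pre_ule, post_ule))
print("WLE pre vs post:", wilcoxon(pre_wle, post_wle))
print("ULE vs WLE post:", mannwhitneyu(post_ule, post_wle))
```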

Keywords: competence, higher education, histotechnology, ubiquitous learning, u-learning, 360°

Procedia PDF Downloads 264
213 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group contains product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of tools applied is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, an investigation of the mentioned parameter groups with respect to the manufacturing order was also carried out. The paper also highlights the experiences of introducing the proposed methods into manufacturing and further improvements, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur and the related data are collected.
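
As a rough illustration of the modelling approach described above, the sketch below trains a small neural network regressor on tabular setup records grouped into product, material, tolerance, and setup features. The column names, CSV layout, categorical encoding, and network size are assumptions for illustration; the authors' actual MES/CAD data pipeline and network architecture are not specified in the abstract.

```python
# Hypothetical sketch: estimate setup period length from grouped product,
# material, tolerance and setup features (column names are assumptions).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

df = pd.read_csv("setup_events.csv")          # assumed export from the MES
categorical = ["product_name", "material_type", "colour", "machine_type", "work_shift"]
numeric = ["n_items", "finest_surface", "tightest_tolerance", "n_tools"]

X = df[categorical + numeric]
y = df["setup_minutes"]                       # target: measured setup period length

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("ann", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```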

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 214
212 Blended Cloud Based Learning Approach in Information Technology Skills Training and Paperless Assessment: Case Study of University of Cape Coast

Authors: David Ofosu-Hamilton, John K. E. Edumadze

Abstract:

Universities have come to recognize the role that Information and Communication Technology (ICT) skills play in the daily activities of tertiary students. The ability to use ICT – essentially, computers and their diverse applications – is an important resource that influences an individual's economic and social participation and human capital development. Our society now increasingly relies on the Internet and the Cloud as means to communicate and disseminate information. The educated individual should, therefore, be able to use ICT to create and share knowledge that will improve society. It is, therefore, important that universities require incoming students to demonstrate a level of computer proficiency or train them to do so at minimal cost by deploying advanced educational technologies. The training and standardized assessment of all incoming first-year students of the University of Cape Coast in Information Technology Skills (ITS) has become a necessity, as students more often than not highly overestimate their digital skills, and digital ignorance is costly to any economy. The one-semester course is targeted at fresh students and aimed at enhancing their productivity and software skills. In this respect, emphasis is placed on skills that will enable students to be proficient in using Microsoft Office and Google Apps for Education for their academic work and future professional work, while using emerging digital multimedia technologies in a safe, ethical, responsible, and legal manner. The course is delivered in blended mode - online and self-paced (student-centered) - using Alison's free cloud-based tutorial (Moodle) of Microsoft Office videos. Online support is provided via discussion forums on the University's Moodle platform, with tutor-directed and tutor-assisted sessions at the ICT Centre and the Google E-learning laboratory. All students are required to register for the ITS course during either the first or the second semester of the first year and must participate in and complete it within a semester. Assessment focuses on the Alison online assessment on Microsoft Office, the Alison online assessment on ALISON ABC IT, peer assessment of an e-portfolio created using Google Apps/Office 365, and an end-of-semester online assessment at the ICT Centre taken whenever the student is ready in the course of the semester. This paper, therefore, focuses on this digital-culture approach of hybrid teaching, learning, and paperless examinations and its possible adoption by other courses or programs at the University of Cape Coast.

Keywords: assessment, blended, cloud, paperless

Procedia PDF Downloads 220
211 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)

Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro

Abstract:

In the present dynamic business environment of corporate governance and regulation, financial reporting is an inevitable and extremely significant process for every business enterprise. Financial documents such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of error and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, late filing of reports, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information in financial reports, both internally and externally. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where much of its promised benefit is available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is on the brink of an Internet revolution that will redefine the 'business reporting' paradigm. The new Internet technology, XBRL, is already being deployed and used across the world. The paper finds that XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information rather than having to download or print an entire document. XBRL is platform-independent: it will work on any current or recent operating system, on any computer, and will interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now, through the adoption of XBRL, which is changing the face of worldwide business reporting.
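
To make the idea of self-describing tags concrete, the following minimal sketch builds a toy XBRL-style XML fragment and extracts a single tagged fact programmatically. The namespace, element names, and context attributes are simplified illustrations and do not form a conformant XBRL taxonomy.

```python
# Minimal sketch of the tagging idea behind XBRL: each business fact is wrapped
# in a self-describing element, so a single value can be extracted without
# reading the whole report. The taxonomy below is a toy example, not a real one.
import xml.etree.ElementTree as ET

NS = "http://example.com/toy-xbrl"        # assumed placeholder namespace

root = ET.Element(f"{{{NS}}}report")
for name, value in [("Revenue", "1200000"), ("NetIncome", "150000")]:
    fact = ET.SubElement(root, f"{{{NS}}}{name}",
                         contextRef="FY2023", unitRef="NGN", decimals="0")
    fact.text = value

instance = ET.tostring(root, encoding="unicode")
print(instance)

# Extract only the desired fact instead of downloading/printing the entire document.
tree = ET.fromstring(instance)
net_income = tree.find(f"{{{NS}}}NetIncome")
print("NetIncome:", net_income.text, net_income.attrib["unitRef"])
```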

Keywords: XBRL, financial reporting, internet, internal and external reports

Procedia PDF Downloads 253
210 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic requirement of a product from the consumer's perspective, and a product that fails to meet this requirement ends up not being used. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies on analysis methodologies for qualitative text data in the usability field limits the potential of these data for more useful applications. The possibility of analyzing qualitative text data has grown with the rapid development of data analysis fields such as natural language processing, which helps computers understand human language, and machine learning, which provides predictive models and clustering tools. Therefore, this research aims to study the capability of text-processing algorithms in the analysis of qualitative text data collected from usability activities. The research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text-processing algorithm, includes mapping the comments onto a vector space, labeling them with the subject and product physical feature data, and clustering the comment vectors to validate the result. The results show 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons were designed as a single button, the participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text-processing algorithms to analyze qualitative text data from usability testing and evaluations.
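
The clustering workflow sketched below mirrors the described pipeline in a generic way: comments are embedded with TF-IDF, clustered with k-means, and the comments closest to each cluster centroid are inspected. The example comments, the number of clusters, and the vectorizer choice are assumptions; the authors' exact embedding, labeling, and validation steps are not given in the abstract.

```python
# Hedged sketch: vectorize usability comments, cluster them, and inspect the
# comments nearest to each cluster centroid (as done qualitatively in the study).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "the volume button is hard to reach behind my neck",       # assumed examples
    "music control button position feels awkward",
    "I keep confusing play and volume on the single button",
    "one button for volume and music makes functions unclear",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(comments)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for k in range(km.n_clusters):
    members = np.where(km.labels_ == k)[0]
    # distance of each member comment to its cluster centroid
    d = np.linalg.norm(X[members].toarray() - km.cluster_centers_[k], axis=1)
    centroid_comment = comments[members[np.argmin(d)]]
    print(f"cluster {k}: centroid comment -> {centroid_comment!r}")
```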

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 253
209 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed genuinely productive and efficient methods for the investigation of the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, since it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of the cognitive aspects of language, whereas communicative methods are thought to be relevant only for investigating the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 223
208 Parental Education on Early Childhood Development Using Mobile App and Website in China

Authors: Margo O'Sullivan, Xuefeng Chen, Qi Zhao, J. Jiang, Ning Fu

Abstract:

Early childhood development, or ECD, is about the 'whole child' – the physical, social and emotional, cognitive thinking and language progression of each young individual. Overwhelming evidence is now available to support investment in early childhood development internationally: attendance at ECD programmes leads to improved learning outcomes, improved completion and reduced dropout rates, and, most notably, Nobel Laureate Professor Heckman's finding that for every dollar invested, there is an economic return of up to 17%. Notably, ECD has been included in the 2015-2030 Sustainable Development Goals. The Government of China (GoC) has embraced this research, and in 2010 the State Council announced a focus on ECD, setting a target to provide access to ECD for 85% of 3-6 year olds by 2020; to date, progress has surpassed expectations, reaching 70.4%. The GoC is also increasingly focusing on the even more critical 0-3 age group, when the plasticity of the brain is at its peak and neurons form connections as fast as 1,000 per second. Key to ECD are the parents and caregivers of young children, with parental education critical to fully exploiting the significant potential of children's early years. Given China's vast numbers – one in seven pre-school-age children in the world lives in China – the Ministry of Education (MoE) and the National Centre for Education Technology explored how best to provide parental education and key child-development knowledge to parents and caregivers. In response, the MoE and UNICEF created a parenting information resource that began with a website in 2012, followed by the piloting of a kiosk service in 2013 for parents in remote areas without access to the Internet, and then a mobile phone application in 2014. The resource includes 269 ECD messages and 200 micro-videos covering critical issues of early childhood development from birth to age 6 years: daily care, nutrition and feeding, disease prevention, immunization, development and education, and safety and protection. To date, there have been 397,599 unique views on the website, with data for the mobile app currently being analysed (links: http://yuer.cbern.gov.cn/; App: https://appsto.re/cn/OiKPZ.i). This paper will explore the development of this resource, its use by parents and the public, efforts to assess its effectiveness in improving parenting and child development, and future plans to roll out an updated version to all parents in 2016.

Keywords: early childhood development, mobile apps for education, parental education, China

Procedia PDF Downloads 205
207 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper briefly describes the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys are presented and discussed. These maps were similar to maps created on the basis of chemical analysis and to soil moisture measurements determined by soil electrical conductivity, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to agricultural field mapping of soil elements.
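
As an illustration of the mapping step (combining per-spectrum elemental estimates with GPS coordinates into a field map), the sketch below interpolates scattered point measurements onto a regular grid that could then be imported into a GIS package such as ArcGIS. The coordinates, carbon values, and grid resolution are invented for illustration; the authors' own acquisition software and spectral-to-content calibration are not described here.

```python
# Hedged sketch: turn georeferenced elemental estimates (e.g., carbon %) from a
# scanning survey into a gridded map suitable for a GIS. Values are made up.
import numpy as np
from scipy.interpolate import griddata

# (easting, northing) in metres and carbon content in %, one row per spectrum
points = np.array([[0, 0], [50, 10], [120, 40], [200, 80], [90, 150], [180, 160]])
carbon = np.array([1.2, 1.5, 1.1, 0.9, 1.7, 1.3])

# regular grid over the surveyed field
xi = np.linspace(points[:, 0].min(), points[:, 0].max(), 100)
yi = np.linspace(points[:, 1].min(), points[:, 1].max(), 100)
XI, YI = np.meshgrid(xi, yi)

carbon_grid = griddata(points, carbon, (XI, YI), method="linear")

# the grid could now be written out as a raster (e.g., GeoTIFF) and styled in ArcGIS
print("gridded map shape:", carbon_grid.shape,
      "| mean C% on grid:", round(float(np.nanmean(carbon_grid)), 2))
```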

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 114
206 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The topic of the presented material concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which together are called the "hot part." The second part is the gas exhaust system, which contains elements used exclusively for reducing exhaust noise (mufflers, resonators); its accepted designation is the "cold part." The design of the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on condition that the input parameters are specified accurately, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of the engine with its "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part" based on a complex of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation from the outlet pipe into open space), with the result in the form of the input impedance of the "cold part." The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result in the form of the input impedance of the "hot part." The third part of the technique consists of mathematical processing of the results according to the proposed formula for the convergence of the mathematical series representing the summation of multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain a result in the form of the amplitude spectrum of the engine noise and its acoustic impedance.
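
The series-summation step mentioned above can be pictured with a generic one-dimensional multiple-reflection calculation: a wave bouncing between two terminations with frequency-dependent reflection coefficients forms a geometric series whose closed form is 1/(1 - r_hot * r_cold). The sketch below illustrates only this textbook relation; the authors' actual convergence formula and their FEM-derived impedances are not reproduced, and the impedance values are placeholders.

```python
# Generic illustration (not the authors' formula): steady-state amplitude built up
# by multiple reflections between a source-side ("hot part") and a load-side
# ("cold part") termination, evaluated per frequency.
import numpy as np

rho_c = 400.0                                # characteristic impedance of the duct (assumed)
f = np.linspace(20, 2000, 5)                 # a few frequencies, Hz

# assumed frequency-dependent terminating impedances (placeholders for FEM results)
Z_hot = rho_c * (1.5 + 0.3j * f / 1000)
Z_cold = rho_c * (0.8 - 0.2j * f / 1000)

r_hot = (Z_hot - rho_c) / (Z_hot + rho_c)    # pressure reflection coefficients
r_cold = (Z_cold - rho_c) / (Z_cold + rho_c)

# partial sum of the reflection series vs. the closed form 1 / (1 - r_hot*r_cold)
loop = r_hot * r_cold
partial = sum(loop ** n for n in range(50))
closed = 1.0 / (1.0 - loop)
print(np.allclose(partial, closed, atol=1e-6))   # converges because |r_hot * r_cold| < 1
```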

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 25
205 Cricket Injury Surveillance by Mobile Application Technology on Smartphones

Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders

Abstract:

The demands on cricketers are increasing, with more matches being played in a shorter period of time and with greater intensity. A ten-year report on injury incidence for Australian elite cricketers between the 2000-2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009-10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player hours, reported [2]. Traditionally, injury surveillance has relied on the use of paper-based forms or complex computer software [3,4]. This makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for the Android platform, the world's most widely used smartphone platform. It uses the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment). C++ was used as the programming language with the Qt framework, which provides cross-platform capabilities that will allow the app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphic user interface) were developed using Justinmind Prototyper Pro Edition (Ver. 6.1.0). CricPredict enables recording of injury and training status conveniently and immediately. When an injury is reported, automated follow-up questions cover the site of injury, nature of injury, mechanism of injury, initial treatment, referral, and action taken after injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can also be recorded by either the player or the coach by logging the number of balls bowled or played in a day. This is helpful in formulating injury rates and time lost due to injuries. All data are stored on a secured, password-protected data server. Outcomes and Significance: Use of CricPredict offers a simple, user-friendly tool for the coaching or medical staff associated with teams to predict, record, and report injuries. This system will assist teams in capturing injury data with ease, thus allowing a better understanding of injuries associated with cricket and potentially optimizing the performance of cricketers.

Keywords: injury, cricket, surveillance, smartphones, mobile

Procedia PDF Downloads 437
204 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and assessed visually for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique that is based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and with SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
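
The CFR definition used above (the ratio of stress to rest MBF) and the reported SPECT-to-PET comparison can be expressed in a few lines; the per-territory MBF numbers below are invented placeholders, not the study's data.

```python
# Hedged sketch: per-territory coronary flow reserve and a SPECT-vs-PET comparison.
# All MBF values here are placeholders, not data from the study.
import numpy as np
from scipy.stats import pearsonr

territories = ["LAD", "RCA", "LCX"]
rest_mbf = np.array([0.9, 0.8, 1.0])      # ml/min/g at rest (assumed)
stress_mbf = np.array([2.4, 1.5, 2.8])    # ml/min/g at pharmacologic stress (assumed)

cfr = stress_mbf / rest_mbf               # CFR = stress MBF / rest MBF
for t, c in zip(territories, cfr):
    print(f"{t}: CFR = {c:.2f}")

# agreement between modalities across patients/territories (placeholder vectors)
mbf_spect = np.array([1.2, 2.1, 0.8, 1.9, 2.6, 1.1, 3.0])
mbf_pet = np.array([1.1, 2.3, 0.9, 1.7, 2.8, 1.3, 2.9])
r, p = pearsonr(mbf_spect, mbf_pet)
print(f"SPECT vs PET MBF: r = {r:.2f}, p = {p:.3f}")
```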

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-Tetrofosmin

Procedia PDF Downloads 132
203 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As for the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer that counters the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then, projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
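
For readers unfamiliar with the GSC structure referred to above, the sketch below implements a plain (non-robust) sample-based GSC for a uniform linear array, with a quiescent weight vector and a blocking matrix; the robust direction-estimation and projection steps proposed in the paper are not reproduced. The array geometry, angles, and signal levels are arbitrary choices for illustration.

```python
# Minimal, non-robust GSC sketch for a uniform linear array (illustrative only;
# the paper's robust direction-estimation/projection steps are not implemented).
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
M, N = 8, 2000                              # sensors, snapshots

def steering(theta_deg):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

a_des = steering(0.0)                       # presumed steering vector of the SOI
a_int = steering(40.0)                      # interference direction

s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
i = 3.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = np.outer(a_des, s) + np.outer(a_int, i) + n       # received snapshots

w_q = a_des / (a_des.conj() @ a_des)        # quiescent weight vector
B = null_space(a_des.conj()[None, :])       # blocking matrix: B^H a_des = 0

R = x @ x.conj().T / N                      # sample covariance matrix
# adaptive weights minimizing output power in the blocked branch
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a                           # overall GSC weight vector
y = w.conj() @ x

print("output power (GSC):", np.mean(np.abs(y) ** 2).round(3))
print("interference gain |w^H a_int|:", np.abs(w.conj() @ a_int).round(3))
```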

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 82
202 Magnetic Navigation in Underwater Networks

Authors: Kumar Divyendra

Abstract:

Underwater sensor networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is not available underwater either. Additionally, autonomous underwater vehicles (AUVs) are deployed to collect data from special nodes called cluster heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches such as inertial navigation systems (INS), Doppler velocity logs (DVL), and computer-vision-based navigation have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates; we make use of this model in our work. We combine this with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
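
The hop-distance coordinate idea described above can be illustrated with a small graph: each node's "coordinate" is its vector of hop counts to the surface nodes, and an AUV greedily moves to the neighbor whose coordinate vector is closest to the target's. The topology below is an arbitrary toy example, and the magnetic/IGRF learning component is not included.

```python
# Toy sketch of landmark (surface-node) hop-count coordinates and greedy
# hop-by-hop navigation toward a target node. Topology is invented.
from collections import deque

graph = {                       # adjacency list of the UWSN (assumed)
    "S1": ["A"], "S2": ["B"],   # S1, S2 are surface nodes (landmarks)
    "A": ["S1", "B", "C"], "B": ["S2", "A", "D"],
    "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}
surface = ["S1", "S2"]

def hops_from(src):
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

tables = [hops_from(s) for s in surface]
coord = {v: tuple(t[v] for t in tables) for v in graph}   # hop-count coordinates

def greedy_path(start, target):
    path, cur = [start], start
    while cur != target:
        # move to the neighbor whose coordinates are closest (L1) to the target's
        cur = min(graph[cur],
                  key=lambda v: sum(abs(a - b) for a, b in zip(coord[v], coord[target])))
        path.append(cur)
    return path

print(coord)
print("AUV route S1 -> E:", greedy_path("S1", "E"))
```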

Keywords: clustering, deep learning, network backbone, parallel computing

Procedia PDF Downloads 65
201 Investigation of the Relationship between Digital Game Playing, Internet Addiction and Perceived Stress Levels in University Students

Authors: Sevim Ugur, Cemile Kutmec Yilmaz, Omer Us, Sevdenur Koksaldi

Abstract:

Aim: This study aims to investigate the effect of digital game playing and Internet addiction on perceived stress levels in university students. Method: This descriptive study was conducted through face-to-face interviews with a total of 364 university students studying at Aksaray University between November 15 and December 30, 2017. The research data were collected using a personal information form, a questionnaire on the characteristics of digital game playing, the Internet Addiction Scale, and the Perceived Stress Scale. In the evaluation of the data, the Mann-Whitney U test was used for two-group comparisons of the non-normally distributed sample, the Kruskal-Wallis H test was used for comparisons of more than two groups, and the Spearman correlation test was used to determine the relationship between Internet addiction and the perceived stress level. Results: The mean age of the students who participated in the study was 20.13 ± 1.7 years; 67.6% were female, 35.7% were sophomores, and 62.1% had an income of 500 TL or less. It was found that 83.5% of the students use the Internet every day and 70.6% use the Internet for 5 hours or less per day. Of the students, 12.4% prefer digital games to spending time outdoors, 8% play a game as the first activity in their leisure time, 12.4% play all day, 15.7% feel anger when prevented from playing, 14.8% prefer playing games to get away from their problems, 23.4% had their school achievement negatively affected because of game playing, and 8% argue with family members over the time spent gaming. Students who play games on the computer for a long time were found to report back pain (30.8%), headache (28.6%), insomnia (26.9%), dryness and pain in the eyes (26.6%), pain in the wrist (21.2%), excessive tension and anger (16.2%), humpback (12.9%), vision loss (9.6%), and pain in the wrist and fingers (7.4%). In our study, the students' mean Internet Addiction Scale score was 45.47 ± 16.1 and the mean Perceived Stress Scale score was 28.56 ± 2.7. A significant, negative correlation (p=0.037) was found between the total Internet Addiction Scale score and the total Perceived Stress Scale score (r=-0.110). Conclusion: It was found that the students' Internet addiction and perceived stress were at moderate levels and that there was a negative correlation between Internet addiction and perceived stress levels. Internet addiction was found to increase with the increasing perceived stress levels of students, and students were found to have health problems such as back pain, dryness in the eyes, pain, insomnia, headache, and humpback. Therefore, it is recommended to inform students about coping methods other than spending time on the Internet to cope with the stress they perceive.

Keywords: digital game, internet addiction, student, stress level

Procedia PDF Downloads 254
200 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because the extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa with a focus on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Second, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net-effect results suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
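
A stripped-down version of the estimation idea (a fixed-effects growth regression with a digitalization × financial-development interaction, whose sign drives the conditional and net effects discussed above) is sketched below using a within transformation. The variable names, CSV layout, and demeaning shortcut are assumptions; the authors' Hausman-Taylor specification and control variables are not reproduced.

```python
# Hedged sketch: country fixed-effects growth regression with an interaction term,
# estimated via the within (demeaning) transformation. Column names are assumptions.
import pandas as pd
import statsmodels.api as sm

# assumed columns: country, year, gdp_growth, digitalization, fin_dev
df = pd.read_csv("africa_panel.csv")
df["digital_x_fin"] = df["digitalization"] * df["fin_dev"]   # interaction term

y_col, x_cols = "gdp_growth", ["digitalization", "fin_dev", "digital_x_fin"]

# within transformation: subtract each country's mean (absorbs country fixed effects)
demeaned = df[[y_col] + x_cols] - df.groupby("country")[[y_col] + x_cols].transform("mean")

res = sm.OLS(demeaned[y_col], sm.add_constant(demeaned[x_cols])).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(res.summary())

# net effect of digitalization evaluated at the mean level of financial development
fin_bar = df["fin_dev"].mean()
net = res.params["digitalization"] + res.params["digital_x_fin"] * fin_bar
print("net effect of digitalization at mean fin_dev:", round(net, 4))
```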

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 73
199 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps

Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe

Abstract:

Vegetation plays an important role in the stabilization of slopes against erosion processes such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes; hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while the roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilizing slopes with deep soil layers than forbs and grasses, and research has consequently focused mainly on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and single-root tensile strength measurements, can only provide a detailed picture of the vertical effects of roots on slope stabilization, and the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for in-situ measurement of cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30x60 cm) on the short ends, with a 30x30 cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30x60 cm grass sods were cut out (max. depth 20 cm). The grass sods were pulled apart while the horizontal tensile force over the 30 cm width was recorded over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.
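The following minimal sketch shows one way a horizontal tensile-strength value per unit sod width could be derived from a traction-machine force record; the metric (peak force divided by the 30 cm measurement width) and the data file are assumptions for illustration, not the authors' stated procedure.

```python
# Minimal sketch: derive a tensile-strength value per unit sod width from a
# recorded force trace; the metric definition here is an assumption.
import numpy as np

force_N = np.loadtxt("traction_trace.csv", delimiter=",")  # hypothetical force record [N]

width_m = 0.30                                   # 30 cm measurement-zone width
peak_force_N = float(force_N.max())
strength_N_per_m = peak_force_N / width_m        # cumulative horizontal root reinforcement

print(f"Peak horizontal root tensile strength: {strength_N_per_m:.1f} N per m of sod width")
```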

Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion

Procedia PDF Downloads 137
198 Psychophysiological Adaptive Automation Based on Fuzzy Controller

Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno

Abstract:

Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, the working memory demand during the student learning process changes when the student is learning a new subject or topic, or managing and/or fulfilling a specific task goal. A sudden increase in working memory demand modifies the student's level of attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system adapts the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student's previous knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task's difficulty level and the student's stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables, and adjusts the task difficulty level accordingly. Engagement and stress are treated as internal variables of the hypermedia system, which selects among three different types of instructional material. This work assesses the feasibility of a fuzzy controller that tracks a student's physiological responses and adjusts the learning content and pace accordingly. Using an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the instrumentation of the system to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students' health. This work has a potential application in the instructional design of virtual reality environments for training and education.
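A minimal sketch of a Mamdani-style fuzzy rule base in the spirit of the controller described above is given below: it maps a crisp cognitive-load score to a suggested task difficulty via triangular memberships, max-min inference and centroid defuzzification. The universes, membership shapes and rules are illustrative assumptions, not the authors' tuned design.

```python
# Minimal sketch of a Mamdani-style fuzzy mapping from measured cognitive load
# to a suggested task difficulty; universes, memberships and rules are
# illustrative assumptions, not the authors' tuned controller.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def suggested_difficulty(cognitive_load):
    """cognitive_load in [0, 100] -> crisp difficulty recommendation in [0, 100]."""
    # Fuzzify the input: degrees of membership in low / medium / high load.
    low = tri(cognitive_load, -1, 0, 50)
    med = tri(cognitive_load, 25, 50, 75)
    high = tri(cognitive_load, 50, 100, 101)

    # Output memberships over the difficulty universe.
    d = np.linspace(0, 100, 201)
    easier = tri(d, -1, 0, 50)
    keep = tri(d, 25, 50, 75)
    harder = tri(d, 50, 100, 101)

    # Rules: low load -> harder task, medium load -> keep level, high load -> easier task.
    aggregated = np.maximum.reduce([np.minimum(low, harder),
                                    np.minimum(med, keep),
                                    np.minimum(high, easier)])

    # Centroid defuzzification gives the crisp recommendation.
    return float((d * aggregated).sum() / (aggregated.sum() + 1e-9))

print(suggested_difficulty(80))  # high load -> lower suggested difficulty
```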

Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation

Procedia PDF Downloads 54
197 Post-Traumatic Stress Disorder and Problem Alcohol Use in Women: Systematic Analysis

Authors: Neringa Bagdonaite

Abstract:

Study Aims: The current study aimed to systematically analyse research done in the area of female post-traumatic stress disorder (PTSD) and alcohol abuse and to critically review the results on the basis of theoretical models, answering the following questions: (I) What is the reciprocal relationship between PTSD and problem alcohol use among females? (II) What are the moderating/mediating factors of this relationship? Methods: The computer bibliographic databases Ebsco, Scopus, Springer, Web of Science, Medline, and Science Direct were used to search for scientific articles. The sample for the systematic analysis consisted of peer-reviewed, English-language articles addressing mixed-gender and female PTSD and alcohol abuse published from January 2012 to May 2017. Results: In total, 1011 articles related to the searched keywords were found in the scientific databases, of which 29 met the selection criteria and were analysed. The results of longitudinal studies indicate that (I) exposure to various traumas, especially interpersonal trauma in childhood, is linked with an increased risk of revictimization in later life and problem alcohol use; (II) revictimization in adolescence or adulthood, rather than victimization in childhood, has a greater impact on the onset and progression of problematic alcohol use in adulthood. Cross-sectional and epidemiological studies also support significant relationships between female PTSD and problem alcohol use. Regarding the negative impact of alcohol use on PTSD symptoms, the results are still controversial; some evidence suggests that alcohol does not exacerbate symptoms of PTSD over time, while other studies argue that problem alcohol use worsens PTSD symptoms and is linked to the chronicity of both disorders, especially among women with previous alcohol use problems. Analysis of moderating/mediating factors of PTSD and problem alcohol use revealed that stronger motives/expectancies, specifically distress-coping motives for alcohol use, significantly moderate the relationship between PTSD and problematic alcohol use, whereas negative affective states mediate the relationship between symptoms of PTSD and alcohol use, but only among women who have already developed alcohol use problems. Conclusions: Interpersonal trauma experience, especially in childhood, and its recurrence later in life are linked with PTSD symptoms and problem drinking among women. Moreover, problem alcohol use can be both a cause and a consequence of trauma and PTSD and, if used for coping, increases the likelihood of chronicity of both disorders. In order to treat both disorders effectively, it is worthwhile to take into account this dynamic interplay of women's PTSD symptoms and problem drinking.

Keywords: female, trauma, post-traumatic stress disorder, problem alcohol use, systemic analysis

Procedia PDF Downloads 156
196 Morphological Process of Villi Detachment Assessed by Computer-Assisted 3D Reconstruction of Intestinal Crypt from Serial Ultrathin Sections of Rat Duodenum Mucosa

Authors: Lise P. Labéjof, Ivna Mororó, Raquel G. Bastos, Maria Isabel G. Severo, Arno H. de Oliveira

Abstract:

This work presents an alternative mode of intestinal mucosa renewal that may allow a better understanding of the total loss of villi after irradiation. A morphological method of 3D reconstruction was tested using micrographs of serial sections of rat duodenum. Hundreds of sections of each duodenum specimen were placed on glass slides and examined under a light microscope. Those containing the detachment, approximately a dozen, were chosen for observation under a transmission electron microscope (TEM). Each of these sections was glued on a block of Epon resin and recut into about a hundred 60 nm-thick sections. Ribbons of these ultrathin sections were distributed on a series of copper grids in the same order of appearance as during the microtomy process. They were then stained with solutions of uranyl and lead salts and observed under a TEM. The sections were photographed, and the electron micrographs showing signs of cell detachment were transferred into two software packages, ImageJ to align the cellular structures and Reconstruct to perform the 3D reconstruction. Epithelial cells were detected that exhibited all signs of programmed cell death and were localized at the villus-crypt junction. Their nuclei were irregular in shape, with chromatin condensed in clumps. Their cytoplasm was darker than that of neighboring cells, containing many swollen mitochondria. In some places of the sections, enlarged intercellular spaces could be seen due to the presence of shrunken cells, which displayed a plasma membrane with an irregular shape, as if the cell interdigitations were moving away from each other. The three-dimensional reconstruction of the crypts allowed us to observe a gradual loss of intercellular contacts of crypt cells in the longitudinal plane of the duodenal mucosa. In the transverse direction, there was a gradual increase of the intercellular space, as if these cells were moving away from one another. This observation allows us to assume that the gradual remoteness of the cells at the villus-crypt junction is the beginning of the mucosa detachment. Thus, the shrinking of cells due to apoptosis is the way that they detach from the mucosa, and progressively the villi do as well. These results are in agreement with our initial hypothesis and thus demonstrate that the villi become detached from the mucosa at the villus-crypt junction by the programmed cell death process. This type of loss of entire villi helps explain the rapid denudation of the intestinal mucosa in case of irradiation.

Keywords: 3dr, transmission electron microscopy, ionizing radiations, rat small intestine, apoptosis

Procedia PDF Downloads 357
195 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models used to explain technology adoption, these models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill this gap in the literature, which exists as a result of using generic technology adoption models. Following a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption that incorporates the influencing audit risk factors auditors should examine when considering CAATTs adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the considerable potential of CAATTs to improve the quality of audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, they may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 44
194 Bringing German History to Tourists

Authors: Gudrun Görlitz, Christian Schölzel, Alexander Vollmar

Abstract:

The project “Sites of Jewish Life in Berlin 1933-1945: Between Persecution and Self-assertion” was realized with funding from the European Regional Development Fund. A smartphone app and an associated web site enable tourists and other participants in this educational offer to learn, in a serious way, more about the life of Jews in the German capital during the Nazi era. Texts, photos, video and audio recordings communicate the historical content. Interactive maps (both current and historical) make it possible to use predefined or self-combined routes. One of the manifold challenges was to create a broad-ranging guide in which all detailed information is well linked together. This enables heterogeneous groups of potential users to find a wide range of specific information corresponding to their particular wishes and interests. The multitude of possible ways to navigate through the diversified information will (hopefully) lead users to return to the app and web site a second or third time and with continued interest. For this purpose, 90 locations, many of them situated in Berlin’s city centre, have been chosen. For all of them, text, picture and/or audio/video material provides extensive information. Suggested combinations of several of these “site stories” lead to detailed excursion routes. Events and biographies are also presented. A few of the implemented biographies are especially enriched with source material concerning the (forced) migration of these persons during the Nazi era. All this was done in a close and fruitful interdisciplinary cooperation between computer scientists and historians. The proposed conference paper aims to show the challenges of shaping complex source material for practical use by different user groups in a technically and didactically appropriate way. Based on historical research in archives, museums, libraries and digital resources, the quantitative dimension of the project can be outlined. The paper focuses on the following historiographical and technical aspects: shaping the text material didactically for use in new media, especially a smartphone app running on different platforms; geo-referencing of the sites on historical and current map material; overlay of old and new maps to present and find the sites (see the sketch below); using Augmented Reality technologies to re-visualize destroyed buildings; visualization of black-and-white picture material; presentation of historical footage and the resulting storage-space problems; and financial and legal aspects of obtaining the rights to present archival material.
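As a rough illustration of the geo-referencing and map-overlay work mentioned above, the sketch below estimates an affine transform from a scanned historical map's pixel coordinates to geographic coordinates using a few ground control points; all coordinates are made-up placeholders, not data from the project.

```python
# Minimal sketch of geo-referencing a scanned historical map via an affine
# transform estimated from ground control points (GCPs); all coordinates are
# made-up placeholders, not data from the project.
import numpy as np

# GCPs: pixel positions on the scanned map and their known longitude/latitude.
pixels = np.array([[120, 340], [980, 310], [510, 1100], [60, 900]], dtype=float)
lonlat = np.array([[13.388, 52.517], [13.402, 52.518], [13.395, 52.508], [13.387, 52.510]])

# Solve lon/lat = [px, py, 1] @ A in the least-squares sense (A has shape (3, 2)).
design = np.hstack([pixels, np.ones((len(pixels), 1))])
A, *_ = np.linalg.lstsq(design, lonlat, rcond=None)

def pixel_to_lonlat(px, py):
    """Map a pixel on the historical map to geographic coordinates."""
    return np.array([px, py, 1.0]) @ A

print(pixel_to_lonlat(400, 600))  # approximate position of a site marked on the old map
```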

Keywords: smartphone app, history, tourists, German

Procedia PDF Downloads 344
193 The Efficacy of Lithium vs. Valproate in Bipolar Patients and Their Sexual Side Effects: A Meta-Analysis of 4159 Patients

Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Obada Ahmad Al Jayyousi, Ihdaa Mahmoud Bani Khalaf, Eman E. Alshial

Abstract:

Background: Bipolar disorder, formerly known as manic depression, is a mental health condition that leads to extreme mood swings, including emotional lows (depression) and highs (mania or hypomania). This systematic review and meta-analysis aimed to assess the safety and efficacy of lithium versus valproate among bipolar patients. Methods: A computer literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until June 2022. Studies comparing lithium versus valproate among bipolar patients were selected for the analysis, and all relevant outcomes were pooled in the meta-analysis using Review Manager software. Results: Eleven randomized clinical trials with a total of 4159 patients were included in this meta-analysis. Our meta-analysis showed that lithium was superior to valproate in terms of the Young Mania Rating Scale (YMRS) (MD = 0.00 with 95% CI, (-0.55 – 0.55; I2 = 0%), P = 1.00). The results of the Hamilton Depression Rating Scale (HDRS) showed that the overall effect favored the valproate-treated group (MD = 1.41 with 95% CI, (-0.15 – 2.67; I2 = 0%), P = 0.03). Concerning the Montgomery-Asberg Depression Rating Scale (MADRS), the results showed that lithium was superior to valproate (MD = 0.03 with 95% CI, (-0.80 to 0.87; I2 = 40%), P = 0.94). In terms of sexual side effects, valproate was superior to lithium (RR 1.19 with 95% CI, (0.74 to 1.91; I2 = 0%), P = 0.47). The lithium-treated group was superior to the valproate-treated group in terms of the Abnormal Involuntary Movement Scale (AIMS) (MD = -0.03 with 95% CI (-0.38 to 0.32; I2 = 0%), P = 0.87). Lithium was also more favorable in terms of the Simpson-Angus scale (MD = -0.40 with 95% CI, (-0.86 to 0.06; I2 = 0%), P = 0.09). The results of the Barnes Akathisia scale showed that the overall effect favored valproate (MD = 0.05 with 95% CI, (-0.12 to 0.22; I2 = 0%), P = 0.57). Conclusion: On the efficacy scales, the lithium-treated group surpassed the valproate-treated group in terms of the YMRS, the AIMS and the Simpson-Angus scale, whereas valproate surpassed it on the Barnes Akathisia scale. Furthermore, among the depression scales, the HDRS showed an overall effect favoring the valproate-treated group, while lithium surpassed valproate on the MADRS. Valproate surpassed lithium in terms of sexual side effects.
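For readers unfamiliar with the pooling behind figures such as "MD = 1.41 (95% CI -0.15 to 2.67), I2 = 0%", the following minimal sketch shows inverse-variance (fixed-effect) pooling of mean differences with a Cochran's Q / I² heterogeneity estimate; the per-study numbers are illustrative placeholders, not the trial data analysed here.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of mean differences
# with an I^2 heterogeneity estimate; the per-study numbers are illustrative.
import numpy as np

md = np.array([0.3, -0.2, 0.1])        # per-study mean differences (hypothetical)
se = np.array([0.4, 0.5, 0.35])        # their standard errors (hypothetical)

w = 1.0 / se**2                         # inverse-variance weights
pooled_md = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se

q = np.sum(w * (md - pooled_md) ** 2)   # Cochran's Q
df = len(md) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"MD = {pooled_md:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), I^2 = {i2:.0f}%")
```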

Keywords: bipolar, mania, bipolar-depression, sexual dysfunction, sexual side effects, treatment

Procedia PDF Downloads 125
192 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet the compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to access to financing, for instance, physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy for financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, results of the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa does not only develop the financial sector but also unconditionally contributes to the growth of the continent’s economies.
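Since this entry again reports an effect of digitalization "conditioned on" financial sector development together with a net effect, the sketch below shows how such a conditional marginal effect is typically read off an interaction specification; the coefficients and values are placeholders, not the study's estimates.

```python
# Minimal sketch of the "conditioned" (interaction) effect and net effect of
# digitalization: with growth = b0 + b1*DIG + b2*FSD + b3*DIG*FSD + e, the
# marginal effect of digitalization is b1 + b3*FSD. Numbers are placeholders.
import numpy as np

b1, b3 = 0.15, -0.02                        # hypothetical estimates for DIG and DIG x FSD
fsd = np.array([20.0, 35.0, 50.0, 80.0])    # hypothetical FSD values (% of GDP)

marginal_effect = b1 + b3 * fsd             # conditional effect at each FSD level
net_effect = b1 + b3 * fsd.mean()           # net effect evaluated at the sample mean

print(marginal_effect)                      # can turn negative at high FSD levels
print(f"net effect at mean FSD: {net_effect:.3f}")
```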

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 104
191 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In the vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would cause damage to internal components or crews. The penetration equations are derived from penetration experiments, which require a long time and great effort; however, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, the targets must be modeled and the input parameters selected carefully in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to optimize overall performance. As a result of the analysis, the following was found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than that with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace the penetration experiments, but such simulations can be used in the early, modelling and simulation stage of the AGCV design process.
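The RMS-error criterion used to compare simulated and experimental penetration depths can be written in a few lines; the sketch below evaluates it for two hypothetical mesh sizes, with placeholder depth values rather than the paper's data.

```python
# Minimal sketch of the RMS-error criterion comparing simulated and experimental
# penetration depths across candidate input-parameter settings; values are placeholders.
import numpy as np

experimental_depth = np.array([12.1, 18.4, 25.0])   # mm, hypothetical reference data

# Simulated depths for the same shots under two candidate mesh sizes (mm).
simulated = {
    0.9: np.array([10.8, 16.9, 23.1]),
    0.5: np.array([11.9, 18.0, 24.6]),
}

def rms_error(sim, exp):
    return float(np.sqrt(np.mean((sim - exp) ** 2)))

for mesh_size, depths in simulated.items():
    print(f"mesh {mesh_size} mm: RMS error = {rms_error(depths, experimental_depth):.2f} mm")
```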

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 362
190 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now a lot of interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 QED for Z > Z_cr,1 ≈ 170 a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should be expected also in both 2+1 D (planar graphene-based hetero-structures) and 1+1 D (the one-dimensional ‘hydrogen ion’). This report is devoted to the study of such essentially non-perturbative vacuum effects for the supercritical Dirac-Coulomb systems in 1+1 D and 2+1 D, with the main attention drawn to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to the diving of levels into the lower continuum, show up in the behavior of the vacuum energy even more clearly, demonstrating explicitly their possible role in the supercritical region. Both in 1+1 D and 2+1 D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique based on a special combination of analytical methods, computer algebra tools and numerical calculations is applied. It is shown that, for a wide range of the external source parameters (the charge Z and size R), in the supercritical region the renormalized vacuum energy can significantly deviate from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps by (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of the next vacuum shell, due to total charge conservation. Finally, we also note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal modifications, be carried over to the three-dimensional case, where the vacuum energy is expected to behave like ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
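For reference, the supercritical behaviour reported above can be summarised compactly as follows (a restatement of the abstract's stated asymptotics, not a derivation):

```latex
% Each discrete level diving into the lower continuum shifts the vacuum energy by -2 mc^2,
\[
  \Delta E_{\mathrm{vac}} \simeq -2\, m c^{2} \quad \text{per level crossing into the negative continuum},
\]
% while in the considered range of Z and R the renormalized vacuum energy behaves as
\[
  E_{\mathrm{vac}} \sim -\frac{Z^{2}}{R}\ (1{+}1\,\mathrm{D}), \qquad
  E_{\mathrm{vac}} \sim -\frac{Z^{3}}{R}\ (2{+}1\,\mathrm{D}), \qquad
  E_{\mathrm{vac}} \sim -\frac{Z^{4}}{R}\ (3{+}1\,\mathrm{D},\ \text{expected}).
\]
```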

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 184
189 AI Applications in Accounting: Transforming Finance with Technology

Authors: Alireza Karimi

Abstract:

Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field. Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records. Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them. Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning. Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity. Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies. Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.
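As a concrete illustration of the anomaly-detection use case described above, the following minimal sketch flags unusual journal entries with an Isolation Forest; the file, feature set and contamination rate are illustrative assumptions, not a production audit tool.

```python
# Minimal sketch of anomaly detection over journal entries; the file, features
# and threshold are illustrative assumptions, not a production audit tool.
import pandas as pd
from sklearn.ensemble import IsolationForest

ledger = pd.read_csv("journal_entries.csv")              # hypothetical ledger export
features = ledger[["amount", "hour_posted", "days_to_period_end"]]

model = IsolationForest(contamination=0.01, random_state=0)
ledger["anomaly"] = model.fit_predict(features)           # -1 marks suspected outliers

suspicious = ledger[ledger["anomaly"] == -1]
print(suspicious[["entry_id", "amount"]].head())          # entries for auditor follow-up
```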

Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance

Procedia PDF Downloads 36