Search results for: computer science education
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11144

614 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought as well as an extension of a previous study in which in-class testing, using a paired t-test, showed that gamification significantly improved the students' understanding of subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required low preparation time for the faculty member and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly assigned to groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with and get to know other students. 2. Students enjoyed the gamification process, given there was a sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; therefore, it was an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that there should be an attempt to use gamification in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that rather than scheduling a gamification process and its post-quiz in the last week, a review for the final exam may have been more useful. An additional study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. As well, the additional study hopes to determine the incremental level of time invested in classroom gamification beyond which no material incremental benefit accrues to the student, and whether any correlation exists between respondents preferring not to have it at the end of the semester and respondents not believing the gamification process added to their curricular knowledge.
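
For readers unfamiliar with the pre/post comparison referenced above, the sketch below shows how such a paired t-test could be run. The score arrays are hypothetical illustrations only and are not data from the study.

```python
# Minimal sketch of a paired (pre/post) t-test, assuming hypothetical quiz scores.
from scipy import stats

# Pre- and post-game quiz scores for the same students (illustrative values only)
pre_scores = [62, 55, 71, 48, 66, 59, 73, 50]
post_scores = [70, 61, 75, 57, 72, 64, 80, 58]

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant change
# in understanding of the subject material after the gamification session.
```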

Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary

Procedia PDF Downloads 211
613 Efforts to Revitalize Piipaash Language: An Explorative Study to Develop Culturally Appropriate and Contextually Relevant Teaching Materials for Preschoolers

Authors: Shahzadi Laibah Burq, Gina Scarpete Walters

Abstract:

Piipaash, a member of Yuman, one large family of North American languages, is reported as one of the seriously endangered languages in the Salt River Pima-Maricopa Indian Community of Arizona. In a collaborative venture between Arizona State University (ASU) and the Salt River Pima-Maricopa Indian Community (SRPMIC), efforts have been made to revitalize and preserve the Piipaash language and its cultural heritage. The present study is one example of several language documentation and revitalization initiatives that the ASU Humanities Lab has taken. This study was approved to receive a “Beyond the lab” grant after the researchers successfully created a Teaching Guide for an Early Childhood Piipaash storybook during their time working in the Humanities Lab. The current research is an extension of the previous project and focuses on creating customized teaching materials and tools for the teachers and parents of the students of the Early Enrichment Program at SRPMIC. However, to determine and maximize the usefulness of the teaching materials with regard to their reliability, validity, and practicality in the given context, this research aims to conduct an environmental analysis and a need analysis. The environmental analysis seeks to evaluate the Early Enrichment Program situation, and the need analysis to investigate the specific and situated requirements of the teachers to assist students in building target language skills. The study employs a qualitative methods approach for the collection of the data. Multiple data collection strategies are used concurrently to gather information from the participants. The research tools include semi-structured interviews with the program administrators and teachers, classroom observations, and teacher shadowing. The researchers use triangulation of the data to maintain validity in the process of data interpretation. The preliminary results of the study show a need for culturally appropriate materials that can further the students' learning of the target language as well as the culture, e.g., clay pots and basket-making materials. It was found that the course and teachers focus on developing the listening and speaking skills of the students. Moreover, to assist the young learners beyond the classroom, the teachers could make use of send-home teaching materials to reinforce the learning (e.g., coloring books including illustrations of culturally relevant animals, food, and places). Audio language resources are also identified as helpful additional materials for the parents to support the children's learning.

Keywords: indigenous education, materials development, need analysis, piipaash language revitalization

Procedia PDF Downloads 90
612 Consumer Knowledge and Behavior in the Aspect of Food Waste

Authors: Katarzyna Neffe-Skocinska, Marzena Tomaszewska, Beata Bilska, Dorota Zielinska, Monika Trzaskowska, Anna Lepecka, Danuta Kolozyn-Krajewska

Abstract:

The aim of the study was to assess Polish consumer behavior towards food waste, including knowledge of the information on food labels. The survey was carried out using the CAPI (computer-assisted personal interview) method, which involves interviewing the respondent using mobile devices. The research group was a representative sample for Poland with respect to demographic variables: gender, age, and place of residence. A total of 1,115 respondents participated in the study (51.1% were women and 48.9% were men). The questionnaire included questions on five thematic aspects: 1. General knowledge and sources of information on the phenomenon of food waste; 2. Consumption of food after the date of minimum durability; 3. The meaning of the phrase 'best before ...'; 4. Indication of the difference between the meaning of the phrases 'best before ...' and 'use by'; 5. Identification of products marked with the phrase 'best before ...'. It was found that every second surveyed Pole had encountered the topic of food waste (54.8%). Among the respondents, the most popular sources of information related to the research topic were television (89.4%), radio (26%) and the Internet (24%). Over a third of respondents declared that they consume food after the date of minimum durability. Only every tenth (9.8%) respondent does not pay attention to the expiry date and type of consumed products (durable and perishable products). Some 39.8% of respondents correctly answered the question: How do you understand the phrase 'best before ...'? In the opinion of 42.8% of respondents, the statements 'best before ...' and 'use by' mean the same thing, while 36% of them think differently. In addition, more than one-fifth of respondents could not respond to the questions. When asked to indicate products marked 'best before ...', more than 40% of the respondents chose both perishable products, e.g., yoghurts, and durable ones, e.g., groats. A slightly lower percentage of indications was recorded for flour (35.1%), sausage (32.8%), canned corn (31.8%), and eggs (25.0%). Based on the assessment of the behavior of Polish consumers towards the phenomenon of food waste, it can be concluded that respondents have elementary knowledge of the study subject. Noteworthy is the good conduct of most respondents in terms of compliance with shelf life and dates of minimum durability of food products. The publication was financed on the basis of an agreement with the National Center for Research and Development No. Gospostrateg 1/385753/1/NCBR/2018 for the implementation and financing of the project under the strategic research and development program 'Social and economic development of Poland in the conditions of globalizing markets' – GOSPOSTRATEG – acronym PROM.

Keywords: food waste, shelf life, dates of durability, consumer knowledge and behavior

Procedia PDF Downloads 174
611 Developing an Online Application for Mental Skills Training and Development

Authors: Arjun Goutham, Chaitanya Sridhar, Sunita Maheshwari, Robin Uthappa, Prasanna Gopinath

Abstract:

In alignment with the growth of the sporting industry, the number of people playing and competing in sports is growing exponentially across the globe. However, the number of sports psychology experts is not growing at a similar rate, especially in the Asian and, more so, the Indian context. Hence, access to actionable mental training solutions specific to individual athletes is limited. Also, the time constraint an athlete faces due to their intense training schedule makes one-on-one sessions difficult. One of the means to bridge that gap is technology. Technology makes individualization possible: it allows easy access to specific, qualitative content and information and provides a medium to place individualized assessments, analyses and solutions directly into an athlete's hands. This makes mental training awareness, education, and real-time actionable solutions possible for athletes in spite of the limited number of sports psychology experts available in their region. Furthermore, many athletes are hesitant to seek support due to the stigma of appearing weak. Such individuals would prefer a more discreet way. Athletes who have strong mental performance tend to produce better results. The mobile application helps athletes assess and develop their mental strategies, directed towards improving performance on an ongoing basis. When athletes understand their strengths and limitations in their mental application, they can focus specifically on applying the strategies that work and improving in zones of limitation. With reports, coaches get to understand the unique inner workings of an athlete and can use the data and analysis to coach them with better precision and with coaching styles and communication that suit them better. Systematically capturing data and supporting athletes (with individual-specific solutions) or teams with assessment, planning, instructional content, actionable tools and strategies, and reviews of mental performance and the achievement of objectives and goals facilitates consistent mental skills development at all stages of an athlete's sporting career. The mobile application will help athletes recognize and align with their stable attributes, such as their personalities, learning and execution modalities, and the challenges and requirements of their sport, and help develop dynamic attributes like states, beliefs, motivation levels and focus with practice and training. It will provide measurable analysis on a regular basis and help them stay aligned with their objectives and goals. The solutions are based on researched areas of influence on sporting performance, individually or in teams.

Keywords: athletes, mental training, mobile application, performance, sports

Procedia PDF Downloads 268
610 Contact Zones and Fashion Hubs: From Circular Economy to Circular Neighbourhoods

Authors: Tiziana Ferrero-Regis, Marissa Lindquist

Abstract:

The Circular Economy (CE) is increasingly seen as the reorganisation of production and consumption, and cities are acknowledged as the sources of many ecological and social problems; at the same time, they can be re-imagined towards an ecologically and socially resilient future. The concept of the CE has received pointed critiques for its techno-deterministic orientation and its focus on science and transformation by policy. At the heart of our local re-imagining of the CE into circularity through contact zones is the acknowledgment of collective, spontaneous and shared imaginations of alternative and sustainable futures through the creation of networks of community initiatives that are transformative, creating opportunities that simultaneously make cities rich and enrich humans. This paper presents a mapping project of the fashion and textile ecosystem in Brisbane, Queensland, Australia. Brisbane is currently the most aspirational city in Australia, as its population growth rate is the highest in the country. Yet, Brisbane is considered the least “fashion city” in the country. In contrast, the project revealed a greatly enhanced picture of distinct fashion and textile clusters across greater Brisbane and the adjacency of key services that may act to consolidate CE community contact zones. Clusters to the north of Brisbane and several locales to the south are zones of a greater mix of public/social amenities, walkable zones and local transport networks with educational precincts, community hubs, concentrations of small enterprises, designers, artisans and waste recovery centers, which will help to establish knowledge of the key infrastructure networks that will support enmeshing these zones together. The paper presents two case studies of independent designers who work on new and re-designed clothing by recovering pre-consumer textiles and who operate from within creative precincts. The first case is designer Nelson Molloy, who recently returned to the inner-city suburb of West End with their Chasing Zero Design project. The area was known in the 1980s and 1990s for its alternative lifestyle with creative independent production, thrift clothing shops, alternative fashion and a socialist agenda. After 30 years of progressive gentrification of the suburb, which has dislocated many of the artists, designers and artisans, West End is seeing the return and amplification of clusters of artisans, artists, designers and architects. The other case study is Practice Studio, located in a new zone of creative growth, Bowen Hills, north of the CBD. Practice Studio combines retail with a workroom and offers repair and remaking services, becoming a point of reference for young and emerging Australian designers and artists. The paper demonstrates the spatial politics of the CE and the way in which new cultural capital is produced thanks to cultural specificities and resources. It argues for the recognition of contact zones that are created by local actors, communities and knowledge networks, whose grass-roots agency is fundamental for the co-production of the CE's systems of local governance.

Keywords: contact zones, circular cities, fashion and textiles, circular neighbourhoods, australia

Procedia PDF Downloads 100
609 Balancing Biodiversity and Agriculture: A Broad-Scale Analysis of the Land Sparing/Land Sharing Trade-Off for South African Birds

Authors: Chevonne Reynolds, Res Altwegg, Andrew Balmford, Claire N. Spottiswoode

Abstract:

Modern agriculture has revolutionised the planet’s capacity to support humans, yet has simultaneously had a greater negative impact on biodiversity than any other human activity. Balancing the demand for food with the conservation of biodiversity is one of the most pressing issues of our time. Biodiversity-friendly farming (‘land sharing’), or alternatively, separation of conservation and production activities (‘land sparing’), are proposed as two strategies for mediating the trade-off between agriculture and biodiversity. However, there is much debate regarding the efficacy of each strategy, as this trade-off has typically been addressed by short-term studies at fine spatial scales. These studies ignore processes that are relevant to biodiversity at larger scales, such as meta-population dynamics and landscape connectivity. Therefore, to better understand species' responses to agricultural land-use and provide evidence to underpin the planning of better production landscapes, we need to determine the merits of each strategy at larger scales. In South Africa, a remarkable citizen science project – the South African Bird Atlas Project 2 (SABAP2) – collates an extensive dataset describing the occurrence of birds at a 5-min by 5-min grid cell resolution. We use these data, along with fine-resolution data on agricultural land-use, to determine which strategy optimises the agriculture-biodiversity trade-off in a southern African context, and at a spatial scale never considered before. To empirically test this trade-off, we model bird species population density, derived for each 5-min grid cell by Royle-Nichols single-species occupancy modelling, against both the amount and configuration of different types of agricultural production in the same 5-min grid cell. In using both production amount and configuration, we can show not only how species population densities react to changes in yield, but also describe the production landscape patterns most conducive to conservation. Furthermore, the extent of both the SABAP2 and land-cover datasets allows us to test this trade-off across multiple regions to determine if bird populations respond in a consistent way and whether results can be extrapolated to other landscapes. We tested the land sparing/sharing trade-off for 281 bird species across three different biomes in South Africa. Overall, a higher proportion of species are classified as losers, and would benefit from land sparing. However, this proportion of loser-sparers is not consistent and varies across biomes and the different types of agricultural production. This is most likely because of differences in the intensity of agricultural land-use and the interactions between the differing types of natural vegetation and agriculture. Interestingly, we observe a higher number of species that benefit from agriculture than anticipated, suggesting that agriculture is a legitimate resource for certain bird species. Our results support those seen at smaller scales and across vastly different agricultural systems: land sparing benefits the most species. However, our analysis suggests that land sparing needs to be implemented at spatial scales much larger than previously considered. Species persistence in agricultural landscapes will require the conservation of large tracts of land, which is an important consideration in developing countries, which are undergoing rapid agricultural development.
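
A minimal sketch of how a Royle-Nichols-style abundance model can be fitted to detection/non-detection data is given below. The visit histories and the simple two-parameter model (constant abundance and detection) are illustrative assumptions; the study itself models abundance against agricultural amount and configuration covariates for each 5-min grid cell.

```python
# Sketch of a Royle-Nichols abundance model fitted by maximum likelihood,
# assuming hypothetical detection histories (not the SABAP2 data).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# y[i, j] = 1 if the species was detected at grid cell i on checklist j (illustrative data)
y = np.array([[1, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 0, 0]])

N_MAX = 30  # truncation point for the latent abundance sum

def neg_log_lik(params):
    lam = np.exp(params[0])                 # expected abundance per cell
    r = 1.0 / (1.0 + np.exp(-params[1]))    # per-individual detection probability
    n = np.arange(N_MAX + 1)
    prior = poisson.pmf(n, lam)             # P(N = n)
    p_det = 1.0 - (1.0 - r) ** n            # P(detection on a visit | N = n)
    ll = 0.0
    for det in y:                           # marginalize over the latent abundance N
        k, J = det.sum(), det.size
        site_lik = np.sum(prior * p_det**k * (1.0 - p_det)**(J - k))
        ll += np.log(site_lik + 1e-300)
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated lambda:", np.exp(fit.x[0]), "detection r:", 1 / (1 + np.exp(-fit.x[1])))
```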

Keywords: agriculture, birds, land sharing, land sparing

Procedia PDF Downloads 209
608 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the exposure experienced by companies when their employees encounter globalization and decentralization pressures, and the resulting need to monitor VT performance. Hence, organizations are regularly limited by misalignment between the behavioral capabilities of the team's dispersed competences and knowledge capabilities, and by how trust issues interplay with and influence these VT dimensions and the effects of such exchanges. In fact, the future success of business depends on the extent to which VTs manage their dispersed expertise, skills and knowledge efficiently to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility and knowledge coordination. A TMS can be understood as a composition of both a structural component residing in individual knowledge and a set of communication processes among individuals. The individual knowledge is shared while being retrieved and applied, and the learning is coordinated. A TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. A TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. A TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity. In fact, VTs consist of dispersed expertise, skills and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, the team composition may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We wish to use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that a TMS provides a new way to enhance and balance VT creativity. This study also provides researchers insight into the use of a TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 174
607 Low- and High-Temperature Methods of CNTs Synthesis for Medicine

Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza

Abstract:

One of the most promising areas for carbon nanotube (CNT) application is medicine. One of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug. It is also possible to use electromagnetic waves to destroy cancer cells via carbon nanotubes (CNTs). In our research, we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio-frequency electromagnetic field. Properly functionalized Fe-CNTs attach to the cancer cells. Heat generated in the nanoparticles connected to the nanotubes warms up the nanotubes and then the target tissue. When the temperature in the tumor tissue exceeds 316 K, necrosis of the cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods in which an arc discharge is applied. Low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc discharge system, the plasma reactor works with a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods. Carbon vapors move from the anode through a short arc column and form CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. A disadvantage of the high-temperature methods is low purity, short length, random sizes and multi-directional distribution. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux, containing carbon and ferromagnetic elements, then flows into a quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls. It is also possible to use substrates to improve carbon nanotube growth. The HPCVD system involves both chemical decomposition of carbon-containing gases and vaporization of a solid or liquid catalyst source. In this system, a tube furnace is applied. A mixture of working and carbon-containing gases flows through the quartz tube placed inside the furnace. Ferrocene vapors can be used as the catalyst. Fe-CNTs may then be collected either from the quartz tube walls or from the substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated to cancer cell thermal ablation additionally need to be treated with acids to amplify defects on the CNT surface, which facilitates biofunctionalization. The application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer in the next decade. Acknowledgment: The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.

Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation

Procedia PDF Downloads 449
606 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors

Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev

Abstract:

Introduction. The problem of viral hepatitis B (HBV) occupies a vital place in the global health system. The existing risk of HBV transmission through blood transfusions is associated with the transfusion of blood taken from infected individuals during the “serological window” period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other markers of hepatitis B, the presence of anti-HBcore suggests that a person may be actively infected or has had hepatitis B in the past and has immunity. Aim. To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. The samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (ABBOTT). A special questionnaire was developed for the blood donors’ socio-demographic characteristics. Statistical analysis was conducted with the R software (version 4.1.1, USA, 2021). Results. 5709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively. The average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, the analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions showed that a one-year increase in age (adjOR=1.06, 95%CI:1.05-1.07) corresponded to a 6% increase in the odds of having an anti-HBcore positive result. Those of Russian ethnicity (adjOR=0.65, 95%CI:0.46-0.93) and representatives of other nationality groups (adjOR=0.56, 95%CI:0.37-0.85) had lower odds of having anti-HBcore when compared to Kazakhs when controlling for other covariates. Among singles, the odds of having a positive anti-HBcore were lower by 29% (adjOR=0.71, 95%CI:0.57-0.89) compared to married participants when adjusting for other variables. Conclusions. Kazakhstan is one of the countries with medium endemicity of HBV prevalence (2%-7%). The results of the study demonstrated the possibility of forming a profile of risk factors (age, nationality, marital status). Taking these data into account, it is recommended to pay greater attention to donor questionnaires by adding leading questions and to improve preventive measures against HBV. Funding. This research was supported by a grant from Abbott Laboratories.
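
The sketch below illustrates the general form of a multivariable logistic regression that yields adjusted odds ratios, as described above. The variable names and simulated data are hypothetical, and the survey weighting and mixed-effects structure used in the study are omitted for brevity.

```python
# Minimal sketch of an adjusted-odds-ratio analysis for anti-HBcore positivity,
# assuming hypothetical donor records rather than the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
donors = pd.DataFrame({
    "anti_hbcore": rng.integers(0, 2, n),                   # 1 = positive (simulated)
    "age": rng.integers(18, 67, n),
    "ethnicity": rng.choice(["kazakh", "russian", "other"], n),
    "marital": rng.choice(["married", "single"], n),
})

model = smf.logit("anti_hbcore ~ age + C(ethnicity) + C(marital)", data=donors).fit(disp=0)

# Exponentiated coefficients give adjusted odds ratios with their confidence intervals
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)  # e.g. an OR of 1.06 for age would mean ~6% higher odds per extra year
```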

Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis

Procedia PDF Downloads 108
605 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries

Authors: Ismatilla Mardanov

Abstract:

There are three types of countries, the first of which is willing to attract foreign direct investment (FDI) in enormous amounts and does whatever it takes to make this happen. Therefore, FDI pours into such countries. In the second cluster of countries, even if the country is suffering tremendously from a shortage of investments, the governments are hesitant to attract investments because they are in the hands of local oligarchs/cartels. Therefore, FDI inflows are moderate to low in such countries. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries. However, it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investments; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources. The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), the endogeneity or exogeneity of which are tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high due to governments hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, the local economic factors are unfavorable for domestic investment even if the institutions are acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
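
As an illustration of the instrumental variable approach described above, the following sketch runs a manual two-stage least squares (2SLS) estimation with political rights and civil liberties as instruments for an institutions index. All variable names and the simulated data are assumptions made for the example, not the study's data.

```python
# Minimal sketch of a two-stage least squares estimation, assuming hypothetical country data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "political_rights": rng.normal(size=n),   # instrument
    "civil_liberties": rng.normal(size=n),    # instrument
    "labor_cost": rng.normal(size=n),         # exogenous economic factor
})
df["institutions"] = 0.6 * df.political_rights + 0.4 * df.civil_liberties + rng.normal(size=n)
df["fdi_inflows"] = 0.8 * df.institutions - 0.3 * df.labor_cost + rng.normal(size=n)

# Stage 1: regress the endogenous regressor (institutions) on the instruments and exogenous variables
stage1 = sm.OLS(df.institutions,
                sm.add_constant(df[["political_rights", "civil_liberties", "labor_cost"]])).fit()
df["institutions_hat"] = stage1.fittedvalues

# Stage 2: regress FDI inflows on the fitted (instrumented) institutions index
stage2 = sm.OLS(df.fdi_inflows,
                sm.add_constant(df[["institutions_hat", "labor_cost"]])).fit()
print(stage2.params)  # note: second-stage standard errors still need the usual 2SLS correction
```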

Keywords: foreign direct investment, economy, institutions, instrumental variable estimation

Procedia PDF Downloads 159
604 A Case Study on Quantitatively and Qualitatively Increasing Student Output by Using Available Word Processing Applications to Teach Reluctant Elementary School-Age Writers

Authors: Vivienne Cameron

Abstract:

Background: Between 2010 and 2017, teachers in a suburban public school district struggled to get students to consistently produce adequate writing samples as measured by the Pennsylvania state writing rubric for measuring focus, content, organization, style, and conventions. A common thread in all of the data was the need to develop stamina in the student writers. Method: All of the teachers used the traditional writing process model (prewrite, draft, revise, edit, final copy) during writing instruction. One teacher taught the writing process using word processing and incentivizing with publication instead of the traditional pencil/paper/grading method. Students did not have instruction in typing/keyboarding. The teacher submitted resulting student work to real-life contests, magazines, and publishers. Results: Students in the test group increased both the quantity and quality of their writing over a seven month period as measured by the Pennsylvania state writing rubric. Reluctant writers, as well as students with autism spectrum disorder, benefited from this approach. This outcome was repeated consistently over a five-year period. Interpretation: Removing the burden of pencil and paper allowed students to participate in the writing process more fully. Writing with pencil and paper is physically tiring. Students are discouraged when they submit a draft and are instructed to use the Add, Remove, Move, Substitute (ARMS) method to revise their papers. Each successive version becomes shorter. Allowing students to type their papers frees them to quickly and easily make changes. The result is longer writing pieces in shorter time frames, allowing the teacher to spend more time working on individual needs. With this additional time, the teacher can concentrate on teaching focus, content, organization, style, conventions, and audience. S/he also has a larger body of works from which to work on whole group instruction such as developing effective leads. The teacher submitted the resulting student work to contests, magazines, and publishers. Although time-consuming, the submission process was an invaluable lesson for teaching about audience and tone. All students in the test sample had work accepted for publication. Students became highly motivated to succeed when their work was accepted for publication. This motivation applied to special needs students, regular education students, and gifted students.

Keywords: elementary-age students, reluctant writers, teaching strategies, writing process

Procedia PDF Downloads 175
603 The Achievements and Challenges of Physics Teachers When Implementing Problem-Based Learning: An Exploratory Study Applied to Rural High Schools

Authors: Osman Ali, Jeanne Kriek

Abstract:

Introduction: The current instructional approach, entrenched in memorization, does not support conceptual understanding in science. Instructional approaches that encourage research, investigation, and experimentation, which depict how scientists work, should be encouraged. One such teaching strategy is problem-based learning (PBL). PBL has many advantages: enhanced self-directed learning and improved problem-solving and critical thinking skills. However, despite its many advantages, PBL has challenges. Research confirms that it is time-consuming and that ill-structured questions are difficult to formulate. Professional development interventions are needed for in-service educators to adopt the PBL strategy. The purposively selected educators had to implement PBL in their classrooms after the intervention to develop their practice and then reflect on the implementation. They had to indicate their achievements and challenges. This study differs from previous studies in that the rural educators implemented PBL in their classrooms and reflected on their experiences, beliefs, and attitudes regarding PBL. Theoretical Framework: The study was grounded in Vygotskian sociocultural theory. According to Vygotsky, a child's cognitive development is sustained by the interaction between the child and more able peers in the child's immediate environment. The theory suggests that social interactions in small groups create an opportunity for learners to form concepts and skills better than when working individually. PBL emphasizes learning in small groups. Research Methodology: An exploratory case study was employed, as the study did not necessarily seek specific conclusive evidence. Non-probability purposive sampling was adopted to choose eight schools from 89 rural public schools. In each school, two educators teaching physical sciences in grades 10 and 11 were approached (N = 16). The research instruments were questionnaires, interviews, and a lesson observation protocol. Two open-ended questionnaires were developed, administered before and after the intervention, and analyzed thematically. Three themes were identified. The semi-structured interview responses were transcribed and coded into three themes. Subsequently, the Reformed Teaching Observation Protocol (RTOP) was adopted for lesson observation and was analyzed using five constructs. Results: Evidence from analyzing the questionnaires before and after the intervention shows that participants knew better what was required to develop an ill-structured problem during the implementation. Furthermore, indications from the interviews are that participants had positive views about the PBL strategy. They stated that they only act as facilitators and that learners' problem-solving and critical thinking skills are enhanced. They suggested a change in the curriculum to adopt the PBL strategy. However, most participants may not continue to apply the PBL strategy, stating that it is time-consuming and makes it difficult to complete the Annual Teaching Plan (ATP). They complained about materials and equipment and learners' readiness to work. Evidence from the RTOP shows that after the intervention, participants learned to encourage exploration and to use learners' questions and comments to determine the direction and focus of classroom discussions.

Keywords: problem-solving, self-directed, critical thinking, intervention

Procedia PDF Downloads 119
602 The Impact of Corporate Social Responsibility Perception on Organizational Commitment: The Case of Cabin Crew in a Civil Aviation Company

Authors: Şeyda Kaya

Abstract:

The aim of this study is to examine the relationship between corporate social responsibility perception and organizational commitment among Turkish cabin crew. At the same time, the social responsibility perception and organizational commitment scores of the participants were compared according to their gender, age, education level, title, and work experience. In the globalizing world, businesses have developed innovative marketing methods in order to survive and strengthen their place in the market. Nowadays, consumers are connected to a brand with an emotional bond rather than being just consumers. Corporate social responsibility projects, on the one hand, provide social benefit and, on the other hand, increase the brand awareness of businesses by providing credibility in the eyes of consumers. The rapid increase in competition requires businesses to use their human resources, their most important resource for sustaining their existence, in the most effective and efficient way. For this reason, the concept of 'organizational commitment' has become an important research topic for business and academics. Although there are studies in the literature on the effect of the perception of corporate social responsibility on organizational commitment in the banking, finance, and tourism sectors, to the best of our knowledge, there are no studies conducted specifically for the Turkish aviation sector. A personal information form, a CSR scale, an Importance of CSR scale, and an organizational commitment scale were used as data collection tools in the research. The CSR scale created by Türker (2006) was used to find out how employees felt about CSR. The Importance of CSR scale, a subscale of the Perceived Role of Ethics and Social Responsibility (PRESOR) instrument that Etheredge (1999) converted into a two-factor framework, was used to assess the significance of social responsibility for employees. For organizational commitment, the Organizational Commitment Questionnaire (OCQ) created by Mowday, Steers, and Porter (1979), which uses 15 items to evaluate global commitment to the organization, was employed. As a result of the study, there is a significant positive relationship between the participants' CSR scale sub-dimensions (CSR to employees, CSR to customers, CSR to society, CSR to government, CSR to the natural environment, CSR to the next generation, and CSR to governmental organizations), the Importance of CSR, and the organizational commitment scores. In other words, as the participants' corporate social responsibility scores increase, their organizational commitment increases. To summarize the findings of our study, the scores obtained from the CSR scale and the scores obtained from the organizational commitment scale were found to have a positive and significant relationship. That is, if the participants value the corporate social responsibility projects of the institution they work for and believe that time and effort are devoted to them, both the importance they attach to corporate social responsibility projects and their organizational commitment to the institution increase. Similarly, the scores obtained from the Importance of CSR scale and the scores obtained from the organizational commitment scale also have a positive and significant relationship. As the importance given to corporate social responsibility projects by the participants increases, their organizational commitment to the institution they work for also increases.

Keywords: corporate social responsibility, organizational commitment, Turkish cabin crew, aviation

Procedia PDF Downloads 109
601 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. Resultantly, academic institutions face significant pressure from governing bodies to provide evidence on the return for research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme, for enhancing research output, was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period of 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 76
600 Determinants of Unmet Need for Contraception among Currently Married Women in Rural and Urban Communities of Osun State, South-West Nigeria

Authors: Abiola O. Temitayo-Oboh, Olugbenga L. Abodunrin, Wasiu O. Adebimpe, Micheal C. Asuzu

Abstract:

Introduction: Many women who are sexually active would prefer to avoid becoming pregnant but are not using any method of contraception. These women are considered to have an unmet need for contraception. In an ideal situation, all women who want to space or limit their births and are exposed to the risk of conception would use some kind of contraception; in practice, however, some women fail to use contraception, which puts them at risk of having mistimed or unwanted births, induced abortion, or maternal death. This study, therefore, aimed to assess the determinants of unmet need for contraception among currently married women in rural and urban communities of Osun State, South-West Nigeria. Methods: This was an analytical cross-sectional comparative study carried out among currently married women. Three hundred and twenty respondents each were selected for the rural and urban groups from four Local Government Areas using a multi-stage sampling technique. Data were collected using a pre-tested semi-structured interviewer-administered questionnaire and a focus group discussion (FGD) guide; data analysis was done with the Statistical Package for Social Sciences (SPSS) version 17.0 and the detailed content analysis method, respectively. Statistical analysis of the difference between proportions was done by the use of the Chi-square test, and the t-test was used to compare the means of the continuous variables. The study also utilized descriptive, bivariate and multivariate analytical techniques to examine the effect of some variables on unmet need. The level of statistical significance was set at a p-value < 0.05 for all values. Results: Two hundred and ninety-six (92.5%) of the rural and 306 (95.6%) of the urban study population had heard of contraception, and 365 (57.0%) of the total respondents had good knowledge [162 (50.6%) for rural respondents and 203 (63.4%) for urban respondents]. This difference was statistically significant (p < 0.001). Five hundred and twenty-one (81.4%) respondents had a positive attitude towards contraception [243 (75.9%) in the rural and 278 (86.9%) in the urban area], and the difference was also statistically significant (p < 0.001). Only 47 (14.7%) and 59 (18.4%) of rural and urban women were current contraceptive users, respectively. The total unmet need for contraception among rural women was 138 (43.1%), of which 82 (25.6%) was for spacing and 56 (17.5%) for limiting, while the total unmet need for contraception among urban women was 145 (45.3%), of which 96 (30.0%) was for spacing and 49 (15.3%) for limiting. The number of living children, knowledge of contraceptive methods, discussion with health workers about family planning, couples' discussion of family planning and the availability of family planning services were found to be predictors of women's unmet need for contraception (p < 0.05). Conclusion: It is, therefore, recommended that reproductive health education be intensified to bridge the knowledge gap, improve attitudes and modify practices regarding the use of contraception in Nigeria. This will help to enhance the utilization of family planning services among Nigerian women.
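
As an illustration of the chi-square comparison of proportions reported above, the sketch below tests the rural-urban difference in positive attitude towards contraception using the frequencies given in the abstract (243 of 320 rural vs. 278 of 320 urban respondents). The call is standard SciPy, not the authors' own analysis code.

```python
# Minimal sketch of a chi-square test for a difference between two proportions,
# using the attitude frequencies quoted in the abstract.
from scipy.stats import chi2_contingency

# Rows: rural, urban; columns: positive attitude, not positive (n = 320 per group)
table = [[243, 320 - 243],
         [278, 320 - 278]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# The printed p-value can be compared with the significance level reported in the study.
```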

Keywords: contraception, married women, Nigeria, rural, urban, unmet need

Procedia PDF Downloads 198
599 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data inherited from the field of polymer cutting. The literature on polymer cutting is very limited considering the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, it requires a flexible production system, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were based previously. The artificial neural network model was trained on several thousands of real industrial data records. The mean estimation accuracy of the setup periods' lengths was improved by 30%, and at the same time, the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the experiences gained during the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur, and the related data are collected.
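
A minimal sketch of the kind of setup-period regression model described above is shown below, using a small feed-forward neural network. The feature names, example records, and network size are assumptions made for illustration; the industrial model was trained on several thousands of real setup records drawn from MES and CAD data.

```python
# Sketch of a setup-period estimator with one-hot encoded categorical inputs
# and a small multilayer perceptron, assuming hypothetical training records.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

records = pd.DataFrame({
    "material_type": ["PTFE", "POM", "PTFE", "PA6"],
    "machine_type": ["A", "A", "B", "B"],
    "item_count": [50, 200, 20, 120],
    "tightest_tolerance_mm": [0.05, 0.10, 0.02, 0.08],
    "tool_count": [3, 2, 5, 4],
    "setup_minutes": [42, 30, 75, 55],   # target: observed setup period length
})

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["material_type", "machine_type"]),
    ("num", StandardScaler(), ["item_count", "tightest_tolerance_mm", "tool_count"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("ann", MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)),
])

X = records.drop(columns="setup_minutes")
model.fit(X, records["setup_minutes"])
print(model.predict(X))  # estimated setup lengths in minutes for the training records
```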

Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation

Procedia PDF Downloads 245
598 The Effect of Different Strength Training Methods on Muscle Strength, Body Composition and Factors Affecting Endurance Performance

Authors: Shaher A. I. Shalfawi, Fredrik Hviding, Bjornar Kjellstadli

Abstract:

The main purpose of this study was to measure the effect of two different strength training methods on muscle strength, muscle mass, fat mass and endurance factors. Fourteen physical education students agreed to participate in this study. The participants were randomly divided into three groups: a traditional training group (TTG), a cluster training group (CTG) and a control group (CG). TTG consisted of 4 participants with age (mean ± SD) 22.3 ± 1.5 years, body mass 79.2 ± 15.4 kg and height 178.3 ± 11.9 cm. CTG consisted of 5 participants with age 22.2 ± 3.5 years, body mass 81.0 ± 24.0 kg and height 180.2 ± 12.3 cm. CG consisted of 5 participants with age 22 ± 2.8 years, body mass 77 ± 19 kg and height 174 ± 6.7 cm. The participants underwent a hypertrophy strength training program twice a week for 8 weeks, consisting of 4 sets of 10 reps at 70% of one-repetition maximum (1RM) using the barbell squat and barbell bench press. The CTG performed 2 x 5 reps using 10 s recovery in between repetitions and 50 s recovery between sets, while the TTG performed 4 sets of 10 reps with 90 s recovery in between sets. Pre- and post-tests were administered to assess body composition (weight, muscle mass, and fat mass), 1RM (bench press and barbell squat) and a laboratory endurance test (Bruce protocol). The instruments used to collect the data were a Tanita BC-601 scale (Tanita, Illinois, USA), a Woodway treadmill (Woodway, Wisconsin, USA) and a Vyntus CPX breath-to-breath system (Jaeger, Hoechberg, Germany). Analyses were conducted on all measured variables, including time to peak VO2, peak VO2, heart rate (HR) at peak VO2, respiratory exchange ratio (RER) at peak VO2, and number of breaths per minute. The results indicate an increase in 1RM performance after 8 weeks of training. The change in 1RM squat was 30 ± 3.8 kg for the TTG, 28.6 ± 8.3 kg for the CTG and 10.3 ± 13.8 kg for the CG. Similarly, the change in 1RM bench press was 9.8 ± 2.8 kg for the TTG, 7.4 ± 3.4 kg for the CTG and 4.4 ± 3.4 kg for the CG. The within-group analysis of the oxygen consumption measured during the incremental exercise indicated that the TTG had only a statistically significant increase in their RER, from 1.16 ± 0.04 to 1.23 ± 0.05 (P < 0.05). The CTG had a statistically significant improvement in their HR at peak VO2, from 186 ± 24 to 191 ± 12 beats per minute (P < 0.05), and in their RER at peak VO2, from 1.11 ± 0.06 to 1.18 ± 0.05 (P < 0.05). Finally, the CG had only a statistically significant increase in their RER at peak VO2, from 1.11 ± 0.07 to 1.21 ± 0.05 (P < 0.05). The between-group analysis showed no statistically significant differences between the groups in any of the measured variables from the oxygen consumption test during the incremental exercise, including changes in muscle mass, fat mass, and weight (kg). The results indicate a similar effect of hypertrophy strength training on untrained subjects irrespective of the training method used. Because there were no notable changes in body-composition measures, the results suggest that the improvements in performance observed in all groups are most probably due to neuromuscular adaptation to training.

Keywords: hypertrophy strength training, cluster set, Bruce protocol, peak VO2

Procedia PDF Downloads 250
597 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program. While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat facing the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 77
596 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending

Authors: Enrico Dorigatti

Abstract:

The pile of hazardous e-waste produced, especially by developed and wealthy countries, grows relentlessly. It is composed of EEDs (electric and electronic devices) that are often thrown away although still functioning well, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over recent years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. While a practical solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this respect. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances). Currently, the relevant literature and debate lack interest and information about the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one hand, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, about e-waste-related issues. The methodology articulates in three main steps. Firstly, field research will be undertaken with the purpose of understanding where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience about environmental concerns through sound art and music derived from bent instruments. Data, such as audience feedback, will be collected at this stage. The last step will consist of workshops to spread ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.

Keywords: circuit bending, ecology, sound art, sustainability

Procedia PDF Downloads 171
595 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument

Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki

Abstract:

To the best of the authors' knowledge, we here demonstrate experimentally, for the first time, a quantified correlation between the real-time measured optical features of ambient aerosol and off-line measured toxicity data. Using these correlations, we present a novel methodology for real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. Moreover, according to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, while in terms of health it is also one of the most harmful atmospheric constituents. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is predominantly based on posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; therefore, comprehensive analysis of the existing data set is limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research. During the last decades, many experimental studies have verified that there is a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependence of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. We demonstrate the complete microphysical characterisation of wintertime urban ambient air, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a scanning mobility particle sizer and optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient air, we also demonstrate the results of eco-, cyto- and genotoxicity measurements of ambient aerosol based on posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurement results.
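
For readers unfamiliar with the quantity, the Absorption Angström Exponent referred to above can be computed from absorption coefficients measured at two wavelengths. The sketch below is a minimal illustration with hypothetical values; it is not part of the 4λ-PAS data processing software.

```python
import numpy as np

def absorption_angstrom_exponent(b_abs_1, b_abs_2, wl_1, wl_2):
    """AAE from absorption coefficients b_abs measured at wavelengths wl (same units)."""
    # b_abs(wl) is proportional to wl**(-AAE), so AAE follows from the ratio of two measurements
    return -np.log(b_abs_1 / b_abs_2) / np.log(wl_1 / wl_2)

# Hypothetical absorption coefficients (Mm^-1) at 266 nm and 1064 nm
aae = absorption_angstrom_exponent(b_abs_1=85.0, b_abs_2=12.0, wl_1=266.0, wl_2=1064.0)
print(f"AAE = {aae:.2f}")
```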

Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test

Procedia PDF Downloads 302
594 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection) in the Diagnosis of Helicobacter pylori Infections

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance rates of H. pylori strains isolated from gastric biopsy cultures against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who presented to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods together: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, hematoxylin and eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori ClaR-ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. Sensitivity and specificity rates were calculated against the two gold standards previously mentioned. Results: A total of 266 patients aged 16-83 years, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were culture positive, and 157 were positive by histology and RUT together. 179 patients were positive with both GenoType® HelicoDR and Seeplex® H. pylori ClaR-ACE Detection. The sensitivity and specificity rates of the studied methods were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H + RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori ClaR-ACE Detection (r: 0.644, p: 0.000; r: 0.757, p: 0.000; and r: 0.757, p: 0.000, respectively). Of the 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and in 6 cases with Seeplex® H. pylori ClaR-ACE Detection. Conclusion: It was concluded that GenoType® HelicoDR and Seeplex® H. pylori ClaR-ACE Detection were the most sensitive of all the diagnostic methods investigated (C, H, and RUT).
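
The sensitivity and specificity values above follow the standard definitions against a chosen gold standard. The snippet below is a generic sketch using hypothetical 2x2 counts, not the case numbers of this study.

```python
# Generic sensitivity/specificity calculation against a gold standard.
# The counts passed below are hypothetical placeholders, not the study data.
def sensitivity_specificity(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # proportion of gold-standard positives detected
    specificity = tn / (tn + fp)   # proportion of gold-standard negatives correctly excluded
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=140, fp=35, fn=4, tn=87)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```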

Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori ClaR-ACE Detection, antimicrobial resistance

Procedia PDF Downloads 168
593 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)

Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro

Abstract:

In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports provide vital information to investors and regulators, and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, who generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, late filing of reports, and non-compliance with regulatory acts. There is a lack of a common platform to manage the sensitive information, internally and externally, in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) continues to develop in the face of challenges and has now reached the point where much of its promised benefit is available. This paper looks at the emergence of this revolutionary twenty-first century language of digital reporting. It posits that the world is on the brink of an Internet revolution that will redefine the 'business reporting' paradigm. The new Internet technology, XBRL, is already being deployed and used across the world. It finds that XBRL is an eXtensible Markup Language (XML) based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent; it will work on any current or recent operating system and computer, and it can interface with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all must act locally and think globally now via the adoption of XBRL, which is changing the face of worldwide business reporting.
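
To make the tagging concept concrete, the sketch below parses a simplified, XBRL-style XML fragment and extracts a single tagged fact. The namespace, element names, and attributes are invented placeholders and do not correspond to any real XBRL taxonomy.

```python
# Minimal illustration of pulling one tagged fact out of a simplified,
# XBRL-style XML fragment. Tag names and attributes are invented placeholders.
import xml.etree.ElementTree as ET

fragment = """
<report xmlns:ex="http://example.org/hypothetical-taxonomy">
  <ex:Revenue contextRef="FY2023" unitRef="USD" decimals="0">1250000</ex:Revenue>
  <ex:NetIncome contextRef="FY2023" unitRef="USD" decimals="0">230000</ex:NetIncome>
</report>
"""

ns = {"ex": "http://example.org/hypothetical-taxonomy"}
root = ET.fromstring(fragment)

# Extract only the desired fact rather than handling the whole document
revenue = root.find("ex:Revenue", ns)
print(revenue.tag, revenue.attrib["contextRef"], revenue.text)
```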

Keywords: XBRL, financial reporting, internet, internal and external reports

Procedia PDF Downloads 286
592 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic requirement of product quality from the consumer's perspective, and a product that fails this requirement ends up not being used by the customer. Identifying usability issues by analyzing quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies and research on analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. The possibility of analyzing qualitative text data has emerged with the rapid development of data analysis fields such as natural language processing, which enables computers to understand human language, and machine learning, which provides predictive models and clustering tools. Therefore, this research aims to study the applicability of text processing algorithms in the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, in which the datasets consist of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes training the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of the comment vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: centroid comments of one cluster emphasized button positions, while centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons were designed as a single button, participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
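
The abstract does not name the specific vectorization or clustering algorithm; the sketch below assumes a TF-IDF vector space with k-means clustering as one plausible pipeline for grouping usability comments, and the comments themselves are invented placeholders rather than the LG headset survey data.

```python
# A plausible comment-clustering pipeline (TF-IDF + k-means); the comments
# below are invented placeholders, not the actual headset survey data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "volume button is placed too far back",
    "music control button position is hard to reach",
    "confusing which button changes the volume",
    "single button mixes volume and track functions",
]

vectors = TfidfVectorizer().fit_transform(comments)          # comments -> vector space
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Inspect which comments fall into which cluster
for comment, label in zip(comments, kmeans.labels_):
    print(label, comment)
```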

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 285
591 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that does not depend on inter-rater reliability, human observation, or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
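
A minimal sketch of the evaluation strategy described above (random forest with leave-one-out cross-validation) is given below; the feature matrix and labels are randomly generated filler standing in for the eye-gaze, EEG, body pose, and interaction features of the trial.

```python
# Random forest evaluated with leave-one-out cross-validation.
# X and y are random filler, not the multimodal multisensor features from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 9))        # e.g. 59 sessions x 9 extracted features
y = rng.integers(0, 2, size=59)     # engaged / not engaged labels (placeholder)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())   # one held-out sample per fold
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```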

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 94
590 The Measurement of City Brand Effectiveness as Methodological and Strategic Challenge: Insights from Individual Interviews with International Experts

Authors: A. Augustyn, M. Florek, M. Herezniak

Abstract:

Since public authorities are constantly pressured by public opinion to showcase the tangible and measurable results of their efforts, the evaluation of place brand-related activities becomes a necessity. Given the political and social character of the place branding process, the legitimization of branding efforts requires the compliance of the objectives set out in the city brand strategy with the actual needs, expectations, and aspirations of various internal stakeholders. To deliver on the diverse promises, city authorities and brand managers need to translate them into measurable indicators against which the effectiveness of the brand strategy will be evaluated. In concert with these observations are findings from the branding and marketing literature, with a widespread consensus that places should adopt a more systematic and holistic approach in order to ensure the performance of their brands. However, the measurement of the effectiveness of place branding remains insufficiently explored in theory, even though it is considered a significant step in the process of place brand management. Therefore, the aim of the research presented in the current paper was to collect insights on the nature of effectiveness measurement of city brand strategies and to juxtapose these findings with the theoretical assumptions formed on the basis of a state-of-the-art literature review. To this end, 15 international academic experts (out of 18 initially selected), affiliated with institutions in ten countries on five continents, were individually interviewed. A standardized set of 19 open-ended questions was used for all the interviewees, who had been selected based on their expertise and reputation in the fields of place branding/marketing. Findings were categorized into four modules: (i) conceptualizations of city brand effectiveness, (ii) methodological issues of city brand effectiveness measurement, (iii) the nature of the measurement process, (iv) articulation of key performance indicators (KPIs). Within each module, the interviewees offered diverse insights into the subject based on their academic expertise and professional activity as consultants. They proposed a twofold understanding of effectiveness: a narrow one, in which it is conceived as the aptitude to achieve specific goals, and a broad one, in which city brand effectiveness is seen as an improvement in the social and economic reality of a place, which in turn poses diverse challenges for measurement concepts and processes. Moreover, the respondents offered a variety of insights into methodological issues, particularly the need for customization and flexibility of measurement systems, for an interdisciplinary approach to measurement, and the implications resulting therefrom. Considerable emphasis was put on an inward approach to measurement, namely the necessity to monitor residents' evaluation of brand-related activities instead of benchmarking cities against a competitive set. Other findings encompass the issues of developing appropriate KPIs for the city brand, managing the measurement process, and the inclusion of diverse stakeholders to produce a sound measurement system. Furthermore, the interviewees enumerated the most frequently made measurement mistakes, mainly resulting from a misunderstanding of the nature of city brands. This research was financed by the National Science Centre, Poland, research project no. 2015/19/B/HS4/00380, 'Towards the categorization of place brand strategy effectiveness indicators – findings from strategic documents of Polish district cities – theoretical and empirical approach'.

Keywords: city branding, effectiveness, experts’ insights, measurement

Procedia PDF Downloads 145
589 Epidemiological Patterns of Pediatric Fever of Unknown Origin

Authors: Arup Dutta, Badrul Alam, Sayed M. Wazed, Taslima Newaz, Srobonti Dutta

Abstract:

Background: In today's world of modern science and technology, many diseases can be quickly identified or ruled out, but fever of unknown origin (FUO) in children still presents diagnostic difficulties in clinical settings. Any fever that reaches 38 °C and lasts for more than seven days without a known cause is now classified as a fever of unknown origin (FUO). Despite tremendous progress in the medical sector, FUO persists as a major health issue and a major contributor to morbidity and mortality, particularly in children, and its spectrum is sometimes unpredictable. The etiology is influenced by geographic location, age, socioeconomic level, frequency of antibiotic resistance, and genetic vulnerability. Since there are currently no established diagnostic algorithms, doctors are forced to evaluate each patient individually with extreme caution. A persistent fever poses difficulties for both the patient and the doctor. This prospective observational study was carried out in a Bangladeshi tertiary care hospital from June 2018 to May 2019 with the goal of identifying the epidemiological patterns of fever of unknown origin in pediatric patients. Methods: It was a hospital-based prospective observational study carried out on 106 children (aged 2 months to 12 years) with prolonged fever of >38.0 °C lasting for more than 7 days without a clear source. Children with additional chronic diseases or known immunodeficiency disorders were excluded. Clinical practices that helped determine the definitive etiology were assessed. Initial testing included a complete blood count, a routine urine examination, a peripheral blood film (PBF), a chest X-ray, CRP measurement, blood cultures, serology, and additional pertinent investigations. The analysis focused mostly on the etiological results. All study data were analyzed with the standard software package SPSS 21. Findings: A total of 106 patients identified as having FUO were assessed; over half (57.5%) were female, and the largest group (40.6%) fell within the 1 to 3-year age range. The study categorized the etiological outcomes into five groups: infections, malignancies, connective tissue conditions, miscellaneous, and undiagnosed. Infections were found to be the main cause, in 44.3% of cases, followed by undiagnosed cases at 31.1%, malignancies at 10.4%, miscellaneous causes at 8.5%, and connective tissue disorders at 4.7%. Hepatosplenomegaly was seen in patients with enteric fever, malaria, acute lymphoid leukemia, lymphoma, and hepatic abscesses, either by itself or in combination with other conditions. About 53% of undiagnosed patients also had hepatosplenomegaly. Conclusion: Infections are the primary cause of pyrexia of unknown origin (PUO) in children, with undiagnosed cases being the second most common category. An incremental approach is beneficial in the diagnostic process: non-invasive examinations are used to diagnose infections and connective tissue disorders, while invasive investigations are used to diagnose cancer and other ailments. According to this study, the prevalence of undiagnosed disease is still remarkable, so thorough history-taking and physical examination are necessary in order to provide a precise diagnosis.

Keywords: children, diagnostic challenges, fever of unknown origin, pediatric fever, undiagnosed diseases

Procedia PDF Downloads 27
588 The Late Bronze Age Archeometallurgy of Copper in Mountainous Colchis (Lechkhumi), Georgia

Authors: Nino Sulava, Brian Gilmour, Nana Rezesidze, Tamar Beridze, Rusudan Chagelishvili

Abstract:

Studies of ancient metallurgy are a subject of worldwide current interest. Georgia, with its famous early metalworking traditions, is one of the central parts of the Caucasus region. The aim of the present study is to introduce the results of archaeometallurgical investigations being undertaken in the mountain region of Colchis, Lechkhumi (the Tsageri Municipality of western Georgia), and to establish their place in the existing archaeological context. Lechkhumi (one of the historic provinces of Georgia, known from Georgian, Greek, Byzantine and Armenian written sources as Lechkhumi/Skvimnia/Takveri) is part of the Colchian mountain area. It is one of the important but little known centres of prehistoric metallurgy in the Caucasian region and of the Colchian Bronze Age culture. Reconnaissance archaeological expeditions (2011-2015) revealed significant prehistoric metallurgical sites in Lechkhumi. Sites located in the vicinity of Dogurashi village (Tsageri Municipality) have become the target area for archaeological excavations. During archaeological excavations conducted in 2016-2018, two archaeometallurgical sites, Dogurashi I and Dogurashi II, were investigated. As a result of an interdisciplinary (archaeological, geological and geophysical) survey, it has been established that at both prehistoric Dogurashi mountain sites it was copper that was being smelted, and the ore sources are likely to be of local origin. Radiocarbon dating results confirm they were operating between about the 13th and 9th centuries BC. More recently, another similar site has been identified in this area (Dogurashi III), and this is about to undergo detailed investigation. Other prehistoric metallurgical sites are being located and investigated in the Lechkhumi region, as are chance archaeological finds (often in hoards): copper ingots, metallurgical production debris, slag, fragments of crucibles, tuyeres (air delivery pipes), furnace wall fragments and other related waste debris. Also being investigated are the many copper, bronze and (some) iron artefacts that have been found over the years, including ingots, tools, jewelry, and decorative items. These show the important but little known or understood role of Lechkhumi in the Late Bronze Age culture of Colchis. It would seem that mining and metallurgical manufacture formed part of the local yearly agricultural lifecycle. Colchian ceramics have been found, along with evidence for artefact production: small stone mould fragments and encrusted material from the casting of a fylfot (swastika) form of Colchian bronze buckle, found in the vicinities of the early settlements of Tskheta and Dekhviri. Excavation and investigation of previously unknown archaeometallurgical sites in Lechkhumi will contribute significantly to knowledge and understanding of prehistoric Colchian metallurgy in western Georgia (Adjara, Guria, Samegrelo, and Svaneti) and will reveal the importance of this region in the study of ancient metallurgy in Georgia and the Caucasus. Acknowledgment: This work has been supported by the Shota Rustaveli National Science Foundation (grant FR # 217128).

Keywords: archaeometallurgy, Colchis, copper, Lechkhumi

Procedia PDF Downloads 136
587 Implementing Urban Rainwater Harvesting Systems: Between Policy and Practice

Authors: Natàlia Garcia Soler, Timothy Moss

Abstract:

Despite the multiple benefits of sustainable urban drainage, as demonstrated in numerous case studies across the world, urban rainwater harvesting techniques are generally restricted to isolated model projects. The leap from niche to mainstream has, in most cities, proved an elusive goal. Why policies promoting rainwater harvesting are limited in their widespread implementation has seldom been subjected to systematic analysis. Much of the literature on the policy, planning and institutional contexts of these techniques focus either on their potential benefits or on project design, but very rarely on a critical-constructive analysis of past experiences of implementation. Moreover, the vast majority of these contributions are restricted to single-case studies. There is a dearth of knowledge with respect to, firstly, policy implementation processes and, secondly, multi-case analysis. Insights from both, the authors argue, are essential to inform more effective rainwater harvesting in cities in the future. This paper presents preliminary findings from a research project on rainwater harvesting in cities from a social science perspective that is funded by the Swedish Research Foundation (Formas). This project – UrbanRain – is examining the challenges and opportunities of mainstreaming rainwater harvesting in three European cities. The paper addresses two research questions: firstly, what lessons can be learned on suitable policy incentives and planning instruments for rainwater harvesting from a meta-analysis of the relevant international literature and, secondly, how far these lessons are reflected in a study of past and ongoing rainwater harvesting projects in a European forerunner city. This two-tier approach frames the structure of the paper. We present, first, the results of the literature analysis on policy and planning issues of urban rainwater harvesting. Here, we analyze quantitatively and qualitatively the literature of the past 15 years on this topic in terms of thematic focus, issues addressed and key findings and draw conclusions on research gaps, highlighting the need for more studies on implementation factors, actor interests, institutional adaptation and multi-level governance. In a second step we focus in on the experiences of rainwater harvesting in Berlin and present the results of a mapping exercise on a wide variety of projects implemented there over the last 30 years. Here, we develop a typology to characterize the rainwater harvesting projects in terms of policy issues (what problems and goals are targeted), project design (which kind of solutions are envisaged), project implementation (how and when they were implemented), location (whether they are in new or existing urban developments) and actors (which stakeholders are involved and how), paying particular attention to the shifting institutional framework in Berlin. Mapping and categorizing these projects is based on a combination of document analysis and expert interviews. The paper concludes by synthesizing the findings, identifying how far the goals, governance structures and instruments applied in the Berlin projects studied reflect the findings emerging from the meta-analysis of the international literature on policy and planning issues of rainwater harvesting and what implications these findings have for mainstreaming such techniques in future practice.

Keywords: institutional framework, planning, policy, project implementation, urban rainwater management

Procedia PDF Downloads 287
586 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 254
585 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field is a key feature of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and the main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis, and with soil moisture measurements determined by soil electrical conductivity, showed similar patterns. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural field mapping of soil elements.
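
As an illustration of the final mapping step, point measurements of elemental content at GPS coordinates can be interpolated onto a regular grid before map production in a GIS. The sketch below uses open-source tools and invented coordinates and values; the study itself produced its maps with ArcGIS.

```python
# Interpolating point measurements (e.g. soil carbon at GPS coordinates) onto a
# regular grid; coordinates and values are invented, and the study used ArcGIS.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scan-track points: easting, northing (m) and carbon content (%)
points = np.array([[0, 0], [120, 30], [250, 90], [60, 200], [300, 260], [180, 150]])
carbon = np.array([1.2, 1.5, 1.1, 1.8, 1.4, 1.6])

# Regular grid covering the field
xi = np.linspace(0, 300, 61)
yi = np.linspace(0, 260, 53)
grid_x, grid_y = np.meshgrid(xi, yi)

# Linear interpolation of carbon content over the grid
carbon_grid = griddata(points, carbon, (grid_x, grid_y), method="linear")
print(carbon_grid.shape)  # (53, 61) raster ready for export to a GIS
```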

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 134