Search results for: easily identification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4382


422 Recognition of a Thinly Bedded Distal Turbidite: A Case Study from a Proterozoic Delta System, Chaossa Formation, Simla Group, Western Lesser Himalaya, India

Authors: Priyanka Mazumdar, Ananya Mukhopadhyay

Abstract:

Considerable progress has been achieved in turbidite research over the last few decades. However, the relationship of turbidites to delta systems still deserves further attention. This paper presents an example of a fine-grained turbidite from a pro-deltaic deposit of a Proterozoic mixed-energy delta system exposed along the Chaossa-Baliana river section of the Chaossa Formation of the Simla Basin. Lithostratigraphic analysis of the Chaossa Formation reveals three major facies associations (prodelta deposit-FA1, delta slope deposit-FA2 and delta front deposit-FA3) based on lithofacies types, petrography and sedimentary structures. Detailed process-based facies and paleoenvironmental analysis of the study area has led to the identification of more than 150 m thick coarsening-upward deltaic successions composed of fine-grained turbidites overlain by delta slope deposits. Erosional features are locally common at the base of turbidite beds and still more widespread at the top. The complete sequence has eight subdivisions, here termed T1 to T8. The basal subdivision (T1) comprises a massive graded unit with a sharp, scoured base, internal parallel-lamination and cross-lamination. The overlying interval shows textural and compositional grading through alternating silt and mud laminae (T2). T2 is overlain by T3, which is characterized by climbing-ripple and cross-lamination. Parallel laminae are the predominant facies attribute of T4, which caps the T3 unit. T5 has a loaded scour base and is characterized mainly by laminated silt. The topmost three divisions are graded mud (T6), ungraded mud (T7) and laminated mud (T8). The proposed sequence is analogous to the Bouma (1962) structural scheme for sandy turbidites. Repetition of partial sequences represents deposition from different stages of evolution of a large, muddy turbidity flow.
Detailed facies analysis of the study area reveals that the sediments of the turbidites developed during normal regression under a stable or marginally rising sea level. Thin-bedded turbidites were deposited predominantly by turbidity currents in the relatively shallower part of the Simla Basin. The fine-grained turbidites developed through resedimentation of delta-front sands and slumping of upper pro-delta muds.

Keywords: turbidites, prodelta, proterozoic, Simla Basin, Bouma sequence

Procedia PDF Downloads 269
421 An Ecological Grandeur: Environmental Ethics in Buddhist Perspective

Authors: Merina Islam

Abstract:

There are many environmental problems, and various countermeasures have been taken to address them. Philosophy is an important contributor to environmental studies, as it takes a deep interest in the meaning analysis of the concept of environment and other related concepts. The Buddhist framework, which is virtue-ethical, remains a better alternative to the traditional environmental outlook. While granting the unique role of man in moral deliberations, the Buddhist approach nevertheless maintains a holistic concept of ecological harmony. Buddhist environmental ethics is more concerned with the complete moral community, the total ecosystem, than with any particular species within the community. The moral reorientation proposed here resembles the concept of 'deep ecology'. Given the present-day prominence of virtue ethics, we need to explore the Buddhist virtue theory further so that a better framework for treating the natural world can be ensured. The environment has become one of the most widely discussed issues in recent times. Buddhist concepts such as Pratityasamutpadavada, Samvrit Satya, Paramartha Satya, Shunyata, Sanghatvada, Bodhisattva, Santanvada and others deal with interdependence in terms of both internal and external ecology. Internal ecology aims at mental well-being, whereas external ecology deals with physical well-being. The fundamental Buddhist resources for dealing with environmental problems are, first, the recognition that the environment has the same value as humans, which follows from the two Buddhist doctrines of the Non-duality of Life and its Environment and of Origination in Dependence; and, second, the conviction that environmental problems can be overcome through the practice of the way of the Bodhisattva, because environmental problems are an evil for both people and nature. Buddhism establishes that there is a relationship among all the constituents of the world. Nothing in the world is independent of any other thing; everything depends on others.
The realization that everything in the universe is mutually interdependent also shows that humanity cannot keep itself unaffected by ecology. This paper focuses on how the Buddhist identification of nature with the Dhamma can contribute toward transforming our understanding, attitudes, and actions regarding the care of the earth. Environmental ethics in Buddhism presents a logical and thorough examination of the metaphysical and ethical dimensions of early Buddhist literature. From the Buddhist viewpoint, humans are not in a category that is distinct and separate from other sentient beings, nor are they intrinsically superior. All sentient beings are considered to have the Buddha-nature, that is, the potential to become fully enlightened. Buddhists do not believe in treating non-human sentient beings as objects for human consumption. The significance of the Buddhist theory of interdependence can be understood from the fact that it shows that one's happiness or suffering originates from one's realization or non-realization, respectively, of the dependent nature of everything. It is obvious, even without emphasis, that in the context of today's deep ecological crisis there is a need to infuse the consciousness of interdependence.

Keywords: Buddhism, deep ecology, environmental problems, Pratityasamutpadavada

Procedia PDF Downloads 315
420 Implementation of Learning Disability Annual Review Clinics to Ensure Good Patient Care, Safety, and Equality in COVID-19: A Two-Pass Audit in General Practice

Authors: Liam Martin, Martha Watson

Abstract:

Patients with learning disabilities (LD) are at increased risk of physical and mental illness due to health inequality. To address this, NICE recommends that people with a learning disability should have an annual LD health check from the age of 14. This consultation should include a holistic review of the patient's physical, mental and social health needs with a view to creating an action plan to support the patient's care. The expected standard set by the Quality and Outcomes Framework (QOF) is that each general practice should review at least 75% of its LD patients annually. During COVID-19, there have been barriers to primary care, including health anxiety, the shift to online general practice and the increase in GP workloads. A surgery in North London wanted to assess whether it was falling short of the expected standard for LD patient annual reviews in order to optimize care post-COVID-19. A baseline audit was completed to assess how many LD patients had received their annual reviews over the period from 29th September 2020 to 29th September 2021. This information was accessed using the EMIS Web Health Care System (EMIS). Patients included were aged 14 and over, as per QOF standards. Doctors were not notified that this audit was taking place. Following the results of this audit, the creation of dedicated learning disability clinics was recommended; these were to be held on the ground floor, with protected time for LD reviews. A re-audit was performed via the same process 6 months later, in March 2022. At the time of the baseline audit, 71 patients aged 14 and over were on the LD register. 54% of these LD patients were found to have documentation of an annual LD review within the last 12 months. None of the LD patients between the ages of 14-18 years old had received their annual review. The results were discussed with the practice, and dedicated clinics were set up to review their LD patients.
A second pass of the audit was completed 6 months later. This showed an improvement, with 84% of the LD patients registered at the surgery now having a documented annual review within the last 12 months. 78% of the patients between the ages of 14-18 years old had now been reviewed. The baseline audit revealed that the practice was not meeting the expected standard for LD patients' annual health checks as outlined by QOF, with the most neglected patients being those between the ages of 14-18. Identification and awareness of this vulnerable cohort are important so that measures can be put into place to support their physical, mental and social wellbeing. Other practices could consider an audit of their annual LD health checks to make sure they are practicing within QOF standards, and if there is a shortfall, they could consider implementing actions similar to those used here: dedicated clinics for LD patient reviews.
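The audit arithmetic above is simple but worth making explicit. A minimal sketch of checking review coverage against the 75% QOF standard; the absolute reviewed counts below are back-calculated from the reported percentages (54% and 84% of 71 patients), so they are assumptions for illustration rather than figures from the audit itself:

```python
def qof_coverage(reviewed, registered, threshold=75.0):
    """Return the percentage of registered LD patients with a documented
    annual review, and whether the QOF standard (default 75%) is met."""
    coverage = 100.0 * reviewed / registered
    return coverage, coverage >= threshold

# Baseline: ~38 of 71 patients reviewed (consistent with the reported 54%)
baseline_pct, baseline_met = qof_coverage(38, 71)
# Second pass: ~60 of 71 patients reviewed (consistent with the reported 84%)
second_pct, second_met = qof_coverage(60, 71)
print(baseline_met, second_met)  # False True
```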

Keywords: COVID-19, learning disability, learning disability health review, quality and outcomes framework

Procedia PDF Downloads 85
419 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning

Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin

Abstract:

This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. 
The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
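The abstract does not disclose the internals of the HD-computing symmetry score, so the following is only a toy illustration of the general idea of quantifying facial asymmetry from 3D landmarks: mirror one side across the midsagittal plane and measure the residual distance to the other side. The landmark pairing and the choice of x = 0 as the mirror plane are assumptions for the sketch, not the paper's method:

```python
def asymmetry_score(landmark_pairs):
    """RMS distance between left-side landmarks and their right-side
    counterparts mirrored across the midsagittal plane (assumed x = 0).
    0.0 means perfect symmetry; larger values mean more asymmetry."""
    total = 0.0
    for left, right in landmark_pairs:
        mirrored = (-right[0], right[1], right[2])  # reflect the x coordinate
        total += sum((a - b) ** 2 for a, b in zip(left, mirrored))
    return (total / len(landmark_pairs)) ** 0.5

# A perfectly mirrored pair scores 0; an offset pair scores its distance
print(asymmetry_score([((1.0, 2.0, 3.0), (-1.0, 2.0, 3.0))]))  # 0.0
print(asymmetry_score([((1.0, 0.0, 0.0), (-2.0, 0.0, 0.0))]))  # 1.0
```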

Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing

Procedia PDF Downloads 27
418 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the Global Trigger Tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called 'triggers' (warning signals), indications of adverse events can be detected. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the Global Trigger Tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany, over a period of two months per department between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1,000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention classification. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates ranging from 25.5 to 72.1 per 1,000 patient-days and from 25.0 to 60.0 per 100 admissions across the three departments.
98.1% of the identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the Global Trigger Tool to the respective department. Conclusions: The Global Trigger Tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
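The two denominators used above are the standard GTT reporting rates. As a minimal illustration of the arithmetic (the event, patient-day and admission counts below are invented for the example, not taken from the study):

```python
def adverse_event_rates(events, patient_days, admissions):
    """Adverse events per 1,000 patient-days and per 100 admissions,
    the two standard Global Trigger Tool reporting denominators."""
    return 1000.0 * events / patient_days, 100.0 * events / admissions

# Hypothetical department: 12 events over 480 patient-days and 48 admissions
per_1000_days, per_100_adm = adverse_event_rates(12, 480, 48)
print(per_1000_days, per_100_adm)  # 25.0 25.0
```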

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 249
417 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the proposed work is to perform an analysis of the wave conditions in the Black Sea basin, focused especially on the spatial and temporal occurrences and the dynamics of the most extreme storms in the context of climate change. A numerical modelling system based on the spectral phase-averaged wave model SWAN has been implemented and validated against both in situ measurements and remotely sensed data across the entire sea. Moreover, a successive-correction method for the assimilation of satellite data, based on optimal interpolation, has been associated with the wave modelling system. Previous studies show that the data assimilation process considerably improves the reliability of the results provided by the modelling system. This especially concerns the cases most sensitive from the point of view of the accuracy of wave predictions, such as extreme storm situations. Following this numerical approach, it has to be highlighted that the results provided by the wave modelling system described above are in general in line with those provided by similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations with this wave modelling system and data assimilation have been performed for the 30-year period 1987-2016. Considering this database, the next step was to analyze the intensity and dynamics of the strongest storms encountered in this period. According to the data resulting from the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such strong storms may occur several times in one year, usually in wintertime or late autumn, and it can be noticed that their frequency has become higher in the last decade.
As regards the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were noticed only once in four or five years, have more recently been occurring almost every year in the Black Sea, and this seems to be a consequence of climate change. The analysis performed also included the dynamics of the monthly and annual significant wave height maxima, as well as the identification of the most probable spatial and temporal occurrences of extreme storm events. Finally, it can be concluded that the present work provides valuable information on the characteristics and dynamics of storm conditions in the Black Sea. This environment is currently subject to heavy navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may produce accidents with very serious consequences.
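SWAN reports significant wave height directly, but the quantity discussed throughout this abstract can be illustrated from first principles: Hs is conventionally estimated as four times the square root of the zeroth moment m0 of the variance density spectrum. A minimal sketch, with a flat test spectrum assumed purely for illustration:

```python
def significant_wave_height(freqs, spectrum):
    """Estimate Hs = 4 * sqrt(m0), where m0 is the zeroth spectral moment,
    i.e. the integral of the variance density spectrum over frequency
    (trapezoidal rule)."""
    m0 = sum(0.5 * (spectrum[i] + spectrum[i + 1]) * (freqs[i + 1] - freqs[i])
             for i in range(len(freqs) - 1))
    return 4.0 * m0 ** 0.5

# Flat 1 m^2*s spectrum between 0.05 and 0.25 Hz -> m0 = 0.2, Hs ~ 1.79 m
freqs = [0.05 + 0.01 * i for i in range(21)]
print(significant_wave_height(freqs, [1.0] * 21))
```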

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 248
416 Public Values in Service Innovation Management: Case Study in Elderly Care in Danish Municipality

Authors: Christian T. Lystbaek

Abstract:

Background: The importance of innovation management has traditionally been ascribed to private production companies; however, there is increasing interest in innovation management for public services. One of the major theoretical challenges arising from this situation is to understand the public values justifying public services innovation management. However, there is no single, stable definition of public value in the literature. The research question guiding this paper is: What is the supposed added value operating in the public sphere? Methodology: The study takes an action research strategy. This is a highly contextualized methodology, enacted within a particular set of social relations into which one expects to integrate the results. As such, this research strategy is particularly well suited for its potential to generate results that can be applied by managers. The aim of action research is to produce proposals with a creative dimension capable of compelling actors to act in a new and pertinent way in relation to the situations they encounter. The context of the study is a workshop on public services innovation within elderly care. The workshop brought together different actors, such as managers, personnel and two groups of user-citizens (elderly clients and their relatives). The process was designed as an extension of the co-construction methods inherent in action research. Scenario methods and focus groups were applied to generate dialogue. The main strength of these techniques is to gather and exploit as much data as possible by exposing the discourse of justification used by the actors to explain or justify their points of view when interacting with others on a given subject. The approach does not directly interrogate the actors on their values, but allows their values to emerge through debate and dialogue. Findings: The public values related to public services innovation management in elderly care were identified in two steps.
In the first step, values were identified in the discussions, and through continuous analysis of the data a network of interrelated values was developed. In the second step, tracking group consensus, we ascertained the degree to which the meaning attributed to each value was common to the participants, classifying the degree of consensus as high, intermediate or low. High consensus corresponds to strong convergence in meaning, intermediate to generally shared meanings between participants, and low to divergences in meaning between participants. Only values with a high or intermediate degree of consensus were retained in the analysis. Conclusion: The study shows that the fundamental criterion for justifying public services innovation management is the capacity of actors to enact public values in their work. In the workshop, we identified two categories of public values, intrinsic values and behavioural values, together with a list of more specific values.

Keywords: public services innovation management, public value, co-creation, action research

Procedia PDF Downloads 279
415 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only form of public transportation available in cities. In metropolitan areas, both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops making up a given bus route, and so on. Thanks to BIS, people do not have to waste time waiting at a bus stop, because BIS provides exact information on bus arrival times at a given stop. BIS therefore does a lot to promote the use of buses, contributing to pollution reduction and saving natural resources. BIS implementation requires a huge budget, as it relies on a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small sized cities with a low budget cannot afford to install a BIS, even though people in these cities need the service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and there is at least one person with a smartphone on a running bus who is willing to reveal his/her location details while sitting in the bus. This assumption is usually true in the real world: the smartphone penetration rate is greater than 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold. The app detects the bus-stop state by examining the sensor values.
The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all the kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1, Spring IDE 3.7.0; Database: MySQL 5.1.7; Web Server: Apache Tomcat 7.0; Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using Dijkstra's algorithm. Then, it finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
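The production system described above is implemented in Java/Spring, but the route search it relies on can be sketched compactly. A minimal Python illustration of Dijkstra's algorithm over a toy bus-stop graph; the stop names and travel times are invented for the example:

```python
import heapq

def shortest_path(graph, start, dest):
    """Dijkstra's algorithm. graph maps a stop to a list of
    (neighbour_stop, travel_time) pairs; returns (total_time, stop_sequence)."""
    heap = [(0, start, [start])]  # min-heap ordered by accumulated travel time
    visited = set()
    while heap:
        cost, stop, path = heapq.heappop(heap)
        if stop == dest:
            return cost, path
        if stop in visited:
            continue
        visited.add(stop)
        for nxt, weight in graph.get(stop, []):
            if nxt not in visited:
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return float('inf'), []  # destination unreachable

stops = {'A': [('B', 5), ('C', 2)], 'C': [('B', 1)], 'B': []}
print(shortest_path(stops, 'A', 'B'))  # (3, ['A', 'C', 'B'])
```

A real BIS route planner would additionally penalize transfers, which is why the system described above post-processes the shortest path with a transfer count.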

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 216
414 Carbon Sequestration in Spatio-Temporal Vegetation Dynamics

Authors: Nothando Gwazani, K. R. Marembo

Abstract:

An increase in the atmospheric concentration of carbon dioxide (CO₂) from fossil fuels and land use change necessitates the identification of strategies for mitigating threats associated with global warming. Oceans alone are insufficient to offset the accelerating rate of carbon emissions, and the drawbacks of relying on oceans to reduce the carbon footprint can be effectively overcome by storing carbon in terrestrial carbon sinks. The gases with special optical properties that are responsible for climate warming include carbon dioxide (CO₂), water vapor, methane (CH₄), nitrous oxide (N₂O), nitrogen oxides (NOₓ), stratospheric ozone (O₃), carbon monoxide (CO) and chlorofluorocarbons (CFCs). Amongst these, CO₂ plays a crucial role, as it contributes 50% of the total greenhouse effect and has been linked to climate change. Because plants act as carbon sinks, interest in terrestrial carbon sequestration has increased in an effort to explore opportunities for climate change mitigation. Removal of carbon from the atmosphere is a topical issue that addresses one important aspect of an overall strategy for carbon management, namely helping to mitigate the increasing emissions of CO₂. Thus, terrestrial ecosystems have gained importance for their potential to sequester carbon and to reduce the carbon taken up by oceans, which has a substantial impact on ocean species. Field data and electromagnetic spectrum bands were analyzed using ArcGIS 10.2, QGIS 2.8 and ERDAS IMAGINE 2015 to examine the vegetation distribution. Satellite remote sensing data coupled with the Normalized Difference Vegetation Index (NDVI) were employed to assess future potential changes in vegetation distributions in the Eastern Cape Province of South Africa. The analysis, performed at 5-year intervals, examines the amount of carbon absorbed using the vegetation distribution.
For 2015, the numerical results showed a low vegetation distribution, which increases the acidity of the oceans and gravely affects fish species and corals. The outcomes suggest that the study area could be effectively utilized for carbon sequestration so as to mitigate ocean acidification. The vegetation changes measured through this investigation suggest an environmental shift and a reduced vegetation carbon sink, which threatens biodiversity and the ecosystem. In order to sustain the amount of carbon in terrestrial ecosystems, the identified ecological factors should be enhanced through the application of good land and forest management practices. This will increase the carbon stock of terrestrial ecosystems, thereby reducing direct loss to the atmosphere.
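NDVI, the index used in the analysis above, is a simple per-pixel ratio of near-infrared and red reflectances. A minimal sketch; the band values are illustrative, not taken from the study's imagery:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - RED) / (NIR + RED), ranging from -1 to +1, where higher
    values indicate denser, healthier vegetation."""
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and absorbs red light
print(ndvi(0.5, 0.1))  # ~0.667, consistent with dense vegetation
```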

Keywords: remote sensing, vegetation dynamics, carbon sequestration, terrestrial carbon sink

Procedia PDF Downloads 151
413 Identification of Candidate Gene for Root Development and Its Association With Plant Architecture and Yield in Cassava

Authors: Abiodun Olayinka, Daniel Dzidzienyo, Pangirayi Tongoona, Samuel Offei, Edwige Gaby Nkouaya Mbanjo, Chiedozie Egesi, Ismail Yusuf Rabbi

Abstract:

Cassava (Manihot esculenta Crantz) is a major source of starch for various industrial applications. However, the traditional cultivation and harvesting methods of cassava are labour-intensive and inefficient, limiting the supply of fresh cassava roots for industrial starch production. To achieve improved productivity and quality of fresh cassava roots through mechanized cultivation, cassava cultivars with compact plant architecture and moderate plant height are needed. Plant architecture-related traits, such as plant height, harvest index, stem diameter, branching angle, and lodging tolerance, are critical for crop productivity and suitability for mechanized cultivation. However, the genetics of cassava plant architecture remain poorly understood. This study aimed to identify the genetic bases of the relationships between plant architecture traits and productivity-related traits, particularly starch content. A panel of 453 clones developed at the International Institute of Tropical Agriculture, Nigeria, was genotyped and phenotyped for 18 plant architecture and productivity-related traits at four locations in Nigeria. A genome-wide association study (GWAS) was conducted using the phenotypic data from a panel of 453 clones and 61,238 high-quality Diversity Arrays Technology sequencing (DArTseq) derived Single Nucleotide Polymorphism (SNP) markers that are evenly distributed across the cassava genome. Five significant associations between ten SNPs and three plant architecture component traits were identified through GWAS. We found five SNPs on chromosomes 6 and 16 that were significantly associated with shoot weight, harvest index, and total yield through genome-wide association mapping. We also discovered an essential candidate gene that is co-located with peak SNPs linked to these traits in M. esculenta. 
A review of the cassava reference genome v7.1 revealed that the SNP on chromosome 6 is in proximity to Manes.06G101600.1, a gene that regulates endodermal differentiation and root development in plants. The findings of this study provide insights into the genetic basis of plant architecture and yield in cassava. Cassava breeders could leverage this knowledge to optimize plant architecture and yield in cassava through marker-assisted selection and targeted manipulation of the candidate gene.
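The GWAS described above tests each SNP for association with a trait; at its simplest, this is a regression of phenotype on genotype dosage (0/1/2 copies of the alternate allele). A toy sketch of the per-marker statistic (real GWAS pipelines such as the one used here add covariates, kinship correction and multiple-testing control, none of which are shown):

```python
def marker_effect(genotypes, phenotypes):
    """Simple per-SNP regression of phenotype on allele dosage (0/1/2).
    Returns (beta, r): the allele-substitution effect and the correlation."""
    n = len(genotypes)
    mg = sum(genotypes) / n
    my = sum(phenotypes) / n
    sxy = sum((g - mg) * (y - my) for g, y in zip(genotypes, phenotypes))
    sxx = sum((g - mg) ** 2 for g in genotypes)
    syy = sum((y - my) ** 2 for y in phenotypes)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5

# Perfectly additive toy data: each extra allele adds 1 unit to the trait
print(marker_effect([0, 1, 2, 0, 1, 2], [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]))
```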

Keywords: manihot esculenta crantz, plant architecture, dartseq, snp markers, genome-wide association study

Procedia PDF Downloads 96
412 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic on which business has primarily been based in the past. The business model canvas is one of the most cited and most used tools in defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited to product-based business. However, the basic differences between goods and services necessitate changes in business model representations as servitization proceeds. Therefore, new knowledge is needed on how the conception of the business model, and the business model canvas as its representation, should be altered in servitized firms in order to better serve business developers and inter-firm co-creation. Compared to products, services are intangible and are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well the interaction between the actors succeeds. The role of service experience is even stronger in service business than in product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts. This study utilizes qualitative data gathered in workshops with ten companies from various industries.
In particular, key differences between goods-dominant logic (GDL) and SDL-based business models are identified as an industrial firm proceeds in servitization. As a result of the study, an updated version of the business model canvas, based on service-dominant logic, is provided. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used to analyze and develop a company's current service business model or to design a new business model. It facilitates customer-focused new service design and service development, aids in the identification of development needs, and facilitates the creation of a common view of the business model. Therefore, the service business model canvas can be regarded as a boundary object, which facilitates the creation of a common understanding of the business model among the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies’ business models.

Keywords: boundary object, business model canvas, managerial tool, service-dominant logic

Procedia PDF Downloads 367
411 The Effectiveness of an Educational Program on Awareness of Cancer Signs, Symptoms, and Risk Factors among School Students in Oman

Authors: Khadija Al-Hosni, Moon Fai Chan, Mohammed Al-Azri

Abstract:

Background: Several studies suggest that most school-age adolescents are poorly informed about cancer warning signs and risk factors. Providing adolescents with sufficient knowledge would increase their awareness in adulthood and improve help-seeking behaviors later. Significance: The results will assist key decision-makers in formulating policies on student awareness programs towards cancer, increasing the likelihood of avoiding cancer in the future or promoting early diagnosis. Objectives: To evaluate the effectiveness of an education program designed to increase awareness of cancer signs, symptoms, and risk factors, improve help-seeking behavior among school students in Oman, and address the barriers to obtaining medical help. Methods: A randomized controlled trial with two groups was conducted in Oman. A total of 1716 students (n=886/control, n=830/education), aged 15-17 years, in 10th and 11th grade, from 12 governmental schools in 3 governorates, participated from 20-February-2022 to 12-May-2022. Basic demographic data were collected, and the Cancer Awareness Measure (CAM) was used as the primary outcome. Data were collected at baseline (T0) and 4 weeks after (T1). The intervention group received an education program about the causes of cancer and its signs and symptoms. In contrast, the control group did not receive any education related to this issue during the study period. Non-parametric tests were used to compare the outcomes between groups. Results: At T0, a lump was the most recognized cancer warning sign in the control (55.0%) and intervention (55.2%) groups. However, there were no significant changes at T1 for any signs in the control group. In contrast, all sign outcomes improved significantly (p<0.001) in the intervention group; the highest response was unexplained pain (93.3%). Smoking was the most recognized risk factor in both groups (82.8% for control; 84.1% for intervention) at T0.
However, there was no significant change at T1 for the control group, but there was for the intervention group (p<0.001); the most identified risk factor was smoking cigarettes (96.5%). Being too scared was the largest barrier to seeking medical help reported by students in the control group at T0 (63.0%) and T1 (62.8%); however, there were no significant changes in any barriers in this group. In the intervention group, being too embarrassed (60.2%) was the largest barrier at T0, and being too scared (58.6%) at T1. Although there were reductions in all barriers, significant differences were found in only six of the ten (p<0.001). Conclusion: The intervention was effective in improving students' awareness of cancer symptoms and warning signs (p<0.001) and risk factors (p<0.001), and it reduced the most frequently reported barriers to seeking medical help (p<0.001) in comparison to the control group. The Ministry of Education in Oman could integrate cancer awareness within the curriculum, and more interventions are needed on the sociological side to overcome the barriers that interfere with seeking medical help.

Keywords: adolescents, awareness, cancer, education, intervention, student

Procedia PDF Downloads 86
410 Social Entrepreneurship Core Dimensions and Influential Perspectives: An Exploratory Study

Authors: Filipa Lancastre, Carmen Lages, Filipe Santos

Abstract:

The concept of social entrepreneurship (SE) remains ambiguous and lacks a widely accepted operational definition. We argue that awareness of the consensual constituent elements of SE among all key players in its ecosystem, as well as a deeper understanding of apparently divergent perspectives, will allow the different stakeholders (social entrepreneurs, corporations, investors, policymakers, and the beneficiaries themselves) to bridge and cooperate for societal value co-creation in trying to solve our most pressing societal issues. To address our research question (what are the dimensions of SE that are consensual and controversial across existing perspectives?), we designed a two-step qualitative study. In the first step, we conducted an extensive literature review, collecting and analyzing 155 different SE definitions. From this initial step, we extracted and characterized three consensual and six controversial dimensions of the SE concept. In the second step, we conducted 20 semi-structured interviews with practitioners actively involved in the SE field. The goal of this second step was to verify whether the literature had missed any key dimension, to understand how the dimensions relate to each other, and to understand the rationale behind them. The dimensions of the SE concept were extracted based on the relevance of each theme and on the theoretical relationships among them. To identify relevance, we used as a proxy the frequency with which each theme was referred to in our sample of definitions. To understand the relationships among the dimensions, we drew on concepts from both the management and psychology literatures, such as the Entrepreneurial Orientation concept from the entrepreneurship literature, the Subjective Well-Being construct from the psychology literature, and the Resource-Based Theory from the strategy literature.
This study makes two main contributions. First, it identifies the (consensual and controversial) dimensions of SE that exist across scattered definitions from the academic and practitioner literature. Second, it offers a framework that parsimoniously synthesizes four dominant perspectives of SE and relates them to the SE dimensions. Given the contested nature of the SE concept, it is not expected that these views will be reconciled at the academic or practitioner level. In future research, academics can, however, be aware of the existence of different understandings of SE and avoid bias towards a single view, developing holistic studies on SE phenomena or comparing differences by studying their underlying assumptions. Additionally, it is important that researchers make explicit the perspective they are embracing to ensure consistency among the research question, sampling procedures, and implications of results. At the practitioner level, individuals or groups following different logics are predictably mutually suspicious and might benefit from taking stock of other perspectives on SE, building bridges and fostering cross-fertilization to the benefit of the SE ecosystem to which all contribute.

Keywords: social entrepreneurship, conceptualization, dimensions, perspectives

Procedia PDF Downloads 175
409 Technological Transference Tools to Diffuse Low-Cost Earthquake Resistant Construction with Adobe in Rural Areas of the Peruvian Andes

Authors: Marcial Blondet, Malena Serrano, Álvaro Rubiños, Elin Mattsson

Abstract:

In Peru, there are more than two million houses made of adobe (sun-dried mud bricks) or rammed earth (35% of all houses), in which almost 9 million people live, mainly because they cannot afford industrialized construction materials. Although adobe houses are cheap to build and thermally comfortable, their seismic performance is very poor, and they usually suffer significant damage or collapse with tragic loss of life. Therefore, over the years, researchers at the Pontifical Catholic University of Peru and other institutions have developed many reinforcement techniques in an effort to improve the structural safety of earthen houses located in seismic areas. However, most rural communities live under unacceptable seismic risk conditions because these techniques have not been adopted massively, mainly due to high cost and lack of diffusion. The nylon rope mesh reinforcement technique is simple and low-cost, and two technological transference tools have been developed to diffuse it among rural communities: 1) scale-model seismic simulations using a portable shaking table, designed to prove its effectiveness in protecting adobe houses; 2) a step-by-step illustrated construction manual that guides the complete building process of a nylon rope mesh reinforced adobe house. The district of Pullo was selected as the study case: a small rural community in the Peruvian Andes where more than 80% of the inhabitants live in adobe houses and more than 60% are considered to live in poverty or extreme poverty. The research team carried out a one-day workshop in May 2015 and a two-day workshop in September 2015. Results were positive. First, the nylon rope mesh reinforcement procedure proved simple enough to be replicated by adults, both young and senior, and participants handled ropes and knots easily, as they use them in daily livestock activity.
In addition, nylon ropes proved highly available in the study area, as they were found at two local stores in a variety of colors and sizes. Second, the portable shaking table demonstration successfully showed the effectiveness of the nylon rope mesh reinforcement and generated interest in learning about it. At the first workshop, more than 70% of the participants were willing to formally sign up for practical training lessons. At the second workshop, more than 80% of the participants returned on the second day to receive introductory practical training. Third, community members found the illustrations in the construction manual simple and friendly, but the roof system illustrations led to misinterpretation, so they were improved. The technological transference tools developed in this project can be used to train rural dwellers in earthquake-resistant self-construction with adobe, which is still very common in the Peruvian Andes. This approach would allow community members to develop the skills and capacities to improve the safety of their households on their own, thus mitigating their high seismic risk and preventing tragic losses. Furthermore, proper training in earthquake-resistant self-construction with adobe would prevent rural dwellers from depending on external aid after an earthquake and help them become agents of their own development.

Keywords: adobe, Peruvian Andes, safe housing, technological transference

Procedia PDF Downloads 293
408 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition Scales (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). 
However, after controlling for depression, only two specific complaints of remembering appointments, meetings, and engagements and understanding spoken directions and instructions were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p <0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.

Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 80
407 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers’ Moral Motivation

Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang

Abstract:

The belief that there is a strong relationship between reading narrative and morality has generally become a basic assumption of scholars, philosophers, critics, and cultural commentators. The virtuality constructed by literary novels invites readers to treat the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. Therefore, virtual narrative combined with literary characteristics is often considered a "moral laboratory." Well-established findings show that people lie and deceive less in the morning than in the afternoon, a phenomenon called the morning morality effect. As a limited self-regulation resource, morality is steadily depleted as the day progresses under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as by eating or sleeping. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, much like a relaxing afternoon tea that helps people break away from fast-paced work, restore their strength, and relieve stress in a short period of leisure. In this paper, inspired by compensatory control theory, we ask whether reading literary novels in a digital environment can replenish a kind of spiritual energy for self-regulation that compensates for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers during digital reading to represent readers' reading attention. We then analyzed the semantics of the annotations, computed the moral motivation expressed in them, and investigated the fine-grained dynamics of moral motivation across each time slot within the 24 hours of a day.
Comprehensive comparisons across different divisions of time intervals showed that the moral motivation reflected in afternoon annotations is significantly higher than in morning annotations. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the unit used to divide time intervals affects the identification of moral rhythms: dividing the day into four-hour slots yields more insight into moral rhythms than three-hour or six-hour slots.

Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation

Procedia PDF Downloads 149
406 The Management of Company Directors Conflicts of Interest in Large Corporations and the Issue of Public Interest

Authors: Opemiposi Adegbulu

Abstract:

The research investigates the existence of a public interest consideration or rationale for the management of directors’ conflicts of interest within large public corporations. This is conducted through an extensive review of the literature and theories on the definition of conflicts of interest, the firm, and the purposes of the fiduciary duty of loyalty, in which the management of these conflicts of interest finds its foundation. Conflict of interest is an elusive, diverse and engaging subject, a cross-cutting problem of governance spanning all levels, from local to global and from the public to the corporate and financial sectors. It is a common issue that affects corporate governance and corporate culture, having a negative impact on the reputation and trustworthiness of corporations. Addressing this issue is imperative for good governance, as corporations are increasingly powerful global economic actors with significant influence in society. Similarly, the bargaining power of these corporations has been recognised by international organisations such as the UN and the OECD, as made evident by the increasing calls for greater corporate responsibility for environmental and social disasters caused by corporate activities in various parts of the world. Equally, in the US, the Sarbanes-Oxley Act, like other legislative and regulatory efforts to manage conflicts of interest linked to corporate governance in many countries, indicates that there is a (global) public interest in maintaining the orderly functioning of commerce. Consequently, the governance of these corporations is pivotal to society, as it touches upon a key aspect of its good functioning: corporations, particularly large international corporations, can be said to be the plumbing of the global economy.
This study will employ theoretical, doctrinal and comparative methods, relying largely on a theory-guided methodology and theoretical framework: theories of the firm, public interest, regulation, conflicts of interest in general, directors’ conflicts of interest, and corporate governance. Although the research is narrowed down to conflicts of interest in corporate governance, directors’ duty of loyalty and the management of conflicts of interest, the history, origin and typology of conflicts of interest in general will be examined in order to identify specific challenges to understanding and identifying them: their origin, diverging theories, psychological barriers to definition, and similarities with public-sector conflicts of interest arising from notions of corrosion of trust, the effect on decision-making and judgment, “being in a particular kind of situation”, etc. The results of this research will be useful and relevant in identifying the rationale for the management of directors’ conflicts of interest, contributing to the understanding of conflicts of interest in the private sector and the significance of public interest in the corporate governance of large corporations.

Keywords: conflicts of interest, corporate governance, corporate law, directors duty of loyalty, public interest

Procedia PDF Downloads 368
405 A Proposal for an Excessivist Social Welfare Ordering

Authors: V. De Sandi

Abstract:

In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist." The Excessivist social welfare ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. This requires identifying a richness or affluence line; we employ a fixed, exogenous line of excess. We define an excessivist social welfare function (eSWF) as a weighted sum of individual incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the class of rank-weighted social welfare functions: in our excessivist social welfare ordering, the weights may be both positive (for individuals below the line) and negative (for individuals above it). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not change their ordering. AER states that if two distributions are identical in every respect but for one individual above the threshold, who is richer in the first, then society should prefer the second; that is, we do not care about the waste of resources above the threshold, since the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is unchanged or has increased after the transfer.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need justification; we support them through the notions of comparative egalitarianism and of income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we characterize the class of eSWOs, obtaining the following result through proofs by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, absolute aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What commonly implemented social welfare functions have been overlooking is concern for extreme richness at the top; the characterization of the Excessivist social welfare ordering, given the axioms above, aims to fill this gap.
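The weighted-sum form described in the abstract can be sketched as follows; the notation is our own illustration (the symbols w^(k), x_(i), and k are assumptions, not the authors' exact notation):

```latex
% Sketch of an excessivist SWF, assuming incomes are ordered increasingly:
% x_{(1)} \le x_{(2)} \le \dots \le x_{(n)}, with a fixed threshold \theta.
\[
  W(x) = \sum_{i=1}^{n} w^{(k)}_{i}\, x_{(i)},
  \qquad k = \#\{\, i : x_{(i)} < \theta \,\},
\]
\[
  w^{(k)}_{i} > 0 \ \text{for } i \le k
  \quad\text{and}\quad
  w^{(k)}_{i} < 0 \ \text{for } i > k,
\]
% so one weight vector w^{(k)} is selected for each of the n+1 possible
% counts k of individuals below the line of excess.
```

Under this sketch, incomes above θ enter with negative weight, so increasing an excessive income lowers social welfare, which is the content of the AER axiom.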

Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering

Procedia PDF Downloads 64
404 The Potential Involvement of Platelet Indices in Insulin Resistance in Morbid Obese Children

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

The association between insulin resistance (IR) and hematological parameters has long been a matter of interest. Within this context, body mass index (BMI), red blood cells, white blood cells, and platelets have been involved in this discussion. Platelet parameters associated with IR may be useful indicators for its identification. Platelet indices such as mean platelet volume (MPV), platelet distribution width (PDW), and plateletcrit (PCT) are being questioned for their possible association with IR. The aim of this study was to investigate the association between platelet (PLT) count, as well as PLT indices, and the surrogate indices used to determine IR in morbidly obese (MO) children. A total of 167 children participated in the study, constituting three groups: 34, 97, and 36 children in the normal-BMI (N-BMI), MO, and metabolic syndrome (MetS) groups, respectively. Sex- and age-dependent BMI-based percentile tables prepared by the World Health Organization were used to define morbid obesity, and MetS criteria were determined. BMI values, homeostatic model assessment for IR (HOMA-IR), alanine transaminase-to-aspartate transaminase ratio (ALT/AST), and diagnostic obesity notation model assessment laboratory (DONMA-lab) index values were computed. PLT count and indices were analyzed using an automated hematology analyzer. Data were collected for statistical analysis using SPSS for Windows; arithmetic means and standard deviations were calculated. Mean values of PLT-related parameters in the control and study groups were compared by one-way ANOVA followed by Tukey post hoc tests to determine whether significant differences exist among the groups, and correlation analyses between PLT parameters and IR indices were performed. A p-value < 0.05 was accepted as statistically significant. Increased values were detected for PLT (p < 0.01) and PCT (p > 0.05) in the MO group compared to those observed in children with N-BMI.
Significant increases in PLT (p < 0.01) and PCT (p < 0.05) were observed in the MetS group in comparison with the values obtained in children with N-BMI. Significantly lower MPV and PDW values were obtained in the MO group compared to the control group (p < 0.01). HOMA-IR (p < 0.05), DONMA-lab index (p < 0.001), and ALT/AST (p < 0.001) values in the MO and MetS groups were significantly increased compared to the N-BMI group. On the other hand, DONMA-lab index values also differed between the MO and MetS groups (p < 0.001). In the MO group, PLT was negatively correlated with MPV and PDW values; these correlations were not observed in the N-BMI group. None of the IR indices correlated with PLT or the PLT indices in the N-BMI group, whereas HOMA-IR showed significant correlations with both PLT and PCT in the MO group. All three IR indices were well correlated with each other in all groups. These findings point to a missing link between IR and PLT activation. In conclusion, PLT and PCT may be related to IR, in addition to their roles as hemostasis markers, during morbid obesity. Our findings suggest that the DONMA-lab index appears to be the best surrogate marker for IR due to its ability to discriminate between morbid obesity and MetS.
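For reference, the standard HOMA-IR formula is reproduced below (it is not restated in the abstract; the DONMA-lab index is the authors' own and no formula for it is given here):

```latex
\[
  \text{HOMA-IR} =
  \frac{\text{fasting insulin } (\mu\text{U/mL})
        \times \text{fasting glucose (mmol/L)}}{22.5}
\]
% Equivalently, with glucose in mg/dL, the denominator becomes 405.
```

Values above roughly 2.5 are commonly taken to indicate insulin resistance in adults, though pediatric cut-offs vary by age and pubertal stage.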

Keywords: children, insulin resistance, metabolic syndrome, plateletcrit, platelet indices

Procedia PDF Downloads 106
403 Challenges to Safe and Effective Prescription Writing in the Environment Where Digital Prescribing is Absent

Authors: Prashant Neupane, Asmi Pandey, Mumna Ehsan, Katie Davies, Richard Lowsby

Abstract:

Introduction/Background & aims: Safe and effective prescribing in hospitals directly and indirectly impacts patients' health. Although digital prescribing in the National Health Service (NHS), UK, has been adopted in many tertiary centers and district general hospitals, a significant number of NHS trusts still use paper prescribing. We came across many irregularities in our daily clinical practice with paper prescribing. The main aim of the study was to assess how safely and effectively we are prescribing at our hospital, where there is no access to digital prescribing. Method/Summary of work: We conducted a prospective audit in the critical care department at Mid Cheshire Hospitals NHS Foundation Trust, in which 20 prescription charts from different patients were randomly selected over a period of 1 month. We assessed 16 categories in each prescription chart and compared them to the standard trust guidelines on prescription. Results/Discussion: We collected data from 20 different prescription charts, evaluating 16 categories within each. The results showed an urgent need for improvement in 8 sections. In 85% of the prescription charts, not all prescribers who prescribed medications were identified: name, GMC number, and signature were absent from the required prescriber identification section. In 70% of prescription charts, either the indication or the review date of antimicrobials was absent. Units of medication were not documented correctly in 65% of charts, and the patient's allergy status was absent in 30%. In 35% of charts, the start date of medications was missing and alterations to medications were not made properly. The patient's name was not recorded in all required sections of the chart in 50% of cases, and cancellations of medications were not done properly in 45% of the prescription charts.
Conclusion(s): From the audit and data analysis, we identified the areas in which prescription writing in the critical care department needs improvement. During meetings and conversations with experts from the pharmacy department, however, we realized that this audit represents only a specialized department of the hospital, where prescribing is limited to a certain number of prescribers; in larger departments with much higher patient turnover, the results could be much worse. The findings were discussed at the critical care MDT meeting, where suggestions regarding digital/electronic prescribing were considered. A presentation on safe and effective prescribing was delivered, and an awareness poster was prepared and attached beside every bed in critical care, where it is visible to prescribers. We consider this a temporary measure to improve the quality of prescribing; however, we strongly believe digital prescribing would go much further in addressing the weak areas seen in paper prescribing.

Keywords: safe prescribing, NHS, digital prescribing, prescription chart

Procedia PDF Downloads 120
402 Interculturalizing Ethiopian Universities: Between Initiation and Institutionalization

Authors: Desta Kebede Ayana, Lies Sercu, Demelash Mengistu

Abstract:

The study is set in Ethiopia, a sub-Saharan, multilingual, multiethnic African country, which has seen a significant increase in the number of universities in recent years. The aim of this growth is to provide access to education for all cultural and linguistic groups across the country. However, there are challenges in promoting intercultural competence among students in this diverse context. The aim of the study is to investigate the interculturalization of Ethiopian Higher Education Institutions as perceived by university lecturers and administrators. In particular, the study aims to determine the level of support for this educational innovation and gather suggestions for its implementation and institutionalization. The researchers employed semi-structured interviews with administrators and lecturers from two large Ethiopian universities to gather data, allowing an in-depth exploration of the participants' views on interculturalization in higher education. Thematic analysis was utilized for coding and analyzing the interview data, with the assistance of the NVIVO software. The findings obtained from the grounded analysis of the interview data reveal that while there are opportunities for interculturalization in the curriculum and campus life, support for educational innovation remains low. Administrators and lecturers also emphasize the government's responsibility to prioritize interculturalization over other educational innovation goals. The study contributes to the existing literature by examining an under-researched population in an under-researched context. Additionally, the study explores whether Western perspectives of intercultural competence align with the African context, adding to the theoretical understanding of intercultural education.
The study addresses the extent to which administrators and lecturers support the interculturalization of Ethiopian Higher Education Institutions. It also explores their suggestions for implementing and institutionalizing intercultural education, as well as their perspectives on the current level of institutionalization. The study highlights the challenges in interculturalizing Ethiopian universities and emphasizes the need for greater support and prioritization of intercultural education. It also underscores the importance of considering the African context when conceptualizing intercultural competence. This research contributes to the understanding of intercultural education in diverse contexts and provides valuable insights for policymakers and educational institutions aiming to promote intercultural competence in higher education settings.

Keywords: administrators, educational change, Ethiopia, intercultural competence, lecturers

Procedia PDF Downloads 98
401 Investigating Links in Achievement and Deprivation (ILiAD): A Case Study Approach to Community Differences

Authors: Ruth Leitch, Joanne Hughes

Abstract:

This paper presents the findings of a three-year government-funded study (ILiAD) that aimed to understand the reasons for differential educational achievement within and between socially and economically deprived areas in Northern Ireland. Previous international studies have concluded that there is a positive correlation between deprivation and underachievement. Our preliminary secondary data analysis suggested that the factors involved in educational achievement within multiple deprived areas may be more complex than this, with some areas of high multiple deprivation having high levels of student attainment, whereas other less deprived areas demonstrated much lower levels of student attainment, as measured by outcomes on high stakes national tests. The study proposed that no single explanation or disparate set of explanations could easily account for the linkage between levels of deprivation and patterns of educational achievement. Using a social capital perspective that centralizes the connections within and between individuals and social networks in a community as a valuable resource for educational achievement, the ILiAD study involved a multi-level case study analysis of seven community sites in Northern Ireland, selected on the basis of religious composition (housing areas are largely segregated by religious affiliation), measures of multiple deprivation and differentials in educational achievement. The case study approach involved three (interconnecting) levels of qualitative data collection and analysis - what we have termed Micro (or community/grassroots level) understandings, Meso (or school level) explanations and Macro (or policy/structural) factors. The analysis combines a statistical mapping of factors with qualitative, in-depth data interpretation which, together, allow for deeper understandings of the dynamics and contributory factors within and between the case study sites. 
Thematic analysis of the qualitative data reveals cross-cutting factors (e.g., demographic shifts and loss of community, the place of the school in the community, parental capacity), while analytic case studies of the explanatory factors associated with each community site also permit a comparative element. Issues arising from the qualitative analysis are classified either as drivers or inhibitors of educational achievement within and between communities. Key issues emerging as inhibitors/drivers of attainment include: the legacy of the community conflict in Northern Ireland, not least in terms of inter-generational stress associated with substance abuse and mental health issues; differing discourses on notions of ‘community’ and ‘achievement’ within/between community sites; inter-agency and intra-agency levels of collaboration and joined-up working; the relationship between the home/school/community triad; and school leadership and school ethos. At this stage, the balance of these factors can be conceptualized in terms of bonding social capital (or lack of it) within families, within schools, within each community and within agencies, and also bridging social capital between the home/school/community, between different communities, and between key statutory and voluntary organisations. The presentation will outline the study rationale and its methodology, present some cross-cutting findings, and use an illustrative case study of the findings from one community site to underscore the importance of attending to community differences when trying to engage in research to understand and improve educational attainment for all.

Keywords: educational achievement, multiple deprivation, community case studies, social capital

Procedia PDF Downloads 388
400 Fresh Amnion Membrane Grafting for the Regeneration of Skin in Full Thickness Burn in Newborn - Case Report

Authors: Priyanka Yadav, Umesh Bnasal, Yashvinder Kumar

Abstract:

The placenta is an important structure that provides oxygen and nutrients to the growing fetus in utero. It is usually discarded after birth, but it has a therapeutic role in the regeneration of tissue. It is covered by the amniotic membrane, which can be easily separated into the amnion layer and the chorion layer; the amnion layer acts as a biofilm for the healing of burn wounds and non-healing ulcers. The freshly collected membrane contains stem cells, cytokines and growth factors and has anti-inflammatory properties, which support wound healing. It functions as a barrier that prevents heat and water loss and protects against bacterial contamination, thus supporting the healing process. The amnion membrane has been used successfully for wound and reconstructive purposes for decades. The process is cheap and simple and has shown results superior to allograft and xenograft. However, there are very few case reports of amnion membrane grafting in newborns; we intend to highlight its therapeutic importance in burn injuries in newborns. We present a case of a 9-day-old male neonate who presented to the neonatal unit of Maulana Azad Medical College with a complaint of fluid-filled blisters and burn wounds on the body for six days. He was born outside the hospital at 38 weeks of gestation to a 24-year-old primigravida mother by vaginal delivery. The presentation was cephalic and the amniotic fluid was clear. His birth weight was 2800 g and APGAR scores were 7 and 8 at 1 and 5 minutes, respectively. His anthropometry was appropriate for gestational age. He developed respiratory distress after birth, requiring oxygen support by nasal prongs for three days. On day of life three, he developed blisters on his body, starting on the face and then spreading over the back and perineal region.
At presentation on day of life nine, he had blisters and necrotic wounds on the right side of the face, back, right shoulder and genitalia, affecting 60% of body surface area with full-thickness loss of skin. He was started on intravenous antibiotics and fluid therapy. Pus culture grew Pseudomonas aeruginosa, for which culture-specific antibiotics were started. A plastic surgery referral was made and regular wound dressing was done with antiseptics. He had a stormy course during the hospital stay. On day of life 35, when the baby was hemodynamically stable, amnion membrane grafting was done on the wound site. For the grafting, a fresh amnion membrane was removed under sterile conditions from a placenta obtained by caesarean section. It was then transported in sterile fluid to the plastic surgery unit within half an hour, where the graft was applied over the infant’s wound. The amnion membrane grafting was done twice in two weeks to cover the whole wound area. After successful uptake of the amnion membrane, skin from the thigh region was autografted over the whole wound area by the Meek technique in a single sitting. The uptake of the autograft was excellent and most areas healed. In some areas, there was patchy regeneration of skin, so dressing was continued. The infant was discharged after three months of hospital stay and was later followed up in the plastic surgery unit of the hospital.

Keywords: amnion membrane grafting, autograft, meek technique, newborn, regeneration of skin

Procedia PDF Downloads 161
399 Synthesis and Properties of Poly(N-(sulfophenyl)aniline) Nanoflowers and Poly(N-(sulfophenyl)aniline) Nanofibers/Titanium Dioxide Nanoparticles by Solid-Phase Mechanochemical Synthesis and Their Application in Hybrid Solar Cells

Authors: Mazaher Yarmohamadi-Vasel, Ali Reza Modarresi-Alama, Sahar Shabzendedara

Abstract:

Purpose/Objectives: The first purpose was to synthesize Poly(N-(sulfophenyl)aniline) nanoflowers (PSANFLs) and a Poly(N-(sulfophenyl)aniline) nanofibers/titanium dioxide nanoparticles nanocomposite (PSANFs/TiO2NPs) by a solid-state mechanochemical, template-free method and to use them in hybrid solar cells. Our second aim was to increase the solubility and processability of conjugated nanomaterials in water through polar functionalized materials; poly[N-(4-sulfophenyl)aniline] is easily soluble in water because of the polar sulfonic acid groups in the polymer chain. Materials/Methods: Iron (III) chloride hexahydrate (FeCl3∙6H2O) was bought from Merck Millipore. Titanium dioxide nanoparticles (TiO2, <20 nm, anatase) and sodium diphenylamine-4-sulfonate (99%) were bought from Sigma-Aldrich. Titanium dioxide nanoparticle paste (PST-20T) was obtained from Sharifsolar Co. Conductive glasses coated with indium tin oxide (ITO) were bought from Xinyan Technology Co. (China). For the first time, we used the solid-state mechanochemical, template-free method to synthesize Poly(N-(sulfophenyl)aniline) nanoflowers; likewise, the PSANFs/TiO2NPs nanocomposite was synthesized for the first time by the same technique. Electrochemical energy-gap calculations from CV curves and UV–vis spectra demonstrate that the PSANFs/TiO2NPs nanocomposite is a p-n type material that can be used in photovoltaic cells. The doctor blade method was used to create films for three kinds of hybrid solar cells with architectures of the form ITO│TiO2NPs│semiconductor sample│Al. Hybrid photovoltaic cells in bilayer and bulk heterojunction structures were fabricated as ITO│TiO2NPs│PSANFLs│Al and ITO│TiO2NPs│PSANFs/TiO2NPs│Al, respectively.
Fourier-transform infrared spectroscopy, field emission scanning electron microscopy (FE-SEM), ultraviolet-visible spectroscopy, cyclic voltammetry (CV) and electrical conductivity measurements were the analyses used to characterize the synthesized samples. Results and Conclusions: FE-SEM images clearly demonstrate that the morphology of the synthesized samples is nanostructured (nanoflowers and nanofibers). Electrochemical calculations of the band gap from CV curves showed that the forbidden band gaps of the PSANFLs and the PSANFs/TiO2NPs nanocomposite are 2.95 and 2.23 eV, respectively. The I–V characteristics of the hybrid solar cells and their power conversion efficiency (PCE) under 100 mW cm−2 irradiation (AM 1.5 global conditions) were measured; the PCEs of the two cells were 0.30% and 0.62%, respectively. All the solar cell results are discussed. To sum up, PSANFLs and PSANFs/TiO2NPs were successfully synthesized by an affordable and straightforward solid-state mechanochemical reaction under green conditions. The solubility and processability of the synthesized compounds are improved compared to previous work. We successfully fabricated hybrid photovoltaic cells from the synthesized semiconductor nanostructured polymers and TiO2NPs in different architectures. We believe that the synthesized compounds can open inventive pathways for the development of other Poly(N-(sulfophenyl)aniline)-based hybrid materials (nanocomposites) suitable for preparing new-generation solar cells.
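The abstract does not report the CV onset potentials behind the 2.95 and 2.23 eV band gaps. As an illustration only, the sketch below shows one commonly used empirical recipe for estimating an electrochemical band gap and HOMO/LUMO levels from oxidation/reduction onset potentials; the 4.4 eV vacuum offset and all numeric inputs are assumptions, not values taken from this work:

```python
def electrochemical_band_gap(e_ox_onset, e_red_onset):
    """Estimate the electrochemical band gap (eV) from CV onset potentials
    (in volts vs. the same reference electrode). Eg = e * (Eox - Ered),
    so the potential difference in volts gives the gap in eV directly."""
    return e_ox_onset - e_red_onset


def frontier_levels(e_ox_onset, e_red_onset, ref_offset=4.4):
    """HOMO/LUMO estimates (eV vs. vacuum) via the empirical relations
    HOMO = -(Eox,onset + offset), LUMO = -(Ered,onset + offset).
    The 4.4 eV offset is one common choice; it depends on the reference
    electrode actually used, so treat it as an assumption."""
    homo = -(e_ox_onset + ref_offset)
    lumo = -(e_red_onset + ref_offset)
    return homo, lumo
```

For hypothetical onsets of +1.10 V (oxidation) and −1.85 V (reduction), the recipe reproduces a 2.95 eV gap, matching the PSANFLs figure only by construction of the example.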

Keywords: mechanochemical synthesis, PSANFLs, PSANFs/TiO2NPs, solar cell

Procedia PDF Downloads 67
398 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers

Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello

Abstract:

Recent technological advances allow innovative, multimedia, screen-based assessment tools to be applied to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages over paper-based tools. Firstly, computer-based tools which make use of games, videos, and audio may be more motivating and engaging for children, especially those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and gives clinicians, researchers, and teachers the opportunity to test children multiple times over the same school year and thus to monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools can calculate scores automatically, producing less subjective evaluations of the assessed skills and providing immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (i.e., the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool with paper-based standardised tools on their lexical (Peabody Picture Vocabulary Test), morpho-syntactical (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested with the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores on the paper-based assessments were correlated with their scores on each task of the computer-based assessment. Significant positive correlations emerged between all the tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences in the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to achieve a comprehensive, multi-dimensional approach to assessing children. Even so, such a tool offers teachers a good opportunity to identify language-related problems early, even in the school environment.
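Cronbach's alpha, the internal-consistency statistic reported above, is computed from per-child scores on each task: alpha = (k/(k−1)) · (1 − Σ item variances / variance of total scores). A minimal sketch (the toy scores below are invented for illustration, not Simo-syl data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of assessment items.

    items: list of k equal-length lists, one list of scores per task/item,
    aligned by child. Uses sample variances (n - 1 denominator).
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)                      # number of items
    n = len(items[0])                   # number of children
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))
```

With two perfectly covarying items, alpha is 1.0; as items diverge, alpha drops toward (and below) zero, which is why .799 counts as acceptable consistency.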

Keywords: assessment, computer-based, early identification, language-related skills

Procedia PDF Downloads 183
397 Approach to Honey Volatiles' Profiling by Gas Chromatography and Mass Spectrometry

Authors: Igor Jerkovic

Abstract:

The biodiversity of flora provides many different nectar sources for the bees. Unifloral honeys possess distinctive flavours, mainly derived from their nectar sources (characteristic volatile organic components (VOCs)). Specific or nonspecific VOCs (chemical markers) can be used for unifloral honey characterisation in addition to melissopalynological analysis. The main honey volatiles belong, in general, to three principal categories: terpenes, norisoprenoids, and benzene derivatives. Some of these substances have been described as characteristic of the floral source, while other compounds, such as several alcohols, branched aldehydes, and furan derivatives, may be related to the microbial purity of the honey, its processing, and its storage conditions. Selection of the extraction method for honey volatiles profiling should consider that heating the honey produces artefacts, and therefore conventional methods of VOC isolation (such as hydrodistillation) cannot be applied to honey. A two-way approach to the isolation of the honey VOCs was applied, using headspace solid-phase microextraction (HS-SPME) and ultrasonic solvent extraction (USE). The extracts were analysed by gas chromatography and mass spectrometry (GC-MS). HS-SPME (with fibers of different polarity, such as polydimethylsiloxane/divinylbenzene (PDMS/DVB) or divinylbenzene/carboxene/polydimethylsiloxane (DVB/CAR/PDMS)) enabled isolation of highly volatile headspace VOCs of the honey samples. Among them, characteristic or specific compounds can be found, such as 3,4-dihydro-3-oxoedulan (in Centaurea cyanus L. honey) or 1H-indole, methyl anthranilate, and cis-jasmone (in Citrus unshiu Marc. honey). USE with different solvents (mainly dichloromethane or the mixture pentane : diethyl ether 1 : 2 v/v) enabled isolation of less volatile and semi-volatile VOCs of the honey samples. Characteristic compounds from C.
unshiu honey extracts were caffeine, 1H-indole, 1,3-dihydro-2H-indol-2-one, methyl anthranilate, and phenylacetonitrile. Sometimes a solvent sequence was useful for more complete profiling, such as sequence I: pentane → diethyl ether, or sequence II: pentane → pentane/diethyl ether (1:2, v/v) → dichloromethane. The diethyl ether extracts contained hydroquinone and 4-hydroxybenzoic acid as the major compounds, while (E)-4-(r-1’,t-2’,c-4’-trihydroxy-2’,6’,6’-trimethylcyclo-hexyl)but-3-en-2-one predominated in the dichloromethane extracts of Allium ursinum L. honey. With this two-way approach, it was possible to obtain a more detailed insight into the honey volatile and semi-volatile compounds and to minimize the risk of compound discrimination due to partial extraction, which is of significant importance for complete honey profiling and for identification of the chemical biomarkers that can complement the pollen analysis.

Keywords: honey chemical biomarkers, honey volatile compounds profiling, headspace solid-phase microextraction (HS-SPME), ultrasonic solvent extraction (USE)

Procedia PDF Downloads 203
396 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone, so there are challenges in gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient, sequence-alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms.
The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718 over the first, second and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified Protein-coding and Non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
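The ingredients named above (logistic regression with a sigmoid activation, parameters fitted by maximizing the log-likelihood through iterative gradient steps, then a threshold to separate the two classes) can be sketched generically as follows. This is not the authors' PNRI code: the toy features, learning rate, epoch count, and the fixed 0.5 cut-off (standing in for the paper's dynamic thresholding) are all assumptions for illustration:

```python
import math


def sigmoid(z):
    """Logistic activation; clamp z to avoid overflow in exp."""
    z = max(-500.0, min(500.0, z))
    return 1.0 / (1.0 + math.exp(-z))


def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression parameter vector by stochastic gradient
    ascent on the log-likelihood (i.e., maximum-likelihood estimation).
    X: list of feature vectors; y: 0/1 labels. Returns (bias, weights)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
            err = yi - p  # gradient of the per-sample log-likelihood
            b += lr * err
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
    return b, w


def classify(b, w, x, threshold=0.5):
    """Label a region: 1 = Protein-coding, 0 = Non-coding. The paper's
    method chooses the threshold dynamically; 0.5 is a placeholder."""
    p = sigmoid(b + sum(wj * xj for wj, xj in zip(w, x)))
    return 1 if p >= threshold else 0
```

On a tiny separable toy set (one feature, labels split around 0.5), a few hundred epochs suffice for the learned boundary to classify both ends correctly.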

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
395 Corrosion Analysis of Brazed Copper-Based Conducts in Particle Accelerator Water Cooling Circuits

Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant

Abstract:

The present study investigates the corrosion behavior of copper (Cu) based conducts, predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus), within various demineralized-water cooling circuits of different particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, evaluation of the corrosion phenomena, identification of commonalities across the studied components, and analysis of the environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. Subsequent dry cutting provided access to the conduct's inner surface and the brazed joints for further inspection through light and scanning electron microscopy (SEM) and chemical analysis via Energy Dispersive X-ray spectroscopy (EDS). Analysis of the brazing away from the affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Corrosion product analysis highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in corrosion initiation and extension are being further investigated.
The importance of this study is paramount as it plays a crucial role in comprehending the underlying factors contributing to recently identified water leaks and evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of impacted brazed joints when accessibility permits. Moreover, the study seeks to contribute to the improvement of design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.

Keywords: accelerator facilities, brazed copper conducts, demineralized water, magnets

Procedia PDF Downloads 46
394 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and which ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
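The frontal alpha asymmetry index discussed above is conventionally computed as the difference of log alpha-band powers over homologous frontal electrodes (commonly F4 minus F3). A minimal sketch, assuming the alpha-band powers have already been extracted from the TF analysis (the electrode pairing and power values are illustrative assumptions):

```python
import math


def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power),
    e.g. over F4 vs. F3. Alpha power is inversely related to cortical
    activation, so a positive index implies relatively greater
    left-hemisphere activity, conventionally read as a positive
    (approach) response to the stimulus."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)
```

For example, lower alpha power on the left than on the right (more left-frontal activation) yields a positive index, matching the review's "positive response" pattern.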

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 112
393 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implemented by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from Ugandan rainforest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable in which a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
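Predictive mean matching, one of the imputation methods named above, can be sketched for a single numeric variable: fit a regression on the complete cases, predict the missing records, then impute each with the observed value of the donor whose prediction is closest, so imputations are always plausible, actually-observed values. This is a generic single-donor illustration under those assumptions, not the study's multiple-imputation implementation (which would repeat such draws with perturbed coefficients):

```python
def predictive_mean_match(x_obs, y_obs, x_mis):
    """Single-donor predictive mean matching for one numeric variable.

    x_obs, y_obs: complete cases; x_mis: predictor values of records
    whose y is missing. Fits y ~ x by least squares, then for each
    missing record copies the observed y of the closest-predicted donor.
    """
    n = len(x_obs)
    mx = sum(x_obs) / n
    my = sum(y_obs) / n
    sxx = sum((x - mx) ** 2 for x in x_obs)
    slope = sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs)) / sxx
    intercept = my - slope * mx

    # Predicted y for every complete case (candidate donors).
    pred_obs = [intercept + slope * x for x in x_obs]

    imputed = []
    for x in x_mis:
        p = intercept + slope * x
        donor = min(range(n), key=lambda i: abs(pred_obs[i] - p))
        imputed.append(y_obs[donor])  # copy the donor's OBSERVED value
    return imputed
```

Because the donor's observed value is copied rather than the regression prediction itself, PMM never invents values outside the support of the data, which is one reason it is popular for variables with skewed or bounded distributions.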

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 292