Search results for: dynamic load cases


340 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles

Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail

Abstract:

In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel-cell based, together with the implementation of transport policies aiming at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system dynamics model at European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system. Nevertheless, the major drivers towards a low-carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increase of public transport and car sharing services/usage) and diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a substantial reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and the environment, underlining to what extent they can contribute to a decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 691685.
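The coupled-module behaviour described above, where a change in one sub-system propagates to the others and feeds back to its source, can be illustrated with a deliberately minimal sketch. This is not the ASTRA model itself: the two stocks, the policy lever and all coefficients below are assumptions chosen only to show the feedback structure.

```python
# Minimal two-stock system-dynamics sketch (NOT the ASTRA model):
# an EV fleet-share stock and an emissions stock, linked by feedback.
# All coefficients below are illustrative assumptions.

def simulate(years=30, policy_push=0.02):
    ev_share = 0.02          # stock 1: share of e-vehicles in the fleet
    emissions = 100.0        # stock 2: indexed transport GHG emissions
    history = []
    for year in range(years):
        # feedback: a larger EV stock lowers costs, accelerating further uptake
        learning_effect = 0.05 * ev_share
        ev_share = min(1.0, ev_share + policy_push + learning_effect)
        # emissions respond to the fleet composition
        emissions *= (1.0 - 0.4 * ev_share) ** 0.1
        history.append((year, round(ev_share, 3), round(emissions, 1)))
    return history

for year, share, ghg in simulate():
    print(f"year {year:2d}  EV share {share:5.3f}  emissions index {ghg:6.1f}")
```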

Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy

Procedia PDF Downloads 153
339 From Indigeneity to Urbanity: A Performative Study of Indian Saang (Folk Play) Tradition

Authors: Shiv Kumar

Abstract:

In the shifting scenario of the postmodern age, which foregrounds the multiplicity of meanings and discourses, the present research article seeks to investigate the various paradigm shifts of contemporary performances concerning Haryanvi Saangs, so-called folk plays, which are widely performed in the regional territory of Haryana, a northern state of India. Folk arts cannot be studied effectively using the tools of literary criticism because they differ from literature in many aspects. One of the most essential differences is that literary works invariably have an author. Folk works, on the contrary, never have an author. The situation is quite clear: either we acknowledge the presence of folk art as a phenomenon in the social and cultural history of people, or we do not acknowledge it and argue that it is merely a poetic or fictional art. This paper is an effort to understand the performative tradition of Saang, also known as Swang or Svang, which became a popular source of instruction and entertainment in the region and neighbouring states. Scholars and critics have long debated the origin of the word swang/svang/saang and its relationship to the Sanskrit word Sangit, which means singing and music. But in the cultural context of Haryana, the word Saang means ‘to impersonate’, ‘to imitate’ or ‘to copy someone or something’. The stories these plays portray are derived for the most part from the same myths, tales and epics, and from the lives of Indian religious and folk heroes. The use of poetic diction, prose style and elaborate figurative technique all contribute to the productivity of a performance. All use music and song as an integral part of the performance, so that it is also appropriate to call them folk opera. These folk plays are performed strictly by aboriginal people in the state. These people, sometimes denominated as Saangi, possess a culture distinct from the rest of Indian folk performances. The form is also known by various other names, such as Manch, Khayal, Opera and Nautanki. This group of folk plays can be seen as a dynamic activity performed in the open space of the theatre. Nowadays, producers have contributed greatly to creating a rapidly growing musical outlet for a budding new style of folk presentation, giving rise to an electronically focused genre utilizing many musicians and performers who have become precursors of the folk tradition in the region. Moreover, the paper proposes to examine the sources available on this topic, from which it is expected to draw some distinct conclusions. For instance, being a spectator of ongoing performances provides enough guidance to move forward along this route. In this connection, the paper focuses critically upon the major performative aspects of Haryanvi Saang in relation to several inquiries, such as the study of these plays in the context of the Indian literary scenario, gender visualization and its dramatic representation, the song-music tradition in folk creativity, and the development of Haryanvi dramatic art in the contemporary socio-political background.

Keywords: folk play, indigenous, performance, Saang, tradition

Procedia PDF Downloads 156
338 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn to rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution as a natural origin and development of life is a reality. Evolution is a coordinated and controlled process. One of evolution’s main development vectors is the growing computational complexity of living organisms and the biosphere’s intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. Information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere’s global memory storage and computational complexity. Greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems in the current state of evolutionary theory, such as speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
337 Proposals for the Practical Implementation of the Biological Monitoring of Occupational Exposure for Antineoplastic Drugs

Authors: Mireille Canal-Raffin, Nadege Lepage, Antoine Villa

Abstract:

Context: Most antineoplastic drugs (AD) have a potential carcinogenic, mutagenic and/or reprotoxic effect and are classified as 'hazardous to handle' by the National Institute for Occupational Safety and Health. Their handling increases with the increase in cancer incidence. AD contamination of workers who handle AD and/or care for treated patients is, therefore, a major concern for occupational physicians. As part of the process of evaluation and prevention of chemical risks for professionals exposed to AD, Biological Monitoring of Occupational Exposure (BMOE) is the tool of choice. BMOE allows identification of at-risk groups, monitoring of exposures, assessment of poorly controlled exposures and of the effectiveness and/or wearing of protective equipment, and documentation of occupational exposure incidents involving AD. This work aims to make proposals for the practical implementation of BMOE for AD. The proposed strategy is based on the French good practice recommendations for BMOE, issued in 2016 by three French learned societies. These recommendations have been adapted to occupational exposure to AD. Results: AD contamination of professionals is a sensitive topic, and BMOE requires the establishment of a working group and information meetings within the health establishment concerned to explain the approach, objectives and purpose of monitoring. Occupational exposure to AD is often discontinuous, and two steps are essential upstream: a study of the nature and frequency of the AD used, to select the Biological Exposure Indices (BEI) most representative of the activity; and a study of the path of AD through the institution, to target exposed professionals and to adapt the medico-professional information sheet (MPIS). The MPIS is essential to gather the elements necessary for the interpretation of results. Currently, 28 specific urinary BEIs of AD exposure have been identified, and the corresponding analytical methods have been published: 11 BEIs were AD metabolites, and 17 were AD. Interpretation of results is performed by groups of homogeneous exposure (GHE). There is no threshold biological limit value for interpretation. Contamination is established when an AD is detected in trace concentration or in a urine concentration equal to or greater than the limit of quantification (LOQ) of the analytical method. Results can only be compared to the LOQs of these methods, which must be as low as possible. For 8 of the 17 AD BEIs, the LOQ is very low, with values between 0.01 and 0.05 µg/l. For the other BEIs, the LOQ values were higher, between 0.1 and 30 µg/l. Restitution of results by occupational physicians to workers should be both individual and collective. Given the dangerousness of AD, in cases of worker contamination it is necessary to put in place corrective measures. In addition, the implementation of prevention and awareness measures for those exposed to this risk is a priority. Conclusion: This work is a help for occupational physicians engaging in a process of prevention of occupational risks related to AD exposure. With the current analytical tools, effective and available, BMOE for AD should now be possible to develop in routine occupational physician practice. BMOE may be complemented by surface sampling to determine workers' contamination modalities.
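The LOQ-based decision rule described above, by which a sample counts as contaminated when the drug is detected at trace level or at a concentration equal to or greater than the method's LOQ, can be sketched as a small helper. The drug names and LOQ values below are hypothetical placeholders, not the actual BEIs of the study.

```python
# Illustrative sketch of the LOQ-based interpretation rule described above.
# LOQ values and drug names are hypothetical placeholders (in µg/L).
METHOD_LOQ = {
    "drug_A": 0.01,
    "drug_B": 0.05,
    "drug_C": 5.0,
}

def classify_sample(drug, concentration, trace_detected=False):
    """Return 'contaminated' or 'not detected' for one urinary measurement."""
    loq = METHOD_LOQ[drug]
    if trace_detected or concentration >= loq:
        return "contaminated"
    return "not detected"

# Example: one worker's post-shift urine results
results = [("drug_A", 0.004, False), ("drug_B", 0.08, False), ("drug_C", 0.0, True)]
for drug, conc, trace in results:
    print(drug, conc, "->", classify_sample(drug, conc, trace))
```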

Keywords: antineoplastic drugs, urine, occupational exposure, biological monitoring of occupational exposure, biological exposure indice

Procedia PDF Downloads 136
336 Community Engagement: Experience from the SIREN Study in Sub-Saharan Africa

Authors: Arti Singh, Carolyn Jenkins, Oyedunni S. Arulogun, Mayowa O. Owolabi, Fred S. Sarfo, Bruce Ovbiagele, Enzinne Sylvia

Abstract:

Background: Stroke, the leading cause of adult-onset disability and the second leading cause of death, is a major public health concern particularly pertinent in Sub-Saharan Africa (SSA), where nearly 80% of all global stroke mortalities occur. The Stroke Investigative Research and Education Network (SIREN) seeks to comprehensively characterize the genomic, sociocultural, economic, and behavioral risk factors for stroke and to build effective teams for research to address and decrease the burden of stroke and other non communicable diseases in SSA. One of the first steps to address this goal was to effectively engage the communities that suffer the high burden of disease in SSA. This study describes how the SIREN project engaged six sites in Ghana and Nigeria over the past three years, describing the community engagement activities that have arisen since inception. Aim: The aim of community engagement (CE) within SIREN is to elucidate information about knowledge, attitudes, beliefs, and practices (KABP) about stroke and its risk factors from individuals of African ancestry in SSA, and to educate the community about stroke and ways to decrease disabilities and deaths from stroke using socioculturally appropriate messaging and messengers. Methods: Community Advisory Board (CABs), Focus Group Discussions (FGDs) and community outreach programs. Results: 27 FGDs with 168 participants including community heads, religious leaders, health professionals and individuals with stroke among others, were conducted, and over 60 CE outreaches have been conducted within the SIREN performance sites. Over 5,900 individuals have received education on cardiovascular risk factors and about 5,000 have been screened for cardiovascular risk factors during the outreaches. FGDs and outreach programs indicate that knowledge of stroke, as well as risk factors and follow-up evidence-based care is limited and often late. Other findings include: 1) Most recognize hypertension as a major risk factor for stroke. 2) About 50% report that stroke is hereditary and about 20% do not know organs affected by stroke. 3) More than 95% willing to participate in genetic testing research and about 85% willing to pay for testing and recommend the test to others. 4) Almost all indicated that genetic testing could help health providers better treat stroke and help scientists better understand the causes of stroke. The CABs provided stakeholder input into SIREN activities and facilitated collaborations among investigators, community members and stakeholders. Conclusion: The CE core within SIREN is a first-of-its kind public outreach engagement initiative to evaluate and address perceptions about stroke and genomics by patients, caregivers, and local leaders in SSA and has implications as a model for assessment in other high-stroke risk populations. SIREN’s CE program uses best practices to build capacity for community-engaged research, accelerate integration of research findings into practice and strengthen dynamic community-academic partnerships within our communities. CE has had several major successes over the past three years including our multi-site collaboration examining the KABP about stroke (symptoms, risk factors, burden) and genetic testing across SSA.

Keywords: community advisory board, community engagement, focus groups, outreach, SSA, stroke

Procedia PDF Downloads 428
335 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance and all our other features. Maximizing the performance of athletes has come to rely on a science-based approach to nutritional support. Nowadays, genetic studies have blended with the nutritional sciences, and a dynamically evolving new research field has appeared. Nutritional genomics needs to be used by nutrition experts. This is a recent field of nutritional science which can provide a solution for reaching the best sport performance by using the correlations between the athlete’s genome, nutrition and molecules, including the human microbiome (the links between food, the microbiome and epigenetics), nutrigenomics and nutrigenetics. Nutritional genomics has a tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to obtain information about athletes, such as a nutritional genomics profile (including the determination of the oral and gut microbiome and the DNA-coded reaction to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics. The heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in the state of health and well-being, and there are many links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to draw up a unique sports nutrition diet plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response, given that nutrients affect health by altering the genome, promoting metabolites and producing changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of that nutrient. Nutritional deficiencies and failures that are controlled, changes in health state that are prevented, or a newly discovered food intolerance that is observed by a proper medical team can support better sports performance. It is important that the dietetics profession is informed about gene-diet interactions that may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for documenting and monitoring data on health state and risk factors can support and warn the medical team, enabling early action and a proper health service in time. This model can provide personalized nutrition advice from status control, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in the case of athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 172
334 Comparison of Nutritional Status of Asthmatic vs Non-Asthmatic Adults

Authors: Ayesha Mushtaq

Abstract:

Asthma is a pulmonary disease in which blockade of the airway takes place due to inflammation as a response to certain allergens. Breathing trouble, cough and dyspnea are a few of the symptoms. Several studies have indicated a significant effect on asthma of changes in dietary routines. Certain food items, such as oily foods and other materials, are known to cause an increase in the symptoms of asthma. Low dietary intake of fruits and vegetables may be important in relation to asthma prevalence. The objective of this study is to assess and compare the nutritional status of asthmatic and non-asthmatic patients. The significance of this study lies in the fact that it will help nutritionists to arrange a feasible dietary routine for asthmatic patients. This research was conducted at the Pulmonology Department of the Pakistan Institute of Medical Sciences, Islamabad. About 334 million people are affected by asthma worldwide. Pakistan has a rapidly urbanising population, and asthma cases are increasingly common. Several studies suggest an increase in the asthmatic patient population due to improper diet. This is a cross-sectional study aimed at assessing the nutritional status of asthmatic and non-asthmatic patients. The research took place at the Pakistan Institute of Medical Sciences (PIMS), Islamabad, Pakistan, and included asthmatic and non-asthmatic patients attending the pulmonology department clinic at PIMS. These patients were aged between 20 and 60 years. A questionnaire was developed to assess the dietary patterns of these patients. The methodology comprised several sections. The first section was the socio-demographic profile, which included age, gender, monthly income and occupation. The next section was anthropometric measurements, which included the weight, height and body mass index (BMI) of each individual. The third section covered biochemical attributes; for biochemical profiling, pulmonary function testing (PFT) was performed. In the next section, dietary habits and consumption patterns were assessed with a food frequency questionnaire (FFQ). The following section covered lifestyle data, in which each person's level of physical activity, sleep and smoking habits were assessed. The final section was statistical analysis: all the data obtained from the study were statistically analysed and assessed. Most of the asthma patients were female, with weight above normal or even obese. Body mass index (BMI) was higher in asthma patients than in non-asthmatic ones. When the nutritional values were assessed, we found that these patients were low on certain nutrients and that their diet included more junk and oily food than healthy vegetables and fruits. Beverage intake was also included in the same assessment. It is evident from this study that nutritional status has a contributory effect on asthma. Therefore, patients on the verge of developing asthma, or those who have already developed asthma, should focus on their diet, maintain good eating habits and take healthy diets including fruits and vegetables rather than oily foods. Proper sleep may also contribute to the control of asthma.

Keywords: BMI, nutrition, PAL, diet

Procedia PDF Downloads 77
333 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration

Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose

Abstract:

It remains unclear how speakers are able to perceive phonological contrasts that do not exist in their own language. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. For speakers whose first language does not distinguish vowel length, e.g. Mandarin speakers, contrastive duration is usually misperceived. Two alternative hypotheses for how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model does not make a clear prediction about the impact of tonal type: Mandarin speakers will likely not be able to perceive vowel length as well as Japanese native speakers do, but the performance might not correlate with tonal type, because the prosody of their language is distinctive, which requires users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, naive non-Japanese speakers and beginner learners, were recruited to participate in an AX discrimination task involving two Japanese sound stimuli that contain a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sound the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position: initial/medial/final), as this would be likely to affect perceptual difficulty, as seen in previous studies; the second factor contrasted two pitch types (accent type): one with an accentual change that could be distinguished with the lexical tones of Mandarin (the different condition), and the other with no tonal distinction, differing only in vowel length (the same condition). The overall results showed a significant main effect of accent type in a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers tend to recognize vowel-length differences more successfully when the long-vowel counterpart takes on a tone that exists in Mandarin. The interaction between accent type and syllabic position is also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the different condition are more difficult to recognize in the word-final position relative to the initial position. The second statistical model, which compares naive speakers to beginners, was conducted with logistic regression to test the effect of participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with cue-based approaches; (2) an exposure effect was observed: the beginner group achieved a higher accuracy for long vowel perception, which implies an exposure effect despite the short period of language learning experience.
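A toy version of the second analysis described above, a logistic regression of trial-level accuracy on participant group, might look like the sketch below; the simulated responses, group accuracies and sample sizes are placeholders, not the study's data.

```python
# Sketch of a group comparison by logistic regression, as in the second model
# described above. The data here are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_group, n_trials = 20, 24
rows = []
for group, p_correct in [("naive", 0.55), ("beginner", 0.70)]:  # assumed accuracies
    for subj in range(n_per_group):
        correct = rng.binomial(1, p_correct, size=n_trials)
        for c in correct:
            rows.append({"group": group, "correct": int(c)})
df = pd.DataFrame(rows)

# Logistic regression: log-odds of a correct same/different response by group
model = smf.logit("correct ~ C(group, Treatment(reference='naive'))", data=df).fit(disp=False)
print(model.summary())
```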

Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration

Procedia PDF Downloads 220
332 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprised of multiple conflicting objectives with no clear optimal solution. Recent advances in computer science and their consequent constructive influence on the architectural discourse have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes of generating a population of candidate solutions to a design problem through an evolutionary-based stochastic search process are driven through the application of both environmental and architectural parameters. These methods allow for conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems with a final product that must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered through the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation amongst design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization being favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprised of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation factor has been used as an analytical value rather than a generative one, conventionally used to measure the distribution of variation within a population by calculating the degree to which the majority of the population deviates from the mean. A lower standard deviation value indicates that the majority of the population is clustered around the mean and thus limited variation within the population, while a higher standard deviation value indicates greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous to generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
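As a toy illustration of the idea, and not the authors' superblock simulation, the sketch below runs a simple evolutionary loop, tracks the standard deviation of fitness across the population each generation, and uses a drop in that value to raise the mutation rate, trading exploitation back towards exploration. The objective function, thresholds and mutation rates are all assumptions.

```python
# Toy evolutionary loop using the population standard deviation as a signal
# for balancing exploration and exploitation (illustrative only).
import random
import statistics

def fitness(x):
    # Hypothetical single objective: maximise a noisy function on [0, 10]
    return -(x - 7.3) ** 2 + 3 * random.random()

def evolve(pop_size=40, generations=60):
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    mutation = 0.5
    for gen in range(generations):
        scores = [fitness(x) for x in population]
        spread = statistics.stdev(scores)        # population variation this generation
        # If variation collapses, increase mutation to restore exploration
        mutation = 1.0 if spread < 0.5 else 0.3
        ranked = [x for _, x in sorted(zip(scores, population), reverse=True)]
        parents = ranked[: pop_size // 2]
        population = [
            min(10, max(0, random.choice(parents) + random.gauss(0, mutation)))
            for _ in range(pop_size)
        ]
        if gen % 10 == 0:
            print(f"gen {gen:3d}  best {max(scores):6.2f}  stdev {spread:5.2f}  mutation {mutation}")
    return max(population, key=fitness)

print("best design variable:", round(evolve(), 2))
```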

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 133
331 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol

Authors: S. B. R. Slagmulder

Abstract:

Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies that address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the strength of the inspiratory muscles and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insight into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values of patients with brain injuries from those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the link between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
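The planned comparison outlined above, an unpaired t-test of the indices between brain-injured and non-brain-injured patients plus a simple correlation between P0.1 and Pocc, can be sketched as follows; the values are simulated placeholders rather than patient data, and the group means are assumptions.

```python
# Sketch of the planned statistics: unpaired t-test between groups and a
# correlation between P0.1 and Pocc. Values are simulated, not patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p01_brain = rng.normal(3.5, 1.2, size=30)       # hypothetical P0.1 (cmH2O), brain-injured
p01_control = rng.normal(2.4, 1.0, size=30)     # hypothetical P0.1, non-brain-injured

t_stat, p_value = stats.ttest_ind(p01_brain, p01_control, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# Correlation between the two effort indices within one group
pocc_brain = 2.0 * p01_brain + rng.normal(0, 1.5, size=30)   # assumed linear relation + noise
r, p_corr = stats.pearsonr(p01_brain, pocc_brain)
print(f"Pearson r between P0.1 and Pocc: {r:.2f} (p = {p_corr:.4f})")
```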

Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive

Procedia PDF Downloads 68
330 An Eco-Systemic Typology of Fashion Resale Business Models in Denmark

Authors: Mette Dalgaard Nielsen

Abstract:

The paper serves the purpose of providing an eco-systemic typology of fashion resale business models in Denmark while pointing to possibilities to learn from its wisdom during a time when a fundamental break with the dominant linear fashion paradigm has become inevitable. As we transgress planetary boundaries and can no longer continue the unsustainable path of over-exploiting the Earth’s resources, the global fashion industry faces a tremendous need for change. One of the preferred answers to the fashion industry’s sustainability crises lies in the circular economy, which aims to maximize the utilization of resources by keeping garments in use for longer. Thus, in the context of fashion, resale business models that allow pre-owned garments to change hands with the purpose of being reused in continuous cycles are considered to be among the most efficient forms of circularity. Methodologies: The paper is based on empirical data from an ongoing project and a series of qualitative pilot studies that have been conducted on the Danish resale market over a 2-year time period from Fall 2021 to Fall 2023. The methodological framework is comprised of (n) ethnography and fieldwork in selected resale environments, as well as semi-structured interviews and a workshop with eight business partners from the Danish fashion and textiles industry. By focusing on the real-world circulation of pre-owned garments, which is enabled by the identified resale business models, the research lets go of simplistic hypotheses to the benefit of dynamic, vibrant and non-linear processes. As such, the paper contributes to the emerging research field of circular economy and fashion, which finds itself in a critical need to move from non-verified concepts and theories to empirical evidence. Findings: Based on the empirical data and anchored in the business partners, the paper analyses and presents five distinct resale business models with different product, service and design characteristics. These are 1) branded resale, 2) trade-in resale, 3) peer-2-peer resale, 4) resale boutiques and consignment shops and 5) resale shelf/square meter stores and flea markets. Together, the five business models represent a plurality of resale-promoting business model design elements that have been found to contribute to the circulation of pre-owned garments in various ways for different garments, users and businesses in Denmark. Hence, the provided typology points to the necessity of prioritizing several rather than single resale business model designs, services and initiatives for the resale market to help reconfigure the linear fashion model and create a circular-ish future. Conclusions: The article represents a twofold research ambition by 1) presenting an original, up-to-date eco-systemic typology of resale business models in Denmark and 2) using the typology and its eco-systemic traits as a tool to understand different business model design elements and possibilities to help fashion grow out of its linear growth model. By basing the typology on eco-systemic mechanisms and actual exemplars of resale business models, it becomes possible to envision the contours of a genuine alternative to business as usual that ultimately helps bend the linear fashion model towards circularity.

Keywords: circular business models, circular economy, fashion, resale, strategic design, sustainability

Procedia PDF Downloads 58
329 A Multipurpose Inertial Electrostatic Magnetic Confinement Fusion for Medical Isotopes Production

Authors: Yasser R. Shaban

Abstract:

A practical multipurpose device for medical isotope production is much needed by clinical centers and researchers. Unfortunately, the major supply of these radioisotopes currently comes from aging sources, and there is a great deal of uneasiness in the domestic market. There are also many cases where the cost of certain radioisotopes is too high for their introduction on a commercial scale, even though the isotopes might have great benefits for society. Medical isotopes such as PET (Positron Emission Tomography) radiotracers, Technetium-99m, Iodine-131 and Lutetium-177 can feasibly be generated by a single unit named IEMC (Inertial Electrostatic Magnetic Confinement). The IEMC fusion vessel is an upgrade of the Inertial Electrostatic Confinement (IEC) fusion vessel. Comprehensive experimental work on IEC was carried out earlier with promising results. The principle of inertial electrostatic magnetic confinement (IEMC) fusion is based on forcing the binary fuel ions to interact in opposite directions in ion cyclotron orbits, with different kinetic energies in order to have equal compression (forces) and with different ion cyclotron frequencies ω in order to increase the rate of intersection. The IEMC features a fusion volume greater than that of the IEC by several orders of magnitude. The particle rates from the IEMC approach are projected to be 8.5 x 10¹¹ (p/s), ~0.2 microamperes of protons, for the D/He-3 fusion reaction and 4.2 x 10¹² (n/s) for the D/T fusion reaction. The projected values of particle yield (neutrons and protons) are suitable for on-site medical isotope production by a single unit without any change to the fusion vessel, only to the fuel gas. PET radiotracers are usually produced on-site by a medical ion accelerator, whereas Technetium-99m (Tc-99m) is usually produced off-site at the irradiation facilities of nuclear power plants. Typically, hospitals receive a molybdenum-99 isotope container; the isotope decays to Tc-99m with a half-life of 2.75 days. Even though the projected current from the IEMC is lower than the proton current from a medical ion accelerator, the IEMC vessel is simpler and reduced in components and power consumption, which adds the new value of making PET radiotracers available in most clinical centers. On the other hand, the projected neutron flux from the IEMC is lower than the thermal neutron flux at the irradiation facilities of nuclear power plants, but in the IEMC case the production of Technetium-99m is suggested to take place in the resonance region, where the resonance integral cross section is two orders of magnitude higher than the thermal cross section. Thus it can be said that the net activity from both is evened out. Besides, a particle accelerator cannot be considered a multipurpose particle production unit unless a significant change is made to the accelerator to switch from neutron mode to proton mode or vice versa. In conclusion, achieving the projected fusion yield from the IEMC is straightforward, since only slight changes to the primer IEC and ion source are required.
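The point that the two fuel species orbit at different ion cyclotron frequencies can be made concrete with a quick calculation of ω = qB/m; the magnetic field strength used below is an illustrative assumption, not a design value from the paper.

```python
# Cyclotron frequencies of the two fuel ions, omega = q*B/m.
# The field strength B is an illustrative assumption.
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg
B_FIELD = 1.0                          # tesla (assumed)

ions = {
    # name: (charge state, mass in atomic mass units)
    "D+":     (1, 2.0141),
    "He-3++": (2, 3.0160),
}

for name, (z, mass_u) in ions.items():
    q = z * ELEMENTARY_CHARGE
    m = mass_u * ATOMIC_MASS_UNIT
    omega = q * B_FIELD / m                 # angular cyclotron frequency (rad/s)
    f = omega / (2 * math.pi)               # cyclotron frequency (Hz)
    print(f"{name:7s}  omega = {omega:.3e} rad/s   f = {f / 1e6:.2f} MHz")
```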

Keywords: electrostatic versus magnetic confinement fusion vessel, ion source, medical isotopes productions, neutron activation

Procedia PDF Downloads 343
328 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

Rehabilitation of stroke patients is dominated by classical physiotherapy. An emerging field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the neighboring parts of the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the lack of movement of the affected limb may be replaced by a synthetic feedback based on the motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized in 3D as a color-coded activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time. Due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
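A highly simplified, single-epoch version of the detection step described above, namely alpha-band amplitude per channel, selection of the strongest motor channel, a threshold check, and rejection when a non-motor channel is in the same range, might look like the sketch below. The channel labels are taken from the abstract, while the sampling rate, threshold and margin are assumptions.

```python
# Simplified single-epoch mu-rhythm detector following the logic described above.
# Sampling rate, threshold and margin are illustrative assumptions.
import numpy as np

FS = 250                    # sampling rate in Hz (assumed)
ALPHA_BAND = (8.0, 13.0)    # alpha/mu frequency band in Hz
MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_amplitude(signal):
    """Mean spectral amplitude of one channel within the alpha band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    band = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return spectrum[band].mean()

def detect_mu(epoch, channel_names, threshold=50.0, margin=0.8):
    """Return the main motor channel if a mu-rhythm is detected, else None."""
    amps = {ch: alpha_amplitude(epoch[i]) for i, ch in enumerate(channel_names)}
    motor = {ch: a for ch, a in amps.items() if ch in MOTOR_SET}
    main_ch = max(motor, key=motor.get)
    if motor[main_ch] < threshold:
        return None                      # too weak: no mu-rhythm
    for ch, a in amps.items():           # reject posterior alpha leaking in
        if ch not in MOTOR_SET and a >= margin * motor[main_ch]:
            return None
    return main_ch

# Example with synthetic data: 12 channels, 2-second epoch, 10 Hz rhythm on C4
names = MOTOR_SET + ["O1", "O2", "PZ"]
epoch = np.random.default_rng(2).normal(size=(len(names), 2 * FS))
epoch[0] += 10 * np.sin(2 * np.pi * 10 * np.arange(2 * FS) / FS)
print("detected mu-rhythm on:", detect_mu(epoch, names))
```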

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 387
327 Residents' Incomes in Local Government Unit as the Major Determinant of Local Budget Transparency in Croatia: Panel Data Analysis

Authors: Katarina Ott, Velibor Mačkić, Mihaela Bronić, Branko Stanić

Abstract:

The determinants of national budget transparency have been widely discussed in the literature, while research on the determinants of local budget transparency is scarce and empirically inconclusive, particularly in the new, fiscally centralised, EU member states. To fill the gap, we combine two strands of the literature: that concerned with public administration and public finance, shedding light on the economic and financial determinants of local budget transparency, and that on the political economy of transparency (principal-agent theory), covering the relationships among politicians and between politicians and voters. Our main hypothesis states that variables describing residents’ capacity have a greater impact on local budget transparency than variables indicating the institutional capacity of local government units (LGUs). Additional sub-hypotheses test the impact of each analysed variable on local budget transparency. We address the determinants of local budget transparency in Croatia, measured by the number of key local budget documents published on the LGUs’ websites. By using a data set of 128 cities and 428 municipalities over the 2015-2017 period and by applying panel data analysis based on Poisson and negative binomial distributions, we test our main hypothesis and sub-hypotheses empirically. We measure different characteristics of institutional and residents’ capacity for each LGU. Age, education and ideology of the mayor/municipality head, political competition indicators, number of employees, current budget revenues and direct debt per capita have been used as measures of the institutional capacity of the LGU. Residents’ capacity in each LGU has been measured through the number of citizens and their average age, as well as by average income per capita. The most important determinant of local budget transparency is average residents' income per capita, at both the city and the municipality level. The results are in line with most previous research results in fiscally decentralised countries. In the context of a fiscally centralised country with numerous small LGUs, most of whom have low administrative and fiscal capacity, this has a theoretical rationale in legitimacy and principal-agent theory (opportunistic motives of the incumbent). The result is robust and significant, but because of the various other results that change between the city and municipality levels (e.g. ideology and political competition), there is a need for further research (both on identifying other determinants and on methods of analysis). Since in Croatia the fiscal capacity of an LGU depends heavily on the income of its residents, units with higher per capita incomes in many cases also have higher budget revenues, allowing them to engage more employees and resources. In addition, residents’ incomes might also be positively associated with local budget transparency because of higher citizen demand for such transparency. Residents with higher incomes expect more public services and have more access to and experience in using the Internet, and will thus typically demand more budget information on the LGUs’ websites.
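A bare-bones version of the count-data estimation described above, a Poisson regression of the number of published budget documents on unit characteristics, could be sketched as follows; the simulated data and variable names are placeholders, not the Croatian LGU dataset.

```python
# Sketch of a count-data regression as described above (Poisson; a negative
# binomial fit would be analogous). Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
income = rng.normal(0, 1, n)            # standardized residents' income per capita
employees = rng.normal(0, 1, n)         # standardized LGU staff size
# Assumed data-generating process: income raises the expected document count
lam = np.exp(0.8 + 0.5 * income + 0.1 * employees)
docs = rng.poisson(lam)                 # number of key budget documents published

df = pd.DataFrame({"docs": docs, "income": income, "employees": employees})
poisson_fit = smf.poisson("docs ~ income + employees", data=df).fit(disp=False)
print(poisson_fit.summary())
```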

Keywords: budget transparency, count data, Croatia, local government, political economy

Procedia PDF Downloads 184
326 The U.S. Missile Defense Shield and Global Security Destabilization: An Inconclusive Link

Authors: Michael A. Unbehauen, Gregory D. Sloan, Alberto J. Squatrito

Abstract:

Missile proliferation and global stability are intrinsically linked. Missile threats continually appear at the forefront of global security issues. North Korea’s recently demonstrated nuclear and intercontinental ballistic missile (ICBM) capabilities, for the first time since the Cold War, renewed public interest in strategic missile defense capabilities. To protect from limited ICBM attacks from so-called rogue actors, the United States developed the Ground-based Midcourse Defense (GMD) system. This study examines if the GMD missile defense shield has contributed to a safer world or triggered a new arms race. Based upon increased missile-related developments and the lack of adherence to international missile treaties, it is generally perceived that the GMD system is a destabilizing factor for global security. By examining the current state of arms control treaties as well as existing missile arsenals and ongoing efforts in technologies to overcome U.S. missile defenses, this study seeks to analyze the contribution of GMD to global stability. A thorough investigation cannot ignore that, through the establishment of this limited capability, the U.S. violated longstanding, successful weapons treaties and caused concern among states that possess ICBMs. GMD capability contributes to the perception that ICBM arsenals could become ineffective, creating an imbalance in favor of the United States, leading to increased global instability and tension. While blame for the deterioration of global stability and non-adherence to arms control treaties is often placed on U.S. missile defense, the facts do not necessarily support this view. The notion of a renewed arms race due to GMD is supported neither by current missile arsenals nor by the inevitable development of new and enhanced missile technology, to include multiple independently targeted reentry vehicles (MIRVs), maneuverable reentry vehicles (MaRVs), and hypersonic glide vehicles (HGVs). The methodology in this study encapsulates a period of time, pre- and post-GMD introduction, while analyzing international treaty adherence, missile counts and types, and research in new missile technologies. The decline in international treaty adherence, coupled with a measurable increase in the number and types of missiles or research in new missile technologies during the period after the introduction of GMD, could be perceived as a clear indicator of GMD contributing to global instability. However, research into improved technology (MIRV, MaRV and HGV) prior to GMD, as well as a decline of various global missile inventories and testing of systems during this same period, would seem to invalidate this theory. U.S. adversaries have exploited the perception of the U.S. missile defense shield as a destabilizing factor as a pretext to strengthen and modernize their militaries and justify their policies. As a result, it can be concluded that global stability has not significantly decreased due to GMD; but rather, the natural progression of technological and missile development would inherently include innovative and dynamic approaches to target engagement, deterrence, and national defense.

Keywords: arms control, arms race, global security, GMD, ICBM, missile defense, proliferation

Procedia PDF Downloads 143
325 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study

Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin

Abstract:

Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games revealed that approximately three quarters of the injuries in the games occur in the lower extremity. These injuries can have player, team and institutional impacts due to multiple factors including financial burden and time loss from competition. Research has shown it is possible to record injury data consistently with the GAA through a closed online recording system known as the GAA injury surveillance database. It has been established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme which addressed five key components/goals; avoid positions associated with a high risk of injury, enhance flexibility, enhance strength, optimize plyometrics and address sports specific agilities. These key components are internationally recognized through the Prevent Injury, Enhance performance (PEP) programme which has proven reductions in ACL injuries by 74%. In national Gaelic games the programme is known as the GAA15 which has been devised from the principles of the PEP. No such injury prevention strategies have been published on this cohort in Gaelic games to date. This study will investigate the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post season testing involved two objective screening tests; Y balance test and Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15. The programme was performed before every training session and game and the existing GAA injury surveillance database was accessed to monitor player’s injuries by the college sports rehabilitation athletic therapist. Retrospective analysis of the ITC clinic records were performed in conjunction with the database analysis as a means of tracking injuries that may have been missed. The effects of the programme were analysed by comparing the intervention groups Y balance and three hop test scores to an age/gender matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p= .001 and p=.001). A decrease in performance was determined for the three hop test (p=.039). Overall twenty-five injuries were reported during the season resulting in an injury rate of 3.00 injuries/1000hrs of participation; 1.25 injuries/1000hrs training and 4.25 injuries/1000hrs match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
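The exposure-based injury rates quoted above (injuries per 1000 hours of participation) follow directly from injury counts and exposure hours; the sketch below shows the calculation with hypothetical exposure figures, since the abstract reports only the resulting rates.

```python
# Injury incidence expressed per 1000 exposure hours, as reported above.
# Exposure hours are hypothetical placeholders; the abstract reports only the rates.
def rate_per_1000h(injuries, exposure_hours):
    return 1000.0 * injuries / exposure_hours

exposures = {
    "training": {"injuries": 8, "hours": 6400.0},     # assumed
    "match play": {"injuries": 17, "hours": 4000.0},  # assumed
}
for setting, d in exposures.items():
    print(f"{setting}: {rate_per_1000h(d['injuries'], d['hours']):.2f} injuries / 1000 h")
```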

Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training

Procedia PDF Downloads 339
324 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year. In India there were 95 lakh annual deaths in 2013. The number of deaths resulting from homicides, suicides and unintentional injuries in the same period was about 5.7 lakh. The ever-increasing crime rate has necessitated the development of methods for determining time since death. An erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, a study was carried out analyzing the temperature-dependent post-mortem degradation of the cardiac Troponin-T protein (cTnT) in the myocardium as a marker for time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (in the Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India) after informed consent from the relatives, and post-mortem degradation was studied by incubation of the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using Gel Doc. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle. The disappearance of intact cTnT and the appearance of lower molecular weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), two additional minor fragments (32 kDa) and the formation of low molecular weight fragments as time increases. At 12 °C the intensity of the band (intact cTnT) decreased steadily compared to RT, 25 °C and 37 °C. Overall, both PMI and temperature had a statistically significant effect, where the greatest amount of protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long post-mortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the log of the time post-mortem. These plots show a good coefficient of correlation of r = 0.95 (p=0.003) for the regression of the human heart at different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the post-mortem interval can be more accurately estimated.
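The pseudo-first-order relationship mentioned above, percent intact cTnT regressed against the log of post-mortem time, can be illustrated with a short fitting sketch; the data points below are invented for demonstration, not the study's measurements.

```python
# Sketch of the regression described above: percent intact cTnT versus
# log(time postmortem). The data points are invented for illustration only.
import numpy as np
from scipy import stats

hours = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230], dtype=float)
# Hypothetical percent intact cTnT, decreasing with time at one temperature
percent_intact = np.array([95, 70, 55, 40, 28, 24, 20, 17, 15], dtype=float)

slope, intercept, r, p, stderr = stats.linregress(np.log(hours), percent_intact)
print(f"slope = {slope:.1f} % per log-hour, intercept = {intercept:.1f} %")
print(f"r = {r:.3f}, p = {p:.4f}")

# Invert the fit to estimate PMI from an observed percent intact value
observed = 33.0
estimated_pmi = np.exp((observed - intercept) / slope)
print(f"estimated PMI for {observed}% intact cTnT: {estimated_pmi:.0f} hours")
```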

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 386
323 Robots for the Elderly at Home: For Men Only

Authors: Christa Fricke, Sibylle Meyer, Gert G. Wagner

Abstract:

Our research focuses on the question of whether assistive and social robotics could be a promising strategy to support the independent living of elderly people and potentially relieve relatives of anxieties. To answer the question of how elderly people perceive the potential of robotics, we analysed data from the Berlin Aging Study BASE-II (https://www.base2.mpg.de/de) (N=1463) and from the German SYMPARTNER study (http://www.sympartner.de) (N=120) and compared these to a control group made up of people younger than 30 years (BASE-II: N=241; SYMPARTNER: N=30). BASE-II is a cohort study of people living in Berlin, Germany. The sample covers more than 2200 cases; a questionnaire on the use and acceptance of assistive and social robots was administered to a sub-sample of 1463 respondents in 2015. The SYMPARTNER study was conducted by the SIBIS Institute of Social Research, Berlin, and included a total of 120 persons between the ages of 60 and 87 in Berlin and the rural German federal state of Thuringia. Both studies included a control group of persons between the ages of 20 and 35 (BASE-II: N=241; SYMPARTNER: N=30). Additional data, representative of the whole population in Germany, will be surveyed in fall 2017 (the "Technikradar" [technology radar] survey by the National Academy of Science and Engineering). Since this survey includes some of the same questions as BASE-II/SYMPARTNER, comparative results can be presented at the 20th International Conference on Social Robotics in New York in 2018. The complexity of the data gathered in BASE-II and SYMPARTNER, encompassing detailed socio-economic background characteristics as well as personality traits such as personal attitude to risk taking, locus of control and the Big Five, proves highly valuable and beneficial. Results show that participants' expressions of resentment against robots are comparatively low. Participants' personality traits play a role; however, the effect sizes are small. Only 15 percent of participants viewed domestic robots with great scepticism. Participants older than 70 years expressed the greatest rejection of the robotic assistant; the effect sizes, however, account for only a few percentage points. Overall, participants were surprisingly open to the robot and its usefulness. The analysis also shows that men's acceptance of the robot is generally greater than that of women (with odds ratios of about 0.6 to 0.7). This applies both to assistive robots in the private household and to those in care environments. Men expect greater benefits from the robot than women, and women tend to be more sceptical of its technical feasibility than men. Interview results support our hypothesis that men, in particular those in the age group 60+, are more accustomed to delegating household chores to women. Delegation to machines instead of humans, therefore, seems palpable. The answer to the title question of this planned presentation is: social and assistive robots at home are not accepted by men only, but they are accepted by fewer women than men.
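
The gender effect reported above is expressed as an odds ratio. Purely as an illustration, the sketch below shows how such odds ratios can be obtained by exponentiating the coefficients of a logistic regression of robot acceptance on respondent characteristics; the data frame and variable names are hypothetical and do not reproduce the BASE-II or SYMPARTNER variables.

```python
# Sketch of how odds ratios like the 0.6-0.7 quoted above could be derived from
# a logistic regression. All data and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1463
df = pd.DataFrame({
    "accepts_robot": rng.integers(0, 2, n),  # 1 = would accept a domestic robot
    "female": rng.integers(0, 2, n),         # 1 = female respondent
    "age_70plus": rng.integers(0, 2, n),     # 1 = older than 70
})

model = smf.logit("accepts_robot ~ female + age_70plus", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
print(odds_ratios)  # an odds ratio below 1 for 'female' would mirror the reported effect
```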

Keywords: acceptance, care, gender, household

Procedia PDF Downloads 197
322 Augmented Reality Enhanced Order Picking: The Potential for Gamification

Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis

Abstract:

Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing so, AR is capable of providing a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic and, in some cases, introduce completely new forms of work and task execution. One of the most promising AR industrial applications, as the literature shows, is the use of Head-Worn Displays (HWD), monocular or binocular, to support logistics and production operations, such as order picking, part assembly and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated in the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, there is ambiguity about the impact of gamification in logistics operations, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications such as tourism and health. To the contrary, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain. This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it utilizes a review of the current state of the art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support logisticians' decisions regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC serving more than 300 retail outlets in Greece.
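
As an illustration of the gamification elements under consideration (points for objectives, leaderboards, badges), the toy sketch below shows one possible data structure for tracking a picker's score; the point values and badge names are invented and are not part of the studied AR solution.

```python
# Toy sketch of the gamification elements mentioned above: points per picking
# objective, a leaderboard, and badges. All thresholds and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PickerScore:
    name: str
    points: int = 0
    badges: list = field(default_factory=list)

    def complete_order_line(self, error_free: bool) -> None:
        self.points += 10 if error_free else 2  # arbitrary point values
        if error_free and self.points >= 100 and "accuracy-100" not in self.badges:
            self.badges.append("accuracy-100")  # hypothetical badge

def leaderboard(pickers: list) -> list:
    """Return pickers ranked by accumulated points (highest first)."""
    return sorted(pickers, key=lambda p: p.points, reverse=True)
```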

Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification

Procedia PDF Downloads 150
321 Syrian-Armenian Women Refugees: Crossing Borders between the Past and the Present, Negotiating between the Private and the Public

Authors: Ani Kojoyan

Abstract:

The Syrian refugee crisis has been a matter of worldwide concern during recent years. And though refugees' problems are contextualized in terms of time and space, the refugee crisis still remains a global issue to discuss. Since the start of the conflict, Armenia, too, has welcomed thousands of Syrian refugees. Taking into consideration Armenia's current socio-economic and geopolitical situation, the flow of refugees is a challenge both for the country and for the refugees themselves. However, these people are not simply refugees from Syria; they are Syrian-Armenian refugees: people whose ancestors were survivors of the Armenian Genocide perpetrated by the Ottoman Turks in 1915, people whose ancestors became refugees in Syria a century ago and who now, ironically, a century later follow their ancestors' paths, becoming refugees themselves in their historical homeland and facing various difficulties, among them socio-economic, socio-ideological, identity and gender issues, the latter being the main topic of discussion in the present paper. The situation presented above raises certain questions within this study: How do Syrian-Armenian refugees define themselves and their status? What are their gender roles in the socio-economic context? How do social and economic challenges re-shape Syrian-Armenian women refugees' identities? The study applies qualitative research methods of analysis, including semi-structured and in-depth interviews with 15 participants (18-25 and 26-40 age groups), and two focus groups, each involving 8 participants (18-35 age group). The activities were carried out in October 2016 in Yerevan, Armenia. The study also includes secondary data analysis. In addition, in order to centralize refugee women's experiences and identity issues, the study adopts a qualitative lens from a feminist standpoint position. It is based on the assumption that human activity structures and limits understanding, and that the distorted comprehension of events or activities has emerged from the male-oriented dominant judgement, which can be discovered by uncovering the understanding of the situation from the perspective of women's activities. The findings suggest that identity is dynamic, complex, ever-changing and sensitive to time and space, gender and class. The process of re-shaping identity is even more complicated and multi-layered and is based on internal and external factors, conditioned by individual and collective needs and interests. Refugees are mostly considered people who lost their identity in the past, since they no longer have a connection anywhere, and who try to find it in the present. In turn, female refugees, being a more vulnerable class, go through more complicated negotiations of identity re-formulation. They stand between the borders of the old and the new, the borders of lost and re-found selves, the borders of creation and self-fashioning, between illusions and the challenging reality. In particular, refugee women become more sensitive within the discourses of the private and the public domains: some of them try to create a 'new self', creating their space in a new society, whereas others try to negotiate their affective/emotional labour within their own family domains.

Keywords: feminist standpoint position, gender, identity, refugee studies, Syrian-Armenian women refugees

Procedia PDF Downloads 224
320 Documentary Filmmaking as Activism: Case Studies in Advocacy and Social Justice

Authors: Babatunde Kolawole

Abstract:

This paper embarks on an exploration of the compelling interplay between documentary filmmaking and activism, delving into their symbiotic relationship and profound impact on advocacy and social justice causes. Through an in-depth analysis of diverse case studies, it seeks to illuminate the instances where documentary films have emerged as potent tools for effecting social change and advancing the principles of justice. This research underscores the vital role played by documentary filmmakers in harnessing the medium's unique capacity to engage, educate, and mobilize audiences while advocating for societal transformation. The primary focus of this study is on a selection of compelling case studies spanning various topics and causes, each exemplifying the marriage between documentary filmmaking and activism. These case studies encompass a broad spectrum of subjects, from environmental conservation and climate change to civil rights movements and human rights struggles. By examining these real-world instances, this paper endeavors to provide a comprehensive understanding of the strategies, challenges, and ethical considerations that underpin the practice of documentary filmmaking as a form of activism. Throughout the paper, it becomes evident that the potency of documentary filmmaking lies in its ability to blend artistry with social impact. The selected case studies vividly demonstrate how documentary filmmakers, armed with cameras and a passion for change, have emerged as critical agents of societal transformation. Whether it be exposing environmental atrocities, shedding light on systemic inequalities, or giving voice to marginalized communities, these documentaries have played a pivotal role in pushing the boundaries of advocacy and social justice. One of the key themes explored in this paper is the evolving nature of documentary filmmaking as a tool for activism. It delves into the shift from traditional observational documentaries to more participatory and immersive approaches, highlighting the dynamic ways in which filmmakers engage with their subjects and audiences. This evolution is exemplified in case studies where filmmakers have collaborated with the communities they document, fostering a sense of agency and empowerment among those whose stories are being told. Furthermore, this research underscores the ethical considerations inherent in the intersection of documentary filmmaking and activism. It scrutinizes questions surrounding representation, objectivity, and the responsibility of filmmakers in portraying complex social issues. By dissecting ethical dilemmas faced by documentary filmmakers in these case studies, this paper encourages a critical examination of the ethical boundaries and obligations in the realm of advocacy-driven filmmaking. In conclusion, this paper aims to shed light on the remarkable potential of documentary filmmaking as a catalyst for activism and social justice. Through the lens of compelling case studies, it illustrates the transformative power of the medium in effecting change, amplifying underrepresented voices, and mobilizing global audiences. It is hoped that this research will not only inform the discourse on documentary activism but also inspire filmmakers, scholars, and advocates to continue leveraging the cinematic art form as a formidable force for a more just and equitable world.

Keywords: film, filmmaker, documentary, human right

Procedia PDF Downloads 53
319 Measuring the Biomechanical Effects of Worker Skill Level and Joystick Crane Speed on Forestry Harvesting Performance Using a Simulator

Authors: Victoria L. Chester, Usha Kuruganti

Abstract:

The forest industry is a major economic sector in Canada and also one of the most dangerous industries for workers. The use of mechanized mobile forestry harvesting machines has successfully reduced the incidence of injuries in forest workers related to manual labor. However, these machines have also created additional concerns, including a steep machine-operation learning curve, an increased length of the workday, repetitive strain injury, cognitive load, physical and mental fatigue, and increased postural loads due to sitting in a confined space. It is critical to obtain objective performance data for employers to develop appropriate work practices for this industry; however, ergonomic field studies of this industry are lacking, mainly due to the difficulties in obtaining comprehensive data while operators are cutting trees in the woods. The purpose of this study was to establish a measurement and experimental protocol to examine the effects of worker skill level and movement training speed (joystick crane speed) on harvesting performance using a forestry simulator. A custom wrist angle measurement device was developed as part of the study to monitor Euler angles during operation of the simulator. The device consisted of two accelerometers, a Bluetooth module, three 3V coin cells, a microcontroller, a voltage regulator and application software. Harvesting performance and crane data were provided by the simulator software and included tree-to-frame collisions, crane-to-tree collisions, boom tip distance, number of trees cut, etc. A pilot study of 3 operators with various skill levels was conducted to identify factors that distinguish highly skilled operators from novice or intermediate operators. Variables such as reaction time, math skill, past work experience, training movement speed (e.g. joystick control speeds), harvesting experience level, muscle activity, and wrist biomechanics were measured and analyzed. A 10-channel wireless surface EMG system was used to monitor the amplitude and mean frequency of 10 upper extremity muscles pre- and post-performance on the forestry harvest simulator. The results of the pilot study showed inconsistent changes in median frequency pre- and post-operation, but there was an increase in the activity of the flexor carpi radialis, anterior deltoid and upper trapezius of both arms. The wrist sensor results indicated that wrist supination and pronation occurred more than flexion and extension, with radial-ulnar rotation demonstrating the least movement. Overall, wrist angular motion increased as the crane speed increased from slow to fast. Further data collection is needed and will help industry partners determine the factors that separate operator skill levels, identify optimal training speeds, and determine the length of training required to bring new operators to an efficient skill level. In addition to effective employee training programs, the results of this work will be used for selective employee recruitment strategies to improve employee retention after training. Further, improved training procedures and knowledge of the physical and mental demands on workers will lead to highly trained and efficient personnel, reduced risk of injury, and optimal work protocols.
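
The EMG frequency analysis mentioned above typically relies on spectral measures such as the median frequency, i.e. the frequency below which half of the signal power lies. A minimal sketch of that computation is given below, using a synthetic signal in place of a real EMG channel and an assumed sampling rate.

```python
# Sketch of the median-frequency computation commonly used for surface EMG
# fatigue analysis. The signal is synthetic noise standing in for one EMG
# channel; the sampling rate and segment length are assumptions.
import numpy as np
from scipy.signal import welch

fs = 2000.0  # assumed EMG sampling rate in Hz
emg = np.random.default_rng(1).standard_normal(int(10 * fs))  # 10 s placeholder signal

freqs, psd = welch(emg, fs=fs, nperseg=1024)

# Median frequency: the frequency below which half of the total spectral power lies.
cumulative_power = np.cumsum(psd)
median_freq = freqs[np.searchsorted(cumulative_power, cumulative_power[-1] / 2.0)]
print(f"median frequency ~ {median_freq:.1f} Hz")
```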

Keywords: EMG, forestry, human factors, wrist biomechanics

Procedia PDF Downloads 145
318 Exploiting Charges on Medicinal Synthetic Aluminum Magnesium Silicate's {Al₄ (SiO₄)₃ + 3Mg₂SiO₄ → 2Al₂Mg₃ (SiO₄)₃} Nanoparticles in Treating Viral Diseases, Tumors, Antimicrobial Resistant Infections

Authors: M. C. O. Ezeibe, F. I. O. Ezeibe

Abstract:

The reasons viral diseases (including avian influenza, HIV/AIDS, and COVID-19), tumors (including cancers and prostate enlargement), and antimicrobial-resistant infections (AMR) are difficult to cure are that features of the pathogens which normal cells do not have or need (biomedical markers) have not been identified, medicines that can counter the markers have not been invented, and strategies and mechanisms for their treatment have not been developed. When cells become abnormal, they acquire negative electrical charges, and viruses are either positively or negatively charged, while normal cells remain neutral (without electrical charges). Electrostatic attraction between opposite charges is therefore a treatment mechanism for viral diseases and tumors: medicines that carry positive electrical charges would mop up abnormal (infected and tumor) cells and DNA viruses (negatively charged), while negatively charged medicines would mop up RNA viruses (positively charged). Molecules of aluminum-magnesium silicate [AMS: Al₂Mg₃(SiO₄)₃], an approved medicine and pharmaceutical stabilizing agent, consist of nanoparticles which have both positively charged ends and negatively charged ends. The very small size (0.96 nm) of the nanoparticles allows them to reach all cells in every organ. By stabilizing antimicrobials, AMS reduces the rate at which the body metabolizes them so that they remain at high concentrations for extended periods. When drugs remain at high concentrations for longer periods, their efficacies improve. Nanoparticles also enhance the delivery of medicines to their targets. Both remaining at high concentrations for longer periods and better delivery to targets improve efficacy and allow lower doses to achieve the desired effects, so that side effects of medicines are reduced and the immunity of patients can be enhanced. Silicates also enhance the immune responses of treated patients. Improving antimicrobial efficacies and enhancing patients' immunity terminate infections so that none remain that could develop resistance. Some countries do not have natural deposits of AMS, but they may have aluminum silicate (AS: Al₄(SiO₄)₃) and magnesium silicate (MS: Mg₂SiO₄), which are also approved medicines. So, AS and MS were used to formulate an AMS brand named Medicinal Synthetic AMS {Al₄(SiO₄)₃ + 3Mg₂SiO₄ → 2Al₂Mg₃(SiO₄)₃}. To overcome the challenge of AMS, AS, and MS being unabsorbable, dextrose monohydrate is incorporated in MSAMS formulations so that the simple sugar conveys the electrically charged nanoparticles into blood circulation by the principle of active transport, allowing MSAMS-antimicrobial formulations to act systemically. In vitro, MSAMS reduced (P≤0.05) titers of viruses, including avian influenza virus and HIV. When used to treat virus-infected animals, it cured Newcastle disease and infectious bursal disease of chickens, parvovirus disease of dogs, and peste des petits ruminants disease of sheep and goats. A number of HIV/AIDS patients treated with it have been reported to become HIV-negative (antibody and antigen). COVID-19 patients are also reported to recover and test virus-negative when treated with MSAMS. PSA titers of prostate cancer/enlargement patients normalize (≤4) following treatment with MSAMS. MSAMS has also potentiated ampicillin trihydrate, sulfadimidine, cotrimoxazole, piperazine citrate and chloroquine phosphate to achieve ≥95% infection-load reductions (AMR prevention). At 75% of the doses of ampicillin, cotrimoxazole, and streptomycin, supporting MSAMS-formulation treatments with antioxidants led to the termination of even already-resistant infections.

Keywords: electrical charges, viruses, abnormal cells, aluminum-magnesium silicate

Procedia PDF Downloads 63
317 Evaluation of the Incidence of Mycobacterium Tuberculosis Complex Associated with Soil, Hayfeed and Water in Three Agricultural Facilities in Amathole District Municipality in the Eastern Cape Province

Authors: Athini Ntloko

Abstract:

Mycobacterium bovis and other species of the Mycobacterium tuberculosis complex (MTBC) can result in a zoonotic infection known as bovine tuberculosis (bTB). The MTBC has members that may infect an extensive range of hosts, including wildlife. Diverse wild species are known to cause disease in domestic livestock and are acknowledged as TB reservoirs. As a result, bTB risk factors have been a major subject of study worldwide, with some studies focusing on particular groups of risk factors such as wildlife and herd management. The aim of this study was to determine the incidence of Mycobacterium tuberculosis complex associated with soil, hayfeed and water. Questionnaires were administered to thirty (30) smallholding farm owners in two villages (kwaMasele and Qungqwala) and at three (3) commercial farms (Fort Hare dairy farm, Middledrift dairy farm and Seven Star dairy farm). Detection of M. tuberculosis complex was achieved by Polymerase Chain Reaction using primers for IS6110, whereas genotypic drug resistance mutations were detected using Genotype MTBDRplus assays. Nine percent (9%) of respondents had more than 40 cows in their herd, while 60% reported between 10 and 20 cows in their herd. The relationship between farm size and vaccination for TB ranged from forty-one percent (41%) at the highest to five percent (5%) at the lowest. The proportion of respondents who knew about the relationship between TB cases and cattle location was ninety-one percent (91%). Approximately fifty-one percent (51%) of respondents had knowledge about wildlife access to the farms. The relationship between import of cattle and farm size ranged from nine percent (9%) to thirty-five percent (35%). Cattle sickness in relation to farm size ranged from forty-three percent (43%) at the highest to three percent (3%) at the lowest, while thirty-three percent (33%) of respondents had knowledge about health management. Forty-eight percent (48%) of respondents had knowledge about the occurrence of TB infections on farms. The frequency of DNA isolation from samples ranged from forty-five percent (45%) from water, the highest, to twenty-two percent (22%) from soil, the lowest. Fort Hare dairy farm had the highest number of positive samples, forty-four percent (44%) from water samples, whereas Middledrift dairy farm had the lowest positive rate from water, seventeen percent (17%). Twelve (22%) out of 55 isolates showed resistance to both INH and RIF, that is, multi-drug resistance (MDR), and nine percent (9%) were sensitive to either INH or RIF. Mutations at the rpoB gene ranged from 58% at the highest to 23% at the lowest. Fifty-seven percent (57%) of samples showed a S315T1 mutation, while only 14% possessed a S531L mutation in the katG gene. The highest proportion of inhA mutations was detected in T8A (80%) and the lowest in A16G (17%). The results of this study reveal that risk factors for bTB in cattle and dairy farm workers are a serious issue in the Eastern Cape of South Africa, with the possibility of widespread dissemination of multidrug-resistant determinants in MTBC from the environment.

Keywords: hayfeed, isoniazid, multi-drug resistance, mycobacterium tuberculosis complex, polymerase chain reaction, rifampicin, soil, water

Procedia PDF Downloads 337
316 A Foucauldian Analysis of Child Play: Case Study of a Preschool in the United States

Authors: Meng Wang

Abstract:

Historically, young members of society (children) have been oppressed by adults through direct violent acts. Direct violence was evident in rampant child labor and child maltreatment cases. Since the United Nations acknowledged the rights of children, it has been widely believed that children are protected against direct physical violence. Nevertheless, this paper argues from Foucauldian and disability studies standpoints that, as in earlier times, children are oppressed objects in the context of child play, which is constructed by adults to substitute for direct violence in regulating children. In particular, this paper suggests that, on the one hand, preschool play is a new way that adults adopt to oppress preschoolers and regulate society as a whole; on the other hand, preschoolers are taught how to play as an acquired skill and to master self-regulation through play. There is a line of contemporary research that centers on child play from a social constructivist perspective. Yet current teaching practices pertaining to child play, including guided child play and free play, in fact serve the interests of adults and society at large. By acknowledging and deconstructing the prevalence of 'evidence-based best practice' in the early childhood education field within western society, the child-adult power relation could be reconstructed and alternative truths could be found in early childhood education. To support the argument of this paper, an ongoing observational case study is being conducted in a preschool setting in the United States. The age range of the children is 2.5 to 4 years. Approximately 10 children (5 boys) are participating in this case study. Observation is conducted throughout the weekdays as children follow the classroom routine with a lead and an assistant teacher. Classroom teachers are interviewed about their classroom management strategies. Preliminary findings of this case study suggested that preschool teachers tended to utilize scenarios from preschoolers' dramatic play to impart core cultural values to young children. These values were pre-determined by adults. In addition, if young children failed to follow teachers' guidance on playing in the 'correct' way, they ran the risk of being excluded from the play scenario by peers and adults. Furthermore, this study tended to indicate that, through child play, preschoolers are obliged to develop an internal system of violence, that is, the self-regulation skills to regulate their own behavior; and if this internal system is judged unestablished on the basis of various assessments by adults, then young children potentially face the consequences of negative labeling and disablement by adults. In conclusion, this paper applies a Foucauldian analysis to the context of child play. At present, within the preschool, child play is not as free as it seems to be. Young children are expected to perform cultural tasks through the play activities designed by adults. Adults utilize child play as technologies of governmentality to further predict and regulate future society at large.

Keywords: child play, developmentally appropriate practice, DAP, poststructuralism, technologies of governmentality

Procedia PDF Downloads 155
315 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in recent years as an important means for data authentication and ownership protection. Image and video watermarking are well established in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike image and video watermarking, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in recent years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has been useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A blind watermarking technique for 3D meshes is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations according to the modification of the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and that it withstands both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they show robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
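
A simplified sketch of spatial-domain embedding by modulating the variance of vertex norms, in the spirit of the approach described above, is given below. It is an illustration only, not the authors' exact algorithm; the bin count and embedding strength are arbitrary assumptions, and detection would proceed by recomputing the per-bin statistic and comparing it against a reference.

```python
# Illustrative sketch (not the authors' exact method): hide one watermark bit per
# norm-range bin by raising or lowering the variance of vertex norms in that bin.
import numpy as np

def embed_bits(vertices: np.ndarray, bits: list, strength: float = 0.03) -> np.ndarray:
    """vertices: (N, 3) array; returns a watermarked copy of the mesh vertices."""
    center = vertices.mean(axis=0)
    offsets = vertices - center
    norms = np.linalg.norm(offsets, axis=1)

    # Partition vertices into as many norm-range bins as there are bits.
    edges = np.linspace(norms.min(), norms.max(), len(bits) + 1)
    bin_idx = np.digitize(norms, edges[1:-1])

    watermarked = vertices.copy()
    for b, bit in enumerate(bits):
        mask = bin_idx == b
        if not mask.any():
            continue
        mean_norm = norms[mask].mean()
        # Bit 1: push norms away from the bin mean (raise variance);
        # bit 0: pull them toward it (lower variance).
        factor = 1.0 + strength if bit else 1.0 - strength
        new_norms = mean_norm + (norms[mask] - mean_norm) * factor
        scale = new_norms / np.maximum(norms[mask], 1e-12)
        watermarked[mask] = center + offsets[mask] * scale[:, None]
    return watermarked
```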

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 160
314 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education

Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke

Abstract:

In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor which interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate their ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, as well as their love of learning and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discourse on the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, which included the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and also disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning. Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and towards improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage and attitudes towards learning among undergraduate and graduate university students. The findings of this study aim to promote profound contemplation, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.

Keywords: deep work, flow, higher education, lifelong learning, love of learning

Procedia PDF Downloads 68
313 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative stages. With advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be reduced through the application of screening tests alone. The mortality associated with invasive cervical cancer can, however, be substantially reduced through early-stage detection. This study selected an array of top feature-selection techniques with the aim of developing a model that could validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented with the R and Python programming languages, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out to train and test the models. Fitting features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset using the contributions of various selection methods for classifying cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, mean age at pregnancy was 23.3±5.5 years, mean age at first sexual experience was 19.4±3.2 years, and mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. Using the top ten (10) features (variables) considered in the analysis, RF gave an overall model performance with an accuracy of 72.0%, a precision of 84.6%, a recall of 84.6% and an F1-score of 74.0%, while LR had an accuracy of 74.0%, a precision of 70.0%, a recall of 70.0% and an F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer; the age of patients was considered the most important risk factor, followed by the number of pregnancy live births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
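
The classification workflow described above (80:20 split, feature standardization, and comparison of LR, DT, RF and KNN) can be sketched as follows. The file name and column names are hypothetical placeholders, since the U54 repository data are not reproduced here.

```python
# Sketch of the described workflow: 80:20 split, standardized numerical features,
# and four classifiers compared on accuracy, precision, recall and F1.
# The CSV path and the "cc_status" outcome column are hypothetical stand-ins.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("cervical_cancer_cohort.csv")  # hypothetical file
X = df.drop(columns=["cc_status"])              # hypothetical outcome column
y = df["cc_status"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42
)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # numerical features standardized
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)
    print(name,
          f"acc={accuracy_score(y_test, pred):.3f}",
          f"prec={precision_score(y_test, pred):.3f}",
          f"rec={recall_score(y_test, pred):.3f}",
          f"f1={f1_score(y_test, pred):.3f}")
```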

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 83
312 Working Without a Safety Net: Exploring Struggles and Dilemmas Faced by Greek Orthodox Married Clergy Through a Mental Health Lens, in the Australian Context

Authors: Catherine Constantinidis (Nee Tsacalos)

Abstract:

This paper presents one aspect of a larger Master's qualitative study exploring the roles of married Greek Orthodox clergy, the Priest and Presbytera, under the wing of the Greek Orthodox Archdiocese of Australia. This groundbreaking research necessitated the creation of primary data within a phenomenological paradigm, drawing from the lived experiences of Priests and Presbyteras in contemporary society. As a Social Worker, a bilingual (Greek/English) Mental Health practitioner and a Presbytera, the author constantly raises and ponders the questions: Who do the Priest and Presbytera turn to when they experience difficulties or problems? Where do they go for support? What is in place for their emotional and psychological health and well-being? Who cares for the spiritual carer? Who is there to catch our falling clergy and their wives? What is their 'safety net'? Identified phenomena of angst, stress, frustration and confusion experienced by the Priest and (by extension) the Presbytera within their position, coupled with basic assumptions, perceptions and expectations about their roles, the role of the organisation (the Church), and their role as spouse, often caused confusion and in some cases conflict. Unpacking this complex and multi-dimensional relationship highlighted not only the roller coaster of emotions, potentially affecting their physical and mental health, but also the impact on the interwoven relationships of marriage and ministry. The author considers these phenomena in the light of bilingual cultural and religious organisational practice frameworks, specifically the Greek Orthodox Church, whilst filtering the findings through a mental health lens. One could argue that clergy (and by default their wives) are expected to take on the responsibility of being kind, nurturing and supportive to others. However, when it comes to taking care of self, they are not nearly as kind. This research looks at a recurrent theme throughout the interviews, where all participants talked about limited support systems and poor self-care strategies and the impact this has on their ministry, their mental, emotional, and physical health, and ultimately on their relationships with self and others. The struggle all participants encountered at some point in their ministry was physical, spiritual and psychological burnout. The overall aim of the researcher is to provide a voice for the Priest and the Presbytera, painting a clearer picture of these roles and facilitating an awareness of the struggles and dilemmas faced in their ministry. It is hoped these identified gaps in self-care strategies and support systems will provide solid foundations for building a culturally sensitive, empathetic and effective support system framework incorporating the spiritual and psychological well-being of the Priest and Presbytera: a 'safety net'. A supplementary aim is to inform and guide ministry practice frameworks for clergy, spouses, the church hierarchy and religious organisations on a local and global platform, incorporating some form of self-care system.

Keywords: care for the carer, mental health, Priest, Presbytera, religion, support system

Procedia PDF Downloads 392
311 ExactData Smart Tool For Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

ExactData is a smart tool which helps with meaningful marketing content creation. It helps marketers achieve this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language that specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier in the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms. By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation and text correction. To ensure that we can achieve the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of different methods, including transformers and convolutional neural networks (CNNs), to determine which ones are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
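
One building block of such a tool, sentiment scoring of Polish ad copy with a transformer model, can be sketched as below. The model checkpoint name is a placeholder rather than the model used by ExactData, and the two example sentences are invented.

```python
# Sketch of transformer-based sentiment scoring of Polish ad copy via the
# Hugging Face pipeline API. The checkpoint name is a hypothetical placeholder.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="some-org/polish-sentiment-model",  # hypothetical checkpoint name
)

ad_variants = [
    "Nasz nowy krem nawilża skórę przez 24 godziny.",    # "Our new cream moisturizes skin for 24 hours."
    "Znowu podnieśliśmy ceny, ale i tak nas kochacie.",  # deliberately ironic example sentence
]

for text, result in zip(ad_variants, sentiment(ad_variants)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```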

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 85