Search results for: scientific answers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2485

85 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations tend to be designed around the clinical goals of the intervention, missing the opportunity to illuminate effects on organizations and costs. A more comprehensive evaluation design can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive evaluation design for eHealth interventions, including the design process in different contexts. Methodology: The limited feasibility of demonstrating clinical outcomes was foreseen in the European Union funded project “DECI” (“Digital Environment for Cognitive Inclusion”), run under the “Horizon 2020” program, which aims to define and test a digital environment platform, with corresponding care models, that helps elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformation, covering context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive evaluation design was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative data-gathering methods that illuminate non-medical aspects.
Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in different contexts, including measurement systems, stakeholder expectations and profiles, organizational ambitions to change through eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires, and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented for handling challenges during the evaluation design process in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies the need for a flexible approach to evaluation design to enable judgment of the effectiveness and the potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 324
84 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically the formation of social media, has created new modes of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published there is suspect, produced with the intention of deceiving others; such content is often called "fake news". By mixing fabricated content with correct information and misleading public opinion, fake news can endanger the security of countries and deprive audiences of the basic right of free access to accurate information. Competing governments, opposition elements, profit-seeking individuals, and even rival organizations, aware of this capacity, act on a large scale to distort and overturn facts in the virtual space of target countries and communities and to steer public opinion towards their goals. This extensive de-truthing of societies' information space has created a wave of harm and worry all over the world, and these concerns have opened a new line of research into the timely containment and reduction of the destructive effects of fake news on public opinion. The expansion of this phenomenon can create serious problems for societies; its impact on events such as the 2016 American elections, Brexit, the 2017 French elections, and the 2019 Indian elections has raised concerns and led to the adoption of countermeasures. A simple look at the growth of research indexed in "Scopus" shows a sharp increase in studies with the keyword "false information", peaking at 524 items in 2020, whereas in 2015 only 30 scientific research publications appeared in this field.
Considering that one of the capabilities of social media is to provide a context for disseminating news and information, both true and false, this article investigates a classification of strategies for spreading fake news in social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study of global sources was conducted. Then, in-depth interviews were carried out with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposeful sampling. Analyzing the data with the thematic analysis method yielded the following strategies (the research is still in progress): unrealistically strengthening or weakening the speed and content of the event; stimulating psycho-media movements; targeting emotionally receptive audiences such as women, teenagers, and young people; strengthening public hatred; framing reactions to events as legitimate or illegitimate; incitement to physical conflict; simplification of violent protests; and targeted publication of images and interviews.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 63
83 Contemporary Paradoxical Expectations of the Nursing Profession and Revisiting the ‘Nurses’ Disciplinary Boundaries: India’s Historical and Gendered Perspective

Authors: Neha Adsul, Rohit Shah

Abstract:

Background: The global history of nursing is a history of deep contradictions, as the profession has sought inclusion in an already gendered world. Although a powerful 'clinical gaze' exists, nurses have toiled to re-negotiate and subvert the 'medical gaze' by practicing a 'therapeutic gaze' that tethers 'care' back into nursing practice. This helps address the duality of 'body' and 'mind', wherein the patient is not merely an object of medical inquiry. Nevertheless, there has been a persistent effort over the years to classify 'nursing' as either an art or an emerging science. Especially with advances in hospital-based, techno-centric medical practice, the boundaries between technology and nursing practice are becoming blurred: the technical process becomes synonymous with nursing, eroding the essence of nursing care. Aim: This paper examines the history of nursing and offers insights into how gendered relations and the ideological belief in 'nursing as gendered work' have contributed to the subjugation of the nursing profession. It further provides insights into a patriarchally imbibed techno-centrism that negates the gendered caregiving at the crux of a nurse's work. Method: A literature search was carried out using the Google Scholar, Web of Science, and PubMed databases. Search terms included: technology and nursing, medical technology and nursing, history of nursing, sociology and nursing, and nursing care. The history of nursing is presented in a discussion that weaves together the historical events of the 'birth of the clinic' and the shift from 'bedside medicine' to 'hospital-based medicine', which legitimized the exposure of patients' bodies to the 'medical gaze' while nursing emerged as acquiescent to instrumental, technical, positivist, and dominant views of medicine.
The resultant power asymmetries leave contemporary nurses in a constant struggle between being the physician's "operational right arm" and harboring the subjective understanding of patients that keeps nursing care from being de-humanized. Findings: The nursing profession is rendered invisible by gendered relations with patrifocal societal roots. This perpetuates a notion, rooted in empiricism, that has resulted in a theoretical and epistemological fragmentation in which body and mind are understood as separate entities. Nurses operate within this structure while constantly at risk of being pushed beyond legitimate professional boundaries and labeled 'unscientific', since their work does not always align with the dominant positivist lines of inquiry. Conclusion: Understood in this broader context of how nursing as a practice has evolved, the profession provides a particularly crucial testbed for understanding contemporary gender relations; not because nurses like to live in a gendered work trap, but because gendered relations at work are written into a covertly narcissistic patriarchal milieu that fails to recognize the value of the intangible yet utterly necessary 'caring work' in nursing. This research calls for preserving and revering the humane aspect of nursing care alongside the emerging tech-savvy expectations of nursing work.

Keywords: nursing history, technocentric, power relations, scientific duality

Procedia PDF Downloads 145
82 A Proposal of a Strategic Framework for the Development of Smart Cities: The Argentinian Case

Authors: Luis Castiella, Mariano Rueda, Catalina Palacio

Abstract:

The world’s rapid urbanisation represents an excellent opportunity to implement initiatives oriented towards a country’s general development. However, this phenomenon has put considerable pressure on current urban models, pushing them towards crisis. As a result, several factors usually associated with underdevelopment have been steadily rising. Moreover, public authorities have not been able to keep up with the speed of urbanisation, which has kept them from meeting the demands of society, responding with reactive policies instead of coordinated, organised efforts. In contrast, the concept of a Smart City, which emerged around two decades ago, in principle represents a city that utilises innovative technologies to remedy the everyday issues of its citizens, empowering them with the newest available technology and information. The concept has come to adopt a wider meaning, including human and social capital as well as productivity, economic growth, quality of life, environment, and participative governance. These developments have also disrupted the management of institutions such as academia, which have become key in generating scientific advancements that can solve pressing problems and in forming a specialised class able to follow up on these breakthroughs. In this light, the Ministry of Modernisation of the Argentinian Nation has created a model rooted in the concept of a ‘Smart City’. This effort considered all the dimensions at play in an urban environment, with careful monitoring of each sub-dimension in order to establish the government’s priorities and improve the effectiveness of its operations. In an attempt to improve the overall efficiency of the country’s economic and social development, these focused initiatives have also encouraged citizen participation and the cooperation of the private sector, replacing short-sighted policies with coherent, organised ones.
This process was developed gradually. The first stage consisted of building the model’s structure; the second, of applying the method to specific case studies and verifying that the mechanisms used respected the desired technical and social aspects. The third stage consists of repeating and comparing this experiment in order to measure the effects on the ‘treatment group’ over time. The first trial was conducted on 717 municipalities and evaluated the dimension of Governance. Results showed that levels of governmental maturity varied sharply with size: cities with fewer than 150,000 people had a strikingly lower level of governmental maturity than larger cities. This analysis made important trends and target populations apparent, which enabled the public administration to focus its efforts and increase its probability of success. It also made it possible to cut costs and time and to create a dynamic framework in tune with the population’s demands, improving quality of life through sustained efforts to develop social and economic conditions within the territorial structure.
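
The keyword list mentions a composite index; one common way to rank governance maturity across municipalities is a weighted composite of normalized sub-dimension scores. The sketch below is purely illustrative: the sub-dimensions, weights, and scores are hypothetical, not the Ministry's actual model.

```python
# Illustrative composite governance index; sub-dimensions, weights,
# and scores are hypothetical, not the Ministry's actual model.
WEIGHTS = {"transparency": 0.4, "digital_services": 0.4, "open_data": 0.2}

def governance_index(scores: dict) -> float:
    """Weighted average of sub-dimension scores, each given on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] / 100.0 for k in WEIGHTS)

municipalities = {
    "small_town": {"transparency": 40, "digital_services": 20, "open_data": 10},
    "large_city": {"transparency": 80, "digital_services": 65, "open_data": 55},
}

# Rank municipalities by composite score, highest maturity first.
ranked = sorted(municipalities,
                key=lambda m: governance_index(municipalities[m]),
                reverse=True)
```

A composite of this kind is what allows a single "maturity" comparison across city-size groups while keeping each sub-dimension separately monitorable.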

Keywords: composite index, comprehensive model, smart cities, strategic framework

Procedia PDF Downloads 176
81 (Anti)Depressant Effects of Non-Steroidal Antiinflammatory Drugs in Mice

Authors: Horia Păunescu

Abstract:

Purpose: The study aimed to assess the depressant or antidepressant effects in mice of several nonsteroidal anti-inflammatory drugs (NSAIDs): the selective cyclooxygenase-2 (COX-2) inhibitor meloxicam, and the non-selective COX-1 and COX-2 inhibitors lornoxicam, sodium metamizole, and ketorolac. Current literature data on such effects of these agents are scarce. Materials and methods: The study was carried out on NMRI mice weighing 20-35 g, kept in a standard laboratory environment, and was approved by the Ethics Committee of the “Carol Davila” University of Medicine and Pharmacy, Bucharest. The study agents were injected intraperitoneally at 10 mL/kg body weight (bw), 1 hour before the assessment of locomotor activity by cage testing (n=10 mice/group) and 2 hours before the forced swimming tests (n=15). The agents were dissolved in normal saline (meloxicam, sodium metamizole), ethanol 11.8% v/v in normal saline (ketorolac), or water (lornoxicam). Negative and positive control agents were also given (amitriptyline in the forced swimming test). The cage floor used in the locomotor activity assessment was divided into 20 equal 10 cm squares. The forced swimming test involved partial immersion of the mice in cylinders (15 cm height, 9 cm diameter) filled with water to a depth of 10 cm at 28 °C, where they were left for 6 minutes. The endpoint of the locomotor activity assessment was the number of squares treaded. Four endpoints were used in the forced swimming test (immobility latency for the entire 6 minutes, and immobility, swimming, and climbing scores for the final 4 minutes of the swimming session), recorded by an observer blinded to the experimental design.
The statistical analysis used the Levene test for variance homogeneity, then ANOVA and post-hoc Tukey or Tamhane tests as appropriate. Results: No statistically significant increase or decrease in the number of squares treaded was seen in the locomotor activity assessment of any mouse group. In the forced swimming test, amitriptyline showed an antidepressant effect in each experiment at the 10 mg/kg bw dosage. Sodium metamizole was depressant at 100 mg/kg bw (increased immobility score, p=0.049, Tamhane test), but not at lower dosages (25 and 50 mg/kg bw). Ketorolac showed an antidepressant effect at the intermediate dosage of 5 mg/kg bw (increased swimming score, p=0.012, Tamhane test), but not at 2.5 or 10 mg/kg bw. Meloxicam and lornoxicam did not alter the forced swimming endpoints at any dosage level. Discussion: 1) Certain NSAIDs caused changes in forced swimming patterns without interfering with locomotion. 2) Sodium metamizole showed a depressant effect, whereas ketorolac proved antidepressant. Conclusion: NSAID-induced mood changes are not class effects of these agents and appear independent of the type of cyclooxygenase inhibited (COX-1 or COX-2). Disclosure: This paper was co-financed by the European Social Fund through the Sectorial Operational Programme Human Resources Development 2007-2013, project number POSDRU/159/1.5/S/138907, "Excellence in scientific interdisciplinary research, doctoral and postdoctoral, in the economic, social and medical fields - EXCELIS", coordinated by the Bucharest University of Economic Studies.
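
The analysis pipeline described above (Levene's test for variance homogeneity, then one-way ANOVA before post-hoc comparisons) can be sketched with SciPy. The immobility scores below are made-up illustrative numbers, not the study's data, and the Tamhane post-hoc test is not part of SciPy itself.

```python
from scipy import stats

# Made-up immobility scores (seconds) for three groups; not the study's data.
saline = [120, 135, 128, 140, 132]
metamizole_100 = [160, 155, 170, 149, 163]
amitriptyline_10 = [90, 85, 100, 95, 88]
groups = [saline, metamizole_100, amitriptyline_10]

# 1. Levene's test: homogeneous variances justify a Tukey post-hoc;
#    otherwise an unequal-variance test such as Tamhane's T2 is chosen.
lev_stat, lev_p = stats.levene(*groups)

# 2. One-way ANOVA across the three groups; a significant F statistic
#    licenses the pairwise post-hoc comparisons.
f_stat, anova_p = stats.f_oneway(*groups)
```

The branch between Tukey and Tamhane mirrors the "as appropriate" choice in the abstract: the post-hoc test is picked only after Levene's result is known.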

Keywords: antidepressant, depressant, forced swim, NSAIDs

Procedia PDF Downloads 235
80 Impact of Anthropogenic Stresses on Plankton Biodiversity in Indian Sundarban Megadelta: An Approach towards Ecosystem Conservation and Sustainability

Authors: Dibyendu Rakshit, Santosh K. Sarkar

Abstract:

The study gives a comprehensive account of large-scale changes in plankton community structure, in relation to water quality characteristics, caused by anthropogenic stresses, principally the annual Gangasagar Festival (AGF) at the southern tip of Sagar Island in the Indian Sundarban wetland, over a 3-year period (2012-2014; n=36). This prograding, vulnerable, tide-dominated megadelta formed in the estuarine phase of the Hooghly Estuary and hosts the largest continuous tract of luxuriant mangrove forest, rich in native flora and fauna. The sampling strategy was designed to characterize changes in the plankton community and water quality across three phases: the festival period (January), the pre-festival period (December), and the post-festival period (February). Surface water samples were collected for estimation of environmental variables and for measurement of phytoplankton and microzooplankton biodiversity. Both biotic and abiotic parameters were preserved and identified using standard chemical and biological methods. Intensive human activity led to sharp ecological changes, reflected in a poor water quality index (WQI) due to high turbidity (14.02±2.34 NTU) coupled with low chlorophyll a (1.02±0.21 mg m⁻³) and dissolved oxygen (3.94±1.1 mg l⁻¹) compared with the pre- and post-festival periods. A sharp reduction in the abundance (4140 to 2997 cells l⁻¹) and diversity (H′=2.72 to 1.33) of phytoplankton, and in microzooplankton tintinnids (450 to 328 ind l⁻¹; H′=4.31 to 2.21), was pronounced. Small tintinnids (average lorica length 29.4 µm; average LOD 10.5 µm), comprising Tintinnopsis minuta, T. lobiancoi, T. nucula, and T. gracilis, predominated and reached some of their greatest abundances during the festival period. ANOVA revealed significant variation across festival periods in phytoplankton (F=1.77; p=0.006) and tintinnid abundance (F=2.41; p=0.022).
RELATE analysis revealed a significant correlation between the variation in planktonic communities and the environmental data (R=0.107; p=0.005). Three distinct groups were delineated by principal component analysis, in which a set of hydrological parameters acted as the causative factor(s) maintaining the diversity and distribution of the planktonic organisms. The pronounced adverse impact of anthropogenic stresses on the plankton community could lead to environmental deterioration, disrupting the productivity of benthic and pelagic ecosystems as well as fishery potential, which is directly related to livelihood services. The festival can be considered a driver of multiple changes, including beach erosion, shoreline change, pollution from discarded plastic and electronic waste, and destruction of natural habitats resulting in loss of biodiversity. In addition, deterioration in water quality was evident from the immersion of idols, with detrimental effects on aquatic biota. The authors strongly recommend adopting integrated scientific and administrative strategies for the resilience, sustainability, and conservation of this megadelta.
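
The diversity values H′ reported above are Shannon–Wiener indices. The sketch below shows the standard computation from taxon counts; the natural logarithm is assumed (the usual convention), since the abstract does not state the log base, and the example counts are invented.

```python
import math

def shannon_index(counts):
    """Shannon–Wiener diversity H' = -sum(p_i * ln p_i) over taxa with count > 0."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A perfectly even community of 4 taxa has H' = ln(4) ≈ 1.386,
# while dominance by a single taxon drives H' towards 0.
even = shannon_index([250, 250, 250, 250])
skewed = shannon_index([940, 20, 20, 20])
```

A drop such as the reported H′ = 2.72 to 1.33 thus reflects both fewer taxa and stronger dominance by a few of them during the festival.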

Keywords: Gangasagar festival, phytoplankton, Sundarban megadelta, tintinnid

Procedia PDF Downloads 234
79 Assessing Motional Quotient for All Round Development

Authors: Zongping Wang, Chengjun Cui, Jiacun Wang

Abstract:

The concept of intelligence has been widely used to assess an individual's cognitive abilities to learn, form concepts, understand, apply logic, and reason. According to the theory of multiple intelligences, there are eight distinguished types of intelligence. One of them, bodily-kinaesthetic intelligence, relates to an individual's capacity to control his or her body and work with objects. Motor intelligence, on the other hand, reflects the capacity to understand, perceive, and solve functional problems through motor behavior. Both bodily-kinaesthetic intelligence and motor intelligence refer, directly or indirectly, to bodily capacity. Inspired by these two concepts, this paper introduces motional intelligence (MI). MI is two-fold. (1) Body strength: the capacity of various organ functions manifested by muscle activity under the control of the central nervous system during physical exercise. It can be measured by the magnitude of muscle contraction force, the frequency of repeating a movement, the time to finish a movement or change of body position, the duration muscles can be kept working, etc. Body strength reflects the objective side of MI. (2) Psychiatric willingness towards physical events: a subjective quantity determined by an individual's self-consciousness of physical events and resistance to fatigue, which we therefore call subjective MI. Subjective MI can be improved through education and appropriate social events, and its improvement can in turn raise objective MI. A quantitative score of an individual's MI is the motional quotient (MQ). MQ is affected by several factors, including genetics, physical training, diet and lifestyle, family and social environment, and personal awareness of the importance of physical exercise. Genes determine one's body strength potential. Physical training, in general, makes people stronger, faster, and swifter. Diet and lifestyle have a direct impact on health.
Family and social environment largely affect one's passion for physical activities, as does personal awareness of the importance of physical exercise. The key to the success of the MQ study is developing an acceptable and efficient system that can assess MQ objectively and quantitatively. Different assessment systems should be applied to different groups of people according to their ages and genders. Field tests, laboratory tests, and questionnaires are essential components of MQ assessment. A scientific interpretation of the MQ score is also part of an MQ assessment system, as it helps an individual improve his or her MQ. IQ (intelligence quotient) and EQ (emotional quotient) and their tests have been studied intensively. We argue that the study of IQ and EQ alone is not sufficient for an individual's all-round development; the significance of MQ study is that it complements them. MQ reflects an individual's mental as well as bodily level of intelligence in physical activities. It is well known that the Springfield College seal includes the Luther Gulick triangle with the words "spirit," "mind," and "body" written within it. MQ, together with IQ and EQ, echoes this education philosophy. Since its inception in 2012, MQ research has spread rapidly in China, where six prestigious universities have established research centers on MQ and its assessment.
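
One way to operationalize an MQ score of the kind described, combining an objective body-strength component from field tests with a subjective willingness component from a questionnaire, is a weighted sum of normalized sub-scores. Everything below (the sub-tests, the group norms, and the 60/40 weighting) is hypothetical, since the paper does not publish a scoring formula.

```python
# Hypothetical MQ scoring sketch; the sub-tests, age/gender group norms,
# and 60/40 weighting are illustrative, not the authors' published formula.
def normalize(value, norm_min, norm_max):
    """Map a raw test result onto [0, 1] against the group's norm range."""
    return max(0.0, min(1.0, (value - norm_min) / (norm_max - norm_min)))

def motional_quotient(field_tests, questionnaire_score):
    """Objective MI from field tests (weight 0.6) plus subjective MI from a
    questionnaire scored 0-100 (weight 0.4), scaled to a 0-200 range."""
    objective = sum(normalize(v, lo, hi) for v, lo, hi in field_tests) / len(field_tests)
    subjective = questionnaire_score / 100.0
    return 200.0 * (0.6 * objective + 0.4 * subjective)

# Example field tests: (raw value, group norm minimum, group norm maximum),
# e.g. grip force in kg and sit-ups per minute.
mq = motional_quotient([(42, 20, 60), (35, 10, 50)], questionnaire_score=75)
```

Applying different norm ranges per age and gender group is what makes the same formula usable across the different assessment populations the abstract calls for.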

Keywords: motional Intelligence, motional quotient, multiple intelligence, motor intelligence, all round development

Procedia PDF Downloads 162
78 A Community Solution to Address Extensive Nitrate Contamination in the Lower Yakima Valley Aquifer

Authors: Melanie Redding

Abstract:

Historic, widespread nitrate contamination of the Lower Yakima Valley aquifer in Washington State initiated a community-based effort to reduce nitrate concentrations to below drinking-water standards. This group commissioned studies characterizing local nitrogen sources, deep soil profiles, drinking water, and nitrate concentrations at the water table. Nitrate is the most prevalent groundwater contaminant, with common sources including animal and human waste, fertilizers, plants, and precipitation. It is challenging to address groundwater contamination when common sources, such as agriculture, on-site sewage systems, and animal production, are widespread. Remediation is not possible, so mitigation is essential. The Lower Yakima Valley covers over 175,000 acres, with a population of 56,000 residents. Approximately 25% of the population does not have access to safe, clean drinking water, and 20% of the population is at or below the poverty level. Agriculture is the primary economic land-use activity; irrigated agriculture and livestock production make up the largest percentage of acreage and nitrogen load. Commodities include apples, grapes, hops, dairy, silage corn, triticale, alfalfa, and cherries. These commodities are important to the economic viability of the residents of the Lower Yakima Valley and of Washington State. Mitigation of nitrate in groundwater is challenging: the goal is to ensure everyone has safe drinking water, and there are no easy remedies given the extent and pervasiveness of the contamination. Monitoring at the water table indicates that 45% of the 30 spatially distributed monitoring wells exceeded the drinking-water standard, indicating that multiple sources are impacting water quality. Washington State has several areas with extensive groundwater nitrate contamination, and the groundwater in these areas continues to degrade over time.
However, the Lower Yakima Valley is succeeding in addressing this health issue for the following reasons: the community is engaged and committed; there is one common goal; there has been extensive public education and outreach to citizens; and credible data are being generated using sound scientific methods. Work in this area continues as an ambient groundwater monitoring network is established to assess the condition of the aquifer over time. Nitrate samples are being collected from 170 wells spatially distributed across the aquifer. This research entails quarterly sampling for two years to characterize seasonal variability, continuing annually thereafter. The assessment will provide the data to statistically determine trends in nitrate concentrations across the aquifer over time. Thirty-three of these wells are monitoring wells screened across the aquifer, whose water quality is indicative of activities at the land surface. Additional work is being conducted to identify land-use management practices that are effective in limiting nitrate migration through the soil column. Tracking nitrate in the soil column every season is an important component of linking land-use practices to the fate and transport of nitrate through the subsurface. Patience, tenacity, and the ability to think outside the box are essential for dealing with widespread nitrate contamination of groundwater.
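
The planned trend analysis over the quarterly-then-annual sampling record could, in its simplest form, be a per-well linear regression of nitrate concentration on time. The numbers below are invented to illustrate the mechanics, not measurements from the network.

```python
from scipy.stats import linregress

# Invented quarterly nitrate concentrations (mg/L) for one well over two years.
quarters = list(range(8))
nitrate_mg_l = [9.8, 10.1, 10.5, 10.2, 10.9, 11.0, 11.4, 11.3]

# res.slope is the change in mg/L per quarter; res.pvalue tests slope == 0,
# i.e. whether concentrations at this well are trending at all.
res = linregress(quarters, nitrate_mg_l)
annual_change = res.slope * 4
```

Real monitoring records with strong seasonality would typically also use a seasonality-aware test (e.g. a seasonal Mann-Kendall), which is why the network samples quarterly before settling into annual visits.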

Keywords: community, groundwater, monitoring, nitrate

Procedia PDF Downloads 177
77 Active Learning Methods in Mathematics

Authors: Daniela Velichová

Abstract:

Plenty of ideas on how to adopt active learning methods in education are available nowadays. Mathematics is a subject where the active involvement of students is particularly required in order to achieve sustainable knowledge and deep understanding. The present article is based on the outcomes of the Erasmus+ project DrIVE-MATH, which aimed at developing a novel, integrated framework for teaching maths classes in engineering courses at the university level. It is fundamental for students from the early years of their academic life to have agile minds. They must be prepared to adapt to their future working environments, where enterprises' views are always evolving, where all collaborate in teams, and where relations between peers serve the well-being of the whole: workers and company profit alike. This reality imposes new requirements on higher education regarding the adoption of different pedagogical methods, such as project-based and active-learning methods, within course curricula. Active learning methodologies are regarded as an effective way to prepare students to meet the challenges posed by enterprises and to help them build critical thinking, analytic reasoning, and insight into complex problems from different perspectives. Fostering learning-by-doing activities in the pedagogical process can help students achieve learning independence, as they acquire deeper conceptual understanding by experimenting with abstract concepts in a more interesting, useful, and meaningful way. Clear information about learning outcomes and goals can help students take more responsibility for their learning results. Active learning methods implemented by the project team members in their teaching practice, eduScrum and Jigsaw in particular, proved to provide better scientific and soft-skills support to students than classical teaching methods.
The eduScrum method enables teachers to create a working environment that stimulates students' working habits and self-initiative, as students become aware of their responsibilities within the team, of their own acquired knowledge, and of their ability to solve problems independently, though in collaboration with other team members. The method enhances collaborative learning: students work in teams towards a common goal, knowledge acquisition, while interacting with each other and being evaluated individually. Teams of 4-5 students work together on a list of problems, a sprint; each member is responsible for solving one of them, while the group leader, the master, is responsible for the whole team. A similar principle is behind the Jigsaw technique, where the classroom activity makes students dependent on each other to succeed: students are divided into groups, and assignments are split into pieces that the whole group must assemble to complete the (Jigsaw) puzzle. In this paper, an analysis of students' perceptions concerning the achievement of deeper conceptual understanding in mathematics and the development of soft skills, such as self-motivation, critical thinking, flexibility, leadership, responsibility, teamwork, negotiation, and conflict management, is presented. New challenges brought by introducing active learning methods into basic mathematics courses are discussed, and a few examples of sprints developed and used in teaching basic maths courses at technical universities are presented.
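
The Jigsaw mechanic described above (a sprint's problem list split into pieces, one per team member, so the full solution needs everyone's piece) amounts to a simple round-robin partition. This sketch is a generic illustration of that split, not tooling from the project; the names and topics are invented.

```python
def jigsaw_split(problems, team):
    """Assign each problem of a sprint to one team member, round-robin,
    so completing the whole sprint requires every member's piece."""
    return {member: problems[i::len(team)] for i, member in enumerate(team)}

# Invented sprint topics and team for a basic maths course.
sprint = ["limits", "derivatives", "integrals", "series", "ODEs"]
team = ["Ana", "Ben", "Curt", "Dana"]
assignments = jigsaw_split(sprint, team)
```

Each student then teaches their piece back to the group, which is what makes the members mutually dependent for the final assembled "puzzle".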

Keywords: active learning methods, collaborative learning, conceptual understanding, eduScrum, Jigsaw, soft skills

Procedia PDF Downloads 54
76 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of the stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) < 100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight.
Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww) with different proportions of phospholipids (PL), wax esters (WE), triacylglycerols (TAG) and sterols (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww); the liver acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote the sustainability of the fishery.

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 260
75 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models are employed to predict glucose concentration: support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN). The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
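The repeated random train/test split protocol described above can be sketched in a few lines. The sketch below is illustrative only: synthetic data stands in for the NIR spectra (the real spectra are not available here), and ordinary least squares stands in for the six regression models; the R and R² computations match the metrics the abstract reports.

```python
import numpy as np

def r_and_r2(y_true, y_pred):
    # Correlation coefficient R and coefficient of determination R^2
    r = np.corrcoef(y_true, y_pred)[0, 1]
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return r, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
# Synthetic stand-in for NIR data: 200 samples x 50 wavelengths,
# glucose references in 20 mg/dl increments as in the study
glucose = rng.choice(np.arange(0, 401, 20), size=200).astype(float)
spectra = np.outer(glucose, rng.normal(size=50)) + rng.normal(scale=5.0, size=(200, 50))

scores = []
for _ in range(10):                      # ten repeated random splits
    idx = rng.permutation(200)
    train, test = idx[:150], idx[150:]
    X = np.c_[spectra, np.ones(200)]     # add an intercept column
    w, *_ = np.linalg.lstsq(X[train], glucose[train], rcond=None)
    scores.append(r_and_r2(glucose[test], X[test] @ w))

mean_r = np.mean([s[0] for s in scores])
mean_r2 = np.mean([s[1] for s in scores])
```

With a real dataset, the `lstsq` fit would be replaced by each of the six regressors in turn, and the per-split R and R² values averaged exactly as above.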

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94
74 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks

Authors: Nicholas Aerne, John P. Parmigiani

Abstract:

There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring, and it has been concluded that it is highly likely that human activity, specifically the combustion of fossil fuels, is a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource and will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels, such as fracking, off-shore oil drilling, and strip mining, are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and the natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Often, obtaining significant energy from these sources requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy come from the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed.
This paper provides a technical description and analysis of one such means, developed by the technology company InPipe Energy, to generate hydroelectricity by harvesting energy from pressure reducing valve stations in municipal water distribution systems. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, the proposed method appears to offer a viable means of producing significant amounts of clean, renewable energy from existing pressure reducing valves.
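The reported operating point can be sanity-checked with the standard hydropower relation P = η · ΔP · Q, solving for the flow rate the prototype must have passed. The sketch below uses only the figures stated above (60 psi drop, 64% efficiency, ~5 kW output); the implied flow rate of roughly 19 litres per second is an inference, not a value reported by the study.

```python
PSI_TO_PA = 6894.757  # pascals per psi

delta_p = 60.0 * PSI_TO_PA   # reported pressure drop, ~413.7 kPa
efficiency = 0.64            # reported system efficiency
electric_power = 5000.0      # reported output in watts (~5 kW)

# P = efficiency * delta_p * Q  =>  Q = P / (efficiency * delta_p)
flow_m3s = electric_power / (efficiency * delta_p)
flow_lps = flow_m3s * 1000.0  # roughly 19 litres per second
```

A flow of this order is plausible for a municipal distribution main, which is consistent with the claim that the turbine can replace the pressure drop otherwise dissipated across the valve.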

Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply

Procedia PDF Downloads 204
73 The Role of Temples Redevelopment for Informal Sector Business Development in India

Authors: Prashant Gupta

Abstract:

Throughout India, temples have served for centuries not only as places of worship but also as cultural centers, commerce hubs, art galleries, educational institutions, and social centers. Across the country, there are over two million temples, which are crucial economic hubs attracting devotees and tourists worldwide; India has 53 temples per 100,000 inhabitants. As per an NSSO survey of major temples, which covers only the formal sector, the temple economy is worth about $40 billion, or 2.32 per cent of GDP; it could be much larger, as an actual estimation has not yet been done. The informal sector represents 43.1% of India's total economy. Over 10 billion domestic tourist visits to destinations within India are recorded every year, and some 20 per cent of the 90 million foreign tourists visited the Madurai and Mahabalipuram temples, which became the most visited tourist spots in 2022. Recently, the current central government has started revitalizing the ancient Indian civilization by reconstructing and beautifying major temples of India, i.e., the Kashi Vishwanath Corridor, Mahakaleshwara Temple, Kedarnath, Ayodhya, etc. The researcher chose Kashi as a case study because it is known as the spiritual capital of India and the abode from which Hinduism, Buddhism, Jainism and Sikhism, the core Sanatan Dharmic practices, spread. INR 17,800 million has been spent to redevelop the Kashi Vishwanath Corridor since 2019. RESEARCH OBJECTIVES 1. To assess the historical contribution of temples to socio-economic development and the revival of Indic civilization. 2. To examine the role of temple redevelopment for informal sector businesses. 3. To identify the sub-sectors of informal sector businesses. 4. To identify the products and services of informal businesses for the investigation of marketing strategies and business development.
PROPOSED METHODS AND PROCEDURES This study will follow a mixed approach, employing both qualitative and quantitative methods of research. Data will be collected from 500 informal business owners through structured questionnaires and interviews. The informal business owners will be selected using a systematic random sampling technique. In addition, government tax collection documents from the last 10 years will be reviewed to substantiate the study. Descriptive and econometric analysis techniques will be employed. EXPECTED CONTRIBUTION OF THE PROPOSED STUDY By studying the contribution of temple redevelopment to informal business creation and growth, the study will benefit both informal business owners and the government. For the government, it will provide scientific and empirical evidence on the contribution of temple redevelopment to informal business creation and growth, informing infrastructure development and boosting tax collection. For informal businesses, the study will give a detailed insight into the nature of their business, its possible future growth potential, and alternative products and services they could supply to their customers in the future. Studying informal businesses will help to identify the key products and services that are most profitable and that possess the potential to multiply and grow through correct product marketing strategies and business development.

Keywords: business development, informal sector businesses, services and products marketing, temple economics

Procedia PDF Downloads 80
72 The Analysis of Noise Harmfulness in Public Utility Facilities

Authors: Monika Sobolewska, Aleksandra Majchrzak, Bartlomiej Chojnacki, Katarzyna Baruch, Adam Pilch

Abstract:

The main purpose of the study is to perform measurements and analysis of noise harmfulness in public utility facilities. The World Health Organization reports that the number of people suffering from hearing impairment is constantly increasing, and the number of young people appearing in the statistics is the most alarming. The majority of scientific research in the field of hearing protection and noise prevention concerns industrial and road traffic noise as the source of health problems, and as a result, corresponding standards and regulations defining noise level limits are enforced. However, there is another field left largely unexamined by research: leisure time. Public utility facilities such as clubs, shopping malls, sports facilities and concert halls all generate high-level noise, yet remain outside proper legal control. Among European Union Member States, the highest legislative act concerning noise prevention is the Environmental Noise Directive 2002/49/EC. However, it omits the problem discussed above, and even for traffic, railway and aircraft noise it does not set limits or target values, leaving these issues to the discretion of the Member State authorities. Without explicit and uniform regulations, noise level control at places designed for relaxation and entertainment is often the responsibility of people with little knowledge of hearing protection, unaware of the risk that noise pollution poses. Exposure to high sound levels in clubs, cinemas, and at concerts and sports events may result in progressive hearing loss, especially among young people, the main target group of such facilities and events. The first step towards changing this situation and raising general awareness is to perform reliable measurements whose results will emphasize the significance of the problem. This project presents the results of more than a hundred measurements performed in most types of public utility facilities in Poland.
Personal noise dosimeters, the most suitable measuring instruments for such research, were used to collect the data. Each measurement is presented in the form of numerical results, including equivalent and peak sound pressure levels, and a detailed description covering the type of the sound source, the size and furnishing of the room, and a subjective sound level evaluation. In the absence of a direct reference point for interpreting the data, the limits specified in EU Directive 2003/10/EC were used for comparison; they set maximum sound level values for workers in relation to the length of their working time. The analysis of the examined problem leads to the conclusion that during leisure time, people are exposed to noise levels significantly exceeding safe values. As hearing problems progress gradually, most people downplay the problem, ignoring the first symptoms. Therefore, an effort has to be made to specify noise regulations for public utility facilities. Without any action, in the foreseeable future a majority of Europeans will be dealing with serious hearing damage, which will have a negative impact on whole societies.
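The comparison with Directive 2003/10/EC rests on normalizing an equivalent level measured over some interval to the nominal 8-hour working day: LEX,8h = LAeq,T + 10·log10(T/8). The sketch below is illustrative only; the 98 dB(A) club level and the four-hour visit are assumed example values rather than measurements from this study, while 87 dB(A) is the Directive's exposure limit value.

```python
import math

def lex_8h(laeq_db, exposure_hours, ref_hours=8.0):
    # Normalize an equivalent continuous level L_Aeq measured over
    # `exposure_hours` to the nominal 8-hour day used by the Directive.
    return laeq_db + 10.0 * math.log10(exposure_hours / ref_hours)

# Assumed example: four hours in a club at an equivalent level of 98 dB(A)
exposure = lex_8h(98.0, 4.0)       # about 95 dB(A) over the nominal day
exceeds_limit = exposure > 87.0    # Directive 2003/10/EC exposure limit value
```

Even after halving the duration relative to a working day, such a visit lands well above the workplace limit, which is the core of the argument the measurements are meant to support.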

Keywords: hearing protection, noise level limits, noise prevention, noise regulations, public utility facilities

Procedia PDF Downloads 223
71 A Randomized, Controlled Trial to Test Behavior Change Techniques to Improve Low Intensity Physical Activity in Older Adults

Authors: Ciaran Friel, Jerry Suls, Mark Butler, Patrick Robles, Samantha Gordon, Frank Vicari, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to the recommendations remains low, even though scientific evidence shows that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of four BCTs in promoting an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years. The four BCTs tested were goal setting, action planning, feedback, and self-monitoring. The BCTs were tested in random order and delivered by text message prompts requiring participant engagement. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day for at least five days per week. Participants were sent a Fitbit® fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. In the 8-week intervention phase of the study, participants received each of the four BCTs, in random order, for a two-week period. Text message prompts were delivered daily each morning at a consistent time. All prompts required participant engagement to acknowledge receipt of the BCT message. Engagement depended upon the BCT message and may have included recording that a detailed plan for walking had been made or confirming a daily step goal (action planning, goal setting).
Additionally, participants may have been directed to a study dashboard to view their step counts or compare themselves to their baseline average step count (self-monitoring, feedback). At the end of each two-week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant's confidence in walking incremental distances, and a survey measuring their satisfaction with the individual BCT they had just tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. The analysis will examine the individual-level heterogeneity of treatment effect made possible by the N-of-1 design and pool results across participants to efficiently estimate the overall efficacy of the selected behavior change techniques in increasing low-intensity walking by 2,000 steps, five days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform providers and demonstrate the feasibility of an N-of-1 study design for effectively promoting physical activity as a component of healthy aging.
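The design above, a two-week baseline followed by the four BCTs in randomized two-week blocks, can be sketched as a simple per-participant scheduler. This is an illustrative reconstruction of the counterbalancing logic, not the study's actual randomization code; the per-participant seed is an assumption introduced for reproducibility.

```python
import random

# The four techniques named in the study
BCTS = ["goal setting", "action planning", "feedback", "self-monitoring"]

def make_schedule(participant_seed, baseline_weeks=2, block_weeks=2):
    """Return (phase, start_week, end_week) tuples: a baseline phase
    followed by the four BCTs in a per-participant random order."""
    rng = random.Random(participant_seed)  # reproducible per participant
    order = BCTS[:]
    rng.shuffle(order)
    plan = [("baseline", 1, baseline_weeks)]
    start = baseline_weeks + 1
    for bct in order:
        plan.append((bct, start, start + block_weeks - 1))
        start += block_weeks
    return plan

plan = make_schedule(participant_seed=42)  # ten-week trial: baseline + 4 blocks
```

Each participant thus serves as their own control, which is what makes the within-person comparison of the four BCTs possible.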

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 92
70 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations that contribute no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
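The Slope Consistency Value is not formally defined in the abstract; a plausible minimal sketch treats it as the magnitude of the dominant component of the triangle's unit normal, so a perfectly axis-aligned triangle scores 1.0 and obliquely oriented triangles, the cases where scan-line methods tend to leave gaps, score lower. The function name and the exact formulation below are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def slope_consistency(v0, v1, v2):
    # Hypothetical formulation: the dominant component of the triangle's
    # unit normal. 1.0 means the triangle lies in an axis-aligned plane;
    # lower values mean an oblique orientation relative to its primary axis.
    normal = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    normal = normal / np.linalg.norm(normal)
    return float(np.max(np.abs(normal)))

aligned = slope_consistency((0, 0, 0), (1, 0, 0), (0, 1, 0))  # lies in z=0 plane
tilted = slope_consistency((0, 0, 0), (1, 0, 0), (0, 1, 1))   # 45-degree tilt
```

By this definition the value lies in [1/√3, 1]; triangles with low values are the ones where a Gap Detection pass would be expected to matter most.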

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 72
69 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China

Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han

Abstract:

Educational think tanks are widely regarded as a significant part of a nation's soft power, promoting the scientific and democratic quality of educational policymaking and playing the critical role of bridging educational research in higher education institutions with educational policymaking. This study explores the concept, functions and significance of educational think tanks in China, and conceptualizes a three-dimensional framework for analyzing approaches to transforming research-based higher education institutions into effective educational think tanks that serve educational policymaking nationwide. Since 2014, the Ministry of Education of the P.R. China has been promoting the strategy of developing a new type of educational think tank within higher education institutions, and this strategy was put on the agenda of the 13th Five-Year Plan for National Education Development released in 2017. In this context, an increasing number of scholars have conducted studies putting forth strategies to promote the development and transformation of new educational think tanks that serve the educational policymaking process. Based on literature synthesis, policy text analysis, and analysis of theories about the policymaking process and the relationship between educational research and policymaking, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks, (b) what are the functional objectives of the new educational think tanks, (c) what are the organizational patterns and mechanisms of the new educational think tanks, and (d) through what approaches can traditional research-based higher education institutions be developed or transformed into think tanks that effectively serve the educational policymaking process.
The authors adopted a case study approach, examining five influential education policy study centers affiliated with top higher education institutions in China and applying the three-dimensional conceptual framework to analyze their functional objectives, organizational patterns, and the academic pathways through which researchers contribute to the development of think tanks serving the education policymaking process. Data was mainly collected through interviews with center administrators, leading researchers and academic leaders in the institutions. Findings show that: (a) think tanks based in higher education institutions mainly serve multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem-solving or policymaking at the national, provincial, and city/county levels; (b) such think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher education institutions into educational think tanks, the institutions must promote a paradigm shift towards issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive features in the way they construct think tanks, yet they also exposed obstacles and challenges, such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policymakers, weaknesses in empirical research methods, and a lack of experience in trans-disciplinary collaboration. The authors conclude with implications for think tank construction in China and abroad.

Keywords: education policy-making, educational research, educational think tank, higher institution

Procedia PDF Downloads 158
68 Selective Immobilization of Fructosyltransferase onto Glutaraldehyde Modified Support and Its Application in the Production of Fructo-Oligosaccharides

Authors: Milica B. Veljković, Milica B. Simović, Marija M. Ćorović, Ana D. Milivojević, Anja I. Petrov, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

In recent decades, the scientific community has recognized the growing importance of prebiotics, and numerous studies have therefore focused on their economic production, given their low abundance in natural resources. It has been confirmed that prebiotics are a source of energy for probiotics in the gastrointestinal tract (GIT) and enable their proliferation, consequently supporting the normal functioning of the intestinal microbiota. Products of their fermentation are short-chain fatty acids (SCFA), which play a key role in maintaining and improving the health not only of the GIT but of the whole organism. Among the confirmed prebiotics, fructooligosaccharides (FOS) are considered interesting candidates for use in a wide range of products in the food industry. They are characterized as low-calorie and non-cariogenic substances that represent an adequate sugar substitute and can be considered suitable for products intended for diabetics. The subject of this research is the production of FOS by transforming sucrose using a fructosyltransferase (FTase) present in the commercial preparation Pectinex® Ultra SP-L, with special emphasis on the development of an adequate FTase immobilization method that would enable selective isolation of the enzyme responsible for FOS synthesis from the complex enzymatic mixture. This would achieve considerable enzyme purification and allow direct incorporation of the enzyme into different sucrose-based products without the fear that the action of the other hydrolytic enzymes may adversely affect the products' functional characteristics. Accordingly, the possibility of selective immobilization of the enzyme was investigated using a support with primary amino groups, Purolite® A109, previously activated and modified using glutaraldehyde (GA).
In the initial phase of the research, the effects of individual immobilization parameters such as pH, enzyme concentration, and immobilization time were investigated to optimize the process, using support chemically activated with 15% GA (forming dimers) and 0.5% GA (forming monomers). Highly active immobilized preparations (371.8 IU/g of support for the dimer form and 213.8 IU/g of support for the monomer form) were achieved under acidic conditions (pH 4) at an enzyme concentration of 50 mg/g of support after 7 h and 3 h, respectively. These activity results indicate that the dimer form was markedly more reactive than the monomer form. Moreover, for the support modified with 15% GA, the ratio of the FTase to pectinase (the dominant component of the enzyme mixture) activity immobilization yields was 16.45, indicating the high feasibility of selective immobilization of FTase on the modified polystyrene resin. The immobilized preparations, having satisfactory features, were then tested in the FOS synthesis reaction under the determined optimal conditions. Maximum FOS yields of approximately 50% of the total carbohydrates in the reaction mixture were recorded after 21 h. Finally, it can be concluded that the examined immobilization method yielded a highly active, stable and, more importantly, refined enzyme preparation that can be further utilized on a larger scale for the development of continuous processes for FOS synthesis, as well as for the modification of different sucrose-based media.

Keywords: chemical modification, fructooligosaccharides, glutaraldehyde, immobilization of fructosyltransferase

Procedia PDF Downloads 186
67 The Preliminary Exposition of Soil Biological Activity, Microbial Diversity and Morpho-Physiological Indexes of Cucumber under Interactive Effect of Allelopathic Garlic Stalk: A Short-Term Dynamic Response in Replanted Alkaline Soil

Authors: Ahmad Ali, Muhammad Imran Ghani, Haiyan Ding, Zhihui Cheng, Muhammad Iqbal

Abstract:

Background and Aims: In recent years, protected cultivation has spread dynamically, especially in the northern parts of China, where the production area, structures, and crop diversity of the plastic greenhouse vegetable cropping (PGVC) system have expanded steadily. Under this growing system, continuous monoculture with excessive synthetic fertilizer inputs is a common cultivation practice among commercial producers. Repeated year after year, such practices foster continuous-cropping obstacles in PGVC soil, which greatly threaten regional soil eco-sustainability, erode soil ecological diversity, and ultimately exhaust agricultural productivity. The aim of this study was to develop new allelopathic insights by exploiting available biological resources in favor of sustainable PGVC and to alleviate the continuous-cropping obstacle factors in plastic greenhouses. Method: A greenhouse study was executed under a plastic tunnel located at the Horticulture Experimental Station of the College of Horticulture, Northwest A&F University, Yangling, Shaanxi Province, one of the prominent regions for intensive commercial PGVC in China. Post-harvest garlic residues (stalks, leaves) were mechanically smashed, homogenized into powder, and incorporated at ratios of 1:100, 3:100, and 5:100 as a soil amendment into replanted soil that had been used for continuous cucumber monoculture for 7 years (an annual double-cropping system in a greenhouse). Results: The incorporated C-rich garlic stalk significantly influenced the soil condition in various ways: organic matter decomposition and mineralization, moderate adjustment of soil pH, enhanced soil nutrient availability, increased enzymatic activities, and 20% more cucumber yield in a short time.
Using Illumina MiSeq sequencing analysis of bacterial 16S rRNA and fungal 18S rDNA genes, the current study revealed that the addition of garlic stalk/residue can also improve microbial abundance and community composition in extensively exploited soil, contribute to soil functionality, induce favorable changes in soil characteristics, and support good crop yield. Conclusion: Our study provides evidence that the addition of garlic stalk as a soil fertility amendment is a feasible, cost-effective and efficient way to utilize this resource for restoring degraded soil health, ameliorating soil quality components, and improving the ecological environment in a short duration. Our study may provide a better scientific understanding of efficient crop residue management, particularly from an allelopathic source.

Keywords: garlic stalk, microbial community dynamics, plant growth, soil amendment, soil-plant system

Procedia PDF Downloads 135
66 The Impact of the Media in the Implementation of Qatar’s Foreign Policy on the Public Opinion of the People of the Middle East (2011-2023)

Authors: Negar Vkilbashi, Hassan Kabiri

Abstract:

Modern diplomacy, in its general form, addresses peoples rather than governments, and diplomatic tactics target public opinion more than official channels. Media diplomacy and cyber diplomacy are sub-branches of public diplomacy and, in effect, describe the role of the media in influencing public opinion and directing foreign policy. Mass media, including print, radio and television, theater, satellite, the internet, and news agencies, transmit information and demands. What the Qatari government sought to achieve in the countries of the region during and after the Arab Spring was pursued through its most important medium, Al Jazeera. The embargo on Qatar began in 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt imposed a land, sea, and air blockade against the country. The media constitute the cornerstone of soft power in foreign policy, one that Qatari leaders have consistently drawn on over the past two decades. Undoubtedly, the role Al Jazeera played in covering the events of the Arab Spring created geopolitical tensions. The United Arab Emirates and other neighboring countries sometimes criticize Al Jazeera for providing a platform for the Muslim Brotherhood, Hamas, and other Islamists to promote their ideology. In 2011, at the height of the Arab Spring, Al Jazeera reached the peak of its popularity. Its live coverage of protests in Tunisia, Egypt, Yemen, Libya, and Syria helped create a unified narrative of the Arab Spring, with audiences tuning in every Friday to watch simultaneous protests across the Middle East. Al Jazeera operates in three capacities. First, it is a powerful instrument in the hands of the government for directing and influencing Arab public opinion; the network has benefited from the unlimited financial support of the Qatari government to promote its desired policies and culture.
Second, it has provided an attractive platform for politicians and scientific and intellectual elites, thereby winning their support for, and defense of, the government and its rulers. Third, during the last years of Prince Hamad's reign, the Al Jazeera network became a deterrent weapon against media and political campaigns. The importance of this research lies in the fact that the network reaches a wide audience across the Middle East and therefore strongly influences the decision-making of countries. Moreover, Al Jazeera is influential as a tool of public diplomacy and soft power in Qatar's foreign policy, and studying it allows the results of its effectiveness in past years to be examined. Using a qualitative method, this research analyzes the impact of the media, in the implementation of Qatar's foreign policy, on the public opinion of the people of the Middle East. Data were collected by secondary methods, that is, by reading related books, magazine articles, newspaper reports and articles, and analytical reports of think tanks. The most important finding is that Al Jazeera plays a central role in Qatar's public diplomacy: in 2011, 2017 and 2023 it played an important part in Qatar's foreign policy during various crises. In addition, the people of Arab countries use Al Jazeera as their first point of reference.

Keywords: Al Jazeera, Qatar, media, diplomacy

Procedia PDF Downloads 78
65 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. Recent studies have experimentally demonstrated that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can easily be measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method was proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach to the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption.
In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed on ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol had been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment assumes that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
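The general idea of two-component optical apportionment can be illustrated with a minimal sketch. This is not the authors' 4λ-PAS processing chain: the function name, the two-wavelength setup, and the default AAE values (≈1 for fossil fuel, ≈2 for wood burning) are illustrative assumptions; each source is simply assumed to follow a power-law spectral dependence of absorption.

```python
import numpy as np

def two_component_apportionment(b_abs_short, b_abs_long,
                                lam_short, lam_long,
                                aae_ff=1.0, aae_wb=2.0):
    """Split total absorption at the long wavelength into fossil-fuel (ff)
    and wood-burning (wb) parts, assuming each source scales as
    b_src(lam) ~ lam**(-AAE_src) between the two wavelengths."""
    r_ff = (lam_short / lam_long) ** (-aae_ff)  # ff short/long ratio
    r_wb = (lam_short / lam_long) ** (-aae_wb)  # wb short/long ratio
    # Two equations, two unknowns:
    #   b_ff + b_wb               = b_abs_long
    #   r_ff*b_ff + r_wb*b_wb     = b_abs_short
    A = np.array([[1.0, 1.0],
                  [r_ff, r_wb]])
    y = np.array([b_abs_long, b_abs_short])
    b_ff, b_wb = np.linalg.solve(A, y)
    return b_ff, b_wb
```

Given absorption coefficients measured at, say, 470 nm and 950 nm, the solver returns the fossil-fuel and wood-burning contributions at the long wavelength.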

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 228
64 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in-situ, and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellites and Unmanned Aerial Vehicles - UAVs) and in-situ data (from field surveys). It serves various purposes, from delineating flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. The service was built on components developed in national and European projects, integrated to provide a one-stop-shop for remote sensing information, combining data from the Copernicus satellites and drones/unmanned aerial vehicles, validated against existing online in-situ data. Since WORSICA operates on the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups at no additional cost. The private sector will also be able to use the service, although some usage costs may apply, depending on the type of computational resources needed by each application/user.
The service has three main sub-services: i) coastline detection; ii) inland water detection; and iii) water leak detection in irrigation networks. In the present study, an application of the service to the Óbidos lagoon in Portugal is shown, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service implements several distinct methodologies based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with tidal data obtained from the FES model, the system can estimate a coastline with the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas: i) emergency response, by providing fast access to inundated areas to support rescue operations; ii) support of management decisions on the operation of hydraulic infrastructures, to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access irrigation networks, promoting their fast repair.
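Water-index-based detection of the kind listed above can be sketched in a few lines. This is not WORSICA's implementation: the function names, the simple global threshold, and the band values are illustrative assumptions; only the NDWI formula itself (McFeeters, green vs. NIR reflectance) is standard.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (G - NIR) / (G + NIR).
    Open water reflects more in green than NIR, so water pixels tend
    to have NDWI > 0. A tiny epsilon avoids division by zero."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + 1e-12)

def water_mask(green, nir, threshold=0.0):
    """Boolean water mask from per-pixel green/NIR reflectance arrays."""
    return ndwi(green, nir) > threshold
```

Applied to Sentinel-2 surface reflectance, the green and NIR inputs would typically be bands B3 and B8; the threshold is scene-dependent in practice.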

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 126
63 A Snapshot of Agricultural Waste in the European Union

Authors: Margarida Soares, Zlatina Genisheva, Lucas Nascimento, André Ribeiro, Tiago Miranda, Eduardo Pereira, Joana Carvalho

Abstract:

In the current global context, we face a significant challenge: rapid population growth combined with the pressing need for sustainable management of agro-industrial waste. Beyond understanding how population growth impacts waste generation, it is essential first to identify the primary types of waste produced and the countries responsible, in order to guide targeted actions. This study presents key statistical data on waste production from the agriculture, forestry, and fishing sectors across the European Union, alongside information on the agricultural areas dedicated to crop production in each European Union country. These insights will form the basis for future research into waste production by crop type and country, aimed at improving waste management practices and promoting recovery methods that are vital for environmental sustainability. The agricultural sector must stay at the forefront of scientific and technological advancement to meet climate change challenges, protect the environment, and ensure food and health security. The study's findings indicate that population growth significantly increases pressure on natural resources, leading to a rise in agro-industrial waste production. EUROSTAT data show that, in 2020, the agriculture, forestry, and fishing sectors produced over 21 million tons of waste. Spain emerged as the largest producer, contributing nearly 30% of the EU's total waste in these sectors. Furthermore, five countries (Spain, the Netherlands, France, Sweden, and Germany) were responsible for more than two-thirds of the waste from these sectors. Regarding agricultural land use, the 2020 data revealed that around two-thirds of the total agricultural area was concentrated in six countries: France, Spain, Germany, Poland, Romania, and Italy. Regarding waste production per capita, the Netherlands had the highest figure in the EU for 2020.
The data presented in this study highlight the urgent need for action in managing agricultural waste in the EU. As population growth continues to drive up demand for agricultural products, waste generation will inevitably rise unless significant changes are made in the management of agro-industrial waste. Countries must lead the way in adopting technological waste management strategies that focus on reducing, reusing, and recycling waste to benefit both the environment and society. Equally important is the need to promote collaborative efforts among governments, industries, and research institutions to develop and implement technologies that transform waste into valuable resources. The insights from this study are critical for informing future strategies to improve the management and valorization of waste from the agro-industrial sector. One of the most promising approaches is adopting circular economy principles to create closed-loop systems that minimize environmental impacts. By rethinking waste as a valuable resource rather than a by-product, agricultural industries can contribute to more sustainable practices that support both environmental health and economic growth.

Keywords: agricultural area, agricultural waste, circular economy, environmental challenges, population growth

Procedia PDF Downloads 13
62 The Development of User Behavior in Urban Regeneration Areas by Utilizing the Floating Population Data

Authors: Jung-Hun Cho, Tae-Heon Moon, Sun-Young Heo

Abstract:

Many urban problems caused by urbanization and industrialization have occurred around the world. In particular, the creation of satellite towns, attributable to the rapid expansion of cities, has led to traffic problems and the hollowing-out of old towns, raising the necessity of urban regeneration in old towns as existing urban infrastructure ages. To select urban regeneration priority regions for the strategic execution of urban regeneration in Korea, population size, the number of businesses, and the degree of deterioration were chosen as standards. These existing standards are limited in their ability to solve urban problems fundamentally and to cope with a rapidly changing reality. Therefore, it was necessary to add new indicators that can reflect the decline of the relevant cities and their conditions. In this regard, this study selected Busan Metropolitan City, Korea, as the target area: a leading city where urban regeneration has been activated in an international port city, much as in Yokohama, Japan. Before setting the urban regeneration priority regions, present conditions should be reflected, because uniform, uncharacterized projects have been implemented without any quantitative analysis of population behavior within each region. For this reason, this study conducted a characterization analysis and type classification based on user behaviors, using representative floating-population big data, a topic of great interest across society in recent days. The target areas were analyzed in this study. While the 23 regions of the existing Busan Metropolitan City urban regeneration priority regions had been classified into three types, the classification based on user behaviors in this study divided the same 23 regions into four types.
The four types were as follows: type (Ⅰ), young people, morning type; type (Ⅱ), old and middle-aged, general type with a sharp floating population; type (Ⅲ), old and middle-aged, 24-hour type; and type (Ⅳ), old and middle-aged with a small floating population. Each of the four types showed distinct regional characteristics, and the user-behavior results differed from those of the existing urban regeneration priority regions. According to the results, in type (Ⅰ), young people formed the majority around the existing old built-up area, where the floating population at dawn is four times larger than in other areas. In type (Ⅱ), there were many old and middle-aged people around the existing built-up area and general neighborhoods, where the average floating population was larger than in other areas due to commuting, while in type (Ⅲ), the floating population did not change throughout the 24 hours, although old and middle-aged people predominated around the existing general neighborhoods. Type (Ⅳ) includes the existing economy-based type, central built-up area type, and general neighborhood type, where old and middle-aged people formed the majority, as a general commuting type with a small floating population. Unlike the existing urban regeneration priority regions, these regions were subdivided according to type, and in this study, approach methods and basic orientations of urban regeneration were set to reflect reality to a certain degree, including indicators of effective floating population, in order to identify the dynamic activity of urban areas and of existing regeneration priority areas in connection with regional urban regeneration projects. It is therefore possible to make effective urban plans by providing a substantial basis through scientific and quantitative data.
To induce more realistic and effective regeneration projects, projects tailored to present local conditions should be developed by reflecting those conditions in the formulation of urban regeneration strategic plans.

Keywords: floating population, big data, urban regeneration, urban regeneration priority region, type classification

Procedia PDF Downloads 213
61 Formulation of a Submicron Delivery System including a Platelet Lysate to Be Administered in Damaged Skin

Authors: Sergio A. Bernal-Chavez, Sergio Alcalá-Alcalá, Doris A. Cerecedo-Mercado, Adriana Ganem-Rondero

Abstract:

The prevalence of people with chronic wounds has increased dramatically owing to many factors, including smoking, obesity, and chronic diseases such as diabetes, which can slow the healing process and increase the risk of a wound becoming chronic. This situation makes the improvement of chronic wound treatments a necessity, and it has led the scientific community to focus on improving the effectiveness of current therapies and developing new treatments. Wound formation is a complex physiological process characterized by an inflammatory stage in which proinflammatory cells create a proteolytic microenvironment during healing, degrading important growth factors and cytokines. This decrease in growth factors and cytokines suggests an interesting strategy for wound healing: administering them externally. The use of nanometric drug delivery systems, such as polymeric nanoparticles (NP), also offers an interesting alternative for dermal systems. A promising strategy is a formulation based on a thermosensitive hydrogel loaded with polymeric nanoparticles that allows the inclusion and application of a platelet lysate (PL) on damaged skin, with the aim of promoting wound healing. In this work, NP were prepared by a double emulsion-solvent evaporation technique, using poly(lactic-co-glycolic acid) (PLGA) as the biodegradable polymer. First, an aqueous solution of PL was emulsified into a PLGA organic solution previously prepared in dichloromethane (DCM). This disperse system (W/O) was then poured into a polyvinyl alcohol (PVA) solution to obtain the double emulsion (W/O/W); finally, the DCM was evaporated under magnetic stirring, resulting in the formation of PL-loaded NP.
Once the NP were obtained, these systems were characterized in terms of morphology, particle size, Z-potential, encapsulation efficiency (%EE), physical stability, infrared spectrum, calorimetric behavior (DSC) and in vitro release profile. The optimized nanoparticles were included in a thermosensitive gel formulation of Pluronic® F-127. The gel was prepared by the cold method at 4 °C and 20% polymer concentration. Viscosity, sol-gel phase transition, no-flow solid-gel time at wound temperature, temperature-induced changes in particle size measured by dynamic light scattering (DLS), occlusive effect, gel degradation, infrared spectrum, and micellar point by DSC were evaluated for all gel formulations. PLGA NP of 267 ± 10.5 nm with a Z-potential of -29.1 ± 1 mV were obtained. TEM micrographs verified the size of the NP and evidenced their spherical shape. The %EE of the system was around 99%. Thermograms and infrared spectra confirmed the presence of PL in the NP. The systems did not show significant changes in the parameters mentioned above during the stability studies. Regarding the gel formulation, the sol-gel transition occurred at 28 °C, with a no-flow solid-gel time of 7 min at 33 °C (a typical wound temperature). Calorimetric, DLS, and infrared studies corroborated the physical properties of a thermosensitive gel, such as the micellar point. In conclusion, the thermosensitive gel described in this work contains therapeutic amounts of PL and fulfills the technological requirements for use on damaged skin, with potential application in wound healing and tissue regeneration.

Keywords: growth factors, polymeric nanoparticles, thermosensitive hydrogels, tissue regeneration

Procedia PDF Downloads 172
60 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring

Authors: Katerina Krizova, Inigo Molina

Abstract:

The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since the planet's human population keeps growing while resources remain more or less limited, agricultural production has become an issue, and a satisfactory amount of food has to be ensured. To achieve this, agriculture is studied in a very wide context. The main aim is to increase primary production per spatial unit while consuming as few resources as possible. In Europe nowadays, the central issue arises from significant changes in the spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods, which have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practice. However, choosing the right management always requires a set of necessary information about plot properties that must first be acquired. Remotely sensed data have gained attention in recent decades, since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on reflectance in the visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still faces a major limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized, since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties.
Although all of these are in high demand for agricultural monitoring, crop moisture content is the most important parameter for agricultural drought monitoring. The idea behind this study was to exploit the unique combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. Sentinel-1 and Sentinel-2 images were obtained and processed for the period January to August. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FVC, NDWI, and SAVI) in support of the Sentinel-1 results. For each date and plot, summary statistics were computed, including precipitation data and soil moisture content obtained from data loggers. Results were presented as summary layouts of the VV and VH polarisations, together with related plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the impact of drought on final product quality and yields, independently of cloud cover over the studied scene.
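Backscatter analyses of this kind usually work in decibels, since calibrated sigma nought spans orders of magnitude. A minimal sketch of the two common derived quantities (the dB conversion and the VH/VV cross-polarisation ratio, often used as a vegetation/moisture indicator) is shown below; the function names are illustrative assumptions, not part of the study's processing chain.

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert calibrated backscatter from linear power units to decibels."""
    return 10.0 * np.log10(np.asarray(sigma0_linear, dtype=float))

def cross_pol_ratio_db(vh_linear, vv_linear):
    """VH/VV ratio expressed in dB, i.e. VH(dB) - VV(dB)."""
    return to_db(vh_linear) - to_db(vv_linear)
```

For example, a calibrated sigma nought of 0.1 in linear units corresponds to -10 dB; the ratio is simply a per-pixel subtraction once both polarisations are in dB.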

Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content

Procedia PDF Downloads 125
59 International Indigenous Employment Empirical Research: A Community-Based Participatory Research Content Analysis

Authors: Melanie Grier, Adam Murry

Abstract:

Objective: Worldwide, Indigenous Peoples experience underemployment and poverty at disproportionately higher rates than non-Indigenous people, despite similar rates of employment seeking. Euro-colonial conquest and genocidal assimilation policies are implicated in perpetuating this poverty, which research consistently links to health and wellbeing disparities. Many of the contributors to poverty, such as inadequate income and lack of access to medical care, can be directly or indirectly linked to underemployment. Calls have been made to prioritize Indigenous perspectives in Industrial-Organizational (I/O) psychology research, yet the literature on Indigenous employment remains scarce. What does exist is disciplinarily diverse, topically scattered, and lacking evidence of community-based participatory research (CBPR) practices, a research approach that prioritizes community leadership, partnership, and betterment and reduces the potential for harm. Given the harmful colonial legacy of extractive scientific inquiry conducted "on" rather than "with" Indigenous groups, Indigenous leaders and research funding agencies advocate for academic researchers to adopt reparative research methodologies such as CBPR when studying issues pertaining to Indigenous Peoples or individuals. However, the frequency and consistency of CBPR implementation within scholarly discourse are unknown. Therefore, this project's goal is two-fold: (1) to understand what comprises CBPR in Indigenous research and (2) to determine whether CBPR has historically been used in Indigenous employment research. Method: Using a systematic literature review process, sixteen articles about CBPR use with Indigenous groups were selected, and their content was analyzed to identify the key components of CBPR usage. An Indigenous CBPR components framework was constructed and subsequently used to analyze the Indigenous employment empirical literature.
A similar systematic literature review process was followed to search for relevant empirical articles on Indigenous employment. A total of 120 articles were identified in six global regions: Australia, New Zealand, Canada, America, the Pacific Islands, and Greenland/Norway. Each empirical study was procedurally examined and coded for criteria inclusion using content analysis directives. Results: Analysis revealed that, in total, CBPR elements were used 14% of the time in Indigenous employment research. Most studies (n=69; 58%) neglected to mention using any CBPR components, while just two studies discussed implementing all sixteen (2%). The most significant determinant of overall CBPR use was community member partnership (CP) in the research process. Studies from New Zealand were most likely to use CBPR components, followed by Canada, Australia, and America. While CBPR use did increase slowly over time, meaningful temporal trends were not found. Further, CBPR use did not directly correspond with the total number of topical articles published that year. Conclusions: Community-initiated and engaged research approaches must be better utilized in employment studies involving Indigenous Peoples. Future research efforts must be particularly attentive to community-driven objectives and research protocols, emphasizing specific areas of concern relevant to the field of I/O psychology, such as organizational support, recruitment, and selection.

Keywords: community-based participatory research, content analysis, employment, indigenous research, international, reconciliation, recruitment, reparative research, selection, systematic literature review

Procedia PDF Downloads 74
58 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized predictors of metabolic syndrome (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling each have their own territory in analysis. However, predictive analytics can play a vital role in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how predictive analytics can support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. A co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where the items in a basket are fully connected. A cluster is a collection of fully connected items, where a specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed vs. expected frequency), among others. The size of a graph can be represented by its numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently.
For a demonstration, a total of 13,254 metabolic syndrome training observations were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated with, for example, sociodemographics, habits, and activities. Some predictors, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive-analytics insight into variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation; a set of rules (many estimated equations from a statistical perspective), by contrast, may indicate heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically assesses whether the proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most variables will appear significant as the p-values approach zero. Thus, theory validation needs statistical methods that use only part of the observations, such as bootstrap resampling with an appropriate sample size.
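The core of the co-occurrence-graph approach, counting how often items appear together and ranking pairs by a "surprise" metric (observed co-occurrence vs. the frequency expected if the items occurred independently), can be sketched in a few lines. The item names and records below are hypothetical stand-ins, not the authors' platform or data:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_surprise(transactions):
    """Count item pairs across transactions and score each pair by
    'surprise': observed pair frequency divided by the frequency
    expected under an independence assumption."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in transactions:
        items = sorted(set(basket))       # deduplicate, fix pair ordering
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    scores = {}
    for (a, b), observed in pair_counts.items():
        expected = item_counts[a] * item_counts[b] / n
        scores[(a, b)] = observed / expected
    return scores

# Hypothetical coded observations (predictor items present per record)
data = [
    {"high_bp", "low_activity", "smoker"},
    {"high_bp", "low_activity"},
    {"high_bp", "low_activity", "vaccinated"},
    {"smoker", "vaccinated"},
]
scores = cooccurrence_surprise(data)
```

Pairs with a surprise score well above 1 co-occur more often than independence would predict and are natural candidate hypotheses; a real platform would add the frequency and node-size metrics the abstract mentions and prune low-support pairs.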

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 276
57 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves

Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare

Abstract:

The proposed communication deals with the research and development of a rotary direct-drive servovalve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. The intent is to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve arises from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is covered in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation.
It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled-orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting on the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. It also emphasized the influence of the choices made in implementing the CFD calculation and analyzing the results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD in the design of high-pressure spool valves.
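The analytical value against which the simulated flow force is typically benchmarked is the classical steady-state relation F = 2·Cd·Cv·A·Δp·cos(θ), which follows from the momentum flux of the metering-orifice jet. A minimal sketch, using hypothetical operating-point values rather than the project's actual valve data:

```python
import math

def steady_flow_force(cd, cv, area_m2, dp_pa, jet_angle_deg):
    """Classical analytical estimate of the steady-state axial flow
    force on a spool: F = 2*Cd*Cv*A*dp*cos(theta), where Cd is the
    discharge coefficient, Cv the velocity coefficient, A the
    metering-orifice area, dp the pressure drop, and theta the jet
    angle measured from the spool axis."""
    theta = math.radians(jet_angle_deg)
    return 2.0 * cd * cv * area_m2 * dp_pa * math.cos(theta)

# Hypothetical operating point: 1 mm^2 orifice, 100 bar drop,
# 69 deg jet angle (a common textbook value for sharp-edged orifices)
force_n = steady_flow_force(cd=0.7, cv=0.98,
                            area_m2=1e-6, dp_pa=100e5,
                            jet_angle_deg=69.0)
```

Note that this relation accounts only for the jet momentum at the metering orifice; as the abstract points out, it neglects the viscous shear on the spool lands between the orifices, which is one reason the CFD-computed force can differ from this analytical estimate.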

Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve

Procedia PDF Downloads 43
56 The Dark History of American Psychiatry: Racism and Ethical Provider Responsibility

Authors: Mary Katherine Hoth

Abstract:

Despite racial and ethnic disparities in American psychiatry being well documented, nurses and providers within the field remain apathetic about engaging in active antiracism and providing equitable, recovery-oriented care. It is insufficient to be a "colorblind" nurse or provider and state that all care provided is identical for every patient. Maintaining an attitude of "colorblindness" perpetuates the racism prevalent throughout healthcare and leads to negative patient outcomes. The purpose of this literature review is to highlight how the historical beginnings of psychiatry have evolved into the disparities seen in today's practice, as well as to provide some insight into methods that providers and nurses can employ to actively challenge these racial disparities. Background: The application of psychiatric medicine to White people versus Black, Indigenous, and other People of Color has been distinctly different as a direct result of chattel slavery and the development of pseudoscientific "diagnoses" in the 19th century. This weaponization of the mental health of Black people continues to this day. Population: The populations discussed are Black, Indigenous, and other People of Color, with a primary focus on Black people's experiences with their mental health and the field of psychiatry. Methods: A literature review was conducted using the CINAHL, EBSCO, MEDLINE, and PubMed databases with the following terms: psychiatry, mental health, racism, substance use, suicide, trauma-informed care, disparities, and recovery-oriented care. Articles were further filtered based on the criteria of peer review, full-text availability, English language, and publication between 2018 and 2023. Findings: Black patients are more likely to be diagnosed with psychotic disorders and prescribed antipsychotic medications than White patients, who are more often diagnosed with mood disorders and prescribed antidepressants.
This same disparity is also seen in children and adolescents: Black children are more likely to be diagnosed with behavior problems such as Oppositional Defiant Disorder (ODD), while White children with the same presentation are more likely to be diagnosed with Attention-Deficit/Hyperactivity Disorder. Medication advertisements for antipsychotics like Haldol as recently as 1974 portrayed a Black man labeled as "agitated" and "aggressive", a trope still seen today in cases of police violence. The majority of nursing and medical school programs do not provide education on racism and how to actively combat it in practice, leaving many healthcare professionals acutely uneducated and unaware of their own biases and racism, as well as of structural and institutional racism. Conclusions: Racism will continue to grow wherever it is given time, space, and energy. Providers and nurses have an ethical obligation to educate themselves, actively deconstruct their personal racism and bias, and continuously engage in active antiracism by dismantling racism wherever it is encountered, be it structural, institutional, or scientific. Agents of change at the patient-care level will not only improve the outcomes of Black patients but also lead the way in ensuring that Black, Indigenous, and other People of Color are included in future psychiatric research on methods and medications.

Keywords: disparities, psychiatry, racism, recovery-oriented care, trauma-informed care

Procedia PDF Downloads 129