Search results for: exponentially weighted moving average (EWMA)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5795

125 Where do Pregnant Women Miss Out on Nutrition? Analysis of Survey Data from 22 Countries

Authors: Alexis D'Agostino, Celeste Sununtunasuk, Jack Fiedler

Abstract:

Background: Iron-folic acid (IFA) supplementation during antenatal care (ANC) has existed in many countries for decades. Despite this, national coverage remains low and women often do not consume appropriate amounts during pregnancy. USAID’s SPRING Project investigated pregnant women’s access to, and consumption of, IFA tablets through ANC. Cross-country analysis provided a global picture of the state of IFA supplementation, while country-specific results noted key contextual issues, including geography, wealth, and ANC attendance. The analysis can help countries prioritize strategies for systematic performance improvements within one of the most common micronutrient supplementation programs aimed at reducing maternal anemia. Methodology: Using falter point analysis on Demographic and Health Survey (DHS) data collected from 162,958 women across 22 countries, SPRING identified four sequential falter points (ANC attendance, IFA receipt or purchase, IFA consumption, and number of tablets taken) at which pregnant women fell out of the IFA distribution structure. SPRING analyzed data on IFA intake from DHS surveys of women of reproductive age, disaggregated by ANC participation during the most recent pregnancy, residency, and women’s socio-economic status. Results: Average sufficient IFA tablet use across all countries was only eight percent. Even in the best performing countries, only about one-third of pregnant women consumed 180 or more IFA tablets during their most recent pregnancy. ANC attendance was an important falter point for a quarter of women across all countries (with the highest falter rates in the Democratic Republic of the Congo, Nigeria, and Niger).
Further analysis reveals patterns, with some countries having high ANC coverage but low IFA provision during ANC (DRC and Haiti), others having high ANC coverage and IFA provision but few women taking any tablets (Nigeria and Liberia), and countries that perform well in ANC, supplies, and initial consumption but where very few women consume the recommended 180 tablets (Malawi and Cambodia). Country-level analysis identifies further patterns of supplementation. In Indonesia, for example, only 62% of women in the poorest quintile took even one IFA tablet, while 86% of the wealthiest women did. This association between socioeconomic status and IFA intake held across nearly all countries where these data are available and was also visible in rural/urban comparisons. Analysis of ANC attendance data also suggests that higher numbers of ANC visits are associated with higher tablet intake. Conclusions: While it is difficult to disentangle which specific aspects of supply or demand cause the low rates of consumption, this tool allows policy-makers to identify major bottlenecks to scaling-up IFA supplementation during ANC. In turn, each falter point provides possible explanations of program performance and helps strategically identify areas for improved IFA supplementation. For example, improving the delivery of IFA supplementation in Ethiopia relies on increasing access to ANC, but also on identifying and addressing program gaps in IFA supply management and health workers’ practices in order to provide quality ANC services. While every country requires a customized approach to improving IFA supplementation, the multi-country analysis conducted by SPRING is a helpful first step in identifying country bottlenecks and prioritizing interventions.
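The sequential falter point computation described above amounts to measuring attrition between stages; a minimal sketch in Python (the stage names and counts below are illustrative, not SPRING's actual figures):

```python
def falter_rates(stage_counts):
    """Fraction of women lost (falter rate) at each transition between
    sequential stages, given the number remaining at each stage."""
    stages = list(stage_counts)
    rates = {}
    for prev, curr in zip(stages, stages[1:]):
        rates[curr] = (stage_counts[prev] - stage_counts[curr]) / stage_counts[prev]
    return rates

# Illustrative counts per 1000 pregnant women (hypothetical).
counts = {
    "pregnant": 1000,
    "attended ANC": 750,        # falter point 1: ANC attendance
    "received IFA": 600,        # falter point 2: IFA receipt or purchase
    "took any IFA": 480,        # falter point 3: IFA consumption
    "took 180+ tablets": 80,    # falter point 4: number of tablets taken
}
rates = falter_rates(counts)  # e.g. 25% falter at the ANC attendance stage
```

Each falter rate isolates one stage of the delivery chain, which is what lets the analysis point at ANC access versus supply versus adherence as the binding constraint.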

Keywords: iron and folic acid, supplementation, antenatal care, micronutrient

Procedia PDF Downloads 397
124 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation

Authors: Zamira Sattinova, Tassybek Bekenov

Abstract:

The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages of ultrasonic activation of beryllium oxide slurry in the plant vessel to improve its rheological properties, hot casting in the moulding cavity, and cooling and solidification of the casting are described. The thermoplastic slurry (hereinafter referred to as slurry) shows the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity begins in the liquid state and proceeds through crystallization to the solid state. This work presents a method for calculating hot casting of the slurry based on the effective molecular viscosity of a viscoplastic fluid. It is shown that the slurry near the cooled wall is in a state of crystallization and plasticity, while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density, and concentration of kinetically free binder develops along the cavity cross-section. Shrinkage is therefore compensated by an influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage, determined by the concentration of kinetically free binder, is compensated under the action of the pressure gradient. The solidification mechanism, the mechanical behavior of the casting mass during casting, and the changes in the rheological and thermophysical properties of the thermoplastic BeO slurry due to ultrasound exposure have not been well studied. Nevertheless, experimental data allow us to conclude that ultrasonic vibrations applied to the slurry mass lead to a change in structure, an improvement in technological properties, a decrease in heterogeneity, and a change in rheological properties.
In the course of experiments, the effect of ultrasonic treatment and its duration on the viscosity and ultimate shear stress of the slurry has been studied as a function of temperature (55-75 °C) and the mass fraction of the binder (10-11.7%). Changes in these properties before and after ultrasound exposure have been analyzed, as well as the nature of the flow in the system under study. Operating experience with the ultrasonic unit has shown that the casting capacity of the slurry increases by an average of 15%, while the viscosity decreases by more than half. Experimental study of the physicochemical properties and phase change, with simultaneous consideration of all factors affecting product quality in continuous casting, is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the whole hot-casting cycle of beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature, and the geometry of the cavity on the cooling-solidification of the casting. Conditions have been found for molding with shrinkage of the slurry by hot casting, which makes it possible to obtain a solidifying product with a uniform beryllium oxide structure at the outlet of the cavity.
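The effective-viscosity treatment of a viscoplastic slurry mentioned above can be sketched with a Bingham-type model; the yield stress and plastic viscosity values below are hypothetical, not measured BeO slurry properties:

```python
def effective_viscosity(tau_y, mu_p, shear_rate, mu_max=1e6):
    """Effective viscosity of a Bingham viscoplastic fluid:
    mu_eff = mu_p + tau_y / shear_rate, capped at mu_max so the
    unyielded (near-zero shear) region stays numerically tractable."""
    if shear_rate <= tau_y / (mu_max - mu_p):
        return mu_max
    return mu_p + tau_y / shear_rate

# Hypothetical slurry: yield stress 20 Pa, plastic viscosity 0.5 Pa*s.
mu = effective_viscosity(tau_y=20.0, mu_p=0.5, shear_rate=100.0)  # 0.7 Pa*s
```

The cap plays the role of a regularization: near the cooled wall, where the shear rate approaches zero, the effective viscosity saturates rather than diverging, mimicking the solid-like unyielded zone.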

Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide

Procedia PDF Downloads 24
123 Improving the Budget Distribution Procedure to Ensure Smooth and Efficient Public Service Delivery

Authors: Rizwana Tabassum

Abstract:

Introductory Statement: Delay in budget releases is often cited as one of the biggest bottlenecks to smooth and efficient service delivery. While budget release from the ministry of finance to the line ministries has been expedited by simplifying the procedure, budget distribution within the line ministries remains one of the major causes of slow budget utilization. Budget preparation is a bottom-up process in which all DDOs submit their proposals to their controlling officers (for example, an Upazila Civil Surgeon submits to the Director General of Health), who consolidate the budget proposals in the iBAS++ budget preparation module; the approved budget, however, is not disaggregated by DDO. Instead, it is left to the discretion of the controlling officers to distribute the approved budget to their subordinate offices over the course of the year. Though there are some need-based criteria/formulae to distribute the approved budget among DDOs in some sectors, there is little evidence that these criteria are actually used. This means that the majority of DDOs do not know their yearly allocations upfront, which would enable yearly planning of activities and expenditures. This delays the implementation of critical activities and the payment of suppliers of goods and services, and sometimes leads to undocumented arrears for essential goods/services. In addition, social sector budgets are fragmented because of vertical programs and externally financed interventions that pose several management challenges for budget holders and frontline service providers. Slow procurement processes further delay the provision of necessary goods and services. For example, it takes an average of 15–18 months for drugs to reach the Upazila Health Complex and below, while procuring and distributing them should take no more than 9 months. Aim of the Study: This paper aims to investigate the budget distribution practices of an emerging economy, Bangladesh.
The paper identifies challenges to timely distribution and ways to address them. Methodology: The study draws conclusions on the basis of document analysis, a branch of the qualitative research method. Major Findings: Upon approval of the National Budget, the Ministry of Finance is required to distribute the budget to budget holders at the department level; in practice, however, the budget reaches drawing and disbursing officers much later. Conclusions: Timely and predictable budget releases assist the completion of development schemes on time and on budget, with sufficient recurrent resources for effective operation. ADP implementation is usually very low at the beginning of the fiscal year and is expedited dramatically during the last few months, leading to inefficient use of resources. Timely budget release will resolve this issue and deliver economic benefits faster, better, and more reliably. It will also give project directors/DDOs the freedom to plan budget execution in a predictable manner, thereby ensuring value for money by reducing time overruns, expediting the completion of capital investments, and improving infrastructure utilization through timely payment of recurrent costs.

Keywords: budget distribution, challenges, digitization, emerging economy, service delivery

Procedia PDF Downloads 80
122 Impact Of Anthropogenic Pressures On The Water Quality Of Hammams In The Municipality Of Dar Bouazza, Morocco

Authors: Nihad Chakri, Btissam El Amrani, Faouzi Berrada, Halima Jounaid, Fouad Amraoui

Abstract:

Public baths or hammams play an essential role in the Moroccan urban and peri-urban fabric, constituting part of the cultural heritage. Urbanization in Morocco has led to a significant increase in the number of these traditional hammams: between 6,000 and 15,000 units (to be updated) operate with a traditional heating system. Numerous studies on energy consumption indicate that a hammam consumes between 60 and 120 m³ of water and one to two tons of wood per day. On average, one ton of wood costs 650 Moroccan dirhams (approximately 60 Euros), resulting in a daily fuel cost of around 1,300 Moroccan dirhams (about 120 Euros). These high consumption levels result in significant environmental nuisances generated by:
- Wastewater: in the case of hammams located on the outskirts of Casablanca, such as our study area, the Municipality of Dar Bouazza, most of these waters are discharged directly into the receiving environment without prior treatment because they are not connected to the sanitation network.
- Emissions of black smoke and ashes produced by the often incomplete combustion of wood.
Reducing the liquid and gas emissions generated by these hammams thus poses an environmental and sustainable development challenge that needs to be addressed. In this context, we initiated the Eco-hammam project with the objective of implementing innovative and locally adapted solutions to limit the negative impacts of hammams on the environment and reduce water and wood energy consumption. This involves treating and reusing wastewater through a compact system with heat recovery, and using alternative energy sources to enhance the energy efficiency of these traditional hammams. To this end, on-site surveys of hammams in the Dar Bouazza Municipality were conducted, and statistical approaches were applied to the results of the physico-chemical and bacteriological characterization of water entering and leaving these units.
This allowed us to establish an environmental diagnosis of these entities. In conclusion, the analysis of well water used by Dar Bouazza's hammams revealed the presence of certain parameters that could be hazardous to public health, such as total germs, total coliforms, sulfite-reducing spores, chromium, nickel, and nitrates. Therefore, this work primarily focuses on prospecting upstream of our study area to verify if other sources of pollution influence the quality of well water.

Keywords: public baths, hammams, cultural heritage, urbanization, water consumption, wood consumption, environmental nuisances, wastewater, environmental challenge, sustainable development, Eco-hammam project, innovative solutions, local adaptation, negative impacts, water conservation, wastewater treatment, heat recovery, alternative energy sources, on-site surveys, Dar Bouazza Municipality, statistical approaches, physico-chemical characterization, bacteriological characterization, environmental diagnosis, well water analysis, public health, pollution sources, well water quality

Procedia PDF Downloads 70
121 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, where chronic exposure to hazardous substances can cause harm or lead to occupational diseases. A chemical agent management system is therefore required, including hazard identification, risk assessment, controls for specific hazards, and inspections, to keep the workplace healthy and safe; routine monitoring is also required to verify the effectiveness of the control measures. Betamethasone Valerate and Clobetasol Propionate are APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category 4 (OHC 4), which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheet, these chemicals are reproductive toxicants (H360D), which may affect female workers’ health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during the handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The qCRA identified a risk of potential chemical exposure (risk rating 8, Amber risk). Therefore, immediate actions were taken to ensure interim controls (according to the hierarchy of controls) were in place and in use to minimize the risk of chemical exposure: no open handling should be performed outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be worn. The PPE includes a coverall, nitrile gloves, safety shoes, and powered air-purifying respirators (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB isolator) and to confirm whether there is chemical exposure, as indicated by the qCRA.
Three personal air samples were collected using an air sampling pump and filter (IOM2 filters, 25mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 against the 8hr occupational exposure limits (8hr TWA OELs) for each analyte. The analytical results were expressed as 8hr TWAs (8-hour time-weighted averages) and analyzed using Bayesian statistics (IHDataAnalyst). The Bayesian likelihood graph indicates Category 0, meaning exposures are de minimis, trivial, or non-existent, and employees have little to no exposure. The results also indicate that the three samples are representative, with very low variation (SD=0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals, and frequent management monitoring (daily, weekly, and monthly) is required to ensure the control measures remain in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified for this activity and included in the annual health surveillance for health monitoring.
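The 8hr TWA values referenced above follow the standard time-weighted averaging formula; a minimal sketch with hypothetical sampling values (not the study's measurements):

```python
def twa_8hr(samples):
    """8-hour time-weighted average exposure from (concentration in ug/m3,
    duration in hours) pairs; unsampled time within the 8hr shift is
    treated as zero exposure."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical task-level results for one operator's shift.
samples = [(0.4, 2.0), (0.1, 4.0), (0.0, 2.0)]  # (ug/m3, hours)
twa = twa_8hr(samples)  # 0.15 ug/m3
```

The resulting TWA is what gets compared against the 8hr OEL for each analyte.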

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 173
120 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction

Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer

Abstract:

History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “Surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach; as our world grows more complex, so do the underlying systems, and the development of tools and frameworks together with increasing computational power is advancing the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants, the used hospitalization capacity, and the availability of medical materials like ventilators, were integrated with a database system. The simulation results were used to predict the dynamics and possible outcomes of the pandemic and were used by the health authorities to decide on the measures to be taken to control the situation. The Surviral package was implemented in the programming language Java, and the analytics were performed with R Studio.
During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. The predictions were calculated as 10-day forecasts, and the state government and the hospitals were provided with these forecasts to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were evaluated and thereafter enforced: among other things, communities were quarantined based on the calculations, while, in accordance with the calculations, curfews for the entire population were relaxed. With this framework, which is used by the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal levels, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can serve as a basis for future monitoring.
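As an illustration of the agent-based approach (the actual Surviral package is implemented in Java; this minimal Python sketch uses invented contact and infection parameters, not the calibrated Austrian model):

```python
import random

def simulate(n_agents=500, n_days=60, p_transmit=0.006,
             recovery_days=10, seed=1):
    """Minimal agent-based S/I/R simulation: each day, every infectious
    agent independently transmits to each susceptible agent with a small
    probability, and recovers after a fixed number of infectious days."""
    random.seed(seed)
    state = ["S"] * n_agents
    days_infected = [0] * n_agents
    state[0] = "I"  # index case
    for _ in range(n_days):
        infectious = [i for i, s in enumerate(state) if s == "I"]
        for i in infectious:
            for j in range(n_agents):
                if state[j] == "S" and random.random() < p_transmit:
                    state[j] = "I"
            days_infected[i] += 1
            if days_infected[i] >= recovery_days:
                state[i] = "R"
    return state.count("S"), state.count("I"), state.count("R")

s, i, r = simulate()
```

A production model like the one described adds agent heterogeneity, contact networks, and data-driven parametrization on top of this core update loop.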

Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19

Procedia PDF Downloads 174
119 Food Consumption and Adaptation to Climate Change: Evidence from Ghana

Authors: Frank Adusah-Poku, John Bosco Dramani, Prince Boakye Frimpong

Abstract:

Climate change is considered a principal threat to human existence and livelihoods. The persistence and intensity of droughts and floods in recent years have adversely affected food production systems and value chains, making it impossible to end global hunger by 2030. This study therefore examines the effect of climate change on food consumption for both farm and non-farm households in Ghana. An important focus of the analysis is to investigate how climate change affects alternative dimensions of food security, examine the extent to which these effects vary across heterogeneous groups, and explore the channels through which climate change affects food consumption. Finally, we conducted a pilot study to understand the significance of farm and non-farm diversification measures in reducing the harmful impact of climate change on farm households. The article uses two secondary datasets and one primary dataset. The first secondary dataset is the Ghana Socioeconomic Panel Survey (GSPS), a household panel collected between 2009 and 2019. The second is monthly district-level gridded rainfall and temperature data from the Ghana Meteorological Agency, matched to the GSPS at the district level. Finally, the primary data were obtained from a survey of farm and non-farm adaptation practices used by farmers in three regions of Northern Ghana. The study employed a household fixed effects model to estimate the effect of climate change (measured by temperature and rainfall) on food consumption in Ghana, using the spatial and temporal variation in temperature and rainfall across districts to identify the household-level model. Potential mechanisms through which climate change affects food consumption were explored in two steps. First, the potential mechanism variables were regressed on temperature, rainfall, and the control variables.
In the second and final step, the potential mechanism variables were included as extra covariates in the first model. The results revealed that extreme average temperatures and drought decreased food consumption and reduced the intake of important nutrients such as carbohydrates, protein, and vitamins. The results further indicated that low rainfall increased food insecurity among households with no education compared with those with primary and secondary education. Non-farm activity and silos emerged as transmission pathways through which the effect of climate change on farm households can be moderated. Finally, the results indicated that over 90% of the smallholder farmers interviewed had no farm diversification strategies for adapting to climate change, and a little over 50% of the farmers owned unskilled or manual non-farm economic ventures, making it very difficult for the majority of farmers to withstand climate-related shocks. These findings suggest that achieving the Sustainable Development Goal of Zero Hunger by 2030 requires an integrated approach, such as reducing over-reliance on rainfed agriculture, educating farmers, and implementing non-farm interventions to improve food consumption in Ghana.
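The household fixed effects estimation can be illustrated with the within transformation (demeaning outcome and regressor by household before OLS); the panel below is synthetic, not GSPS data:

```python
import random

def within_slope(y, x, group):
    """Fixed-effects (within) slope estimate: demean y and x within each
    group, then fit OLS through the origin on the demeaned data."""
    means = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        means[g] = (sum(y[i] for i in idx) / len(idx),
                    sum(x[i] for i in idx) / len(idx))
    yd = [y[i] - means[group[i]][0] for i in range(len(y))]
    xd = [x[i] - means[group[i]][1] for i in range(len(x))]
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# Synthetic panel: consumption falls with temperature, plus a fixed
# household effect that would bias a naive pooled regression.
random.seed(0)
hh, temp, cons = [], [], []
for h in range(50):                      # 50 households
    effect = random.gauss(0.0, 5.0)      # unobserved household effect
    for _ in range(4):                   # 4 survey waves
        t = random.gauss(28.0, 2.0)      # district temperature, deg C
        hh.append(h)
        temp.append(t)
        cons.append(10.0 - 0.5 * t + effect + random.gauss(0.0, 0.1))

beta = within_slope(cons, temp, hh)  # recovers roughly -0.5
```

Demeaning absorbs all time-invariant household characteristics, which is exactly why the design relies on within-district temperature and rainfall variation over time.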

Keywords: climate change, food consumption, Ghana, non-farm activity

Procedia PDF Downloads 9
118 Incorporating Spatial Transcriptome Data into Ligand-Receptor Analyses to Discover Regional Activation in Cells

Authors: Eric Bang

Abstract:

Interactions between receptors and ligands are crucial for many essential biological processes, including neurotransmission and metabolism. Ligand-receptor analyses that examine cell behavior and interactions often utilize cell type-specific RNA expression from single-cell RNA sequencing (scRNA-seq) data. Using CellPhoneDB, a public repository of ligands, receptors, and ligand-receptor interactions, cell-cell interactions were explored in an scRNA-seq dataset from kidney tissue, and the results were portrayed with dot plots and heat maps. Each ligand-receptor pair was aligned with the interacting cell types, and the probabilities of these associations were calculated, with corresponding p-values reflecting the average expression values of the interacting partners and their significance. Using single-cell data (sample kidney cell references), genes in the dataset were cross-referenced with those in the existing CellPhoneDB dataset; for example, a gene such as Pleiotrophin (PTN) present in the single-cell data also needed to be present in the CellPhoneDB dataset. Using the single-cell transcriptomics data via Slide-seq and reference data, the CellPhoneDB program defines cell types and plots them in different formats, the two main ones being dot plots and heat maps. The dot plot displays derived measures of the cell-cell interaction scores and p-values: each row shows a ligand-receptor pair, and each column shows the two interacting cell types. CellPhoneDB defines interactions and interaction levels from gene expression levels, and since the p-value is on a -log10 scale, larger dots represent more significant interactions. The interaction analysis revealed significant myeloid and T-cell ligand-receptor pairs, including those between Secreted Phosphoprotein 1 (SPP1) and Fibronectin 1 (FN1), which is consistent with previous findings.
It was proposed that an effective protocol would involve a filtration step in which cell types are filtered out depending on which ligand-receptor pair is activated in that part of the tissue, as well as the incorporation of the CellPhoneDB data into a streamlined workflow pipeline. The filtration step would take the form of a Python script that expedites the manual process of dataset filtration; being in Python allows it to be integrated with the CellPhoneDB dataset for future workflow analysis. The manual process involves filtering cell types based on which ligand/receptor pair is activated in kidney cells. One limitation is that some pairings are activated in multiple cell types at a time, so this must be accounted for in the data prior to analysis. Using the filtration script, accurate sorting is incorporated into the CellPhoneDB workflow rather than waiting until the output is produced and then applying spatial data. This should reveal where in the tissue various ligands and receptors are interacting with different cell types, allowing easier identification of which cells are being affected and why, for the purpose of disease treatment. The hope is that this new computational method, utilizing spatially explicit ligand-receptor association data, can be used to uncover previously unknown interactions within kidney tissue.
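A minimal version of the proposed filtration step might look like the following; the row schema, cell-type names, and p-value cutoff are hypothetical illustrations, not CellPhoneDB's actual output format:

```python
def filter_pairs(results, cell_types, p_cutoff=0.05):
    """Keep significant ligand-receptor rows whose interacting cell-type
    pair involves only the cell types observed in a tissue region."""
    keep = set(cell_types)
    return [row for row in results
            if row["p_value"] < p_cutoff
            and row["cell_a"] in keep and row["cell_b"] in keep]

# Hypothetical rows mimicking a ligand-receptor significance table.
results = [
    {"pair": "SPP1_FN1", "cell_a": "myeloid", "cell_b": "T-cell", "p_value": 0.001},
    {"pair": "PTN_PTPRZ1", "cell_a": "podocyte", "cell_b": "T-cell", "p_value": 0.200},
    {"pair": "SPP1_CD44", "cell_a": "myeloid", "cell_b": "B-cell", "p_value": 0.010},
]
hits = filter_pairs(results, cell_types={"myeloid", "T-cell"})  # keeps SPP1_FN1
```

Restricting the candidate rows to the cell types actually present in a spatial region is what lets the spatial data constrain the interaction calls up front rather than after the fact.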

Keywords: bioinformatics, ligands, kidney tissue, receptors, spatial transcriptome

Procedia PDF Downloads 139
117 The Incidence of Concussion across Popular American Youth Sports: A Retrospective Review

Authors: Rami Hashish, Manon Limousis-Gayda, Caitlin H. McCleery

Abstract:

Introduction: A leading cause of emergency room visits among youth in the United States is sports-related traumatic brain injury. Mild traumatic brain injuries (mTBIs), also called concussions, are caused by linear and/or angular acceleration experienced at the head and represent an increasing societal burden. Because the brain is still developing in youth, there is a great risk of long-term neuropsychological deficits following a concussion. Accordingly, the purpose of this paper is to investigate incidence rates of concussion across gender for the most common youth sports in the United States: basketball, track and field, soccer, baseball (boys), softball (girls), football (boys), and volleyball (girls). Methods: A PubMed search was performed combining four search themes. The first theme identified the outcomes (concussion, brain injuries, mild traumatic brain injury, etc.). The second theme identified the sport (American football, soccer, basketball, softball, volleyball, track and field, etc.). The third theme identified the population (adolescents, children, youth, boys, girls). The last theme identified the study design (prevalence, frequency, incidence, prospective). Ultimately, 473 studies were surveyed, with 15 fulfilling the criteria: prospective studies presenting original data on the incidence of concussion in the relevant youth sport. The following data were extracted from the selected studies: population age, total study population, total athletic exposures (AEs), and incidence rate per 1000 athletic exposures (IR/1000). Two one-way ANOVAs and a Tukey post hoc test were conducted using SPSS. Results: From the 15 selected studies, statistical analysis revealed that the incidence of concussion per 1000 AEs across the considered sports ranged from 0.014 (girls’ track and field) to 0.780 (boys’ football).
Average IR/1000 across all sports was 0.483 for boys and 0.268 for girls; this difference was statistically significant (p=0.013). Tukey’s post hoc test showed that football had a significantly higher IR/1000 than boys’ basketball (p=0.022), soccer (p=0.033), and track and field (p=0.026). No statistical difference was found in concussion incidence between girls’ sports. Removing football lowered the boys’ IR/1000 to a level not statistically different from the girls’ (p=0.101). Discussion: Football was the only sport showing a statistically significant difference in concussion incidence rate relative to other sports (within gender). Males were overall more likely to be concussed than females when football was included (1.8x), whereas concussion was more likely for females when football was excluded. While the significantly higher rate of concussion in football is not surprising given the nature and rules of the sport, it is concerning that research has shown a higher incidence of concussion in practices than in games. Interestingly, the findings indicate that girls’ sports are more concussive overall when football is removed, which appears to counter the common notion that boys’ sports are more physically taxing and dangerous. Future research should focus on understanding the concussive mechanisms of injury in each sport to enable effective rule changes.
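The IR/1000 figures compared above follow the standard exposure-based incidence form; a sketch with made-up season totals (not data from the reviewed studies):

```python
def incidence_rate(cases, athletic_exposures, per=1000):
    """Incidence rate per `per` athletic exposures (AEs), where one AE is
    one athlete participating in one practice or game."""
    return cases * per / athletic_exposures

# Hypothetical season totals for one sport.
ir = incidence_rate(cases=39, athletic_exposures=50000)  # 0.78 per 1000 AE
```

Normalizing by athletic exposures rather than by athletes is what makes rates comparable across sports with very different practice and game schedules.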

Keywords: gender, football, soccer, traumatic brain injury

Procedia PDF Downloads 141
116 Extremism among College and High School Students in Moscow: Diagnostics Features

Authors: Puzanova Zhanna Vasilyevna, Larina Tatiana Igorevna, Tertyshnikova Anastasia Gennadyevna

Abstract:

In this day and age, extremism in the various forms of its manifestation is a real threat to the world community, to the national security of a state and its territorial integrity, and to the constitutional rights and freedoms of citizens. Extremism is generally described as a commitment to extreme views and actions that radically deny existing social norms and rules. In ideological and political struggles, supporters of extremism often adopt the methods and means of psychological warfare, appealing not to reason and logical argument but to people's emotions and instincts, to prejudices, biases, and a variety of mythological constructs. They are dissatisfied with the established order and aim to spread this dissatisfaction among the masses. Youth extremism holds a specific place among the existing forms and types of extremism. In this context, in 2015 we conducted a survey among Moscow college and high school students. The aim of this study was to determine how great or small the difference is in the understanding of and attitudes towards extremist manifestations, in the inclination and readiness to take part in extremist activities, and in what causes this predisposition, if it exists. We performed multivariate analysis and established Russian college and high school students' opinions about the extremism and terrorism situation in our country, as well as their knowledge of these topics. Among other things, we showed that the level of aggressiveness of young people was not above the average for the whole population. The survey was conducted using the questionnaire method. The sample included college and high school students in Moscow (642 and 382, respectively), selected at random. The questionnaire was developed by specialists of the RUDN University Sociological Laboratory and included both original questions (projective questions, the technique of incomplete sentences) and the standard S. Dayhoff test
to determine the level of internal aggressiveness. As an experiment, the FACS and SPAFF techniques were also used to determine psychotypes and non-verbal manifestations of emotions. The study confirmed the hypothesis that, in the respondents' opinion, the level of aggression is higher today than a few years ago. Differences were found between the two age groups of young people in the understanding of, and attitude towards, such social phenomena as extremism and terrorism, their danger, and their appeal. The theory of psychotypes, SPAFF (Specific Affect Coding System), and FACS (Facial Action Coding System) are considered additional techniques for diagnosing a tendency towards extreme views. Thus, it is established that diagnostics of the acceptance of extreme views among young people is possible thanks to the simultaneous use of knowledge from different fields of the socio-humanistic sciences. The results of the research can be used in a comparative context with other countries and as a starting point for further research in the field, taking into account its extreme relevance.

Keywords: extremism, youth extremism, diagnostics of extremist manifestations, forecast of behavior, sociological polls, theory of psychotypes, FACS, SPAFF

Procedia PDF Downloads 337
115 Personality, Coping, Quality of Life, and Distress in Persons with Hearing Loss: A Cross-Sectional Study of Patients Referred to an Audiological Service

Authors: Oyvind Nordvik, Peder O. L. Heggdal, Jonas Brannstrom, Flemming Vassbotn, Anne Kari Aarstad, Hans Jorgen Aarstad

Abstract:

Background: Hearing loss (HL) is a condition that may affect people at all stages of life, but its prevalence increases with age, mostly because of age-related HL, generally referred to as presbyacusis. As human speech involves relatively high frequencies, even a limited hearing loss at high frequencies may impair speech intelligibility. Being diagnosed with, treated for, and living with a chronic condition such as HL can be a disabling and stressful experience that puts one's coping resources to the test. Stress is a natural part of life, and most people will experience stressful events or periods. Chronic diseases such as HL are a risk factor for distress, causing anxiety and lowered mood. How an individual copes with HL may be closely connected both to the level of distress he or she experiences and to personality, which can be defined as those characteristics of a person that account for consistent patterns of feeling, thinking, and behavior. Thus, when facing distress such as illness or disease, the available coping strategies may matter more than the challenge itself. The same line of argument applies to the level of experienced health-related quality of life (HRQoL). Aim: The aim of this study was to investigate the relationship between distress, HRQoL, reported hearing loss, personality, and coping in patients with HL. Method: 158 adult patients with HL (aged 18-78 years), referred for hearing aid (HA) fitting at Haukeland University Hospital in western Norway, participated in the study. Both first-time users and patients referred for HA renewal were included. First-time users had been pre-examined by an ENT specialist. The questionnaires were answered before the HA fitting procedure. The pure-tone average (PTA; frequencies 0.5, 1, 2, and 4 kHz) was determined for each ear.
The Eysenck Personality Inventory neuroticism and lie scales, the Theoretically Originated Measure of the Cognitive Activation Theory of Stress (TOMCATS) measuring active coping, hopelessness, and helplessness, the 12-item General Health Questionnaire (GHQ-12) measuring distress, and the general part of the EORTC Quality of Life Questionnaire were administered. In addition, we used a revised and shortened version of the Abbreviated Profile of Hearing Aid Benefit (APHAB) as a measure of patient-reported hearing loss. Results: Significant correlations were found between APHAB scores (weak), HRQoL scores (strong), and distress scores (strong) on the one side and personality and choice-of-coping scores on the other. Stepwise regression analyses showed that the distress and HRQoL scores were determined secondarily to the personality and coping scores. The APHAB scores were, by regression analyses, determined secondarily to PTA (best ear), level of neuroticism, and lie score. Conclusion: We found that reported coping style, distress/HRQoL, and personality are closely connected in this patient group. Patient-reported HL was associated with hearing level and personality. Further investigation is needed into these questions and into how these associations may influence the clinical context.
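The correlation step underlying results like these can be sketched minimally as below. The scores are hypothetical placeholders, not the study's data, and the variable names are illustrative only:

```python
# Illustrative sketch only (hypothetical numbers, not the study's data):
# a Pearson correlation of the kind used to relate questionnaire scores
# (e.g. a distress score vs. a neuroticism score) before regression modelling.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical neuroticism and distress scores for six patients
neuroticism = [2, 4, 5, 7, 8, 10]
distress = [10, 12, 15, 18, 20, 25]
r = pearson_r(neuroticism, distress)
```

In a stepwise regression, predictors such as these would then be entered in order of explanatory power, which is how the abstract's "determined secondarily to" ordering arises.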

Keywords: coping, distress, hearing loss, personality

Procedia PDF Downloads 145
114 Expression of Fibrogenesis Markers after Mesenchymal Stem Cells Therapy for Experimental Liver Cirrhosis

Authors: Tatsiana Ihnatovich, Darya Nizheharodava, Mikalai Halabarodzka, Tatsiana Savitskaya, Marina Zafranskaya

Abstract:

Liver fibrosis is a complex of histological changes resulting from chronic liver disease, accompanied by excessive production and deposition of extracellular matrix components in the hepatic parenchyma, and it constitutes a serious medical and social problem. Hepatic stellate cells (HSCs) contribute significantly to extracellular matrix deposition after liver injury. Mesenchymal stem cells (MSCs) have pronounced anti-inflammatory, regenerative, and immunomodulatory effects; they are able to differentiate into hepatocytes and induce apoptosis of activated HSCs, which opens the prospect of using them to prevent excessive fibro-formation and the development of liver cirrhosis. The aim of the study was to evaluate the effect of MSC therapy on the expression of fibrogenesis marker genes in liver tissue and HSC cultures of rats with experimental liver cirrhosis (ELC). Materials and methods: ELC was induced by common bile duct ligation (CBDL) in female Wistar rats (n = 19) with an average body weight of 250 (220-270) g. Animals from the control group (n = 10) were sham-operated. On the 56th day after CBDL, the rats of the experimental (n = 12) and control (n = 5) groups received intraportal MSCs at a concentration of 1×10^6 cells/animal (previously obtained from rat bone marrow) or saline, respectively. The animals were taken out of the experiment on the 21st day. HSCs were isolated by sequential liver perfusion in situ, followed by disaggregation, enzymatic treatment, and centrifugation of the cell suspension on a two-stage density gradient. The expression of collagen type I (Col1a1) and type III (Col3a1), matrix metalloproteinase type 2 (MMP2) and type 9 (MMP9), tissue inhibitor of matrix metalloproteinases type 1 (TIMP1), and transforming growth factor β type 1 (TGFβ1) and type 3 (TGFβ3) was determined by real-time polymerase chain reaction. Statistical analysis was performed using Statistica 10.0.
Results: In ELC rats, a significant increase in the expression of all studied markers was observed compared to sham-operated animals. MSC administration led to a significant decrease in all measured markers in the experimental group compared to rats without cell therapy. In ELC rats, an increased MMP9/TIMP1 ratio was also detected after cell therapy. MSC infusion in the sham-operated animals did not lead to any changes. In HSCs from ELC animals, the expression of Col1a1 and Col3a1 exceeded that of the control group (p < 0.05) and decreased significantly after MSC administration. Correlations were found between Col3a1 (Rs = 0.51, p < 0.05), TGFβ1 (Rs = 0.6, p < 0.01), and TGFβ3 (Rs = 0.75, p < 0.001) expression in HSC cultures and liver tissue. Conclusion: Intraportal administration of MSCs to rats with ELC leads to decreased Col1a1, Col3a1, MMP2, MMP9, TIMP1, TGFβ1, and TGFβ3 expression. The correlation between the expression of Col3a1, TGFβ1, and TGFβ3 in liver tissue and in HSC cultures indicates the involvement of activated HSCs in fibrogenesis, which allows HSCs to be considered the main cell therapy target in ELC.
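The abstract quantifies expression by real-time PCR; a common way such relative expression is computed is the 2^-ΔΔCt method, sketched below. This is an assumption for illustration only — the authors' exact quantification pipeline is not specified here, and all cycle-threshold (Ct) values are hypothetical:

```python
# Illustrative sketch, not the authors' pipeline: relative gene expression
# from real-time PCR cycle-threshold (Ct) values via the common 2^-ddCt method.
# All Ct numbers below are hypothetical placeholders.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene vs. a reference gene, relative to a control sample."""
    delta_ct_sample = ct_target - ct_reference            # normalize to reference gene
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Col1a1 Ct values: cirrhotic liver vs. sham-operated control
fold = relative_expression(ct_target=22.0, ct_reference=18.0,
                           ct_target_ctrl=25.0, ct_reference_ctrl=18.0)
# fold > 1 would indicate Col1a1 up-regulation in the cirrhotic sample
```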

Keywords: cell therapy, experimental liver cirrhosis, hepatic stellate cells, mesenchymal stem cells

Procedia PDF Downloads 166
113 Multi-Modality Brain Stimulation: A Treatment Protocol for Tinnitus

Authors: Prajakta Patil, Yash Huzurbazar, Abhijeet Shinde

Abstract:

Aim: To develop a treatment protocol for the management of tinnitus through multi-modality brain stimulation. Methodology: The present study included 33 adults with unilateral (31 subjects) or bilateral (2 subjects) chronic tinnitus, with or without hearing loss, independent of etiology. The treatment protocol included 5 consecutive sessions with a follow-up of 6 months. Each session was divided into 3 parts: • Pre-treatment: a) informed consent, b) pitch and loudness matching. • Treatment: bimanual paper-pen task with tinnitus masking for 30 minutes. • Post-treatment: a) pitch and loudness matching, b) directive counseling and obtaining feedback. The paper-pen task was performed bimanually and involved carrying out two different writing activities in different contexts, with the level of difficulty increased in successive sessions. Narrowband noise of the same frequency as the tinnitus was simultaneously presented at 10 dB SL for 30 minutes in the ear with tinnitus. Result: The perception of tinnitus was no longer present in 4 subjects, while in the remaining subjects it was reduced to an intensity whose perception no longer troubled them, without causing residual facilitation. Across subjects, the intensity of tinnitus decreased by an average of 45 dB, and in a few subjects by more than 45 dB. The approach resulted in statistically significant reductions in Tinnitus Functional Index and Tinnitus Handicap Inventory scores; pre- and post-treatment Tinnitus Handicap Inventory scores dropped from 90% to 0%. Discussion: Brain mapping (qEEG) studies report multiple parallel overlapping neural subnetworks in the non-auditory areas of the brain exhibiting abnormal, constant, and spontaneous neural activity involved in the perception of tinnitus, with each subnetwork and area reflecting a specific aspect of the tinnitus percept.
The paper-pen task and directive counseling were designed and delivered so as to induce normal, rhythmically constant, and premeditated neural activity and to mask the abnormal, constant, and spontaneous activity in the above-mentioned subnetworks and non-auditory areas. Counseling focused on breaking the vicious cycle that causes and maintains tinnitus. Diverting auditory attention alone is insufficient to reduce the perception of tinnitus; conscious awareness of tinnitus can be suppressed when individuals engage in cognitively demanding tasks of a non-auditory nature, such as the paper-pen task used in the present study, in which selective, divided, sustained, simultaneous, and split attention act cumulatively. The bimanual paper-pen task represents a top-down activity reflecting the brain's ability to selectively attend to the bimanual writing activity as the relevant stimulus and to ignore tinnitus as the irrelevant stimulus. Conclusion: The study suggests that this novel treatment approach is cost-effective, time-saving, and efficient in eliminating tinnitus or reducing its intensity to a negligible level, thereby reducing negative reactions toward tinnitus.

Keywords: multi-modality brain stimulation, neural subnetworks, non-auditory areas, paper-pen task, top-down activity

Procedia PDF Downloads 147
112 Performance of CALPUFF Dispersion Model for Investigation the Dispersion of the Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad

Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood

Abstract:

Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, is the largest industrial area and emits enormous amounts of pollutants; studying its gaseous pollutants and particulate matter is therefore very important for the environment and for the health of refinery workers and of people living in the surrounding areas. Some earlier studies investigated this area, but they relied on the basic Gaussian equation implemented in simple computer programs. That work was useful and important at the time, but during the last two decades large new production units were added to the Daura refinery, such as PU_3 (Power Unit_3, Boilers 11 and 12), CDU_1 (Crude Distillation Unit, 70,000 barrels), and CDU_2 (Crude Distillation Unit, 70,000 barrels). It is therefore necessary to apply a new, advanced model to study air pollution in the region for recent years, and to calculate the monthly emission rate of pollutants from the actual amounts of fuel consumed in each production unit; this may lead to accurate pollutant concentration values and a realistic picture of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery in the center of Baghdad. The CALPUFF modeling system includes three main components: CALMET, a diagnostic three-dimensional meteorological model; CALPUFF, an air quality dispersion model; and CALPOST, a post-processing package, together with an extensive set of preprocessing programs that interface the model to standard, routinely available meteorological and geophysical datasets.
The targets of this work were to model and simulate the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery over one year; to calculate the emission rates of these pollutants for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of the production units; and to assess the performance of the CALPUFF model, determining whether it is appropriate and produces predictions of good accuracy compared with the available pollutant observations. The CALPUFF model was investigated under three stability classes (stable, neutral, and unstable) to show the dispersion of the pollutants under different meteorological conditions. The simulations showed different patterns of dispersion in the region depending on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were used to display the dispersion in contour maps. High pollutant values were found in the area, so this study recommends further investigation and analysis of the pollutants, reducing emission rates by using modern techniques and natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks.
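The emission-rate bookkeeping described above — converting monthly fuel consumption to a per-pollutant emission rate — can be sketched as follows. The emission factor and fuel figures are hypothetical placeholders, not the refinery's actual values:

```python
# Illustrative sketch of emission-rate estimation from fuel consumption,
# not the authors' actual figures: monthly fuel use multiplied by a
# pollutant emission factor, converted to the g/s rate a dispersion model
# such as CALPUFF takes as stack input. All numbers are hypothetical.

SECONDS_PER_MONTH = 30 * 24 * 3600   # approximate 30-day month

def emission_rate_g_per_s(fuel_tonnes_per_month, emission_factor_kg_per_tonne):
    """Average emission rate in g/s from monthly fuel use and an emission factor."""
    kg_per_month = fuel_tonnes_per_month * emission_factor_kg_per_tonne
    return kg_per_month * 1000.0 / SECONDS_PER_MONTH   # kg -> g, month -> s

# Hypothetical: a boiler burning 5000 t of fuel oil per month, with an
# assumed SO2 emission factor of 19 kg per tonne of fuel burned
so2_rate = emission_rate_g_per_s(5000, 19)
```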

Keywords: CALPUFF, daura refinery, Iraq, pollutants

Procedia PDF Downloads 198
111 The End Justifies the Means: Using Programmed Mastery Drill to Teach Spoken English to Spanish Youngsters, without Relying on Homework

Authors: Robert Pocklington

Abstract:

Most current language courses expect students to be 'vocational', sacrificing their free time in order to learn. However, pupils with a full-time job, or those bringing up children, hardly have a spare moment. Others just need the language as a tool or a qualification, as if it were book-keeping or a driving license. Then there are children in unstructured families whose stressful lives make private study almost impossible, and the countless parents whose evenings and weekends have become a nightmare of trying to get the children to do their homework. There are many arguments against homework being a necessity (rather than an optional extra for more ambitious or dedicated students), making a clear case for teaching methods which enable full learning of the key content within the classroom. A methodology which could be described as Programmed Mastery Learning has been used at Fluency Language Academy (Spain) since 1992 to teach English to over 4000 pupils yearly, with a staff of around 100 teachers, barely requiring homework. The course is structured according to the tenets of Programmed Learning: small manageable teaching steps, immediate feedback, and constant successful activity. For the Mastery component (not stopping until everyone has learned), memorisation and practice are entrusted to flashcard-based drilling in the classroom, leading all students to progress together and develop a permanently growing knowledge base. Vocabulary and expressions are memorised using flashcards as stimuli, obliging the brain to constantly recover words from long-term memory and converting them into reflex knowledge before they are deployed in sentence building. The use of grammar rules is practised with 'cue' flashcards: the brain refers consciously to the grammar rule each time it produces a phrase, until it comes easily. This automation of the lexicon and of correct grammar use greatly facilitates all other language and conversational activities.
The full B2 course consists of 48 units, each of which takes a class an average of 17.5 hours to complete, allowing the vast majority of students to reach B2 level in 840 class hours; this is corroborated by an 85% pass rate in the Cambridge University B2 exam (First Certificate). In the past, studying for qualifications was just one of many options open to young people. Nowadays, youngsters need to stay at school and obtain qualifications in order to get any kind of job. Many students in our classes have little intrinsic interest in what they are studying; they just need the certificate. In these circumstances, and with increasing government pressure to minimise failure, teachers can no longer think 'If they don't study, and fail, it's their problem.' It is now becoming the teacher's problem. Teachers are ever more in need of methods which make their pupils successful learners; this means assuring learning in the classroom. Furthermore, homework is arguably the main divider between successful middle-class schoolchildren and failing working-class children who drop out: if everything important is learned at school, the latter will have a much better chance, favouring inclusiveness in the language classroom.
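The mastery-drill principle described above — a card stays in the active deck until it has been answered correctly enough times — can be sketched as a simple loop. This is an illustrative model only, not Fluency Language Academy's actual software or scheduling algorithm; the streak threshold and the cards are hypothetical:

```python
# Illustrative sketch (hypothetical, not the academy's actual method):
# a minimal mastery-drill loop in which a flashcard stays in the active
# deck until it reaches a required streak of consecutive correct answers,
# modelling "not stopping until everyone has learned".
from collections import deque

def mastery_drill(cards, answer_fn, required_streak=2):
    """Drill cards until each one reaches the required correct-answer streak.

    answer_fn(card) -> bool simulates whether the learner answers correctly.
    Returns the total number of drill attempts made.
    """
    queue = deque((card, 0) for card in cards)        # (card, current streak)
    attempts = 0
    while queue:
        card, streak = queue.popleft()
        attempts += 1
        streak = streak + 1 if answer_fn(card) else 0  # a miss resets the streak
        if streak < required_streak:
            queue.append((card, streak))               # card returns to the deck
    return attempts

# Hypothetical drill where every answer is correct: each of 3 cards
# needs 2 consecutive correct answers, so 6 attempts in total
total = mastery_drill(["go", "went", "gone"], lambda card: True)
```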

Keywords: flashcard drilling, fluency method, mastery learning, programmed learning, teaching English as a foreign language

Procedia PDF Downloads 110
110 Indigenous Pre-Service Teacher Education: Developing, Facilitating, and Maintaining Opportunities for Retention and Graduation

Authors: Karen Trimmer, Raelene Ward, Linda Wondunna-Foley

Abstract:

Within Australian tertiary institutions, the subject of Aboriginal and Torres Strait Islander education has been a major concern for many years. Aboriginal and Torres Strait Islander teachers are significantly under-represented in Australian schools and universities. High attrition rates in teacher education and in the teaching industry have contributed to a minimal growth rate in the numbers of Aboriginal and Torres Strait Islander teachers in previous years. There was an increase of 500 Indigenous teachers between 2001 and 2008 but these numbers still only account for one percent of teaching staff in government schools who identified as Aboriginal and Torres Strait Islander Australians (Ministerial Council for Education, Early Childhood Development and Youth Affairs 2010). Aboriginal and Torres Strait Islander teachers are paramount in fostering student engagement and improving educational outcomes for Indigenous students. Increasing the numbers of Aboriginal and Torres Strait Islander teachers is also a key factor in enabling all students to develop understanding of and respect for Aboriginal and Torres Strait Islander histories, cultures, and language. An ambitious reform agenda to improve the recruitment and retention of Aboriginal and Torres Strait Islander teachers will be effective only through national collaborative action and co-investment by schools and school authorities, university schools of education, professional associations, and Indigenous leaders and community networks. Whilst the University of Southern Queensland currently attracts Indigenous students to its teacher education programs (61 students in 2013 with an average of 48 enrollments each year since 2010) there is significant attrition during pre-service training. The annual rate of exiting before graduation remains high at 22% in 2012 and was 39% for the previous two years. These participation and retention rates are consistent with other universities across Australia. 
Whilst aspirations for a growing number of Indigenous people to be trained as teachers are present, there is a significant loss of students during pre-service training and within the first five years of employment as a teacher. These trends also reflect the situation in which Aboriginal and Torres Strait Islander teachers are significantly under-represented, making up less than 1% of teachers in schools across Australia. Through a project conducted as part of the nationally funded More Aboriginal and Torres Strait Islander Teachers Initiative (MATSITI), we aim to gain insight into the reasons that shape Aboriginal and Torres Strait Islander students' decisions to exit their program. Through focus groups and interviews with two graduating cohorts of self-identified Aboriginal and Torres Strait Islander students, rich data have been gathered on the barriers and enhancers to completing a pre-service qualification and transitioning to teaching. A greater understanding of these reasons then allows the development of collaborative processes and procedures to increase retention and completion rates of new Indigenous teachers. Analysis of factors impacting exit decisions and transitions has provided evidence to support changes of practice, redesign and enhancement of relevant courses, and development of policies and procedures to address the identified issues.

Keywords: graduation, indigenous, pre-service teacher education, retention

Procedia PDF Downloads 471
109 Structural Characterization and Hot Deformation Behaviour of Al3Ni2/Al3Ni in-situ Core-shell intermetallic in Al-4Cu-Ni Composite

Authors: Ganesh V., Asit Kumar Khanra

Abstract:

An in-situ powder metallurgy technique was employed to create aluminum-based composites reinforced with Ni-Al3Ni/Al3Ni2 core-shell intermetallics. The impact of Ni addition on the phase composition, microstructure, and mechanical characteristics of Al-4Cu-xNi (x = 0, 2, 4, 6, 8, 10 wt.%) at various sintering temperatures was investigated. Microstructure evolution was examined extensively using X-ray diffraction (XRD), scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), and transmission electron microscopy (TEM). At the initial sintering conditions, "single core-shell" structures were observed, consisting of a Ni core with Al3Ni2 intermetallic, whereas samples sintered at 620°C exhibited both "single core-shell" and "double core-shell" structures containing Al3Ni2 and Al3Ni intermetallics formed between the Al matrix and the Ni reinforcements. The composite achieved a high compressive yield strength of 198.13 MPa and an ultimate strength of 410.68 MPa, with 24% total elongation, for the sample containing 10 wt.% Ni. Additionally, there was a substantial increase in hardness, reaching 124.21 HV, 2.4 times higher than that of the base aluminum. Nanoindentation studies showed hardness values of 1.54, 4.65, 21.01, 13.16, 5.52, 6.27, and 8.39 GPa corresponding to the α-Al matrix, Ni, Al3Ni2, the Ni-Al3Ni2 interface, Al3Ni, and their respective interfaces. Even at 200°C, the composite retained 54% of its room-temperature strength (90.51 MPa). To investigate the deformation behavior of the composite, experiments were conducted at deformation temperatures ranging from 300°C to 500°C, with strain rates varying from 0.0001 s⁻¹ to 0.1 s⁻¹. A sine-hyperbolic constitutive equation was developed to characterize the flow stress of the composite, which exhibited a hot-deformation activation energy of 231.44 kJ/mol, significantly higher than that for self-diffusion in pure aluminum.
The formation of Al2Cu intermetallics at grain boundaries and of Al3Ni2/Al3Ni within the matrix hindered dislocation movement, increasing the activation energy, which might adversely affect high-temperature applications. Two models, a strain-compensated Arrhenius model and an artificial neural network (ANN) model, were developed to predict the composite's flow behavior. The ANN model outperformed the strain-compensated Arrhenius model, with a lower average absolute relative error of 2.266%, a smaller root mean square error of 1.2488 MPa, and a higher correlation coefficient of 0.9997. Processing maps revealed that the optimal hot-working conditions for the composite are in the temperature range of 420-500°C at strain rates between 0.0001 s⁻¹ and 0.001 s⁻¹. The changes in the composite microstructure were successfully correlated with the theory of processing maps, considering temperature and strain-rate conditions. The uneven distribution of the shape and size of core-shell/Al3Ni intermetallic compounds influenced the flow stress curves, leading to dynamic recrystallization (DRX), followed by partial dynamic recovery (DRV), and ultimately strain hardening. This composite material shows promise for applications in the automobile and aerospace industries.
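The sine-hyperbolic constitutive form named above, and the average absolute relative error (AARE) metric used to compare the models, can be sketched as follows. The activation energy Q = 231.44 kJ/mol is taken from the abstract, but the material constants A, α, and n below are hypothetical placeholders, not the fitted values:

```python
# Illustrative sketch of a sine-hyperbolic (Sellars-Tegart type) constitutive
# equation via the Zener-Hollomon parameter, plus the AARE metric.
# Q is the activation energy quoted in the abstract; A, alpha, and n are
# hypothetical placeholders, NOT the composite's fitted constants.
import math

R = 8.314           # universal gas constant, J/(mol*K)
Q = 231.44e3        # hot-deformation activation energy, J/mol (from abstract)
A = 1.0e16          # hypothetical structure factor, 1/s
alpha = 0.012       # hypothetical stress multiplier, 1/MPa
n = 5.0             # hypothetical stress exponent

def flow_stress(strain_rate, temp_k):
    """Flow stress (MPa) from strain rate (1/s) and temperature (K)."""
    z = strain_rate * math.exp(Q / (R * temp_k))        # Zener-Hollomon parameter
    return math.asinh((z / A) ** (1.0 / n)) / alpha     # invert sinh law for stress

def aare(predicted, measured):
    """Average absolute relative error (%), the metric quoted for both models."""
    terms = [abs((p - m) / m) for p, m in zip(predicted, measured)]
    return 100.0 * sum(terms) / len(terms)

sigma_hot = flow_stress(0.001, 773.0)    # 0.001 1/s at 500 C
sigma_cool = flow_stress(0.001, 573.0)   # same strain rate at 300 C
```

As expected physically, the predicted flow stress rises as temperature falls (or strain rate rises), since Z increases; fitting A, α, n, and Q to measured flow curves, and then compensating them for strain, is what the strain-compensated Arrhenius model does.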

Keywords: core-shell structure, hot deformation, intermetallic compounds, powder metallurgy

Procedia PDF Downloads 20
108 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education

Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke

Abstract:

In the digital age, the widespread and frequently excessive use of mobile phones amongst university students is recognized as a significant distractor that interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate their ability to engage in deep work and to attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, sense of agency, love of learning, and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discussion of the detrimental effects of excessive mobile phone usage. They were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester, students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning.
Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage, and attitudes towards learning among undergraduate and graduate university students. The findings aim to promote profound contemplation, on the part of both students and instructors, of the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love for learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.

Keywords: deep work, flow, higher education, lifelong learning, love of learning

Procedia PDF Downloads 68
107 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam

Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le

Abstract:

Introduction: In Vietnam, the current monitoring and treatment of ordinary diabetic patients is mostly based on glucose monitoring with an HbA1c test every three months (the recommended goal is HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both the mother and the child. Moreover, GDM requires continuous glucose monitoring at least every two weeks, and an alternative marker of glycemia for short-term control is therefore considered a potential tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin in GDM monitoring. Based on actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2) and glycosylated hemoglobin (HbA1c). Methods: A cohort study of pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray hospital, Vietnam from June 2014 to March 2015. Cho Ray hospital is the referral destination for GDM patients in southern Vietnam, and the study population comes from many other provinces; the researchers therefore believe that this demographic characteristic helps the study results reflect the whole area. In this study, patients received a continuous glucose monitoring regimen consisting of on-site visits every two weeks with glycosylated serum protein, fasting blood glucose and 2-hour postprandial glucose tests; an HbA1c test every three months; and nutritional counseling for a daily diet program. The subjects also received routine treatment at the hospital, with tight follow-up from their healthcare providers. 
Researchers recorded bi-weekly health conditions, serum fructosamine levels and delivery outcomes of the pregnant women, using the Stata 13 program for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. Serum fructosamine level was found to be weakly correlated with G0 (r=0.3458, p < 0.001) and HbA1c (r=0.3544, p < 0.001), and moderately correlated with G2 (r=0.4379, p < 0.001). During the study period, the delivery outcomes of 287 women were recorded, with an average gestational age at delivery of 38.5 ± 1.5 weeks; 9% of the newborns had macrosomia, 2.8% were premature births before week 35 and 9.8% before week 37; 64.8% were cesarean sections, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with a positive delivery outcome (0% perinatal or neonatal mortality). The reference value of serum fructosamine measurement provides a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray hospital are considering conducting further studies to test this reference as a target value in their GDM treatment and monitoring.
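As a rough illustration of the correlation analysis above, the sketch below computes a Pearson coefficient on simulated paired measurements. The cohort's raw data are not available, so the fructosamine values only reuse the reported mean and SD, and the `g0` construction is entirely hypothetical:

```python
import numpy as np

# Illustrative sketch of the correlation analysis reported above. The study
# itself reports r = 0.3458 (G0), r = 0.3544 (HbA1c) and r = 0.4379 (G2);
# the data below are simulated for demonstration only.
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

rng = np.random.default_rng(0)
n = 500                                      # cohort size in the study
fructosamine = rng.normal(112.9, 20.7, n)    # reported mean ± SD, umol/dL
# Hypothetical fasting glucose loosely tied to fructosamine (arbitrary units)
g0 = 4.5 + 0.01 * fructosamine + rng.normal(0.0, 0.6, n)

r = pearson_r(fructosamine, g0)
assert -1.0 <= r <= 1.0
```

With these simulated noise levels the coefficient lands in the weak-to-moderate range, comparable in magnitude to the values reported in the abstract.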

Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam

Procedia PDF Downloads 280
106 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves many constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets under grade uncertainty simultaneously, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to incorporate grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-based production schedule. 
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. In addition, a machine learning method called Random Forest is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results show that the proposed approach is a considerable improvement over the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and the ALR-subgradient method. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework demonstrates the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP. Considering grade uncertainty, the hybrid framework controls geological risk more effectively than the traditional procedure.
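The Lagrangian-relaxation core of such a hybrid model can be sketched on a toy instance. The block values, tonnages and capacity below are hypothetical, and a plain subgradient step stands in for the HHO search over multipliers used in the paper:

```python
import numpy as np

# Toy sketch of Lagrangian relaxation for block scheduling: relax a
# capacity constraint into the objective via a multiplier, solve the easy
# relaxed subproblem, and update the multiplier from the violation.
# (The paper updates the multipliers with HHO; a diminishing-step
# subgradient update is shown here for clarity. All numbers are invented.)
value    = np.array([10.0, 7.0, 4.0, 3.0])   # economic value of each block
tonnage  = np.array([ 6.0, 5.0, 4.0, 2.0])   # tonnage of each block
capacity = 10.0                               # mining capacity of the period

lam = 0.0                                     # Lagrange multiplier
best_x, best_val = np.zeros(4), 0.0           # best feasible schedule found
for it in range(100):
    # Relaxed subproblem: take every block whose reduced value is positive
    x = (value - lam * tonnage > 0).astype(float)
    violation = tonnage @ x - capacity        # subgradient of the dual
    if violation <= 0 and value @ x > best_val:
        best_x, best_val = x, float(value @ x)
    lam = max(0.0, lam + violation / (it + 1))  # diminishing step size

assert tonnage @ best_x <= capacity           # best schedule is feasible
```

The feasible schedules visited along the multiplier trajectory need not include the true optimum (Lagrangian relaxation leaves a duality gap), which is one motivation for searching the multipliers with a metaheuristic instead of a fixed step rule.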

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 105
105 Acceleration and Deceleration Behavior in the Vicinity of a Speed Camera, and Speed Section Control

Authors: Jean Felix Tuyisingize

Abstract:

Speeding or inappropriate speed is a major problem worldwide, contributing to 10-15% of road crashes and 30% of fatal injury crashes. The consequences of speeding put the driver's life at risk, along with the lives of other road users like motorists, cyclists, and pedestrians. To control vehicle speeds, governments and traffic authorities enforce speed regulations through speed cameras and speed section control, which monitor vehicle speeds and detect plate numbers to levy penalties. However, speed limit violations are prevalent, even on motorways with speed cameras. The problem with speed cameras is that they alter driver behavior, and their effect declines with increasing distance from the camera location. Drivers decelerate a short distance before the camera and accelerate vigorously above the speed limit just after passing it. The sudden deceleration near cameras leads drivers to try to make up for lost time after passing, which they do by speeding up, resulting in a phenomenon known as the "Kangaroo jump" or "V-profile" around camera/ASSC areas. This study investigated the impact of speed enforcement devices, specifically Average Speed Section Control (ASSC) and fixed cameras, on acceleration and deceleration events in their vicinity. The research applied advanced statistical and Geographic Information System (GIS) analysis to naturalistic driving data to uncover speeding patterns near speed enforcement systems. The study revealed a notable concentration of events within a 600-meter radius of enforcement devices, suggesting their influence on driver behavior within a specific range. However, most of these events are of low severity, suggesting that drivers may not significantly alter their speed upon encountering these devices. This behavior could be attributed to several reasons, such as consistently maintaining safe speeds or using real-time in-vehicle intervention systems. 
The complexity of driver behavior is also highlighted, indicating the potential influence of factors like traffic density, road conditions, weather, time of day, and driver characteristics. Further, the study highlighted that high-severity events often occurred outside speed enforcement zones, particularly around intersections, indicating these as potential hotspots for drastic speed changes. These findings call for a broader perspective on traffic safety interventions beyond reliance on speed enforcement devices. However, the study acknowledges certain limitations, such as its reliance on a specific geographical focus, which may impact the broad applicability of the findings. Additionally, the severity of speed modification events was categorized into low, medium, and high, which could oversimplify the continuum of speed changes and potentially mask trends within each category. This research contributes valuable insights to traffic safety and driver behavior literature, illuminating the complexity of driver behavior and the potential influence of factors beyond the presence of speed enforcement devices. Future research directions may employ various categories of event severity. They may also explore the role of in-vehicle technologies, driver characteristics, and a broader set of environmental variables in driving behavior and traffic safety.
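A minimal sketch of the proximity part of such an analysis, assuming events and enforcement devices are given as latitude/longitude pairs (the 600 m radius comes from the finding above; the coordinates and the flagging function are illustrative, not the study's actual GIS workflow):

```python
import math

# Flag speed-change events that fall within 600 m of an enforcement device.
def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0                             # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_camera(event, cameras, radius_m=600.0):
    """True if the event lies within radius_m of any enforcement device."""
    return any(haversine_m(event[0], event[1], c[0], c[1]) <= radius_m
               for c in cameras)

cameras = [(52.520, 13.405)]                  # illustrative device location
event_near = (52.5205, 13.405)                # roughly 56 m away
event_far  = (52.540, 13.405)                 # roughly 2.2 km away
assert near_camera(event_near, cameras)
assert not near_camera(event_far, cameras)
```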

Keywords: acceleration, deceleration, speeding, inappropriate speed, speed enforcement cameras

Procedia PDF Downloads 32
104 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. 
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and exhibit the same variability but in the reverse order of the concentrations of mobile phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation to compensate for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, as expected from the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
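For illustration, a constant-coefficient second-order equation of the kind the reduction yields can be solved through its characteristic roots. The specific steady advection-dispersion-reaction form and the values of D, u and k below are assumptions for this sketch, not the paper's actual reduced equation:

```python
import math

# Hedged sketch: a steady constant-coefficient balance of the form
#   D c''(z) - u c'(z) - k c(z) = 0
# (illustrative symbols: D dispersion, u velocity, k rate constant).
# Its general solution c(z) = A exp(m1 z) + B exp(m2 z) follows from the
# roots of the characteristic polynomial D m^2 - u m - k = 0.
def characteristic_roots(D, u, k):
    """Both roots of D m^2 - u m - k = 0, larger root first."""
    disc = u * u + 4.0 * D * k
    return ((u + math.sqrt(disc)) / (2 * D), (u - math.sqrt(disc)) / (2 * D))

D, u, k = 1e-4, 0.01, 0.5      # illustrative dispersion, velocity, rate
m1, m2 = characteristic_roots(D, u, k)
# One root is positive (a growing mode, suppressed by the outlet boundary
# condition) and one negative (the physically decaying mode).
assert m1 > 0 > m2
```

Boundary conditions at the reactor inlet and outlet then fix the two integration constants, which is where the measurable cup-mixing concentrations enter a fit.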

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 269
103 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users' preferences. Instead of presenting plain information, classifying different aspects of browsing like Bookmarks, History, and the Download Manager into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual and secure data that can't be replicated. The proposal extracts different features of the webpage and runs an algorithm to classify it into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. 
This solution can partition the model into multiple chunks, which in turn reduces memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. This engine can be further extended to suggest dynamic tags and to apply the classification to different use cases to enhance the browsing experience.
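A from-scratch sketch of the multinomial Naive Bayes core (not the authors' engine; the categories and training documents are toy examples) shows why the memory footprint stays small: the whole model is just per-class word counts, scored with log priors and Laplace-smoothed log likelihoods:

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    """Minimal multinomial Naive Bayes over bag-of-words documents."""

    def __init__(self):
        self.class_docs = Counter()               # documents per class
        self.word_counts = defaultdict(Counter)   # class -> word -> count
        self.class_totals = Counter()             # total words per class
        self.vocab = set()

    def fit(self, docs, labels):
        for words, y in zip(docs, labels):
            self.class_docs[y] += 1
            for w in words:
                self.word_counts[y][w] += 1
                self.class_totals[y] += 1
                self.vocab.add(w)

    def predict(self, words):
        n_docs = sum(self.class_docs.values())
        v = len(self.vocab)
        best, best_lp = None, -math.inf
        for y in self.class_docs:
            lp = math.log(self.class_docs[y] / n_docs)        # log prior
            for w in words:                  # Laplace-smoothed likelihoods
                lp += math.log((self.word_counts[y][w] + 1) /
                               (self.class_totals[y] + v))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

clf = TinyNB()
clf.fit([["goal", "match"], ["league", "match"], ["exam", "course"]],
        ["sports", "sports", "education"])
assert clf.predict(["match", "goal"]) == "sports"
```

Because the counts per class are independent, a model like this also splits naturally into per-category chunks, which is the property the partitioning scheme above exploits.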

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 163
102 The High Potential and the Little Use of Brazilian Class Actions for Prevention and Penalization Due to Workplace Accidents in Brazil

Authors: Sandra Regina Cavalcante, Rodolfo A. G. Vilela

Abstract:

Introduction: Work accidents and occupational diseases are a big public health problem around the world and the main health problem of workers, with high social and economic costs. Brazil has shown progress over the last years, with the development of a regulatory system to improve safety and quality of life in the workplace. However, the situation is far from acceptable, because occurrences remain high and there is a great gap between legislation and reality, generated by the low level of voluntary compliance with the law. Brazilian law provides procedural legal instruments both to compensate the damage caused to the worker's health and to prevent future injuries. In the judiciary, the idea of prevention is embodied in collective actions, effected through Brazilian class actions. Inhibitory injunctions may impose improvements to the working environment, as well as determine the interruption of an activity or a ban on a machine that puts workers at risk. Both the Labor Prosecution and trade unions have standing to bring this type of action, which may provide for payment of compensation for collective moral damage. Objectives: To verify how class actions (known as 'public civil actions'), regulated in the Brazilian legal system to protect diffuse, collective and homogeneous rights, are being used to protect workers' health and safety. Methods: The authors identified and evaluated decisions of the Brazilian Superior Labor Court involving collective actions and work accidents. The timeframe chosen was December 2015. The online jurisprudence database available for public consultation on the court website was consulted. The categorization of the data considered the result (whether the court application was rejected or accepted), the request type, the amount of compensation and the author of the case, as well as the reasoning used by the judges. 
Results: The High Court issued 21,948 decisions in December 2015, with 1,448 judgments (6.6%) about work accidents and only 20 (0.09%) on collective actions. After analyzing these 20 decisions, it was found that the judgments granted compensation for collective moral damage (85%) and/or obligations to act, that is, changes to improve prevention and safety (71%). The cases were filed mainly by the Labor Prosecutor (83%), along with lawsuits filed by unions (17%). The compensation for collective moral damage averaged 250,000 reais (about US$65,000), although a great range of values was found and several different situations were repaired by this compensation. This is the court of last resort for this kind of lawsuit, and all decisions were well founded and at least partially granted the requests made for working environment protection. Conclusions: When triggered, the labor court system provides the requested collective protection in class actions. The values of the awards arbitrated in collective actions are significant and indicate social and economic repercussions, stimulating employers to improve the working environment conditions of their companies. It is necessary to intensify the use of collective actions, because they are more efficient for prevention than reparatory individual lawsuits, yet they have been underutilized, mainly by unions.

Keywords: Brazilian Class Action, collective action, work accident penalization, workplace accident prevention, workplace protection law

Procedia PDF Downloads 274
101 Challenging Conventions: Rethinking Literature Review Beyond Citations

Authors: Hassan Younis

Abstract:

Purpose: The objective of this study is to review influential papers in the sustainability and supply chain domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. The study then applies and trials the developed framework on selected scholarly articles within the sustainability and supply chain domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation counts (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion, normalized within its category, and the scores were combined using the weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts alone. In response, a complete model incorporating five essential criteria has been proposed. Through a methodical trial on selected academic articles in the sustainability and supply chain domain, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention. 
This work fills a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool that helps researchers, students, and practitioners find and choose influential papers, supporting literature reviews and study recommendations in the field. The analysis of major trends and topics deepens our grasp of the changing terrain of this critical study area. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
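The FNS computation described above can be sketched as a weighted average of normalized tier scores. The weights are those stated in the abstract; the exact normalization used by the authors is not specified, so a simple min-max mapping of the 1-3 tiers onto [0, 1] is assumed here, and the example ratings are hypothetical:

```python
# Criterion weights as stated in the abstract (the "ACMAJ" framework).
WEIGHTS = {
    "avg_yearly_citations": 0.25,
    "scholarly_contribution": 0.25,
    "findings_objectives_alignment": 0.15,
    "methodological_rigor": 0.20,
    "journal_impact_factor": 0.15,
}

def final_normalized_score(tier_scores, lo=1, hi=3):
    """Weighted average of tier scores, each min-max normalized to [0, 1]."""
    assert set(tier_scores) == set(WEIGHTS)
    return sum(WEIGHTS[c] * (tier_scores[c] - lo) / (hi - lo)
               for c in WEIGHTS)

example = {  # hypothetical ratings for one paper on the five criteria
    "avg_yearly_citations": 3,
    "scholarly_contribution": 2,
    "findings_objectives_alignment": 3,
    "methodological_rigor": 2,
    "journal_impact_factor": 1,
}
fns = final_normalized_score(example)
assert 0.0 <= fns <= 1.0
```

A paper scoring the top tier on every criterion thus reaches an FNS of 1.0, which makes scores directly comparable across papers.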

Keywords: supply chain management, sustainability, framework, model

Procedia PDF Downloads 52
100 Health Advocacy in Medical School: An American Survey on Attitudes and Engagement in Clerkships

Authors: Rachel S. Chang, Samuel P. Massion, Alan Z. Grusky, Heather A. Ridinger

Abstract:

Introduction: Health advocacy is defined as activities that improve access to care, utilize resources, address health disparities, and influence health policy. Advocacy is increasingly recognized as a critical component of a physician's role, as understanding social determinants of health and improving patient care are important aspects of the American Medical Association's Health Systems Science framework. However, despite this growing prominence, educational interventions that address advocacy topics are limited and variable across medical school curricula. Furthermore, few recent studies have evaluated attitudes toward health advocacy among physicians-in-training in the United States. This study examines medical student attitudes towards health advocacy, along with perceived knowledge, ability, and current level of engagement with health advocacy during their clerkships. Methods: This study employed a cross-sectional survey design using a single anonymous, self-report questionnaire administered to all second-year medical students at Vanderbilt University School of Medicine (n=96) in December 2020 during clerkship rotations. The survey had 27 items with 5-point Likert scale (15), multiple choice (11), and free response questions (1). Descriptive statistics and thematic analysis were utilized to analyze responses. The study was approved by the Vanderbilt University Institutional Review Board. Results: There was an 88% response rate among second-year clerkship medical students. A majority (83%) agreed that formal training in health advocacy should be a mandatory part of the medical student curriculum. Likewise, 83% of respondents felt that acting as a health advocate for patients should be part of their role as a clerkship student. However, only a minority (25%) felt adequately prepared. 
While 72% of respondents felt able to identify a psychosocial need, only 18% felt confident navigating the healthcare system and only 9% felt able to connect a patient to a psychosocial resource to fill that gap. 44% of respondents regularly contributed to conversations with their medical teams when discussing patients' social needs, such as housing insecurity, financial insecurity, or legal needs. On average, respondents reported successfully connecting patients to psychosocial resources 1-2 times per 8-week clerkship block. Barriers to participating in health advocacy included perceived time constraints, lack of awareness of resources, lower emphasis among medical teams, and scarce involvement with social work teams. Conclusions: In this single-institution study, second-year medical students on clerkships recognized the importance of advocating for patients and supported advocacy training within their medical school curriculum. However, their perceived inability to navigate the healthcare system and connect patients to psychosocial resources leaves students feeling unprepared to advocate as effectively as they had hoped during their clerkship rotations. Our results support the ongoing need to equip medical students with the training and resources necessary to act effectively as advocates for patients.

Keywords: clerkships, medical students, patient advocacy, social medicine

Procedia PDF Downloads 130
99 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine

Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska

Abstract:

The work is part of a project which aims at developing innovative power and control systems for the high-power ASz62IR aircraft piston engine. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and engine capability of efficient combustion of ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine with 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm3, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the aircraft An-2, M-18 „Dromader”, DHC-3 „OTTER”, DC-3 „Dakota”, GAF-125 „HAWK” and Y5. One of the main problems of the engine is the imbalanced work of its cylinders and the resulting non-uniformity of their operation. In a radial engine, the cylinder arrangement means that the mixture movement takes place either with (lower cylinders) or against (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of uneven operation of individual cylinders. The phenomenon is most intense at low speed, and the non-uniformity is visible in the cylinder pressure waveform. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with a fuel film. The study shows that there is an effect of gravity on the movement of the fuel film inside the radial engine intake channels. In both the lower and the upper inlet channels the film flows downwards; gravity thus assists the movement of the film in the lower cylinder channels and opposes it in the upper cylinder channels. 
Real tests on the ASz62IR aircraft engine were conducted under transient conditions (rapid changes of the excess air ratio in each cylinder). The theoretical and the actual mass of fuel reaching the cylinders were calculated, and on this basis the fuel evaporation factors "x" were determined. A simplified model of the fuel supply to the cylinder was therefore adopted. The model includes the time constant of the fuel film τ, the number of engine transport cycles of non-evaporated fuel along the intake pipe γ, and the time between successive cycles Δt. The results of the identification of the model parameters are presented in the form of radar graphs. The figures show the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that a change in the position of the cylinder causes changes in the formation of the fuel-air mixture and thus changes in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual ignition control algorithms. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
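A fuel supply model of this kind can be illustrated with a discrete wall-film ("x-tau"-style) sketch. The update rule, the interpretation of x as the directly evaporating fraction, and all parameter values below are assumptions for illustration only; the paper's identified parameters (including the transport-cycle count γ) are not reproduced:

```python
# Hedged sketch of a discrete wall-film fuel model per engine cycle:
# a fraction x of the injected fuel evaporates directly, the rest
# deposits on the intake-port film, which evaporates with time constant tau.
def step_film(m_film, m_inj, x, tau, dt):
    """Advance one cycle of length dt.

    Returns (new film mass, fuel mass reaching the cylinder this cycle).
    Requires dt < tau for the explicit update to be stable.
    """
    evaporated = m_film * (dt / tau)          # film fuel entering the charge
    m_cyl = x * m_inj + evaporated            # direct share + film share
    m_film = m_film + (1.0 - x) * m_inj - evaporated
    return m_film, m_cyl

m_film, x, tau, dt = 0.0, 0.6, 0.5, 0.05      # illustrative values
delivered = []
for _ in range(200):                          # constant injection per cycle
    m_film, m_cyl = step_film(m_film, 10.0, x, tau, dt)
    delivered.append(m_cyl)

# In steady state the cylinder receives the full injected mass per cycle;
# during transients the film buffers part of it, which is exactly the
# behavior the transient excess-air tests above probe.
assert abs(delivered[-1] - 10.0) < 1e-3
```

Mass is conserved by construction: over any horizon, injected fuel equals delivered fuel plus the change in film mass.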

Keywords: radial engine, ignition system, non-uniformity, combustion process

Procedia PDF Downloads 366
98 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres operate a growing number of increasingly complex technological devices, and the population consumes and demands services derived from technology. Nursing curricula have also come to include competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit themselves to the use and evaluation of previously built products. Objective: To develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning with a group of graduate nurses through official training within a Master's degree programme. The study sample was selected by intentional sampling without exclusion criteria, and the study covers the period from 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. Assessment was carried out by a written test consisting of the preparation of a requirements specification document following the IEEE 830 standard, in which the subject chosen by the student had to be a problem in the area of care. Results: The sample comprised 30 students: 10 men and 20 women. Nine students held a degree in nursing, 20 a diploma in nursing, and one a degree in Computer Engineering. Two students had obtained a nursing specialty through residency and two through exceptional equivalent recognition. Except for the engineer, no participant had previously received training in this regard. All students received the classroom teaching session, had access to the teaching material through a virtual area, and attended at least one tutorial; the maximum was three tutorials, totalling one hour. The material available for consultation included an example document drawn up following the IEEE standard on an issue unrelated to care.
The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the formal correctness of the document and the degree of comprehension shown in elaborating the problem and solution; the nurses assessed the relevance of the chosen problem statement, its foundation, the originality and correctness of the proposed solution, and the validity of the application for clinical care practice. The average grade was 8.1 out of 10, with a range from 6 to 10. The selected topics rarely coincided among students; examples of the care areas selected include care plans, family and community health, delivery care, administration, and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies proved successful in training nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence, and the duty to participate in the construction of the technological tools that will impact the care of people, families, and communities.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 136
97 Mathematics Professional Development: Uptake and Impacts on Classroom Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers' learning and its impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD, and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. Through a group-randomized experimental design, it considers the extent to which the materials produce a beneficial impact on teachers' mathematics knowledge, their classroom practices, and their students' knowledge in the domain of geometry. A close-up examination of a small group of teachers is included to better understand their interpretations of the workshops and their classroom uptake. The participants were 103 secondary mathematics teachers serving grades 6-12 in two US states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered, beginning with a one-week institute (Summer 2016) followed by four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments.
Additional data were collected from the case study teachers, including further videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers' content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found between the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in the videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes of lessons in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed these videos, compared them to the earlier videos, and interviewed the teachers regarding their uptake of the PD in terms of content knowledge, pedagogical knowledge, and resources used. The full paper will include the case study of Ana to illustrate the factors involved in what teachers take up and use from participating in the LTG PD.
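
The repeated measures ANOVA used above partitions the variance in repeated scores into a time-point effect, a between-subjects effect, and a residual. A minimal hand-rolled sketch of the within-subjects F statistic follows; the scores are invented solely to illustrate the decomposition, not data from the study:

```python
def rm_anova_f(scores):
    """One-way repeated-measures ANOVA F statistic.

    scores -- list of rows, one per subject, each holding that subject's
              score at every time point (e.g. a teacher measured pre/post PD).
    Returns (F, (df_time, df_error)).
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    row_means = [sum(row) / k for row in scores]
    ss_time = n * sum((m - grand) ** 2 for m in col_means)   # between time points
    ss_subj = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ss_err = ss_total - ss_time - ss_subj                    # residual
    df_time, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_time / df_time) / (ss_err / df_err), (df_time, df_err)

# Three hypothetical teachers, each measured at two time points:
f_stat, dfs = rm_anova_f([[1, 2], [2, 4], [3, 3]])
```

Comparing F to the F(df_time, df_err) distribution yields the p-value. Because the between-subjects sum of squares is removed from the error term, the repeated-measures design has more power than an ordinary one-way ANOVA on the same scores.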

Keywords: geometry, mathematics professional development, pedagogical content knowledge, teacher learning

Procedia PDF Downloads 125
96 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP

Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang

Abstract:

Climate data quality significantly affects the reliability of ecological modeling, and in the Asia Pacific (AP) region low-quality climate data hinders such modeling. ClimateAP, a software package developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture; however, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variable data generated by ClimateAP for the normal climate period through comparison with currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP against the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using adjusted R-squared and root mean squared error (RMSE). Locations were categorized into mountainous and flat landforms based on elevation, slope, ruggedness, and the Topographic Position Index, and the univariate regressions were then repeated for all biologically relevant variables within each landform category. Random Forest (RF) models were implemented for climatic niche modeling of Cunninghamia lanceolata, and the prediction accuracies of RF models constructed with the two climate data sources were compared to evaluate their relative effectiveness. The biologically relevant variables were obtained from three unpublished Chinese meteorological datasets; ClimateAP v3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases, with 3,745 unique points.
ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of the variance in monthly maximum temperature, minimum temperature, average temperature, and precipitation, respectively. It outperforms WorldClim on 37 biologically relevant variables, with lower RMSE values; it achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher adjusted R-squared values for precipitation across all landforms. ClimateAP's temperature data yield lower adjusted R-squared values than the gridded data in high-elevation, rugged, and mountainous areas, but higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP are thus validated against weather station observations. The use of ClimateAP improves data quality, especially in non-mountainous regions, and the results suggest that biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience than gridded data.
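
The station-versus-prediction evaluation above rests on two statistics, adjusted R-squared from a univariate regression and RMSE of the raw predictions. A minimal sketch of how they can be computed is given below; the five station values are placeholders, not data from the study:

```python
from math import sqrt

def evaluate_univariate(obs, pred):
    """Regress station observations on model predictions; report fit quality.

    obs  -- observed values at weather stations
    pred -- corresponding model predictions (e.g. from ClimateAP or WorldClim)
    Returns (adjusted R-squared of the univariate regression,
             RMSE of the raw predictions against the observations).
    """
    n, p = len(obs), 1                      # n stations, p = 1 predictor
    mo, mp = sum(obs) / n, sum(pred) / n
    sxy = sum((x - mp) * (y - mo) for x, y in zip(pred, obs))
    sxx = sum((x - mp) ** 2 for x in pred)
    slope = sxy / sxx                       # ordinary least-squares fit
    intercept = mo - slope * mp
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(pred, obs))
    ss_tot = sum((y - mo) ** 2 for y in obs)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    rmse = sqrt(sum((y - x) ** 2 for x, y in zip(pred, obs)) / n)
    return adj_r2, rmse

# Placeholder monthly mean temperatures (°C) at five hypothetical stations:
obs = [12.1, 14.3, 9.8, 16.0, 11.5]
pred = [12.4, 14.0, 10.1, 15.6, 11.9]
adj_r2, rmse = evaluate_univariate(obs, pred)
```

Note the two statistics measure different things: adjusted R-squared rewards predictions that track the observations linearly (penalized for the number of predictors), while RMSE also penalizes any systematic bias or scale error in the raw predictions.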

Keywords: climate data validation, data quality, Asia Pacific climate, climatic niche modeling, random forest models, tree species

Procedia PDF Downloads 68