Search results for: regression determination
486 Patterns of Associations between Child Maltreatment, Maternal Childhood Adversity, and Maternal Mental Well-Being: A Cross-Sectional Study in Tirana, Albania
Authors: Klea Ramaj
Abstract:
Objectives: There have recently been increasing calls to better understand the intergenerational transmission of adverse childhood experiences (ACEs). In particular, little is known about the links between maternal ACEs, maternal stress, maternal depression, and child abuse against toddlers in countries in South-East Europe. This paper, therefore, aims to present new descriptive data on the epidemiology of maternal mental well-being and maternal ACEs in the capital of Albania, Tirana. It also aims to advance our understanding of the overlap between maternal stress, maternal depression, maternal exposure to ACEs, and child abuse toward two-to-three-year-olds. Methods: This is a cross-sectional study conducted with a representative sample of 328 mothers of two-to-three-year-olds, recruited through public nurseries located in 8 diverse socio-economic and geographical areas of Tirana, Albania. Maternal stress was measured through the Perceived Stress Scale (α = 0.78); maternal depression was measured via the Patient Health Questionnaire (α = 0.77); maternal exposure to ACEs was captured via the ACEs International Questionnaire (α = 0.77); and child maltreatment was captured via the ISPCAN ICAST-P (α = 0.66). The main outcome examined here is child maltreatment. The paper first presents estimates of maternal stress, depression, and child maltreatment by demographic group. It then uses multiple regression to examine associations between child maltreatment and risk factors in the domains of maternal stress, maternal depression, and maternal ACEs. Results: The mothers' mean age was 32.3 (SD = 4.24); 87.5% were married, 51% had one child, and 83.5% had completed higher education. The analyses show high levels of stress and exposure to childhood adversity among mothers in Tirana: 97.5% of mothers perceived stress during the last month, and 89% had experienced at least one childhood adversity as measured by the ACE questionnaire, with 20.2% having experienced 4+ ACEs.
Analyses show significant positive associations between maternal ACEs and maternal stress, r(325) = 0.25, p = 0.00. Mothers with a high number of ACEs were more likely to abuse their children, r(327) = .43, p = 0.00. 32% of mothers had used physical discipline with their 2–3-year-old, 84% had used psychological discipline, and 35% had neglected their toddler at least once or twice. The mothers’ depression levels were also positively and significantly associated with child maltreatment, r(327) = .34, p = 0.00. Conclusions: This study provides cross-sectional data on the link between maternal exposure to early adversity, maternal mental well-being, and child maltreatment within the context of Tirana, Albania. The results highlight the importance of establishing policies that encourage maternal support, positive parenting, and family well-being in order to help break the cycle of transgenerational violence.
Keywords: child maltreatment, maternal mental well-being, intergenerational abuse, Tirana, Albania
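The reported associations are zero-order Pearson correlations (e.g., r(325) = 0.25 between maternal ACEs and stress). A minimal sketch of how such a coefficient is computed, using invented toy scores rather than the study data, might look like this:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ACE counts and perceived-stress scores (toy data, not the sample):
aces = [0, 1, 2, 4, 5, 7]
stress = [10, 12, 15, 20, 22, 30]
r = pearson_r(aces, stress)  # strongly positive for this toy data
```

The degrees of freedom in r(325) are simply n − 2 for the pairwise-complete sample used in each correlation.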
Procedia PDF Downloads 124
485 The Role of Cognitive Control and Social Camouflage Associated with Social Anxiety in Autism Spectrum Conditions
Authors: Siqing Guan, Fumiyo Oshima, Eiji Shimizu, Nozomi Tomita, Toru Takahashi, Hiroaki Kumano
Abstract:
Risk factors for social anxiety in autism spectrum conditions involve executive attention, emotion regulation, and thought regulation as processes of cognitive dysregulation. Social camouflaging behaviors, strategies used to mask and/or compensate for autism characteristics during social interactions, have also been emphasized in autism spectrum conditions. However, the role of cognitive dysregulation and social camouflaging in social anxiety in autism spectrum conditions has not been clarified, and it remains to be determined whether these factors are specific to social anxiety in autism spectrum conditions or common to social anxiety independent of autism spectrum conditions. Here, we explored risk factors specific to social anxiety in autism spectrum conditions and general risk factors for social anxiety independent of autism spectrum conditions. From the Japanese participants in early adulthood (age = 18–39) of an online survey in Japan, those who exceeded the Japanese-version Autism-Spectrum Quotient cutoff (33 points or more) formed the autism spectrum conditions group (ASC; N = 255, mean age = 32.08, SD age = 5.16), and those who did not exceed the cutoff formed the non-autism spectrum conditions group (non-ASC; N = 255, mean age = 31.70, SD age = 5.09). Using the Japanese versions of the Social Phobia Scale, the Social Interaction Anxiety Scale, and the Short Fear of Negative Evaluation Scale, a composite score for social anxiety was calculated using principal component analysis. We also measured emotional control difficulties using the Difficulties in Emotion Regulation Scale, executive attention using the Effortful Control Scale for Adults, rumination using the Rumination-Reflection Questionnaire, and worry using the Penn State Worry Questionnaire. The study was approved by the Ethics Committee, and the authors declare no conflicts of interest.
Multiple regression analysis with the forced entry method was used to predict social anxiety in the ASC and non-ASC groups separately, based on executive attention, emotion dysregulation, worry, rumination, and social camouflage. In the ASC group, emotion dysregulation (β = .277, p < .001), worry (β = .162, p < .05), assimilation (β = .308, p < .001), and masking (β = .275, p < .001) were significant predictors of social anxiety (F(7, 247) = 45.791, p < .001, R2 = .565). In the non-ASC group, emotion dysregulation (β = .171, p < .05), worry (β = .344, p < .001), assimilation (β = .366, p < .001), and executive attention (β = -.132, p < .05) were significant predictors of social anxiety (F(7, 207) = 47.333, p < .001, R2 = .615). The findings suggest that masking is a risk factor for social anxiety specific to autism spectrum conditions, while emotion dysregulation, worry, and assimilation are common risk factors for social anxiety regardless of autism spectrum conditions. In addition, executive attention is a risk factor for social anxiety without autism spectrum conditions.
Keywords: autism spectrum, cognitive control, social anxiety, social camouflaging
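A hedged sketch of the kind of forced-entry fit behind these numbers: all predictors enter the model at once, variables are z-scored so that the coefficients come out as standardized βs, and R² is the explained variance. The data below are toy placeholders, not the survey responses.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def zscore(v):
    m = sum(v) / len(v)
    s = math.sqrt(sum((a - m) ** 2 for a in v) / len(v))
    return [(a - m) / s for a in v]

def standardized_betas_r2(columns, y):
    """Forced-entry OLS on z-scored data: returns (standardized betas, R^2)."""
    Z = [zscore(col) for col in columns]
    yz = zscore(y)
    n, p = len(yz), len(Z)
    # Normal equations X'X beta = X'y (no intercept needed after z-scoring).
    XtX = [[sum(Z[i][k] * Z[j][k] for k in range(n)) for j in range(p)] for i in range(p)]
    Xty = [sum(Z[i][k] * yz[k] for k in range(n)) for i in range(p)]
    betas = solve(XtX, Xty)
    resid = [yz[k] - sum(betas[i] * Z[i][k] for i in range(p)) for k in range(n)]
    r2 = 1 - sum(e * e for e in resid) / sum(a * a for a in yz)
    return betas, r2

# Toy predictors (placeholders for, e.g., worry and masking scores) and outcome:
worry = [1, 2, 3, 4, 5, 6]
masking = [2, 1, 4, 3, 6, 5]
anxiety = [w + 0.5 * m for w, m in zip(worry, masking)]
betas, r2 = standardized_betas_r2([worry, masking], anxiety)
```

Because the toy outcome is an exact linear combination of the predictors, R² comes out at 1; with real survey data the residual term lowers it to values like the .565 and .615 reported above.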
Procedia PDF Downloads 205
484 Impact of Traffic Restrictions due to COVID-19 on Emissions from Freight Transport in Mexico City
Authors: Oscar Nieto-Garzón, Angélica Lozano
Abstract:
In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, such as retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOX, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and a lack of transportation planning are characteristics that urban areas in developing countries share. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016–2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOX decreased 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in the three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOX. The CO/NOX ratio at the morning hours is often used as an indicator of mobile-source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOX from cars and light vehicles, while NOX levels from trucks are maintained (or lowered due to the congestion reduction).
At rush hours, traffic was reduced between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOX, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOX ratio would change due to the lockdown restrictions. However, although there was a significant reduction of traffic, CO/NOX kept its trend, decreasing to 8–9 in 2020. Hence, traffic restrictions had no impact on the CO/NOX ratio, although they did reduce vehicle emissions of CO and NOX. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. Comparing the theoretical data with those observed during the lockdown shows that the real NOX reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOX emissions, so there would be an over-representation of NOX emissions generated by diesel vehicles, or that there is an underestimation of CO emissions. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
Keywords: COVID-19, emissions, freight transport, Latin American metropolis
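As a small illustration of the indicator discussed above, the morning CO/NOX ratio is simply a quotient of the two concentrations. The values below are invented, not the monitoring data; they are chosen so that the ratio lands in the 8–9 range the abstract reports for 2020.

```python
def co_nox_ratio(co_ppm, nox_ppm):
    """Morning-hours CO/NOx ratio, used as a mobile-source emission indicator."""
    return co_ppm / nox_ppm

# Hypothetical morning concentrations in ppm (illustrative values only):
ratio_pre = co_nox_ratio(1.2, 0.10)    # e.g., a pre-lockdown morning
ratio_2020 = co_nox_ratio(0.72, 0.08)  # traffic reduced, yet the ratio stays near 9
```

The point of the abstract is precisely that both numerator and denominator fell together, so the ratio alone could not reveal the change in the vehicle fleet's emission pattern.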
Procedia PDF Downloads 135
483 Thermosensitive Hydrogel Development for Its Possible Application in Cardiac Cell Therapy
Authors: Lina Paola Orozco Marin, Yuliet Montoya Osorio, John Bustamante Osorno
Abstract:
Ischemic events can culminate in acute myocardial infarction, with irreversible cardiac lesions that cannot be restored due to the limited regenerative capacity of the heart. Cell therapy seeks to replace these injured or necrotic cells by transplanting healthy and functional cells. The therapeutic alternatives proposed by tissue engineering and cardiovascular regenerative medicine include the use of biomaterials to mimic the native extracellular medium, which is full of proteins, proteoglycans, and glycoproteins. The selected biomaterials must provide structural support to the encapsulated cells to avoid their migration and death in the host tissue. In this context, the present research work focused on developing a natural thermosensitive hydrogel, its physical and chemical characterization, and the determination of its biocompatibility in vitro. The hydrogel was developed by mixing hydrolyzed bovine and porcine collagen at 2% w/v, chitosan at 2.5% w/v, and beta-glycerolphosphate at 8.5% w/w and 10.5% w/w under magnetic stirring at 4°C. Once obtained, the thermosensitivity and gelation time were determined by incubating the samples at 37°C and evaluating them through the inverted tube method. The morphological characterization of the hydrogels was carried out through scanning electron microscopy. Chemical characterization was carried out by infrared spectroscopy. Biocompatibility was determined using the MTT cytotoxicity test according to the ISO 10993-5 standard for the hydrogel’s precursors, using the fetal human ventricular cardiomyocyte cell line RL-14. The RL-14 cells were also seeded on top of the hydrogels, and the supernatants were subcultured at different periods for observation under a bright-field microscope.
Four types of thermosensitive hydrogels were obtained, differing in composition and concentration, called A1 (chitosan/bovine collagen/beta-glycerolphosphate 8.5% w/w), A2 (chitosan/porcine collagen/beta-glycerolphosphate 8.5%), B1 (chitosan/bovine collagen/beta-glycerolphosphate 10.5%), and B2 (chitosan/porcine collagen/beta-glycerolphosphate 10.5%). A1 and A2 had a gelation time of 40 minutes, and B1 and B2 had a gelation time of 30 minutes at 37°C. Electron micrographs revealed a three-dimensional internal structure with interconnected pores for the four types of hydrogels. This facilitates the exchange of nutrients and oxygen and the exit of metabolites, preserving a microenvironment suitable for cell proliferation. In the infrared spectra, it was possible to observe the interaction that occurs between the amides of the polymeric compounds and the phosphate groups of beta-glycerolphosphate. Finally, the biocompatibility tests indicated that cells in contact with the hydrogel, or with each of its precursors, were not affected in their proliferation capacity over a period of 16 days. These results show the potential of the hydrogel to increase the cell survival rate in the cardiac cell therapies under investigation. Moreover, they lay the foundations for its characterization and biological evaluation in both in vitro and in vivo models.
Keywords: cardiac cell therapy, cardiac ischemia, natural polymers, thermosensitive hydrogel
Procedia PDF Downloads 189
482 Effect of Different Contaminants on Mineral Insulating Oil Characteristics
Authors: H. M. Wilhelm, P. O. Fernandes, L. P. Dill, C. Steffens, K. G. Moscon, S. M. Peres, V. Bender, T. Marchesan, J. B. Ferreira Neto
Abstract:
Deterioration of insulating oil is a natural process that occurs during transformer operation. However, this process can be accelerated by some factors, such as oxygen, high temperatures, metals, and moisture, which rapidly reduce oil insulating capacity and favor transformer faults. Parts of the building materials of a transformer can be degraded and yield soluble compounds and insoluble particles that shorten the equipment's life. Physicochemical tests, dissolved gas analysis (including propane, propylene, and butane), volatile and furanic compound determination, and quantitative and morphological analyses of particulate are proposed in this study in order to correlate transformer building material degradation with insulating oil characteristics. The present investigation involves tests of medium-temperature overheating simulation by means of an electric resistance wrapped with the following materials immersed in mineral insulating oil: test I) copper, tin, lead, and paper (heated at 350-400 °C for 8 h); test II) only copper (at 250 °C for 11 h); and test III) only paper (at 250 °C for 8 h and at 350 °C for 8 h). A different experiment is the simulation of an electric arc involving copper, using an electric welding machine at two distinct energy settings (low and high). Analysis results showed that dielectric loss was higher in the sample of test I; a higher neutralization index and higher values of hydrogen and hydrocarbons, including propane and butane, were also observed. Test III oil presented a higher particle count; in addition, ferrographic analysis revealed contamination with fibers and carbonized paper. However, these particles had little influence on the oil physicochemical parameters (dielectric loss and neutralization index) and on the gas production, which was very low. Test II oil showed high levels of methane, ethane, and propylene, indicating the effect of metal on oil degradation.
CO2 and CO gases were formed in the highest concentration in test III, as expected. Regarding volatile compounds, in test I, acetone, benzene, and toluene were detected, which are oil oxidation products. In test III, methanol was identified due to cellulose degradation, as expected. The electric arc simulation test showed the highest oil oxidation in the presence of copper and at high temperature, since these samples had a huge concentration of hydrogen, ethylene, and acetylene. The particle count was also very high, showing the highest release of copper under such conditions. When comparing high and low energy, the former presented more hydrogen, ethylene, and acetylene. This sample had results more similar to test I, pointing out that the generation of different particles can be the cause of faults such as electric arcing. Ferrography showed more evident copper and exfoliation particles than in the other samples. Therefore, in this study, by using different combined analytical techniques, it was possible to correlate insulating oil characteristics with possible contaminants, which can lead to transformer failure.
Keywords: ferrography, gas analysis, insulating mineral oil, particle contamination, transformer failures
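Dissolved-gas results like these are often screened with simple gas ratios. One such heuristic is the CO2/CO ratio, where unusually low values are commonly read as a hint that cellulose (paper) is involved in a fault. The cutoff below is illustrative, since thresholds vary between interpretation guides, and the readings are invented, not this study's measurements.

```python
def co2_co_ratio(co2_ppm, co_ppm):
    """CO2/CO ratio from dissolved-gas analysis."""
    return co2_ppm / co_ppm

def hints_paper_involvement(co2_ppm, co_ppm, cutoff=3.0):
    """True when the ratio falls below an (illustrative) cutoff."""
    return co2_co_ratio(co2_ppm, co_ppm) < cutoff

# Hypothetical readings in ppm (not the study's data):
overheated_paper = hints_paper_involvement(240, 120)  # ratio 2.0 -> True
healthy_unit = hints_paper_involvement(2100, 300)     # ratio 7.0 -> False
```

Such ratios are only a screening aid; as the abstract shows, particle analysis and ferrography are needed to attribute degradation to a specific material.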
Procedia PDF Downloads 223
481 Six Years Antimicrobial Resistance Trends among Bacterial Isolates in Amhara National Regional State, Ethiopia
Authors: Asrat Agalu Abejew
Abstract:
Background: Antimicrobial resistance (AMR) is a silent tsunami and one of the top global threats to health care and public health. It is a common agenda globally and in Ethiopia. Emerging AMR will be a double burden to Ethiopia, which already faces a series of problems from infectious disease morbidity and mortality. In Ethiopia, although there are attempts to document AMR in healthcare institutions, a comprehensive and all-inclusive analysis is still lacking. Thus, this study aimed to determine trends in AMR from 2016 to 2021. Methods: A retrospective analysis of secondary data recorded at the Amhara Public Health Institute (APHI) from 2016 to 2021 G.C. was conducted. Blood, urine, stool, swabs, discharge, body effusions, and other microbiological specimens were collected from each study participant, and bacterial identification and resistance testing were done using standard microbiologic procedures. Data were extracted from Excel in August 2022, trends in AMR were analyzed, and the results were described. In addition, the chi-square (χ2) test and binary logistic regression were used, and a p-value < 0.05 was used to determine a significant association. Results: During the 6-year period, there were 25143 culture and susceptibility tests. Overall, 265 (46.2%) bacteria were resistant to 2-4 antibiotics, 253 (44.2%) to 5-7 antibiotics, and 56 (9.7%) to >=8 antibiotics. Among gram-negative bacteria, 166 (43.9%), 155 (41.5%), and 55 (14.6%) were resistant to 2-4, 5-7, and ≥8 antibiotics, respectively, whereas 99 (50.8%), 96 (49.2%), and 1 (0.5%) of gram-positive bacteria were resistant to 2-4, 5-7, and ≥8 antibiotics, respectively. K. pneumoniae 3783 (15.67%) and E. coli 3199 (13.25%) were the most commonly isolated bacteria, and the overall prevalence of AMR was 2605 (59.9%), with K. pneumoniae 743 (80.24%), E. cloacae 196 (74.81%), and A. baumannii 213 (66.56%) being the bacteria most commonly resistant to the antibiotics tested.
Except for a slight decline during 2020 (6469 (25.4%)), the overall trend of AMR is rising from year to year, with peaks in 2019 (8480 (33.7%)) and 2021 (7508 (29.9%)). A linear trend over the study years explains 78% of the variation in AMR (R2 = 0.7799). Common bacteria were resistant to ampicillin, Augmentin, ciprofloxacin, cotrimoxazole, tetracycline, and tobramycin in almost all tests. Conclusion: AMR has been increasing linearly over the last 6 years. If left as is, without appropriate intervention, after 15 years (2030 E.C.) AMR will have increased by 338.7%. The growing number of multi-drug-resistant bacteria is an alarm to wake policymakers and those concerned to intervene before it is too late. This calls for a periodic, integrated, and continuous system to determine the prevalence of AMR to commonly used antibiotics.
Keywords: AMR, trend, pattern, MDR
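The R2 = 0.7799 figure above comes from fitting a straight line to yearly totals and the projection from extrapolating it forward. A minimal sketch of such a fit, with partly invented yearly counts (only the 2019-2021 values echo the abstract), could be:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = [2016, 2017, 2018, 2019, 2020, 2021]
# Resistant-test counts: 2019-2021 taken from the abstract, earlier years invented:
counts = [3100, 4200, 5900, 8480, 6469, 7508]
slope, intercept = linear_fit(years, counts)
projected_2030 = slope * 2030 + intercept  # naive linear extrapolation
```

As with any naive extrapolation, the projected value assumes the historical linear trend continues unchanged, which is exactly the scenario the conclusion warns against.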
Procedia PDF Downloads 76
480 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming
Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute
Abstract:
Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; therefore, carbon accumulation in stable forms not only supports and increases the organic matter content of the soil but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determining the optimum intensity and combinations of liming in order to ensure optimal quantitative and qualitative carbon parameters is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011–2013 period. The effect of liming at different intensities (at a 0.5 rate every 7 years and a 2.0 rate every 3-4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at 590 nm wavelength using glucose standards. SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in water extract at a soil-to-water ratio of 1:5. Spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a negative, statistically significant effect of periodical liming (at 0.5 and 2.0 liming rates) on SOC content in the soil. The content of SOC was 1.45% in the unlimed treatment, while in soil periodically limed at the 2.0 rate every 3–4 years it was approximately 0.18 percentage points lower.
It was revealed that liming significantly decreased the DOC concentration in the soil. The lowest concentration of DOC (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3–4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil. Soil liming resulted in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, indicating the quality of humus, increased to 1.08 compared with 0.81 in unlimed soil. Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominated. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way reflect the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful for guiding climate policy and sustainable soil management.
Keywords: acid soil, carbon sequestration, long-term liming, soil organic carbon
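The E4/E6 ratio used above is simply the quotient of the two absorbance readings defined in the methods (465 nm over 665 nm). A one-line sketch with invented absorbances:

```python
def e4_e6_ratio(abs_465nm, abs_665nm):
    """E4/E6 ratio: absorbance at 465 nm over absorbance at 665 nm.
    Lower values are commonly read as a higher degree of condensation
    of aromatic structures in the humic material."""
    return abs_465nm / abs_665nm

# Hypothetical absorbance readings (not the study's measurements):
ratio = e4_e6_ratio(0.84, 0.21)  # ~4.0
```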
Procedia PDF Downloads 224
479 How to Evaluate Resting and Walking Energy Expenditures of Individuals with Different Body Mass Index
Authors: Zeynep Altinkaya, Ugur Dal, Figen Dag, Dilan D. Koyuncu, Merve Turkegun
Abstract:
Obesity is defined as abnormal fat-tissue accumulation resulting from an imbalance between energy intake and expenditure. Since 50-70% of the daily energy expenditure of sedentary individuals is consumed as resting energy expenditure (REE), REE takes an important place in the evaluation of new methods for obesity treatment. It is also known that walking is a prevalent activity in the prevention of obesity. The primary purpose of this study is to evaluate and compare the resting and walking energy expenditures of individuals with different body mass indices (BMI). In this research, 4 groups were formed according to the BMI of the individuals: underweight (BMI < 18.5 kg/m2), normal (BMI = 18.5-24.9 kg/m2), overweight (BMI = 25-29.9 kg/m2), and obese (BMI ≥ 30 kg/m2). 64 healthy young adults (8 men and 8 women per group, age 18-30 years) with no known gait disabilities were recruited for this study. The body compositions of all participants were measured via the bioelectrical impedance analysis method. The energy expenditure of individuals was measured by the indirect calorimetry method, with inspired and expired gas samples collected breath-by-breath through a special facemask. The preferred walking speed (PWS) of each subject was determined using infrared sensors placed at the 2nd and 12th meters of a 14 m walkway. The REE was measured for 15 min while subjects were lying down, and walking energy expenditure was measured while subjects walked at their PWS on a treadmill. The gross REE was significantly higher in obese subjects compared to underweight and normal subjects (p < 0.0001). When REE was normalized to body weight, it was higher in the underweight and normal groups than in the overweight and obese groups (p < 0.0001). However, when REE was normalized to fat-free mass, it did not differ significantly between groups. The gross walking energy expenditure at the PWS was higher in the obese and overweight groups than in the underweight and normal groups (p < 0.0001).
The regression coefficient between gross walking energy expenditure and body weight was significant in the normal and obese groups (p < 0.05). Body weight accounted for 70.5% of gross walking energy expenditure in the normal group and 57.9% in the obese group. It is known that obese individuals have more metabolically inactive fat tissue compared to the other groups. While excess fat tissue increases total body weight, it does not contribute much to REE. Therefore, REE results normalized to body weight could be misleading. In order to eliminate the fat-mass effect on the REE of obese individuals, REE normalized to fat-free mass should be used to acquire more accurate results. On the other hand, the increase in fat mass raises the energy required while walking to maintain body balance. Thus, gross walking energy expenditure should be taken into consideration when evaluating the energy expenditure of walking.
Keywords: body composition, obesity, resting energy expenditure, walking energy expenditure
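The normalization argument above can be sketched numerically. With invented values for an obese and a normal-weight participant (the study's measurements are not reproduced here), normalizing to body weight versus fat-free mass gives opposite impressions:

```python
def per_kg(ree_kcal_day, kg):
    """REE expressed per kilogram of a chosen mass denominator."""
    return ree_kcal_day / kg

# Hypothetical participants: (gross REE kcal/day, body mass kg, fat-free mass kg)
obese = (1800.0, 100.0, 60.0)
normal = (1500.0, 60.0, 50.0)

# Per total body weight the obese subject looks lower (18 vs 25 kcal/kg/day)...
per_bw = (per_kg(obese[0], obese[1]), per_kg(normal[0], normal[1]))
# ...but per fat-free mass the two are identical (30 vs 30 kcal/kg FFM/day).
per_ffm = (per_kg(obese[0], obese[2]), per_kg(normal[0], normal[2]))
```

This is the study's point in miniature: the body-weight denominator is inflated by metabolically inactive fat mass, while the fat-free-mass denominator compares like with like.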
Procedia PDF Downloads 387
478 The Magnitude and Associated Factors of Immune Hemolytic Anemia among Human Immunodeficiency Virus-Infected Adults Attending the University of Gondar Comprehensive Specialized Hospital, North West Ethiopia, 2021 GC: A Cross-Sectional Study Design
Authors: Samul Sahile Kebede
Abstract:
Background: Immune hemolytic anemia (IHA) commonly affects individuals infected with the human immunodeficiency virus (HIV). Among anemic HIV patients in Africa, the burden of IHA due to autoantibodies ranged from 2.34 to 3.06, while the burden due to drugs was 43.4%. IHA due to autoimmunity is a potentially fatal complication of HIV and accounts for the greatest share of acquired hemolytic anemia. Objective: The main aim of this study was to determine the magnitude and associated factors of immune hemolytic anemia among HIV-infected adults at the University of Gondar Comprehensive Specialized Hospital, northwest Ethiopia, from March to April 2021. Methods: An institution-based cross-sectional study was conducted on 358 HIV-infected adults selected by systematic random sampling at the University of Gondar Comprehensive Specialized Hospital from March to April 2021. Socio-demographic, dietary, and clinical data were collected with a structured, pretested questionnaire. Five ml of venous blood was drawn from each participant and analyzed on a UniCel DxH 800 hematology analyzer; blood film examination and an antihuman globulin test were performed for the diagnosis of immune hemolytic anemia. Data were entered into EpiData version 4.6 and analyzed with STATA version 14. Descriptive statistics were computed, and Firth penalized logistic regression was used to identify predictors. A p-value less than 0.005 was interpreted as significant. Results: The overall prevalence of immune hemolytic anemia was 2.8% (10 of 358 participants). Of these, 5 were male, and 7 were in the 31-to-50-year age group. Among individuals with immune hemolytic anemia, 40% had mild and 60% had moderate anemia. The factors that showed an association were a family history of anemia (AOR 8.30, 95% CI 1.56-44.12), not eating meat (AOR 7.39, 95% CI 1.25-45.0), and a high viral load (AOR 6.94, 95% CI 1.13-42.6).
Conclusion and recommendation: Immune hemolytic anemia is an infrequent condition in human immunodeficiency virus-infected adults, and moderate anemia was common in this population. The prevalence increased with a high viral load, a family history of anemia, and not eating meat. In these patients, early detection and treatment of immune hemolytic anemia are necessary.
Keywords: anemia, hemolytic, immune, autoimmune, HIV/AIDS
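The adjusted odds ratios above come from Firth penalized logistic regression, which is not reproduced here. As a much simpler orientation, a crude (unadjusted) odds ratio from a 2×2 exposure-by-outcome table, with invented counts, looks like this:

```python
def crude_odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Unadjusted odds ratio from a 2x2 exposure-by-outcome table."""
    return (exp_cases * unexp_controls) / (exp_controls * unexp_cases)

# Hypothetical counts: family history of anemia vs. IHA status (not the study data):
or_family_history = crude_odds_ratio(6, 40, 4, 308)  # > 1 suggests higher odds among the exposed
```

With only 10 cases in 358 participants, cell counts this sparse are exactly why the study used Firth's penalization, which reduces the small-sample bias of ordinary logistic regression.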
Procedia PDF Downloads 104
477 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis
Authors: Elcin Timur Cakmak, Ayse Oguzlar
Abstract:
This study presents the results of analyzing, by bigram and trigram methods, the tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed their deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways. The most popular way to do this is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, there have been thousands of tweets about the war on Twitter from many countries of the world. These tweets, accumulated in data sources, were extracted through the Twitter API using various scripts and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is known for its widespread use in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. The tweets obtained from Twitter using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the remaining tweets were included in the analysis stage after being cleaned in the preprocessing stage. In the data analysis part, sentiments are extracted to show what people post about the war on Twitter. Negative messages make up the majority of all the tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found.
Regarding the results, the most frequently used word groups are “he, is”, “I, do”, and “I, am” for bigrams, and “I, do, not”, “I, am, not”, and “I, can, not” for trigrams. In the machine learning phase, the accuracy of classification is measured by the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the highest accuracy and F-measure values were gained by the NB algorithm, and the highest precision and recall values by the CART algorithm. For trigrams, on the other hand, the highest accuracy, precision, and F-measure values were achieved by the CART algorithm, and the highest recall by NB.
Keywords: classification algorithms, machine learning, sentiment analysis, Twitter
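A minimal sketch of the bigram/trigram counting described above, using toy tweets rather than the collected data set:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences (n=2 bigrams, n=3 trigrams)."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tweets = ["i do not support this war", "i am not safe", "i do not know"]  # toy data
tokenized = [t.split() for t in tweets]

bigrams = Counter(g for toks in tokenized for g in ngrams(toks, 2))
trigrams = Counter(g for toks in tokenized for g in ngrams(toks, 3))
# ('i', 'do') and ('do', 'not') each occur twice; so does ('i', 'do', 'not').
```

In the study itself these counts would be computed over the cleaned, preprocessed English tweets; the n-gram frequency tables then feed the CART and NB classifiers as features.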
Procedia PDF Downloads 73
476 A Cognitive Training Program in Learning Disability: A Program Evaluation and Follow-Up Study
Authors: Krisztina Bohacs, Klaudia Markus
Abstract:
To the authors’ best knowledge, studies evaluating cognitive programs are absent, and programs that demonstrate high effect sizes with strong retention results are certainly scarce. The purpose of our study was to investigate the effectiveness of a comprehensive cognitive training program, namely BrainRx. This cognitive rehabilitation program targets and remediates seven core cognitive skills and related systems of sub-skills through repeated engagement in game-like mental procedures delivered one-on-one by a clinician, supplemented by digital training. A large sample of children with learning disability were given pre-test and post-test cognitive assessments. The experimental group completed a twenty-week cognitive training program in a BrainRx center. A matched control group received another twenty-week intervention with Feuerstein’s Instrumental Enrichment programs. A second matched control group did not receive training. For the pre- and post-tests, we used a general intelligence test to assess IQ and a computer-based test battery for assessing cognition across the lifespan. Multiple regression analyses indicated that the experimental BrainRx treatment group had statistically significantly higher outcomes in attention, working memory, processing speed, logic and reasoning, auditory processing, visual processing, and long-term memory compared to the non-treatment control group, with very large effect sizes. With the exception of logic and reasoning, the BrainRx treatment group realized significantly greater gains in six of the seven cognitive measures compared to the Feuerstein control group. Our one-year retention measures showed that all the cognitive training gains were above ninety percent, with the greatest retention in visual processing, auditory processing, and logic and reasoning. The BrainRx program may be an effective tool to establish long-term cognitive changes in students with learning disabilities.
Recommendations are made for treatment centers and special education institutions on the cognitive training of students with special needs. The importance of our study is that a targeted, systematic, progressively loaded, and intensive brain-training approach may significantly change learning disabilities.
Keywords: cognitive rehabilitation training, cognitive skills, learning disability, permanent structural cognitive changes
Procedia PDF Downloads 200
475 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery
Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson
Abstract:
Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information to make regulations that resolve how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. Attempts are made in this study to determine flower cover using high resolution drone imagery to help assess the floral resource availability to pollinators in high elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. 
An accuracy assessment was performed on the classification, yielding an 89% overall accuracy and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for assessing floral cover over large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will result in a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in the agricultural setting, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
Keywords: honeybee, flower, pollinator, remote sensing
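For readers unfamiliar with the accuracy-assessment metrics reported above, the following sketch computes overall accuracy and Cohen's Kappa from a confusion matrix. The matrix values are invented for illustration, not the study's results.

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Expected agreement under chance, from row and column marginals.
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Illustrative confusion matrix for four cover classes
# (blue, white, yellow, background); the counts are made up.
conf = [
    [50, 2, 1, 2],
    [3, 60, 1, 1],
    [1, 2, 45, 2],
    [2, 1, 2, 75],
]
acc, kappa = accuracy_and_kappa(conf)
print(f"overall accuracy={acc:.3f}, kappa={kappa:.3f}")
```

Kappa corrects the raw accuracy for agreement expected by chance, which is why it is routinely reported alongside overall accuracy in remote sensing classification.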
Procedia PDF Downloads 140
474 The Mediating Role of Positive Psychological Capital in the Relationship between Self-Leadership and Career Maturity among Korean University Students
Authors: Lihyo Sung
Abstract:
Background: Children and teens in Korea experience extreme levels of academic stress. To perform better on the college entrance exam and gain admission to Korea’s most prestigious universities, they devote a significant portion of their early lives to studying. Because of this excessive preparation for entrance exams, students have become accustomed to passive and involuntary engagement. Any student starting university, however, faces new challenges that require more active involvement and self-regulated practice. To tackle this issue, the study focuses on investigating the mediating effects of positive psychological capital on the relationship between self-leadership and career maturity among Korean university students. Objectives and Hypotheses: The long-term goal of this study is to offer insights that promote the use of positive psychological interventions in the development and adaptation of career maturity. The current objective is to assess the role of positive psychological capital as a mediator between self-leadership and career maturity among Korean university students. Based on previous research, the hypotheses are: (a) self-leadership will be positively associated with indices of career maturity, and (b) positive psychological capital will partially or fully mediate the relationship between self-leadership and career maturity. Sample Characteristics and Sample Size: Participants in the current study were undergraduate students enrolled in various courses at 5 large universities in Korea; a total of 181 students participated. Methodology: A quantitative, cross-sectional research design was adopted to test the proposed hypotheses, with a self-administered questionnaire used to collect data on indices of positive psychological capital, self-leadership, and career maturity.
The data were analyzed with SPSS for Windows version 22.0 by means of descriptive statistics, Cronbach's alpha, Pearson correlation, multiple regression, and path analysis. Results: Findings showed that positive psychological capital fully mediated the relationship between self-leadership and career maturity. Self-leadership significantly impacted positive psychological capital and career maturity, respectively. Scientific Contribution: The results of the current study provide useful insights into the role of psychological strengths such as positive psychological capital in improving self-leadership and career maturity. Institutions can help build positive psychological capital by creating positive experiences for undergraduate students, such as opportunities for coaching and mentoring.
Keywords: career maturity, mediating role, positive psychological capital, self-leadership
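Mediation claims like the one above are commonly tested by bootstrapping the indirect effect. The sketch below illustrates that idea in deliberately simplified form (single-predictor OLS slopes, no covariate control, unlike full path analysis); all variable names and data are illustrative, not taken from the study.

```python
import random
import statistics

def ols_slope(x, y):
    """Simple OLS slope of y on x (single predictor)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b of x on y via
    mediator m (a = slope of m on x, b = slope of y on m). A simplified
    sketch of the idea behind tools like the SPSS PROCESS macro."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        estimates.append(ols_slope(xs, ms) * ols_slope(ms, ys))
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

# Illustrative data: m tracks x, y tracks m (all values invented).
x = list(range(30))
m = [2 * xi + ((-1) ** xi) * 0.5 for xi in x]
y = [3 * mi + ((-1) ** xi) * 0.7 for xi, mi in zip(x, m)]
lo, hi = bootstrap_indirect(x, m, y, n_boot=500, seed=1)
print(f"95% bootstrap CI for indirect effect: [{lo:.2f}, {hi:.2f}]")
```

If the bootstrap confidence interval excludes zero, the indirect (mediated) effect is judged significant, which is the criterion behind "full mediation" statements such as the one reported here.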
Procedia PDF Downloads 124
473 Shoreline Variation with Construction of a Pair of Training Walls, Ponnani Inlet, Kerala, India
Authors: Jhoga Parth, T. Nasar, K. V. Anand
Abstract:
An idealized definition of the shoreline is that it is the zone of coincidence of three spheres: the atmosphere, the lithosphere, and the hydrosphere. Despite its apparent simplicity, this definition is in practice a challenge to apply. In reality, the shoreline location deviates continually through time because of dynamic factors such as wave characteristics, currents, coastal orientation, and bathymetry, which make the shoreline volatile. This necessitates monitoring the shoreline on a temporal basis. Even if the shoreline's behaviour is understood at one coastal stretch, the trend need not be the same at another location, even along the same sea front. Shoreline change is hence a local phenomenon and has to be studied intensively, considering as many of the factors involved as possible. Erosion and accretion of sediment are aspects of a shoreline's behaviour that need to be quantified, compared with preceding variations, and understood before implementing any coastal project. In recent years, the advent of the Global Positioning System (GPS) and Geographic Information Systems (GIS) has provided emerging tools to quantify intra- and inter-annual rates of accretion and deposition, requiring less time and manpower than conventional methods. Remote sensing data, on the other hand, pave the way to acquiring historical data sets at higher resolution where field data are unavailable. Short-term and long-term shoreline change can be accurately tracked and monitored using a GIS-based tool, the Digital Shoreline Analysis System (DSAS), developed by the United States Geological Survey (USGS). In the present study, using DSAS, the End Point Rate (EPR) is calculated to analyze intra-annual changes, and the Linear Regression Rate (LRR) is adopted to study inter-annual changes of the shoreline. The shoreline changes are quantified for the scenario during the construction of breakwaters at the Ponnani river inlet along the Kerala coast, India.
Ponnani, a major fishing and landing center, is located at 10°47’12.81”N, 75°54’38.62”E in the Malappuram district of Kerala, India. The rates of erosion and accretion are explored using satellite and field data. The full paper presents the rate of shoreline change, and its analysis provides an understanding of the behavior of the inlet at the study area during the construction of the training walls.
Keywords: DSAS, end point rate, field measurements, geo-informatics, shoreline variation
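The two DSAS rate statistics used in the study above can be illustrated with a small sketch: EPR is the net shoreline movement divided by the elapsed time between the oldest and most recent shorelines, and LRR is the least-squares slope of position versus time across all shorelines. The transect dates and positions below are invented for illustration.

```python
from datetime import date

def end_point_rate(d0, pos0, d1, pos1):
    """End Point Rate: net shoreline movement divided by elapsed years."""
    years = (d1 - d0).days / 365.25
    return (pos1 - pos0) / years

def linear_regression_rate(dates, positions):
    """Linear Regression Rate: OLS slope of shoreline position (m) vs time (yr)."""
    t = [(d - dates[0]).days / 365.25 for d in dates]
    mt = sum(t) / len(t)
    mp = sum(positions) / len(positions)
    num = sum((ti - mt) * (pi - mp) for ti, pi in zip(t, positions))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

# Illustrative transect positions (m from a baseline) at four survey dates.
dates = [date(2013, 1, 1), date(2014, 1, 1), date(2015, 1, 1), date(2016, 1, 1)]
positions = [100.0, 97.5, 95.5, 93.0]  # steadily eroding transect

epr = end_point_rate(dates[0], positions[0], dates[-1], positions[-1])
lrr = linear_regression_rate(dates, positions)
print(f"EPR = {epr:.2f} m/yr, LRR = {lrr:.2f} m/yr")
```

Negative rates indicate erosion. EPR uses only the two end shorelines, whereas LRR uses every survey, which is why LRR is preferred for inter-annual trends.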
Procedia PDF Downloads 254
472 Navigating AI in Higher Education: Exploring Graduate Students’ Perspectives on Teacher-Provided AI Guidelines
Authors: Mamunur Rashid, Jialin Yan
Abstract:
Recent years have witnessed a rapid evolution and integration of artificial intelligence (AI) across various fields, prominently influencing the education industry. Acknowledging this transformative wave, AI tools such as ChatGPT and Grammarly have undeniably introduced new perspectives and skills, enriching the educational experiences of higher education students. The prevalence of AI in higher education has also attracted growing research attention across multiple dimensions, and university departments, offices, and professors have designed and released policies and guidelines on using AI effectively. Against this background, this study explores and analyzes graduate students' perspectives on the AI guidelines set by teachers. A mixed-methods design is employed, using in-depth interviews and focus groups to collect students' perspectives. Relevant materials, such as syllabi and course instructions, are also examined through documentary analysis to deepen understanding, and surveys are used to collect data on students' backgrounds. The integration of interviews and surveys provides a comprehensive array of student perspectives across academic disciplines. The study is anchored in the theoretical framework of self-determination theory (SDT), which explains students' perspectives on AI guidelines through three core needs: autonomy, competence, and relatedness. This framework is instrumental in understanding how AI guidelines influence students' intrinsic motivation and sense of empowerment in their learning environments. The qualitative analysis reveals a sense of confusion and uncertainty among students regarding the appropriate application and ethical considerations of AI tools, indicating potential challenges in meeting their needs for competence and autonomy.
The quantitative data further elucidate these findings, highlighting a significant communication gap between students and educators in the formulation and implementation of AI guidelines. The critical findings of this study concern two aspects. First, the majority of graduate students are uncertain and confused about the AI guidelines given by teachers. Second, the design and effectiveness of course materials, such as syllabi and instructions, also need to adapt to AI policies: some of the existing guidelines provided by teachers lack consideration of students' perspectives, leading to a misalignment with students' needs for autonomy, competence, and relatedness. More emphasis and effort need to be dedicated to both teacher and student training on AI policies and ethical considerations. To conclude, this study explores and reflects on graduate students' perspectives on teacher-provided AI guidelines, calling for additional training and strategies to improve how these guidelines are disseminated for effective integration and adoption. Although AI guidelines provided by teachers may be helpful and offer new insights for students, educational institutions should take a more anchoring role in fostering a motivating, empowering, and student-centered learning environment. The study also provides relevant recommendations, including guidance for students on the ethical use of AI and AI policy training for teachers in higher education.
Keywords: higher education policy, graduate students’ perspectives, higher education teacher, AI guidelines, AI in education
Procedia PDF Downloads 73
471 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (a second culturing stage of another 24 h), in order to determine their susceptibility. Other approaches, including genotyping methods, the E-test, and automated systems, have also been developed for testing antimicrobial susceptibility; most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in many studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combined with new bioinformatics analyses, IR spectroscopy becomes a powerful technique that enables the detection of structural changes associated with resistance.
The main goal of this study is to evaluate the potential of FTIR microscopy, in tandem with machine learning algorithms, for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E.coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E.coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI
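The classification step described above (assigning a spectrum to "sensitive" or "resistant" via multivariate analysis) can be illustrated with a deliberately minimal stand-in: a nearest-centroid classifier over synthetic "spectra". The study's actual pipeline uses richer multivariate models; everything below, including the absorbance values, is invented for illustration.

```python
def nearest_centroid_classify(train, test_spectrum):
    """Assign a spectrum to the class whose mean spectrum is closest
    (squared Euclidean distance). A minimal stand-in for the multivariate
    classification step; real pipelines use richer models."""
    best_label, best_dist = None, float("inf")
    for label, spectra in train.items():
        n = len(spectra)
        centroid = [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, test_spectrum))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Tiny synthetic "spectra" (absorbance at 4 wavenumbers); values are made up.
train = {
    "sensitive": [[0.10, 0.30, 0.20, 0.05], [0.12, 0.28, 0.22, 0.06]],
    "resistant": [[0.25, 0.10, 0.35, 0.15], [0.27, 0.12, 0.33, 0.14]],
}
print(nearest_centroid_classify(train, [0.11, 0.29, 0.21, 0.05]))
```

The reported >90% success rate would correspond to evaluating such a classifier on held-out, reference-labelled spectra rather than on its own training data.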
Procedia PDF Downloads 170
470 Effect of Women's Autonomy on Unmet Need for Contraception and Family Size in India
Authors: Anshita Sharma
Abstract:
India was one of the first countries to initiate family planning with the intention of controlling its growing population by reducing fertility; to this end, India introduced the National Family Planning Programme in 1952. The level of unmet need in India shows a declining trend with the increasing effectiveness of family planning services: in NFHS-1, the unmet need for limiting, spacing, and in total was 46 percent, 14 percent, and 9 percent, respectively; in NFHS-2, the demand for spacing had reduced to 8 percent, the demand for limiting to 8 percent, and total unmet need was 16 percent. The total unmet need further reduced to 13 percent in NFHS-3 for all currently married women, with the demand for limiting and spacing at 7 percent and 6 percent, respectively. Despite this progress, a sizeable group of women remain deprived of the means to prevent unintended and unwanted pregnancies. The present paper examines the socio-cultural, economic, and demographic correlates of unmet need for contraception in India. It also examines the effect of women’s autonomy and unmet need for contraception on family size among different socio-economic groups of the population. It uses data from the National Family Health Survey-3, carried out in 2005-06, and employs bivariate and multivariate techniques for analysis. Multiple regression analysis was performed to assess the strength and direction of the relationships among various socio-economic and demographic factors. The results reveal that women with higher levels of education and economic status have a low level of unmet need for family planning. Women living in non-nuclear families have high unmet need for spacing, women living in nuclear families have high unmet need for limiting, and the family size of women in nuclear families is slightly higher.
In India, the level of autonomy varies across the life course; women of higher age usually enjoy greater autonomy than junior female members of the family. The findings show that women with higher autonomy have larger families, while women with low autonomy have smaller families. Unmet need for family planning decreases with women’s increasing exposure to mass media. Demographic factors such as the experience of child loss are directly related to family size: women who have experienced higher child loss have low unmet need for spacing and limiting. Thus, it is established that women’s autonomy plays a substantial role in fulfilling the demand for contraception for limiting and spacing, which in turn affects family size.
Keywords: family size, socio-economic correlates, unmet need for limiting, unmet need for spacing, women's autonomy
Procedia PDF Downloads 266
469 Fracture Toughness Characterizations of Single Edge Notch (SENB) Testing Using DIC System
Authors: Amr Mohamadien, Ali Imanpour, Sylvester Agbo, Nader Yoosef-Ghodsi, Samer Adeeb
Abstract:
The fracture toughness resistance curve (e.g., the J-R curve and the crack tip opening displacement (CTOD) or δ-R curve) is important in facilitating strain-based design and integrity assessment of oil and gas pipelines. This paper aims to present laboratory experimental data characterizing the fracture behavior of pipeline steel. The influential parameters associated with the fracture of API 5L X52 pipeline steel, including different initial crack sizes, were experimentally investigated using single edge notch bend (SENB) specimens. A total of 9 small-scale specimens with different crack-length-to-specimen-depth ratios were prepared and tested. ASTM E1820 and BS 7448 provide testing procedures for constructing the fracture resistance curve (Load-CTOD, CTOD-R, or J-R) from test results. However, these procedures are limited by standard specimen dimensions, displacement gauges, and calibration curves. To overcome these limitations, this paper presents the use of small-scale specimens and a 3D digital image correlation (DIC) system to extract the parameters required for fracture toughness estimation. Fracture resistance curve parameters in terms of crack mouth opening displacement (CMOD), crack tip opening displacement (CTOD), and crack growth length (∆a) were derived from the test results using the DIC system, and an improved regression-fitted resistance function (CTOD vs. crack growth, or J-integral vs. crack growth) that accounts for a variety of initial crack sizes was constructed and presented. The obtained results were compared with the available results of classical physical measurement techniques, and acceptable agreement was observed. Moreover, a case study was implemented to estimate the maximum strain value that initiates stable crack growth, which may be of interest for developing more accurate strain-based damage models.
The results of laboratory testing in this study offer a valuable database to develop and validate damage models that are able to predict crack propagation in pipeline steels, accounting for the influential parameters associated with fracture toughness.
Keywords: fracture toughness, crack propagation in pipeline steels, CTOD-R, strain-based damage model
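Resistance curves such as the J-R curve are conventionally fitted with a power-law function, J = C1(Δa)^C2, the form used in ASTM E1820; the paper's "improved regression fitting" builds on this kind of fit. The sketch below performs the basic power-law fit by linear regression in log-log space. The (Δa, J) pairs are invented, not the study's test data.

```python
import math

def fit_power_law(da, j):
    """Fit J = C1 * (da)^C2 by ordinary least squares in log-log space,
    the functional form used for J-R curves in ASTM E1820."""
    lx = [math.log(v) for v in da]
    ly = [math.log(v) for v in j]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    c2 = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum(
        (x - mx) ** 2 for x in lx
    )
    c1 = math.exp(my - c2 * mx)
    return c1, c2

# Illustrative (da [mm], J [kJ/m^2]) pairs; values are made up, not test data.
da = [0.2, 0.5, 1.0, 1.5, 2.0]
j = [180.0, 310.0, 450.0, 560.0, 650.0]
c1, c2 = fit_power_law(da, j)
print(f"J ~= {c1:.1f} * (da)^{c2:.3f}")
```

Taking logs turns the power law into a straight line, so C2 is the slope and C1 the back-transformed intercept; the same idea applies to a CTOD-R curve.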
Procedia PDF Downloads 62
468 Association of Vulnerability and Behavioural Outcomes of FSWs Linked with TI Prevention HIV Program: An Evidence from Cross-Sectional Behavioural Study in Thane District of Maharashtra
Authors: Jayanta Bora, Sukhvinder Kaur, Ashok Agarwal, Sangeeta Kaul
Abstract:
Background: It is important for targeted interventions (TIs) to consider vulnerabilities of female sex workers (FSWs), such as poverty, work-related mobility, and literacy, for effective human immunodeficiency virus (HIV) prevention. This paper examines the association between vulnerability and behavioural outcomes among FSWs in Thane district, Maharashtra, under the USAID PHFI-PIPPSE project. Methods: Data were drawn from the Behavioural Tracking Survey, a cross-sectional behavioural study conducted in 2015 with 503 FSWs randomly selected from 12 TI-NGOs that were functioning and providing services to FSWs in Thane district of Maharashtra prior to April 2014. We created a “vulnerability index”, a composite index of literacy, factors of dependence (alternative livelihood options, current debt), and aspects of sex work (mobility and duration in sex work), as the dependent variable. The key independent measures were programme exposure to the intervention, service uptake, self-confidence, and self-identity. Bivariate and multivariate logistic regressions were used to examine the study objectives. Results: A higher proportion of FSWs aged 18–25 years and of brothel/street/home/lodge-based FSWs were categorized as highly vulnerable to HIV risk compared to bar-based sex workers (74.1% versus 59.8%, P=0.002). Regression analysis highlighted lower odds of vulnerability among FSWs who were aware of services and visited an NGO clinic for medical check-ups and STI counselling [AOR = 0.092, 95% CI 0.018-0.460; P=0.004]. Lower odds of vulnerability were likewise observed among FSWs who were confident in supporting a fellow sex worker in crisis [AOR = 0.601, 95% CI 0.476-0.758; P<0.001] and who were able to turn away clients who refused to use a condom during sex [AOR = 0.524, 95% CI 0.342-0.802; P=0.003]. Conclusion: The results highlight that FSWs associated with TIs and receiving services are less vulnerable and highly empowered.
As a result of behaviour change communication and other services provided by TIs, FSWs were able to successfully negotiate condom use with their clients and to maintain solidarity in crisis situations for fellow FSWs. It is therefore evident from this study that TI prevention programmes can considerably transform lives and may open a window of opportunity for spreading information and awareness about HIV risk.
Keywords: female sex worker, HIV prevention, HIV service uptake, vulnerability
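The adjusted odds ratios (AORs) and confidence intervals reported above come from multivariate logistic regression. The underlying idea can be illustrated with the simplest case, an unadjusted odds ratio and Woolf confidence interval from a 2x2 table; the counts below are invented for illustration, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf-method 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative (made-up) counts: high vulnerability by clinic-service uptake.
or_, lo, hi = odds_ratio_ci(30, 120, 90, 60)
print(f"OR = {or_:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An OR below 1 with a CI that excludes 1 indicates a protective association, which is how results such as AOR = 0.092 [0.018-0.460] are read; the study's AORs additionally adjust for covariates in the regression model.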
Procedia PDF Downloads 253
467 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification, and the statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not an allele of a potential contributor; it is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination, significantly enhancing the use of continuous peak height information and resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used to construct Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
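The Chinese restaurant process mentioned above can be simulated in a few lines, which makes its key property concrete: the number of mixture components is not fixed in advance but grows with the data. The sketch below draws one random partition; the parameter values are illustrative.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process.
    Customer i joins existing table k with probability n_k / (i + alpha),
    or opens a new table with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    tables = []       # tables[k] = number of customers seated at table k
    assignments = []  # assignments[i] = table index of customer i
    for i in range(n):
        # Unnormalized seating weights: existing tables, then a new table.
        weights = tables + [alpha]
        r = rng.random() * (i + alpha)  # total weight is exactly i + alpha
        cum = 0.0
        for k, w in enumerate(weights):
            cum += w
            if r < cum:
                break
        if k == len(tables):
            tables.append(1)  # open a new table (new mixture component)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = crp_partition(100, alpha=1.0)
print(len(tables), "components for 100 observations")
```

In an infinite mixture of regressions, each "table" would carry its own regression parameters, and the CRP prior lets the data decide how many such components are occupied.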
Procedia PDF Downloads 328
466 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized: the crew is provided with an optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer sufficiently representative of the actual aircraft performance. It is important to monitor the trend of performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system’s predictions. The basis of this research lies in a new ability to continuously update an APM during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purposes of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging: using tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error in the FCOM prediction of engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
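The adaptive lookup-table idea described above amounts to repeatedly blending in-flight measurements into the table entry for the current flight condition. The sketch below shows one minimal form of such an update; the table keys, values, and gain are invented for illustration, and the study's actual table structure and correction algorithm are more elaborate.

```python
def update_table(table, condition, measured, gain=0.2):
    """Blend a new in-flight measurement into the lookup-table entry for
    the current flight condition (e.g., an altitude band and Mach band).
    A small gain filters sensor noise while the table converges."""
    predicted = table[condition]
    error = measured - predicted
    table[condition] = predicted + gain * error
    return error

# Illustrative fuel-flow table keyed by (altitude band, Mach band);
# the initial value stands in for FCOM data, and all numbers are made up.
fuel_flow = {("FL350", "M0.80"): 1200.0}

# Repeated measurements at the same condition pull the entry toward ~1150.
for measurement in [1148.0, 1152.0, 1149.0, 1151.0, 1150.0]:
    update_table(fuel_flow, ("FL350", "M0.80"), measurement)

print(round(fuel_flow[("FL350", "M0.80")], 1))
```

Each update shrinks the prediction error for that flight condition, which mirrors the reported behaviour of the method: a few iterations are enough to pull the initial FCOM-based prediction close to the measured values.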
Procedia PDF Downloads 263
465 A Study on the Effect of the Work-Family Conflict on Work Engagement: A Mediated Moderation Model of Emotional Exhaustion and Positive Psychology Capital
Authors: Sungeun Hyun, Sooin Lee, Gyewan Moon
Abstract:
Work-Family Conflict (WFC) has been an active research area for the past decades. Because WFC harms individuals and organizations, it is ultimately expected to impose long-run costs on the company. WFC research has mainly focused on its effects on organizational effectiveness and job attitudes such as Job Satisfaction, Organizational Commitment, and Turnover Intention. This study differs from previous research in its choice of consequence variable: we selected the positive job attitude 'Work Engagement' as a consequence of WFC. The primary purpose of this research is to identify the negative effects of Work-Family Conflict, starting from the recognition that research on the direct influence of WFC on Work Engagement is lacking. Based on Conservation of Resources (COR) theory and the Job Demands-Resources (JD-R) model, an empirical model examining the negative effects of WFC, with Emotional Exhaustion as the link between WFC and Work Engagement, was proposed and validated. It was also analyzed how much Positive Psychological Capital may buffer the negative effects of WFC within this relationship, and a mediated moderation model was verified in which Positive Psychological Capital controls the indirect effect of WFC on Work Engagement through Emotional Exhaustion. Data were collected using questionnaires distributed to 500 employees engaged in manufacturing, services, finance, IT, education services, and other sectors, of which 389 responses were used in the statistical analysis. The data were analyzed with SPSS 21.0, the SPSS PROCESS macro, and AMOS 21.0. Hierarchical regression analysis and the bootstrapping method were used for hypothesis testing. Results showed that all hypotheses were supported. First, WFC had a negative effect on Work Engagement.
Specifically, work interference with family (WIF) had a more negative effect than family interference with work (FIW). Second, Emotional Exhaustion was found to mediate the relationship between WFC and Work Engagement. Third, Positive Psychological Capital was found to moderate the relationship between WFC and Emotional Exhaustion. Fourth, in the integrated test of mediated moderation, Positive Psychological Capital was shown to buffer the relationships among WFC, Emotional Exhaustion, and Work Engagement. Across the verification of all hypotheses, WIF again showed more negative effects than FIW. Finally, we discuss the theoretical and practical implications for research and management of WFC, and propose limitations and directions for future research.
Keywords: emotional exhaustion, positive psychological capital, work engagement, work-family conflict
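The bootstrapped test of the indirect (mediated) effect described above is the logic behind the SPSS PROCESS macro. A minimal sketch of that procedure on simulated data follows; the coefficients, sample, and variable names are invented for illustration, and only the percentile-bootstrap procedure mirrors the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 389  # sample size mirrors the study; the data themselves are simulated

# Simulated mediation chain: WFC -> Emotional Exhaustion -> Work Engagement
wfc = rng.normal(size=n)
exhaustion = 0.5 * wfc + rng.normal(size=n)                      # a-path
engagement = -0.4 * exhaustion - 0.2 * wfc + rng.normal(size=n)  # b- and c'-paths

def indirect(idx):
    """Indirect effect a*b estimated by two OLS regressions on a resample."""
    Xa = np.column_stack([np.ones(len(idx)), wfc[idx]])
    a = np.linalg.lstsq(Xa, exhaustion[idx], rcond=None)[0][1]
    Xb = np.column_stack([np.ones(len(idx)), exhaustion[idx], wfc[idx]])
    b = np.linalg.lstsq(Xb, engagement[idx], rcond=None)[0][1]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boots = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(lo < 0 and hi < 0)  # a CI excluding zero indicates significant mediation
```

The same resampling scheme extends to mediated moderation by adding the interaction term (moderator × predictor) to the a-path regression and bootstrapping the index of moderated mediation.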
Procedia PDF Downloads 221
464 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based and take a long time to identify the resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Three pieces of information extracted from the WGS data were considered for model training: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from these: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered for model training: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the Gradient Boosting algorithm, their outputs were combined into a single dataset called the F1F2 ensemble dataset, and models were then trained on it with the five algorithms. As the experiments show, the ensemble-approach model trained with Gradient Boosting outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, i.e., the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
Keywords: machine learning, MTB, WGS, drug-resistant TB
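The two-stage ensemble described above can be sketched roughly as follows, using synthetic data in place of the WGS-derived F1/F2 feature sets (the feature split, model choices, and sizes are stand-ins, not the study's pipeline). Note one caveat: feeding stage-1 predictions made on the training data into stage 2, as done here for brevity, risks leakage; out-of-fold predictions would be preferable in practice.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for isolates: 40 features split into "F1" and "F2" halves,
# binary label standing in for the resistance phenotype.
X, y = make_classification(n_samples=400, n_features=40, random_state=0)
f1, f2 = X[:, :20], X[:, 20:]
f1_tr, f1_te, f2_tr, f2_te, y_tr, y_te = train_test_split(
    f1, f2, y, test_size=0.25, random_state=0)

# Stage 1: one gradient-boosting model per feature set
gb1 = GradientBoostingClassifier(random_state=0).fit(f1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(f2_tr, y_tr)

# Stage 2: stack the predicted probabilities into the "F1F2 ensemble" dataset
# and train a final classifier (here RF, echoing the RF + Gradient Boosting pairing).
stack_tr = np.column_stack([gb1.predict_proba(f1_tr)[:, 1],
                            gb2.predict_proba(f2_tr)[:, 1]])
stack_te = np.column_stack([gb1.predict_proba(f1_te)[:, 1],
                            gb2.predict_proba(f2_te)[:, 1]])
final = RandomForestClassifier(random_state=0).fit(stack_tr, y_tr)
print(final.score(stack_te, y_te) > 0.6)
```

The stacked dataset has one column per stage-1 model, so the final classifier learns how to weigh the evidence contributed by each feature set.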
Procedia PDF Downloads 48
463 Personality Composition in Senior Management Teams: The Importance of Homogeneity in Dynamic Managerial Capabilities
Authors: Shelley Harrington
Abstract:
As a result of increasingly dynamic business environments, the creation and fostering of dynamic capabilities [those capabilities that enable sustained competitive success despite dynamism, through the awareness and reconfiguration of internal and external competencies], supported by organisational learning [itself a dynamic capability], has gained prevalent momentum in the research arena. Presenting findings funded by the Economic and Social Research Council, this paper investigates the extent to which Senior Management Team (SMT) personality (at the trait and facet level) is associated with the creation of dynamic managerial capabilities at the team level and with effective organisational learning/knowledge sharing within the firm. In doing so, this research highlights the importance of micro-foundations in organisational psychology and specifically in dynamic capabilities, a field which to date has largely ignored the importance of psychology in understanding these necessary capabilities. Using a direct measure of personality (NEO PI-3) at the trait and facet level across 32 high-technology and finance firms in the UK, their CEOs (N=32), and their complete SMTs [N=212], a new measure of dynamic managerial capabilities at the team level was created and statistically validated for use within the work. A quantitative methodology was employed, with regression and gap analysis used to establish the empirical grounds for positioning personality as a micro-foundation of dynamic capabilities. The results of this study show that personality homogeneity within the SMT was required to strengthen the dynamic managerial capabilities of sensing, seizing, and transforming, which in turn was required to reflect strong organisational learning at middle management level [N=533].
In particular, it was found that the greater the difference [t-score gap] between the personality profile of a Chief Executive Officer (CEO) and that of their complete, collective SMT, the lower the self-reported dynamic managerial capabilities. For example, the larger the difference between a CEO's level of dutifulness, a facet contributing to the definition of conscientiousness, and their SMT's level of dutifulness, the lower the reported level of transforming, a capability fundamental to strategic change in a dynamic business environment. This in turn directly questions recent trends, particularly in upper-echelons research, highlighting the need for heterogeneity within teams. In doing so, it successfully positions personality as a micro-foundation of dynamic capabilities, thus contributing to recent discussions within the strategic management field calling for the need to empirically explore dynamic capabilities at such a level.
Keywords: dynamic managerial capabilities, senior management teams, personality, dynamism
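The gap analysis described above pairs each CEO's t-scored facet profile with the SMT's mean profile and relates the absolute gap to the capability ratings. A toy sketch of that firm-level regression follows; every number here is simulated (including the direction of the effect, which is seeded to reproduce the reported pattern), and only the gap-then-regress logic mirrors the method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms = 32  # mirrors the study's firm count; the scores are simulated

# Simulated t-scored dutifulness for each CEO and the mean of their SMT
ceo_dutifulness = rng.normal(50, 10, n_firms)
smt_dutifulness = rng.normal(50, 10, n_firms)
gap = np.abs(ceo_dutifulness - smt_dutifulness)   # larger gap = less homogeneity

# Simulate the reported pattern: larger gap -> lower "transforming" capability
transforming = 5.0 - 0.05 * gap + rng.normal(0, 0.3, n_firms)

# Firm-level OLS of capability on gap; a negative slope reflects the
# homogeneity finding (capability falls as the CEO-SMT gap widens).
X = np.column_stack([np.ones(n_firms), gap])
slope = np.linalg.lstsq(X, transforming, rcond=None)[0][1]
print(slope < 0)
```

In the actual study this would be repeated per facet and per capability (sensing, seizing, transforming), with the personality scores coming from the NEO PI-3 rather than simulation.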
Procedia PDF Downloads 268
462 Utilization Of Medical Plants Tetrastigma glabratum (Blume) Planch from Mount Prau in the Blumah, Central Java
Authors: A. Lianah, B. Peter Sopade, C. Krisantini
Abstract:
Walikadep (Tetrastigma glabratum (Blume) Planch) is a traditional herb that has been used by the people of Blumah village; it is believed to have a stimulant effect and to cure many illnesses. Our survey demonstrated that the people of Blumah village have exploited walikadep from the Protected Forest of Mount Prau: more than 10% of the 448 households in Blumah village have used walikadep as a traditional herb, or jamu. The part of the walikadep plant used is the liquid extract of the stem. The population of walikadep is becoming scarce, and the plant is now rarely found. The objectives of this study are to examine the stimulant effect of walikadep, to measure the growth and exploitation rate of walikadep, and to find ways to propagate the plant effectively, as well as to identify the environmental impact, through field experiments and an explorative survey. The stimulant effect was tested using open-field and hole-board tests. Data were collected through field observation and experiment and analysed using laboratory tests and ANOVA. The rate of exploitation and plant growth were measured using regression analysis; the comparison of plant growth in situ and ex situ used descriptive analysis. The environmental impact was measured through vegetation analysis of population structure using the Shannon-Wiener method. The study revealed that the walikadep exudates did not have a stimulant effect. Exploitation of walikadep and the long time required to reach harvestable size have resulted in the scarcity of the plant in its natural habitat. Plant growth was faster in situ than ex situ, and the fastest growth was obtained from middle-part cuttings treated with vermicompost. The biodiversity index after exploitation was higher than before exploitation, possibly due to the toxic and allelopathic (phenolic) effects of the plant. Based on these findings, further research is needed to examine the toxic effects of the leaf and stem extracts of walikadep and their allelopathic effects.
We recommend that the people of Blumah village stop using walikadep as a stimulant. The local people, the village government at the regional and central levels, and Perhutani should make an integrated effort to conserve walikadep through the Pengamanan Terpadu Konservasi Walikadep Lestari (PTKWL) program, so that the population of this plant in its natural habitat can be maintained.
Keywords: utilization, medicinal plants, traditional, Tetrastigma glabratum
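The biodiversity comparison above rests on the Shannon-Wiener index, H' = -Σ p_i ln p_i, computed from species abundances before and after exploitation. A short sketch with hypothetical species counts (the counts are invented; only the index is standard):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical abundances of five species in the plot, before and after
# exploitation; the more even "after" community yields a higher index,
# matching the direction of the study's finding.
before = [50, 30, 10, 5, 5]
after = [30, 25, 20, 15, 10]
print(round(shannon_wiener(before), 3), round(shannon_wiener(after), 3))
# → 1.238 1.544
```

Note that a higher H' after exploitation indicates a more even community, which is consistent with the authors' suggestion that removing an allelopathic dominant can release suppressed species.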
Procedia PDF Downloads 280
461 Determination Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis
Authors: Bangkit Adhi Nugraha, Bambang Suripto
Abstract:
In September 2016, Bank Indonesia (BI) released regulation no. 18/18/PBI/2016, which permits bank clients to use the FX option call spread on USD/IDR. Basically, this product is a combination in which the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium), protecting against currency depreciation while capping the potential upside, at a cheap premium cost. BI classifies this product as a structured product: a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increasing demand from Indonesian firms for FX hedging through derivatives to protect their foreign-currency assets or liabilities against market risk. The share of hedging products on the Indonesian FX market increased from 35% in 2015 to 40% in 2016, with the majority in swap products (FX forward, FX swap, cross-currency swap). A swap is priced from the interest-rate differential of the two currencies; the cost of a swap is about 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive to hedging buyers. Because the cost of a call spread (around 1.5-3%) is cheaper than a swap, most Indonesian firms use NDF FX call spreads on USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to perform dynamic hedging. That means that if the call spread buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, then the buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1').
This could double the premium cost of the call spread, or worse, defeating the hedging buyer's purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help hedging buyers find the optimum strike prices and avoid expensive multiple premium costs, we examined ten years (2005-2015) of historical USD/IDR volatility data and compared them with the price movement of the USD/IDR call spread computed with the Garman–Kohlhagen model (the standard formula for FX option pricing). We used statistical tools to analyse data correlation, understand the nature of the call spread price movement over the ten years, and determine the factors affecting price movement. We selected ranges of strike prices and tenors and calculated the probability that dynamic hedging would occur and how much it would cost. We found that the USD/IDR currency pair is too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data, which showed a small RMS error. The study's results can be used to understand the nature of the FX call spread and to determine the optimum strike prices for a hedging plan.
Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen model on FX option USD/IDR, Bank Indonesia regulation no. 18/18/PBI/2016
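The Garman–Kohlhagen model prices an FX call as C = S·e^(-r_f·T)·N(d1) - K·e^(-r_d·T)·N(d2), with d1 = [ln(S/K) + (r_d - r_f + σ²/2)T]/(σ√T) and d2 = d1 - σ√T, where r_d and r_f are the domestic and foreign interest rates. A call spread is then the price difference between the two strikes. The sketch below uses illustrative market inputs (the spot level and rates are stand-in values; the 13% volatility echoes the figure quoted above):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gk_call(spot, strike, t, r_dom, r_for, vol):
    """Garman-Kohlhagen price of an FX call (domestic = IDR, foreign = USD)."""
    d1 = (math.log(spot / strike) + (r_dom - r_for + 0.5 * vol ** 2) * t) \
        / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return (spot * math.exp(-r_for * t) * norm_cdf(d1)
            - strike * math.exp(-r_dom * t) * norm_cdf(d2))

# Illustrative one-year USD/IDR call spread: long the low strike k1,
# short the high strike k2; spot and rates are assumed example values.
spot, t, r_idr, r_usd, vol = 13300.0, 1.0, 0.07, 0.01, 0.13
k1, k2 = 13500.0, 14500.0
premium = gk_call(spot, k1, t, r_idr, r_usd, vol) \
    - gk_call(spot, k2, t, r_idr, r_usd, vol)
print(round(premium / spot * 100, 2))  # net premium as a percentage of spot
```

Selling the upper-strike call is what brings the net premium down to the few-percent range discussed above; widening the strike gap raises the premium but also raises the level at which dynamic hedging would be triggered.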
Procedia PDF Downloads 376
460 The Impact Of Environmental Management System ISO 14001 Adoption on Firm Performance
Authors: Raymond Treacy, Paul Humphreys, Ronan McIvor, Trevor Cadden, Alan McKittrick
Abstract:
This study employed event-study methodology to examine the role of institutions, resources, and dynamic capabilities in the relationship between adoption of the ISO 14001 Environmental Management System and firm performance. Utilising financial data from 140 ISO 14001-certified firms and 320 non-certified firms, the results suggested that UK and Irish manufacturers were not implementing ISO 14001 solely to gain legitimacy. Rather, the results demonstrated that firms were fully integrating the ISO 14001 standard within their operations, as certified firms improved both financial and operating performance relative to non-certified firms. However, while there were significant and long-lasting improvements in employee productivity, manufacturing cost efficiency, return on assets, and sales turnover, the sample firms' operating cycle and fixed-asset efficiency displayed evidence of diminishing returns in the long run, underlining the observation that no operating advantage based on incremental improvements can be everlasting. Hence, there is an argument for investing in dynamic capabilities, which help renew and refresh the resource base and help the firm adapt to changing environments. Indeed, the results of the regression analysis suggest that dynamic capabilities for innovation acted as a moderator in the relationship between ISO 14001 certification and firm performance. This, in turn, will have a significant and symbiotic influence on sustainability practices within the participating organisations. The study not only provides new and original insights but demonstrates pragmatically how firms can take advantage of environmental management systems, moderated by dynamic capabilities, to significantly enhance firm performance. However, while firm innovation aided both short-term and long-term ROA performance, adaptive market capabilities aided firms only in the short term, at the marketing strategy deployment stage.
Finally, the results have important implications for firms operating in an economic recession, as they suggest that firms should scale back investment in R&D during an economic downturn. Conversely, under normal trading conditions, consistent long-term investment in R&D was found to moderate the relationship between ISO 14001 certification and firm performance. Hence, the results of the study have important implications for academics and management alike.
Keywords: supply chain management, environmental management systems, quality management, sustainability, firm performance
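Event-study methodology, as used above, estimates a firm's expected returns in a pre-event window and then measures abnormal returns around the event (here, certification). A toy sketch of the market-model variant follows; the return series, event date, and post-event drift are all simulated, and only the estimation-window/event-window procedure mirrors the methodology.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily returns: a market index and one firm with beta 1.2,
# plus a small positive drift injected after the simulated "certification".
market = rng.normal(0.0005, 0.01, 300)
firm = 0.0002 + 1.2 * market + rng.normal(0, 0.01, 300)
firm[250:260] += 0.01  # assumed post-event abnormal performance

# Estimation window (days 0-249): fit the market model firm ~ alpha + beta*market
X = np.column_stack([np.ones(250), market[:250]])
alpha, beta = np.linalg.lstsq(X, firm[:250], rcond=None)[0]

# Event window (days 250-259): abnormal return = actual - expected
abnormal = firm[250:260] - (alpha + beta * market[250:260])
car = abnormal.sum()  # cumulative abnormal return over the event window
print(car > 0)
```

In the study itself, the analogous comparison is between certified and matched non-certified firms on accounting measures (productivity, ROA, sales turnover) rather than daily stock returns, but the expected-versus-actual logic is the same.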
Procedia PDF Downloads 307
459 Implementation Of Evidence Based Nursing Practice And Associated Factors Among Nurses Working In Jimma Zone Public Hospitals, Southwest Ethiopia
Authors: Dawit Hoyiso, Abinet Arega, Terefe Markos
Abstract:
Background: In spite of the various programs and strategies to promote the use of research findings, there is still a gap between theory and practice. Differences in outcomes, health inequalities, and poorly performing health services continue to present a challenge to all nurses. A number of studies from various countries have reported that nurses' uptake of evidence-based practice is low. In Ethiopia, there is an information gap on the extent of evidence-based nursing practice and its associated factors. Objective: The study aims to assess the implementation of evidence-based nursing practice and associated factors among nurses in Jimma zone public hospitals. Method: An institution-based cross-sectional study was conducted from March 1-30, 2015. A total of 333 sampled nurses for the quantitative part and 8 in-depth interviews with key informants were involved in the study. A semi-structured questionnaire was adapted from Funk's BARRIERS scale and Friedman's test. Multivariable linear regression was used to determine the significance of associations between dependent and independent variables. A pretest was done on 17 nurses of Bedele hospital. Ethical clearance was secured. Results: Of 333 distributed questionnaires, 302 were completed, giving a 90.6% response rate. Of the 302 participants, 245 were involved in EBP activities to varying degrees (from seldom to often). About forty-five (18.4%) of the respondents had implemented evidence-based practice at a low level (sometimes), one hundred three (42%) at a medium level, and ninety-seven (39.6%) at a high level (often). The greatest perceived barrier was setting characteristics (mean score = 26.60 ± 7.08). Knowledge about research evidence was positively associated with implementation of evidence-based nursing practice (β = 0.76, P = 0.008).
Similarly, the place where the respondent graduated was positively associated with implementation of evidence-based nursing practice (β = 2.270, P = 0.047), as was the availability of information resources (β = 0.67, P = 0.006). Conclusion: Although a large portion of the nurses in this study were involved in evidence-based practice, only a small number implemented it frequently. Evidence-based nursing practice was positively associated with knowledge of research, the place where respondents graduated, and the availability of information resources. Organizational factors were found to be the greatest perceived barrier. Intervention programs by stakeholders on awareness creation, training, resource provision, and curriculum issues are recommended to improve the implementation of evidence-based nursing practice.
Keywords: evidence based practice, nursing practice, research utilization, Ethiopia
Procedia PDF Downloads 94
458 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments drawn from the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and determination of actions and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the prior PDF can be updated only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR).
In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will thus consist of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modelling of variables because: 1. engineers already estimate material properties based on experience collected during the assessment of similar structures, or on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR is a technique which may help achieve this.
Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
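The Bayesian updating step behind each stored case can be illustrated with the simplest conjugate setting: a normal prior for a material strength parameter (e.g. from design-stage documents) combined with a few test results, assuming a known measurement variance. All values below are invented for the example, not drawn from any real assessment.

```python
import math

# Assumed prior for a material strength parameter, e.g. concrete compressive
# strength in MPa, from design documents and engineering judgment.
prior_mean, prior_var = 30.0, 16.0
meas_var = 4.0                     # assumed known testing variance (conjugate case)
tests = [27.5, 28.9, 26.8]         # hypothetical core test results on the structure

# Normal-normal conjugate update: precisions add, and the posterior mean is
# the precision-weighted average of the prior mean and the sample mean.
n = len(tests)
sample_mean = sum(tests) / n
post_var = 1.0 / (1.0 / prior_var + n / meas_var)
post_mean = post_var * (prior_mean / prior_var + n * sample_mean / meas_var)
print(round(post_mean, 2), round(math.sqrt(post_var), 2))
# → 27.91 1.11
```

The posterior mean is pulled from the prior toward the test data, and the posterior standard deviation shrinks; a CBR system would retrieve such posterior PDFs from similar past cases when tests on the structure at hand are not possible.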
Procedia PDF Downloads 336
457 Production and Characterization of Biochars from Torrefaction of Biomass
Authors: Serdar Yaman, Hanzade Haykiri-Acma
Abstract:
Biomass is a CO₂-neutral fuel that is renewable and sustainable and has very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. Biomass energy use has other beneficial effects as well, such as employment creation and pollutant reduction. However, most biomass materials are not capable of competing with fossil fuels in terms of energy content: high moisture content and high volatile matter yields make biomass a low-calorific fuel, a very significant drawback relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These negative aspects can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning the material. The treated biomass is called biochar, and it has considerably lower contents of moisture, volatile matter, and oxygen than the parent biomass. Accordingly, the carbon content and the calorific value of biochar increase to levels comparable with those of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface properties of biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of torrefaction on biomass properties, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size below 250 µm.
The samples were then subjected to torrefaction in a horizontal tube furnace, heated from ambient temperature up to 200, 250, or 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing the torrefaction temperature led to regular increases in the HHV of OMR, with the highest HHV (6065 kcal/kg) obtained at 300ºC. In contrast, torrefaction at 250ºC was found to be optimum for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. Increases in carbon content and reductions in oxygen content were also determined. The burning characteristics of the biochars were studied by thermal analysis using a TA Instruments SDT Q600 analyzer, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass, and the resulting biochars have superior characteristics compared to the parent biomasses.
Keywords: biochar, biomass, fuel upgrade, torrefaction
Procedia PDF Downloads 373