Search results for: international standard industrial classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13003

823 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia

Authors: Kaaryn M. Cater

Abstract:

Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning-related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the trait is measured with a 27-item self-report scale known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of that study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder found in approximately 10% of the population, with a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex, phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. 
Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose score on any (or some) of the three questionnaires suggests a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the QuestionPro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors between SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.
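The planned baseline analysis pairs correlation with regression across the three scales. As a minimal sketch, with entirely simulated scores (the abstract predates data collection, so every number below is hypothetical; only the scale names come from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 900  # the abstract's estimate of eligible first-year students

# Hypothetical questionnaire totals, sharing a latent tendency so the
# three scales correlate; none of this is real data.
latent = rng.normal(size=n)
hsps = 14 + 4.0 * latent + rng.normal(0, 3, n)      # Highly Sensitive Person Scale
dyslexia = 10 + 2.0 * latent + rng.normal(0, 3, n)  # IDA adult self-assessment
irlen = 8 + 1.5 * latent + rng.normal(0, 3, n)      # adapted Irlen indicator

# Pairwise Pearson correlations between the three scales
r = np.corrcoef(np.vstack([hsps, dyslexia, irlen]))

# Simple regression: does the HSPS score predict the dyslexia score?
slope, intercept = np.polyfit(hsps, dyslexia, 1)
print(r.round(2))
print(round(slope, 2))
```

With shared latent structure, all three pairwise correlations come out positive, and the regression slope recovers the (assumed) positive relationship; the real study would replace the simulated vectors with the collected questionnaire totals.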

Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)

Procedia PDF Downloads 221
822 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition in which the heart muscle is unable to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case Review: The search retrieved 212 global ICSRs for the combined drug/adverse drug reaction as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); a value of 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data Mining: The disproportionality between the observed and the expected reporting rate for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by WHO-UMC to measure the reporting ratio. 
A positive IC reflects a stronger statistical association, while negative values indicate a weaker one, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that “ibrutinib” with “cardiac failure” has been observed more often than expected when compared with other medications in the WHO database. Conclusion: Health regulators and healthcare professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
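The information component used in the data mining step is, at its core, a log-ratio of observed to expected reporting counts. A minimal sketch of the calculation follows; the counts other than the 212 ICSRs are illustrative, chosen only so the result lands near the abstract's IC of 1.5, and the production WHO-UMC IC additionally applies Bayesian shrinkage not shown here:

```python
import math

# Illustrative reporting counts (only n_drug_adr comes from the abstract;
# the other three are hypothetical):
n_drug_adr = 212        # reports of ibrutinib with cardiac failure
n_drug = 30_000         # all reports mentioning ibrutinib
n_adr = 50_000          # all reports mentioning cardiac failure
n_total = 20_000_000    # all reports in the database

# Expected count if the drug and the reaction were reported independently
expected = n_drug * n_adr / n_total

# Simplified information component: log2 of observed over expected, with
# the +0.5 terms used to stabilise small counts.
ic = math.log2((n_drug_adr + 0.5) / (expected + 0.5))
print(round(ic, 2))  # ≈ 1.49 with these illustrative counts
```

An IC above zero, as here, means the pair is reported more often than the independence baseline predicts, which is the signal criterion the abstract describes.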

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 111
821 Polar Bears in Antarctica: An Analysis of Treaty Barriers

Authors: Madison Hall

Abstract:

The Assisted Colonization of polar bears to Antarctica requires a careful analysis of treaties to understand existing legal barriers to Ursus maritimus transport and movement. An absence of land-based migration routes prevents polar bears from accessing southern polar regions on their own. This lack of access is compounded by current treaties, which limit human intervention and assistance to ford these physical and legal barriers. In a time of massive planetary extinctions, Assisted Colonization posits that certain endangered species may be prime candidates for relocation to hospitable environments to which they have never previously had access. By analyzing existing treaties, this paper will examine how polar bears are limited in movement by humankind's legal barriers. International treaties may be considered codified reflections of anthropocentric values and of the best knowledge and understanding of an identified problem at a set point in time, as understood through the human lens. Even as human social values and scientific insights evolve, so too must the treaties that specify legal frameworks and structures impacting keystone species and related biomes. Due to costs and other myriad difficulties, only a very select number of species will be given this opportunity. While some species move into new regions and are then deemed invasive, Assisted Colonization considers that some assistance may be mandated given humankind's role in climate change. This moral question and ethical imperative, set against the backdrop of escalating climate impacts, drives the question forward: what is the potential for successfully relocating a select handful of charismatic and ecologically important life forms? Is it possible to reimagine a different, but balanced, Antarctic ecosystem? Listed as a threatened species under the U.S. 
Endangered Species Act as a result of the ongoing loss of critical habitat to melting sea ice, polar bears have limited options for long-term survival in the wild. Our current regime for safeguarding animals facing extinction frequently relies on zoos and their breeding programs to keep alive the genetic diversity of a species until some future time when reintroduction, somewhere, may be attempted. In exploring the potential for polar bears to be relocated to Antarctica, we must analyze the complex ethical, legal, political, financial, and biological realms that form the backdrop to every question in this arena. Can we do it? Should we do it? Adopting an environmental ethics perspective, we propose that the Ecological Commons of the Arctic and Antarctic should not be viewed solely through the lens of human resource management needs. From this perspective, polar bears do not need our permission; they need our assistance. Antarctica therefore represents a second, if imperfect, chance to buy time for polar bears in a world where polar regimes, not yet fully understood, are themselves quickly changing as a result of climate change.

Keywords: polar bear, climate change, environmental ethics, Arctic, Antarctica, assisted colonization, treaty

Procedia PDF Downloads 404
820 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference in clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all combinations of sample sizes (200, 1000, and 5000), event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7), representing weak, moderate, or strong relationships. 
Treatment effects (0, -0.5, and 1 on the log scale) will cover both null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions do not arise from marginal binary data. Also, it appears that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
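A stripped-down version of such a simulation, for one scenario and the unadjusted relative risk only (the abstract's candidate methods add covariate adjustment via GLMs, GEEs, and marginal standardisation), might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(n, p_control, log_rr):
    """One two-arm trial with a binary outcome and a true log relative risk."""
    arm = rng.integers(0, 2, n)           # 1 = treatment, 0 = control
    p = p_control * np.exp(log_rr * arm)  # event risk under a log-link model
    y = rng.random(n) < p
    return arm, y

def estimate_rr(arm, y):
    """Unadjusted relative risk: event rate in treatment over control."""
    return y[arm == 1].mean() / y[arm == 0].mean()

# Reduced version of one design cell: 1,000 replications (the abstract
# uses 10,000) at n = 1000, a 10% control event rate, true log RR = 0.5.
reps = [estimate_rr(*simulate_trial(1000, 0.10, 0.5)) for _ in range(1000)]
log_rrs = np.log(reps)
bias = log_rrs.mean() - 0.5
mse = ((log_rrs - 0.5) ** 2).mean()
print(round(bias, 3), round(mse, 3))
```

The same loop, repeated over the grid of sample sizes, event rates, and effect sizes with each candidate estimator plugged in for `estimate_rr`, yields the bias/MSE/coverage comparison the abstract describes.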

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 98
819 Human Immuno-Deficiency Virus Co-Infection with Hepatitis B Virus and Baseline Cd4+ T Cell Count among Patients Attending a Tertiary Care Hospital, Nepal

Authors: Soma Kanta Baral

Abstract:

Background: Since 1981, when the first AIDS case was reported, more than 34 million people worldwide have been infected with HIV. Almost 95 percent of the people infected with HIV live in developing countries. As HBV and HIV share similar routes of transmission, through sexual intercourse or parenteral drug injection, co-infection is common. Because of limited access to healthcare and HIV treatment in developing countries, HIV-infected individuals present late for care. Enumeration of the CD4+ T cell count at the time of diagnosis is useful for initiating therapy in HIV-infected individuals. The baseline CD4+ T cell count shows high immunological variability among patients. Methods: This prospective study was done in the serology section of the Department of Microbiology over a period of one year, from August 2012 to July 2013. A total of 13,037 individuals tested for HIV were included in the study, comprising 4,982 males and 8,055 females. Blood samples were collected aseptically by venepuncture, following standard operating procedures, into clean, dry test tubes. All blood samples were screened for HIV by immunochromatographic rapid kits as described in the WHO algorithm, with confirmation by the Biokit ELISA method as per the manufacturer's guidelines. After informed consent, HIV-positive individuals were screened for HBsAg by an immunochromatographic rapid kit (Hepacard), again with confirmation by the Biokit ELISA method as per the manufacturer's guidelines. EDTA blood samples were collected from the HIV-seropositive individuals for baseline CD4+ T cell counts, which were determined using a FACSCalibur flow cytometer (BD). Results: Among the 13,037 individuals screened for HIV, 104 (0.8%) were found to be infected, comprising 69 (66.34%) males and 35 (33.65%) females. The study showed that infection was highest among housewives (28.7%), the active age group (30.76%), rural residents (56.7%), and the heterosexual route of transmission (80.9%). 
Of the HIV-infected individuals, 6 (5.7%) were found to be co-infected with HBV. All co-infected individuals were married males above the age of 25 years with a heterosexual route of transmission. The baseline CD4+ T cell count of HIV mono-infected patients was higher (mean 283 cells/cu.mm) than that of HBV co-infected patients (mean 91 cells/cu.mm). The majority (77.2%) of HIV-infected individuals, and all co-infected individuals, presented to our center late for diagnosis and care (CD4+ T cell count < 350/cu.mm). Four (80%) of the co-infected individuals presented late with advanced AIDS (CD4+ count < 200/cu.mm). Conclusions: The study showed a high percentage of HIV-seropositive and co-infected individuals. The baseline CD4+ T cell count of the majority of HIV-infected individuals was low. Hence, more sustained and vigorous awareness campaigns and counseling are still needed to promote early diagnosis and management.

Keywords: HIV/AIDS, HBsAg, co-infection, CD4+

Procedia PDF Downloads 201
818 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper

Authors: A. F. Momin, D. V. Khakhar

Abstract:

Granular materials consist of discrete particles, found in nature and in various industries, that flow under gravity and behave macroscopically like liquids. A fundamental industrial unit operation is the hopper, a storage bin with inclined walls or a converging channel in which material flows downward under gravity and exits through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a radially directed gravitational force pointing toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. Here, the smooth-wall, radial gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except in the region near the exit, where both viscous and frictional effects are present. To further understand this behaviour, a parametric analysis is carried out to study the rheology of wedge-shaped hoppers by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. We find that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; no substantial changes in velocity are observed when varying the stiffness. It is anticipated that stresses at the exit result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all the data collapse onto a single curve. 
In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number, with all the data again collapsing onto a single curve. A continuum model for granular flows, based on these empirical correlations, is presented.
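The power-law collapse against the inertial number can be illustrated with a short sketch. The inertial-number definition used below is the standard one for granular shear flows, I = γ̇d/√(P/ρ); the viscosity data are synthetic, generated from an assumed power law rather than taken from the paper's DEM results, and the exponent is hypothetical:

```python
import numpy as np

def inertial_number(shear_rate, d, pressure, rho):
    """Standard inertial number: I = shear_rate * d / sqrt(P / rho)."""
    return shear_rate * d / np.sqrt(pressure / rho)

# Hypothetical flow states (not the paper's DEM data)
shear_rates = np.array([10.0, 50.0, 200.0])   # 1/s
pressure = 500.0                               # Pa, confining pressure
d, rho = 1e-3, 2500.0                          # particle diameter (m), density (kg/m^3)

I = inertial_number(shear_rates, d, pressure, rho)

# A power-law collapse eta = A * I**b is a straight line in log-log space,
# so the exponent is recovered by a linear fit on the logs.
eta = 3.0 * I ** -0.8                          # synthetic viscosity data (assumed law)
b, log_A = np.polyfit(np.log(I), np.log(eta), 1)
print(I.round(4), round(b, 2))
```

Plotting viscosity (or volume fraction) against I on log-log axes and fitting in this way is what produces the single-curve collapse the abstract reports.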

Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers

Procedia PDF Downloads 70
817 The Importance of SEEQ in Teaching Evaluation of Undergraduate Engineering Education in India

Authors: Aabha Chaubey, Bani Bhattacharya

Abstract:

Evaluation of the quality of teaching in engineering education in India needs to be conducted on a continuous basis to achieve the best teaching quality in technical education. Quality teaching is an influential factor in technical education and has a large impact on students' learning outcomes. The present study is not exclusively theory-driven but draws on various specific concepts and constructs in the domain of technical education, including teaching and learning in higher education, teacher effectiveness, and teacher evaluation and performance management in higher education. The Students' Evaluation of Educational Quality (SEEQ) questionnaire was proposed as one of the instruments for evaluating teaching quality in engineering education. SEEQ is one of the most popular and standardized instruments in use worldwide, with well-established validity and reliability in educational research. The present study was designed to evaluate teaching quality through SEEQ in the context of technical education in India, including its validity and reliability based on the collected data. The multidimensionality of SEEQ, reflecting dimensions present in every teaching and learning process, makes it well suited to collecting student feedback on the quality of instruction and the instructor. SEEQ comprises nine original constructs: learning value, teacher enthusiasm, organization, group interaction, individual rapport, breadth of coverage, assessment, assignments, and an overall rating of the course and instructor, with a total of 33 items. In the present study, a total of 350 first-year undergraduate students from the Indian Institute of Technology Kharagpur (IIT Kharagpur, India), drawn from four courses across different streams of engineering, were included in the evaluation of SEEQ. The validity and reliability of SEEQ were then assessed on the basis of the collected data. 
The analysis employed Confirmatory Factor Analysis (CFA) using Analysis of Moment Structures (AMOS), together with Cronbach's alpha computed in SPSS, to examine the internal consistency of the scaled instrument. The effectiveness of SEEQ in the CFA was evaluated on the basis of fit indices such as CMIN/df, CFI, GFI, AGFI, and RMSEA. The major findings of this study showed the following fit indices: ChiSq = 993.664, df = 390, ChiSq/df = 2.548, GFI = 0.782, AGFI = 0.736, CFI = 0.848, RMSEA = 0.062, TLI = 0.945, RMR = 0.029, PCLOSE = 0.006. The analysis of the fit indices indicated positive construct validity and stability, while the reliability analysis indicated good internal consistency. Thus, the study suggests the effectiveness of SEEQ as a quality-evaluation instrument for the teaching-learning process in engineering education in India. It is therefore expected that the continuation of this research in engineering education could contribute to improving the quality of technical education in India. It is also expected that this study will provide an empirical and theoretical basis for locating the construct or factor related to teaching that has the greatest impact on the teaching and learning process in a particular course or stream in engineering education.
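The internal-consistency check via Cronbach's alpha, which the authors compute in SPSS, follows directly from the item and total-score variances. A minimal sketch with hypothetical 5-point ratings (the real SEEQ has 33 items across nine constructs):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings from six respondents on four items (not study data)
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))
```

Because these toy items vary together across respondents, alpha comes out high (above 0.9); values in that range are conventionally read as good internal consistency.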

Keywords: confirmatory factor analysis, engineering education, SEEQ, teaching and learning process

Procedia PDF Downloads 408
816 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and large computing capacity. This is closely linked to technological developments in the area of artificial intelligence, which have prompted analyses covering the legal environment as well as the economic and social impact, including from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and most widely held, at both European Union and Member State level. The literature expects legal solutions to guarantee the security of fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years; indeed, it would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology increases the speed and efficiency of the actions taken, but it also creates security risks of an unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence therefore requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems. 
The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should apply to the processing of personal data in artificial intelligence systems, and 2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions assigning liability for a breach of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) in light of the proposed legal regulation of artificial intelligence. The main question the authors want to answer is how European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer will include examples illustrating the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 46
815 Selective Separation of Amino Acids by Reactive Extraction with Di-(2-Ethylhexyl) Phosphoric Acid

Authors: Alexandra C. Blaga, Dan Caşcaval, Alexandra Tucaliuc, Madalina Poştaru, Anca I. Galaction

Abstract:

Amino acids are valuable chemical products used in human foods, in animal feed additives and in the pharmaceutical field. Recently, there has been a noticeable rise in amino acid utilization throughout the world, including their use as raw materials in the production of various industrial chemicals: oil-gelating agents (amino acid-based surfactants) to recover effluent oil in seas and rivers, and poly(amino acids), which are attracting attention for the manufacture of biodegradable plastics. Amino acids can be obtained by biosynthesis or from protein hydrolysis, but their separation from the resulting mixtures can be challenging. In recent decades there has been continuous interest in developing processes that improve the selectivity and yield of downstream processing steps. The liquid-liquid extraction of amino acids (which are dissociated at any pH value of the aqueous solution) is possible only by reactive extraction, mainly with extractants such as organophosphoric acid derivatives, high-molecular-weight amines and crown ethers. The purpose of this study was to analyse the separation of nine amino acids of acidic character (l-aspartic acid, l-glutamic acid), basic character (l-histidine, l-lysine, l-arginine) and neutral character (l-glycine, l-tryptophan, l-cysteine, l-alanine) by reactive extraction with di-(2-ethylhexyl)phosphoric acid (D2EHPA) dissolved in butyl acetate. The results showed that the separation yield is controlled by the pH value of the aqueous phase: reactive extraction of amino acids with D2EHPA is possible only if the amino acids exist in aqueous solution in their cationic forms (pH of the aqueous phase below the isoelectric point). The studies on individual amino acids indicated the possibility of selectively separating groups of amino acids with similar acidic properties as a function of the aqueous-solution pH: the maximum yields are reached in a pH domain of 2–3 and then decrease strongly as the pH increases. 
Thus, for acidic and neutral amino acids, extraction becomes impossible at the isoelectric point (pHi), and for basic amino acids at a pH value lower than pHi, as a result of the dissociation of the carboxylic group. From the results obtained for separation from the mixture of the nine amino acids at different pH values, it can be observed that all amino acids are extracted, with different yields, over a pH domain of 1.5–3. Above this interval, the extract contains only the amino acids of neutral and basic character. At pH 5–6, only the neutral amino acids are extracted, and for pH > 6 extraction becomes impossible. Using this technique, total separation of the following amino acid groups has been achieved: neutral amino acids at pH 5–5.5, basic amino acids and l-cysteine at pH 4–4.5, l-histidine at pH 3–3.5, and acidic amino acids at pH 2–2.5.
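The pH control described above comes down to how much of each amino acid still carries its extractable positive charge. A rough sketch of that dependence, considering only the alpha-carboxyl dissociation (a simplification of the full speciation) and using approximate textbook pKa values:

```python
def cationic_fraction(ph, pka_cooh):
    """Fraction of molecules with the alpha-carboxyl group still protonated,
    a proxy for the cationic form that D2EHPA can extract
    (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10 ** (ph - pka_cooh))

# Approximate textbook alpha-COOH pKa values
pka = {"glycine": 2.34, "glutamic acid": 2.19, "lysine": 2.18}

for ph in (2.0, 3.0, 5.0, 7.0):
    fracs = {aa: round(cationic_fraction(ph, p), 3) for aa, p in pka.items()}
    print(ph, fracs)
```

The fraction is largest around pH 2–3 and falls by orders of magnitude above pH 5, which is consistent with the yield maximum at pH 2–3 and the loss of extraction at higher pH reported above.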

Keywords: amino acids, di-(2-ethylhexyl) phosphoric acid, reactive extraction, selective extraction

Procedia PDF Downloads 414
814 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain the engine's average operating temperatures within strictly defined limits. Average temperatures that are too high or too low result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, which lead to high stresses in a component, the aim is to ensure an even flow of air. In analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the temperature at a given point. Thermal imaging tests can be helpful in this case: with appropriate camera settings, and taking environmental conditions into account, accurate temperature fields can be obtained in the form of thermograms. The emission of heat from the engine to the engine compartment is an important issue when designing a cooling system; in the case of liquid cooling, the main sources of heat, such as emissions from the engine block and cylinders, should also be identified. This is important when redesigning the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of the variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of the engine oil, in particular its viscosity; a viscosity that is too low or too high can result in rapid wear of engine parts. One way to determine the temperatures occurring on individual parts of the engine is thermal imaging measurement. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. 
To measure the heat emission of the tested engine, the ThermaCAM S65 thermal imaging system from FLIR (Forward-Looking Infrared), together with the ThermaCAM Researcher Professional software, was used. The measurements were carried out after the engine warm-up, at an engine speed of 5,300 rpm and under the following environmental conditions: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. The thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the fin temperatures of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 108
813 Impact of Maternal Nationality on Caesarean Section Rate Variation in a High-income Country

Authors: Saheed Shittu, Lolwa Alansari, Fahed Nattouf, Tawa Olukade, Naji Abdallah, Tamara Alshdafat, Sarra Amdouni

Abstract:

Cesarean sections (CS), a highly regarded surgical intervention for improving fetal-maternal outcomes and an integral part of emergency obstetric services, are not without complications. Although CS has many advantages, it poses significant risks to both mother and child and increases healthcare expenditures in the long run. The escalating global prevalence of CS, coupled with variations in rates among immigrant populations, prompted an inquiry into the correlation between CS rates and the nationalities of women undergoing deliveries at Al-Wakra Hospital (AWH), Qatar's second-largest public maternity hospital. This inquiry was motivated by the hospital's notable CS rate of 36%, which is high in comparison to the 34% recorded across other Hamad Medical Corporation (HMC) maternity divisions. This is Qatar's first comprehensive investigation of caesarean section rates and nationalities. A retrospective cross-sectional study was conducted, and data for all births delivered in 2019 were retrieved from the hospital's electronic medical records. The CS rate and the crude and adjusted risks of caesarean delivery for mothers of each nationality were determined. The common indications for CS were analysed by nationality. The association between nationality and caesarean rates was examined using binomial logistic regression analysis, with Qatari women as the reference group. The correlation between the CS rate in the country of nationality and the observed CS rate in Qatar was also examined using Pearson's correlation. This study included 4,816 births from 69 different nationalities. CS was performed in 1,767 women, equating to 36.5%. The nationalities with the highest CS rates were Egyptian (49.6%), Lebanese (45.5%), and Filipino and Indian (both 42.2%). Qatari women recorded a CS rate of 33.4%. 
The major indications for elective CS were multiple previous CS (39.9%) and one prior CS where the patient declined the vaginal birth after cesarean (VBAC) option (26.8%). A distinct pattern was noticed: elective CS was predominantly performed on Arab women, whereas emergency CS was common among women of Asian and Sub-Saharan African nationalities. Moreover, a significant correlation was found between the CS rates in Qatar and those in the women's countries of origin. A high CS rate was also linked to instances of previous CS. As a result of these insights, strategic interventions were implemented at the facility to mitigate unwarranted CS, resulting in a notable reduction in the CS rate from 36.5% in 2019 to 34% in 2022, demonstrating the efficacy of this carefully researched approach. The focus has now shifted to reducing primary CS rates and facilitating well-informed decisions regarding childbirth methods.
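The Pearson correlation step described above can be sketched as follows. The nationality-level figures here are purely hypothetical stand-ins, not the study's data:

```python
# Pearson correlation between home-country CS rates and observed CS rates,
# computed from first principles. All numbers are invented for illustration.
import math

# nationality: (CS rate in country of origin %, observed CS rate at AWH %)
rates = {
    "Egyptian": (52.0, 49.6),
    "Lebanese": (47.0, 45.5),
    "Filipino": (40.0, 42.2),
    "Indian":   (39.0, 42.2),
    "Qatari":   (30.0, 33.4),
}

def pearson_r(pairs):
    """Plain Pearson correlation coefficient over (x, y) pairs."""
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(list(rates.values()))
print(round(r, 3))
```

In practice the regression part of the analysis (adjusted risks with a reference group) would be fitted with a statistics package rather than by hand; this sketch only shows the correlation computation itself.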

Keywords: maternal nationality, caesarean section rate variation, migrants, high-income country

Procedia PDF Downloads 52
812 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes

Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld

Abstract:

Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked Principal Component Regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow-frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes predominantly arising from PUFA peroxidation) loaded strongly and positively on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes derived from both MUFA and PUFA hydroperoxides) loaded strongly and positively on PC2*. 
PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all powerfully contributed to the aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 × PC4 interactions (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3) and on the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not on the PC3 × PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
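A minimal sketch of the principal component regression idea used above: PCA scores of a predictor matrix are fed into ordinary least squares. The matrix sizes, coefficients and noise level are illustrative and bear no relation to the actual NMR data:

```python
# Principal Component Regression (PCR) sketch: PCA via SVD of the
# mean-centred predictors, then OLS of the response on the retained scores.
# Data are synthetic stand-ins for the fatty-acid/heating-time matrix.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors: e.g. PUFA %, MUFA %, SFA %, heating time.
X = rng.normal(size=(42, 4))
beta_true = np.array([2.0, -1.0, 0.5, 1.5])
y = X @ beta_true + 0.1 * rng.normal(size=42)  # hypothetical aldehyde scores

# PCA of the mean-centred predictor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                           # retain the first two components
scores = Xc @ Vt[:k].T          # PC score vectors (the PCR predictors)

# Regress the response on the retained scores (with intercept).
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))
```

The appeal of PCR in chemometrics is that the scores are mutually orthogonal, so collinear predictors (such as correlated fatty-acid contents) do not destabilize the regression coefficients.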

Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR Analysis, cytotoxic/genotoxic aldehydes

Procedia PDF Downloads 159
811 Structural Analysis of a Composite Wind Turbine Blade

Authors: C. Amer, M. Sahin

Abstract:

The design of an optimised horizontal-axis 5-meter-long wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project aiming to fulfil the requirement of high torque efficiency from wind production and to optimise the structural components in the lightest and strongest way possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade using CATIA software to obtain the two surfaces, namely the suction and the pressure side of the blade, within which there is a hat-shaped fibre-reinforced plastic spar beam, the so-called chassis, which starts at 0.5 m from the root of the blade, extends up to 4 m, and is filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, which has twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabrics, a polymeric foam core material, and a steel-balsa wood combination for the root connection parts. The glass fabrics are applied using hand wet lay-up lamination with epoxy resin: METYX L600E10C-0 with unidirectional continuous fibres, and METYX XL800E10F with a tri-axial architecture with fibres in the 0°, +45°, -45° orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software with various meshes created on each structural part, considering shell elements for all surface geometries, and lumped masses are added to simulate extra adhesive locations. 
For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are applied. Taking mesh independency into account, MSC NASTRAN is used as a solver for both analyses. The static analysis determines the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal-mode analysis performed in order to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional mode of the blade. The analysis results of this study are then used as a benchmark prior to modal testing, where the experiments on the produced wind turbine rotor blade have confirmed the analytical calculations.
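As a rough plausibility check on the order of magnitude of the first bending frequency such a normal-mode analysis yields, the classical Euler-Bernoulli cantilever formula can be evaluated for the fixed-free case. The uniform, isotropic blade properties below are purely illustrative; the actual composite layup is far more complex:

```python
# First natural frequency of a uniform fixed-free (cantilever) beam:
#   f1 = (lambda1^2 / 2*pi) * sqrt(EI / (m' * L^4))
# Illustrative properties, not the blade's actual section data.
import math

def cantilever_f1(EI, m_per_len, L):
    """First bending frequency (Hz) of a uniform cantilever beam.

    EI: bending stiffness (N m^2), m_per_len: mass per length (kg/m),
    L: beam length (m)."""
    lam1 = 1.8751  # first root of the cantilever characteristic equation
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(EI / (m_per_len * L ** 4))

# Assumed values: EI = 2e5 N m^2, 12 kg/m, 5 m span.
print(round(cantilever_f1(2e5, 12.0, 5.0), 2))
```

A single-digit-Hz first bending mode is the expected ballpark for a blade of this size; the finite element model refines this with the true tapered, layered section.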

Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing

Procedia PDF Downloads 415
810 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and predict the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. 
time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the approach described above is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
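The distinguishing feature described above, regressing on products of variables as well as on the variables themselves, can be sketched as follows. The data and the hidden "law" are invented for illustration; this is not the authors' implementation:

```python
# Multivariate regression over a library of variables AND their pairwise
# products, so that hidden product-term correlations can be recovered.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                # hypothetical sensor channels
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] * X[:, 2]  # hidden law with a product term

def product_library(X):
    """Columns: 1, each x_i, and all pairwise products x_i * x_j (i <= j)."""
    n, m = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(m)]
    for i, j in itertools.combinations_with_replacement(range(m), 2):
        cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

A = product_library(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# A linear-only model would miss the x1*x2 term; this library captures it.
residual = np.abs(A @ coef - y).max()
print(residual < 1e-8)
```

This is the series-expansion intuition mentioned in the abstract: the product columns play the role of second-order expansion terms of the unknown system map.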

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 389
809 Golden Dawn's Rhetoric on Social Networks: Populism, Xenophobia and Antisemitism

Authors: Georgios Samaras

Abstract:

New media such as Facebook, YouTube and Twitter introduced the world to a new era of instant communication, an era in which online interactions can replace many offline actions. Technology can create a mediated environment in which participants communicate (one-to-one, one-to-many, and many-to-many) both synchronously and asynchronously and participate in reciprocal message exchanges. Currently, social networks are attracting academic attention similar to that which the internet received after its mainstream adoption into public life. Websites and platforms are seen as being at the forefront of a new political change. There is a significant backdrop of previous methodologies employed to research the effects of social networks, and new approaches are being developed to adapt to the growth of social networks and the invention of new platforms. Golden Dawn was the first openly neo-Nazi party post-World War II to win seats in the parliament of a European country. Its racist rhetoric and violent tactics on social networks were rewarded by its supporters, who saw in Golden Dawn's leaders a 'new dawn' in Greek politics. Mainstream media banned the leaders and members of the party indefinitely after Ilias Kasidiaris attacked Liana Kanelli, a member of the Greek Communist Party, on live television. This media ban was seen as a treasonous move by a significant percentage of voters, who believed that the system was desperately trying to censor Golden Dawn to favor mainstream parties. The shocking attack on live television received international coverage, and while European countries were condemning this newly emerged neo-Nazi rhetoric, almost 7 percent of the Greek population rewarded Golden Dawn with 18 seats in the Greek parliament. Many seem to think that Golden Dawn mobilised its voters online and that this approach played a significant role in spreading its message and appealing to wider audiences. 
No strict online censorship existed back in 2012, and although Golden Dawn openly used neo-Nazi symbolism, it was allowed to use social networks without serious restrictions until 2017. This paper used qualitative methods to investigate Golden Dawn's rise on social networks from 2012 to 2019. The focus of the content analysis was set on three social networking platforms: Facebook, Twitter and YouTube, while the existence of Golden Dawn's website, which was used as a news-sharing hub, was also taken into account. The content analysis included text and visual analyses that sampled content from the party's social networking pages to interpret its political messaging through an ideological lens focused on extreme-right populism. The absence of hate speech regulations on social network platforms in 2012 allowed the free expression of the heavily ultranationalist and populist views employed by Golden Dawn in the Greek political scene. On YouTube, Facebook and Twitter, the influence of this rhetoric was particularly strong. Official channels and MPs' profiles were investigated to explore the messaging in depth and understand its ideological elements.

Keywords: populism, far-right, social media, Greece, golden dawn

Procedia PDF Downloads 133
808 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating and coordinating sub-processes, they make it possible to achieve superior goals. Major components include working memory, inhibition and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with a pre-/post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group participated in special games that train executive functions (e.g., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen flanker task was employed in order to analyze the participants' inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always placed in the central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than children from the control group. 
Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). The comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1%, post: 95.9%, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8%, post: 92.9%). Conversely, the intervention group did not display any significant change for incongruent stimuli (pre: 74.9%, post: 83.5%), while the control group revealed a significant difference (pre: 68.9%, post: 80.3%, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group's low pre-test scores. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
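The per-condition scoring used above (mean response time and accuracy per stimulus type) can be sketched as follows; the trial data are made up for illustration and are not the study's recordings:

```python
# Scoring flanker-task trials: mean response time on correct trials and
# percentage accuracy, separately for congruent and incongruent stimuli.
from statistics import mean

# Each trial: (condition, response_time_s, correct). Invented example data.
trials = [
    ("congruent", 1.30, True), ("congruent", 1.10, True),
    ("congruent", 1.20, False), ("incongruent", 1.45, True),
    ("incongruent", 1.40, False), ("incongruent", 1.35, True),
]

def score(trials, condition):
    """Return (mean RT on correct trials in s, accuracy in %)."""
    sel = [t for t in trials if t[0] == condition]
    rt = mean(t[1] for t in sel if t[2])
    acc = 100 * sum(t[2] for t in sel) / len(sel)
    return rt, acc

rt_c, acc_c = score(trials, "congruent")
rt_i, acc_i = score(trials, "incongruent")
print(f"congruent: {rt_c:.2f} s, {acc_c:.1f} %")
print(f"incongruent: {rt_i:.2f} s, {acc_i:.1f} %")
```

Comparing these scores between pre-test and post-test, per group, yields the kind of figures reported in the findings above.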

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 241
807 A Robust Stretchable Bio Micro-Electromechanical Systems Technology for High-Strain in vitro Cellular Studies

Authors: Tiffany Baetens, Sophie Halliez, Luc Buée, Emiliano Pallecchi, Vincent Thomy, Steve Arscott

Abstract:

We demonstrate here a viable stretchable bio-microelectromechanical systems (BioMEMS) technology for use in biological studies concerned with the effect of high mechanical strains on living cells. An example of this is traumatic brain injury (TBI), where neurons are damaged by physical force to the brain during, e.g., accidents and sports. Robust, miniaturized integrated systems are needed by biologists to be able to study the effect of TBI on neuron cells in vitro. The major challenges in this area are (i) to develop micro- and nanofabrication processes based on stretchable substrates and (ii) to create systems that are robust and performant at very high mechanical strain values, sometimes as high as 100%. At the time of writing, such processes and systems were a rapidly evolving subject of research and development. The BioMEMS we present here is composed of an elastomer substrate (low Young's modulus, ~1 MPa) onto which robust electrodes and insulators are patterned. The patterning of the thin films is achieved using standard photolithography techniques directly on the elastomer substrate, thus making the process generic and applicable to systems based on many materials. The chosen elastomer is commercial 'Sylgard 184' polydimethylsiloxane (PDMS). It is spin-coated onto a silicon wafer. Multistep ultraviolet photolithography involving commercial photoresists is then used to pattern robust thin-film metallic electrodes (chromium/gold) and insulating layers (parylene) on top of the PDMS substrate. The thin-film metals are deposited using thermal evaporation and shaped using lift-off techniques. The BioMEMS has been characterized mechanically using an in-house strain-applicator tool. The system is composed of 12 electrodes, with one reference electrode orientated transversally to the uniaxial longitudinal straining of the system. 
The electrical resistance of the electrodes is observed to remain very stable with applied strain, with a resistivity approaching that of evaporated gold, up to an interline strain of ~50%. The mechanical characterization revealed some interesting original properties of such stretchable BioMEMS. For example, a Poisson-effect-induced electrical 'self-healing' of cracking was identified. The biocompatibility of the commercial photoresist has been studied and is conclusive. We will present the results of the BioMEMS, which has also been used to characterize living cells with a commercial Multi Electrode Array (MEA) characterization tool (Multi Channel Systems, USA). The BioMEMS enables the cells to be strained up to 50% and then characterized electrically and optically.
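The reported electrode stability can be set against a simple geometric baseline: for an ideal stretched conductor, resistance R = ρL/A grows because the length increases while the cross-section shrinks by the Poisson effect. A sketch, with an assumed (not measured) Poisson ratio:

```python
# Geometric estimate of resistance change under uniaxial stretch for an
# ideal conductor (constant resistivity). Illustrative baseline only;
# real cracked metal films on elastomers deviate strongly from this.
def resistance_ratio(strain, poisson=0.5):
    """R(strain)/R(0) for a conductor stretched by `strain` (0.5 = 50%)."""
    length_factor = 1.0 + strain
    area_factor = (1.0 - poisson * strain) ** 2  # both lateral dims shrink
    return length_factor / area_factor

print(round(resistance_ratio(0.5), 2))
```

At 50% strain this purely geometric model already predicts a resistance ratio well above 2, so electrodes whose resistance stays "very stable" are doing considerably better than the ideal-conductor baseline, which is why the result is notable.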

Keywords: BioMEMS, elastomer, electrical impedance measurements of living cells, high mechanical strain, microfabrication, stretchable systems, thin films, traumatic brain injury

Procedia PDF Downloads 136
806 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes

Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun

Abstract:

Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance because such wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drilling processes are accompanied by emergency and failure effects. As corroborated by practice, the principal drawback occurring in the drilling of long curvilinear bore-wells is the need to overcome essential force hindrances caused by the simultaneous action of gravity, contact and friction forces. Primarily, these forces depend on the type of technological regime, the drill string stiffness, and the bore-hole tortuosity and length. They can lead to Eulerian buckling of the drill string and its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, one might note that these mechanical phenomena are very complex, and only simplified approaches ('soft string drag and torque models') are usually used for their analysis. Taking into consideration that the cost of directed wells now increases essentially with the complication of their geometry and the enlargement of their lengths, it can be concluded that the price of mistakes in simulating drill string behavior through the use of simplified approaches can be very high, and so the problem of correct software elaboration is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. 
On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, a 3D 'stiff-string drag and torque model' of drill string bending and the appropriate software are elaborated for the simulation of tripping-in and tripping-out regimes and drilling operations. It is shown by the computer calculations that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for the prognostication and exclusion of emergency situations at the design and realization stages of the drilling process.
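The 'soft string drag and torque model' that the authors contrast with their stiff-string approach can be sketched as a segment-by-segment tension recursion along the well path. The geometry and friction coefficient below are illustrative, and the sketch is restricted to straight inclined segments (no curvature terms, which is precisely the simplification the stiff-string model removes):

```python
# Soft-string drag sketch: accumulate axial tension from the bit upward,
# adding weight components and Coulomb friction on side-wall contact.
# Illustrative two-segment well; not the authors' model or data.
import math

def soft_string_tension(segments, mu, tripping_out=True):
    """segments: list of (length_m, weight_N_per_m, inclination_rad).
    Returns surface tension (N), starting from zero tension at the bit."""
    T = 0.0
    for ds, w, theta in segments:
        normal = w * ds * math.sin(theta)   # side-wall contact force
        axial = w * ds * math.cos(theta)    # weight component along the axis
        friction = mu * normal
        # Friction opposes motion: it adds when pulling out of the hole
        # and subtracts when running in.
        T += (axial + friction) if tripping_out else (axial - friction)
    return T

# Hypothetical well: 1000 m vertical section, then a 1500 m tangent at 60 deg.
well = [(1000.0, 300.0, 0.0), (1500.0, 300.0, math.radians(60.0))]
print(round(soft_string_tension(well, mu=0.25)))
```

The stiff-string model described in the abstract additionally accounts for the drill string's bending stiffness and the bore-hole's curvature, which redistribute the contact forces that this recursion treats as purely weight-induced.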

Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces

Procedia PDF Downloads 127
805 Estimation of Morbidity Level of Industrial Labour Conditions at Zestafoni Ferroalloy Plant

Authors: M. Turmanauli, T. Todua, O. Gvaberidze, R. Javakhadze, N. Chkhaidze, N. Khatiashvili

Abstract:

Background: The mining process has a significant influence on human health and quality of life. In recent years, events in Georgia have been reflected in industrial working processes; in particular, minimal labor safety requirements, workplace hygiene standards, and work-rest regimes are not observed. This situation is often caused by a lack of responsibility, awareness, and knowledge on the part of both workers and employers. The control and protection of working conditions have worsened in many industries. Materials and Methods: To evaluate the current situation, a prospective epidemiological study using the face-to-face interview method was conducted at the Georgian "Manganese Zestafoni Ferroalloy Plant" in 2011-2013. 65.7% of employees (1,428 bulletins) were surveyed, and the incidence rates of temporary disability days were studied. Results: The average length of a single episode of temporary disability was studied for each sex group as well as for the whole cohort. According to the classes of harmfulness, the following results were received: Class 2.0: 10.3%; 3.1: 12.4%; 3.2: 35.1%; 3.3: 12.1%; 3.4: 17.6%; 4.0: 12.5%. Among the employees, 47.5% and 83.1% were tobacco and alcohol consumers, respectively. Morbidity prevailed among workers aged ≥50 and among those with ≥21 years of work experience, and the obtained data revealed a morbidity rate that increased with age and years of work. It was found that diseases of the bone and articular system and connective tissue, aggravation of chronic respiratory diseases, ischemic heart disease, hypertension, and cerebral blood discirculation were the leading diseases. High morbidity was observed in workplaces with labor conditions that were unsatisfactory from the hygienic point of view. 
Conclusion: According to the data received, the causes of morbidity are the following: unsafe labor conditions; incomplete preventive medical examinations (preliminary and periodic); lack of access to appropriate health care services; and deficiencies in the gathering, recording, and analysis of morbidity data. This epidemiological study was conducted at the JSC "Manganese Ferro Alloy Plant" under the State program "Prevention of Occupational Diseases" (program code 35 03 02 05).

Keywords: occupational health, mining process, morbidity level, cerebral blood discirculation

Procedia PDF Downloads 416
804 Antimicrobial Resistance of Acinetobacter baumannii in Veterinary Settings: A One Health Perspective from Punjab, Pakistan

Authors: Minhas Alam, Muhammad Hidayat Rasool, Mohsin Khurshid, Bilal Aslam

Abstract:

The genus Acinetobacter has emerged as a significant concern in hospital-acquired infections, particularly due to the versatility of Acinetobacter baumannii in causing nosocomial infections. The organism's remarkable metabolic adaptability allows it to thrive in diverse settings, including the environment, animals, and humans. However, the extent of antimicrobial resistance in Acinetobacter species from veterinary settings, especially in developing countries like Pakistan, remains unclear. This study aimed to isolate and characterize Acinetobacter spp. from veterinary settings in Punjab, Pakistan. A total of 2,230 specimens were collected, including 1,960 samples from veterinary settings (nasal and rectal swabs from dairy and beef cattle), 200 from the environment, and 70 from human clinical settings. Isolates were identified using routine microbiological procedures and confirmed by polymerase chain reaction (PCR). Antimicrobial susceptibility was determined by the disc diffusion method, and minimum inhibitory concentration (MIC) was measured by the micro broth dilution method. Molecular techniques, such as PCR and DNA sequencing, were used to screen for antimicrobial resistance determinants. Genetic diversity was assessed using standard techniques. The results showed that the overall prevalence of A. baumannii in cattle was 6.63% (65/980). Among cattle, a higher prevalence of A. baumannii was observed in dairy cattle, 7.38% (54/731), than in beef cattle, 4.41% (11/249). Out of the 65 A. baumannii isolates, carbapenem resistance was found in 18 strains (27.7%). The prevalence of A. baumannii was higher in nasopharyngeal swabs, 87.7% (57/65), than in rectal swabs, 12.3% (8/65). The class D β-lactamase genes blaOXA-23 and blaOXA-51 were present in all carbapenem-resistant A. baumannii (CRAB) isolates from cattle. Among the carbapenem-resistant isolates, 94.4% (17/18) were positive for the class B β-lactamase gene blaIMP, whereas the blaNDM-1 gene was detected in only one isolate of A. baumannii. Among the 70 clinical isolates of A. baumannii, 58/70 (82.9%) were positive for the blaOXA-23-like gene, and 61/70 (87.1%) were CRAB isolates. The blaOXA-51-like gene was present in all clinical isolates; hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 82.9% of clinical isolates. From the environmental settings, a total of 18 A. baumannii isolates were recovered; among these, 38.9% (7/18) showed carbapenem resistance. The blaOXA-51 gene was present in all environmental isolates, and blaOXA-23 was detected in 38.9% (7/18) of them; hence, the co-existence of blaOXA-23 and blaOXA-51 was found in 38.9% of the environmental isolates. MLST results showed ten different sequence types (STs) in the clinical isolates, with ST589 being the most common among carbapenem-resistant isolates. In the veterinary isolates, ST2 was the most common among CRAB isolates from cattle. Immediate control measures are needed to prevent the transmission of CRAB isolates among animals, the environment, and humans. Further studies are warranted to understand the mechanisms of antibiotic resistance spread and to implement effective disease control programs.

Keywords: Acinetobacter baumannii, carbapenemases, drug resistance, MLST

Procedia PDF Downloads 51
803 Association of Ovine Lymphocyte Antigen (OLA) with the Parasitic Infestation in Kashmiri Sheep Breeds

Authors: S. A. Bhat, Ahmad Arif, Muneeb U. Rehman, Manzoor R Mir, S. Bilal, Ishraq Hussain, H. M Khan, S. Shanaz, M. I Mir, Sabhiya Majid

Abstract:

Background: The climatic conditions of the state range from sub-tropical (Jammu) and temperate (Kashmir) to cold arctic (Ladakh) zones, which exert a significant influence on its agro-climatic conditions. Gastrointestinal parasitism is a major problem in sheep production worldwide. Materials and Methods: The present study evaluated the resistance status of sheep breeds reared in the Kashmir Valley for natural resistance against Haemonchus contortus by natural pasture challenge infection. Ten microsatellite markers were used in the study to evaluate the association of the Ovar-MHC with parasitic resistance, in association with biochemical and parasitological parameters. Following deworming, 500 animals were exposed to selected contaminated pastures in the vicinity of the livestock farms of SKUAST-K and Sheep Husbandry Kashmir. From each animal, about 10-15 ml of blood was collected aseptically for molecular and biochemical analysis. Weekly fecal samples (3 g) were taken directly from the rectum of all experimental animals and examined for fecal egg count (FEC) with the modified McMaster technique. Packed cell volume (PCV) was determined within 2-5 h of blood collection, and all the biochemical parameters were determined in serum with a semi-automated analyzer. DNA was extracted from all the blood samples with the phenol-chloroform method. Microsatellite analysis was done by denaturing sequencing gel electrophoresis. Results: Overall, sheep from the Bakerwal breed, followed by the Corriedale breed, performed relatively better in the trial; however, differences between breeds remained low. Both significant (P<0.05) and non-significant differences with respect to resistance against haemonchosis were noted at different intervals in all the parameters. All the animals were typed for the microsatellites INRA132, OarCP73, DRB1 (U0022), OLA-DQA2, BM1818, TFAP2A, HH56, BM1815, IL-3 and BM-1258. 
An association study including the effect of FEC, PCV, TSP, SA, LW, and the number of alleles within each marker was done. The microsatellite markers showed degrees of heterozygosity of 0.72, 0.72, 0.75, 0.62, 0.84, 0.69, 0.66, 0.65, 0.73 and 0.68, respectively. Significant associations between alleles and the parameters measured were found only for the OarCP73, OLA-DQA2 and BM1815 microsatellite markers. Standard alleles of these markers showed a significant effect on TP, SA and body weight. The three sheep breeds included in the study responded differently to the nematode infection, which may be attributed to differences in their natural resistance against nematodes. Conclusion: Our data confirm that some markers (OarCP73, OLA-DQA2 and BM1815) within the Ovar-MHC are associated with phenotypic parameters of resistance and suggest the superiority of the Bakerwal sheep breed in natural resistance against Haemonchus contortus.
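For readers unfamiliar with the heterozygosity values reported above, expected heterozygosity for a microsatellite marker is conventionally computed from allele frequencies as He = 1 − Σp²ᵢ (Nei's gene diversity). A minimal sketch; the allele frequencies below are illustrative, not the study's data:

```python
def expected_heterozygosity(allele_freqs):
    """Nei's expected heterozygosity: He = 1 - sum(p_i^2)."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

# Four equally frequent alleles: He = 1 - 4*(0.25)^2 = 0.75,
# the value reported here for the DRB1 (U0022) marker
print(expected_heterozygosity([0.25, 0.25, 0.25, 0.25]))  # 0.75
```

More alleles at more even frequencies push He towards 1, which is why highly polymorphic microsatellites such as BM1818 (0.84) are informative for association studies.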

Keywords: Ovar-MHC, ovine leukocyte antigen (OLA), sheep, parasitic resistance, Haemonchus contortus, phenotypic & genotypic markers

Procedia PDF Downloads 700
802 Growth and Differentiation of Mesenchymal Stem Cells on Titanium Alloy Ti6Al4V and Novel Beta Titanium Alloy Ti36Nb6Ta

Authors: Eva Filová, Jana Daňková, Věra Sovková, Matej Daniel

Abstract:

Titanium alloys are biocompatible metals that are widely used in clinical practice as load-bearing implants. Chemical modification may influence cell adhesion, proliferation, and differentiation, as well as the stiffness of the material. The aim of the study was to evaluate the adhesion, growth and differentiation of pig mesenchymal stem cells on the novel beta titanium alloy Ti36Nb6Ta compared to the standard medical titanium alloy Ti6Al4V. Discs of Ti36Nb6Ta and Ti6Al4V alloy were sterilized with ethanol, placed in 48-well plates, seeded with pig mesenchymal stem cells at a density of 60×10³ cells/cm², and cultured in Minimum Essential Medium (Sigma) supplemented with 10% fetal bovine serum and penicillin/streptomycin. Cell viability was evaluated using the MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay; Promega) and cell proliferation using the Quant-iT™ dsDNA Assay Kit (Life Technologies). Cells were stained immunohistochemically using a monoclonal antibody against beta-actin and a secondary antibody conjugated with AlexaFluor®488, and subsequently the spread area of the cells was measured. Cell differentiation was evaluated by an alkaline phosphatase assay using p-nitrophenyl phosphate (pNPP) as a substrate; the reaction was stopped with NaOH, and the absorbance was measured at 405 nm. Osteocalcin, a specific bone marker, was stained immunohistochemically and subsequently visualized using confocal microscopy; the fluorescence intensity was analyzed and quantified. Moreover, the gene expression of the osteogenic markers osteocalcin and type I collagen was evaluated by real-time reverse transcription PCR (qRT-PCR). For statistical evaluation, one-way ANOVA followed by the Student-Newman-Keuls method was used. For qRT-PCR, the nonparametric Kruskal-Wallis test and Dunn's multiple comparison test were used. The absorbance in the MTS assay was significantly higher on titanium alloy Ti6Al4V compared to beta titanium alloy Ti36Nb6Ta on days 7 and 14.
Mesenchymal stem cells were well spread on both alloys, but no difference in spread area was found. No differences were observed in the alkaline phosphatase assay, the fluorescence intensity of osteocalcin, or the expression of the type I collagen and osteocalcin genes. Higher expression of type I collagen compared to osteocalcin was observed for cells on both alloys. Both the beta titanium alloy Ti36Nb6Ta and the titanium alloy Ti6Al4V supported mesenchymal stem cells' adhesion, proliferation and osteogenic differentiation. The novel beta titanium alloy Ti36Nb6Ta is a promising material for bone implantation. The project was supported by the Czech Science Foundation: grant No. 16-14758S, the Grant Agency of the Charles University, grant No. 1246314 and by the Ministry of Education, Youth and Sports NPU I: LO1309.
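Relative gene expression from qRT-PCR, such as the osteocalcin and type I collagen comparison above, is commonly quantified with Livak's 2^(−ΔΔCt) method. The abstract does not state which quantification method was used, so the sketch below, with hypothetical Ct values, is only illustrative:

```python
def ddct_fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: fold change of a target gene relative to a
    reference gene, sample condition vs. control condition."""
    dct_sample = ct_target_sample - ct_ref_sample  # normalize to reference gene
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(dct_sample - dct_ctrl)

# Hypothetical Ct values: target crosses threshold one cycle earlier
# in the sample than in the control -> roughly 2-fold upregulation
print(ddct_fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```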

Keywords: beta titanium, cell growth, mesenchymal stem cells, titanium alloy, implant

Procedia PDF Downloads 302
801 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life, active breaks can lead to improvement in certain abilities (e.g. attention and concentration). A beneficial effect is in particular attributed to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of maximum performance capacity). Furthermore, the first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax; inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2).
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Taking into consideration only the study’s overall results, the hypothesis must be dismissed. However, a more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used for this study seems to be insufficient due to the longer adaptation time of the cardio-vascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimum ability to perform.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 301
800 A Player's Perspective of University Elite Netball Programmes in South Africa

Authors: Wim Hollander, Petrus Louis Nolte

Abstract:

University sport in South Africa is not isolated from the complexity of the globalization and professionalization of sport, as it forms an integral part of the sports development environment in South Africa. In order to align their sports programs with global and professional requirements, several universities opted to develop elite sports programs; recruit specialized personnel such as coaches, administrators, and athletes; and provide expert coaching; scientific and medical services; sports testing; fitness, technical and tactical expertise; sport psychological and rehabilitation support; academic guidance and career assistance; and student-athlete accommodation. In addition, universities provide administrative support and high-quality physical resources (training facilities) for the benefit of the overall South African sport system. Although it is not compulsory for universities to develop elite sports programs, they prepare their teams for elite competitions such as the annual Varsity Sport and University Sport South Africa (USSA) competitions, local club competitions and leagues, and international university competitions, where universities not only compete but also deliver players for representative national netball teams. The aim of this study is, therefore, to describe the perceptions of players of the university elite netball programs they were participating in. This study adopted a descriptive design with a quantitative approach, utilizing a self-structured questionnaire as a research technique. As this research formed part of a national research project for Netball South Africa (NSA) with a population of 172 national and provincial netball players, a sample of 92 university netball players was selected from the population.
Content validity of the self-structured questionnaire was secured through a test-retest process, and construct validity through a member of the Statistical Consultation Services (STATCON) of the University of Johannesburg, who provided feedback on the structural format of the questionnaire. Reliability was measured using Cronbach's alpha at the p < 0.005 level of significance; a reliability score of 0.87 was obtained. The research was approved by the Board of Netball South Africa, and ethical conduct was implemented according to the processes and procedures approved by the Ethics Committees of the Faculty of Health Sciences, University of Johannesburg, with clearance number REC-01-30-2019. From the results, it is evident that university elite netball programs are professional, especially with regard to the employment of knowledgeable and competent coaches and technical officials such as team managers and sports science staff. These professionals have access to elite training facilities, support staff, and relatively large groups of elite players, all elements of an elite program that could enhance the national federation's (Netball South Africa) system. Universities could serve the dual purpose of acting as university netball clubs, as well as providing elite training services and facilities as performance hubs for national players.
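The reliability score of 0.87 reported above refers to Cronbach's alpha, α = k/(k−1) · (1 − Σσ²ᵢₜₑₘ/σ²ₜₒₜₐₗ), computed over the questionnaire items. A minimal sketch with toy data, not the study's questionnaire responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: list of per-item score lists, one entry per respondent,
    respondents in the same order across items."""
    k = len(items)

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Two perfectly correlated items give the maximum alpha
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, so 0.87 is a good result for a self-structured instrument.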

Keywords: elite sport programmes, university netball, player experiences, varsity sport netball

Procedia PDF Downloads 154
799 A Review on Stormwater Harvesting and Reuse

Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir

Abstract:

Australia is a country of some 7.7 million square kilometres with a population of about 22.6 million. At present, water security is a major challenge for Australia. In some areas the use of water resources is approaching, and in some parts exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban storm-water and treated wastewater, but to date this is not widely practiced in Australia, and storm-water in particular is neglected. In Australia, only 4% of storm-water and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurate monitoring, assessment and prediction of the availability, quality and use of this precious resource are required for better management. As storm-water is usually of better quality than untreated sewage or industrial discharge, it has better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation and watering lawns and gardens. Existing storm-water recycling practice lags far behind research, and no robust technologies have been developed for this purpose. Therefore, there is a clear need for modern technologies for assessing the feasibility of storm-water harvesting and reuse. Numerical modelling has, in recent times, become a popular tool for this job. It includes the complex hydrological and hydraulic processes of the study area. The hydrologic model computes storm-water quantity to design the system components, and the hydraulic model helps to route the flow through storm-water infrastructure. Nowadays, a water quality module is incorporated with these models. Integration of Geographic Information Systems (GIS) with these models provides the extra advantage of managing spatial information.
However, for the overall management of a storm-water harvesting project, a Decision Support System (DSS) plays an important role, incorporating a database with the model and GIS for the proper management of temporal information. Additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of storm-water harvesting and reuse, such as the available guidelines for storm-water harvesting and reuse, the public acceptance of water reuse, and the scope of and recommendations for future studies. In addition, this paper identifies, understands and addresses the importance of modern technologies capable of the proper management of storm-water harvesting and reuse.
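As a concrete illustration of how a hydrologic model computes storm-water quantity for sizing system components, the rational method Q = C·i·A is one of the simplest peak-flow estimates. It is offered here only as an illustrative example under stated unit assumptions, not as the method used in the models reviewed:

```python
def peak_runoff_m3s(c, intensity_mm_hr, area_ha):
    """Rational method peak flow Q = C * i * A, returned in m^3/s.
    c: dimensionless runoff coefficient (0-1),
    intensity_mm_hr: rainfall intensity in mm/hr,
    area_ha: catchment area in hectares."""
    # 1 mm/hr over 1 ha = 10 m^3/hr = 10/3600 m^3/s
    return c * intensity_mm_hr * area_ha * 10.0 / 3600.0

# Mostly paved catchment (C = 0.9), 50 mm/hr design storm over 2 ha
print(round(peak_runoff_m3s(0.9, 50.0, 2.0), 3))  # 0.25
```

A peak flow like this would then feed the hydraulic model, which routes it through the harvesting infrastructure.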

Keywords: storm-water management, storm-water harvesting and reuse, numerical modelling, geographic information system, decision support system, database

Procedia PDF Downloads 354
798 Gender Gap in Returns to Social Entrepreneurship

Authors: Saul Estrin, Ute Stephan, Suncica Vujic

Abstract:

Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, one in which there is little or no gender pay gap. If the gender gap in pay persists at the top of social enterprises as well, what factors might explain these differences? Methodology: The Oaxaca-Blinder Decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into the ‘explained’ part due to differences in labour market characteristics (education, work experience, tenure, etc.), and the ‘unexplained’ part due to differences in the returns to those characteristics. The latter part is often interpreted as ‘discrimination’. There are two issues with this approach. (i) In many countries there is notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains ‘unexplained’. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient’s ‘robustness’ or to account for the ‘effects’ on this coefficient of adding covariates, might be problematic due to sequence-sensitivity when the added covariates are correlated.
Gelbach’s decomposition (GD) addresses the latter by using the omitted variables bias formula, which constructs a conditional decomposition, thus accounting for sequence-sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors’ capital) between men and women leading social enterprises. Database: Our empirical work is made possible by a unique dataset collected using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries we focus on are the United Kingdom, Spain, Romania and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
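The two-fold Oaxaca-Blinder split into 'explained' and 'unexplained' parts described above can be sketched as follows. The choice of the male coefficient vector as the reference and the synthetic data are illustrative assumptions, not the authors' specification:

```python
import numpy as np

def oaxaca_blinder(y_m, X_m, y_f, X_f):
    """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap
    mean(y_m) - mean(y_f), using male coefficients as the reference
    (one common convention). X matrices must include an intercept column."""
    b_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)  # OLS, male group
    b_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)  # OLS, female group
    xbar_m, xbar_f = X_m.mean(axis=0), X_f.mean(axis=0)
    explained = (xbar_m - xbar_f) @ b_m    # endowment (characteristics) part
    unexplained = xbar_f @ (b_m - b_f)     # returns (coefficients) part
    return explained, unexplained

# Synthetic example: identical characteristics, different returns,
# so the entire gap should land in the 'unexplained' component
X_m = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # intercept + covariate
y_m = np.array([2.0, 4.0, 6.0])
X_f = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y_f = np.array([1.0, 2.0, 3.0])
explained, unexplained = oaxaca_blinder(y_m, X_m, y_f, X_f)
print(explained, unexplained)
```

By construction the two components sum to the raw mean gap, mirroring the 'explained'/'unexplained' split in the abstract; Gelbach's decomposition refines how covariates share the explained part, which this sketch does not implement.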

Keywords: Gelbach’s decomposition, gender gap, returns to social entrepreneurship, values and preferences

Procedia PDF Downloads 230
797 The Use of Prestige Language in Tennessee Williams’s "A Streetcar Named Desire"

Authors: Stuart Noel

Abstract:

In A Streetcar Named Desire, Tennessee Williams presents Blanche DuBois, a most complex and intriguing character who often uses prestige language to project the image of an upper-class speaker and to disguise her darker and more complicated self. She embodies various fascinating and contrasting characteristics. Like New Orleans (the locale of the play), Blanche represents two opposing images. One image projects genteel Southern charm and beauty, speaking formally and using prestige language and what some linguists refer to as “hypercorrection”; the other reveals a soiled, deteriorating façade, full of decadence and illusion. Williams said on more than one occasion that Blanche’s use of such language was a direct reflection of her personality and character (as a high school English teacher). Prestige language is an exaggeratedly elevated, pretentious, and oftentimes melodramatic form of one’s language, incorporating superstandard or more standard speech than usual in order to project a highly authoritative individual identity. Speech styles carry personal identification meaning not only because they are closely associated with certain social classes but because they tend to be associated with certain conversational contexts. Features which may be considered “elaborated” in form (for example, full forms vs. contractions) tend to cluster together in speech registers/styles which are typically considered more formal and/or of higher social prestige, such as academic lectures and news broadcasts. Members of higher social classes have access to the elaborated registers which characterize formal writings and pre-planned speech events, such as lectures, while members of lower classes are relegated to the more economical registers associated with casual, face-to-face conversational interaction, since they do not participate in as many planned speech events as upper-class speakers.
Tennessee Williams’s work is characteristically concerned with the conflict between the illusions of an individual and the reality of his/her situation equated with a conflict between truth and beauty. An examination of Blanche DuBois reveals a recurring theme of art and decay and the use of prestige language to reveal artistry in language and to hide a deteriorating self. His graceful and poetic writing personifies her downfall and deterioration. Her loneliness and disappointment are the things so often strongly feared by the sensitive artists and heroes in the world. Hers is also a special and delicate human spirit that is often misunderstood and repressed by society. Blanche is afflicted with a psychic illness growing out of her inability to face the harshness of human existence. She is a sensitive, artistic, and beauty-haunted creature who is avoiding her own humanity while hiding behind her use of prestige language. And she embodies a partial projection of Williams himself.

Keywords: American drama, prestige language, Southern American literature, Tennessee Williams

Procedia PDF Downloads 359
796 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions

Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana

Abstract:

Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or Water Based Mud (WBM) serves as the drilling fluid for the top hole section. The cuttings generated from this section are non-hazardous and are normally applied as fill material. On the other hand, drilling the bottom hole to reservoir section uses Synthetic Based Mud (SBM), which is composed of synthetic oils. The bottom-hole (SBM) cuttings are regarded as hazardous waste, in accordance with government regulations, due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work aims to propose an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, such as size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via the light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) through gas chromatography equipped with a flame ionization detector (GC-FID). Other components were measured by standard methods for soil analysis. The effects of different washing agents, liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. It was found that the drill cuttings had an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings was in the diesel range, with concentrations from 20,000 to 30,000 mg/kg dry cuttings.
A majority of the cuttings particles had a mean diameter of 50 µm, representing the silt fraction. The results also suggested that a green solvent was the most promising option for cuttings treatment with regard to occupational health, safety, and environmental benefits. The optimal washing conditions were an L/S ratio of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S ratio of 10, and a rinsing time of 1 min. After the washing process, three fractions, namely clean cuttings, spent solvent, and wastewater, were considered and provided with recommendations. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings, which can then be used for various purposes. The spent solvent had a calorific value higher than 3,000 cal/g and can be used as an alternative fuel; alternatively, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and simultaneously managed by re-injection into the reservoir.
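The reduction from 20,000-30,000 mg/kg TPH down to below 5,000 mg/kg corresponds to a washing removal efficiency that can be computed directly. A minimal sketch using the abstract's worst-case figures:

```python
def removal_efficiency(c0, c):
    """Fraction of TPH removed by washing: (C0 - C) / C0,
    where C0 and C are concentrations before and after treatment
    (same units, e.g. mg/kg dry cuttings)."""
    return (c0 - c) / c0

# Worst case from the abstract: 20,000 mg/kg reduced to 5,000 mg/kg
print(f"{removal_efficiency(20000, 5000):.0%}")  # 75%
```

At the upper end of the initial range (30,000 mg/kg), the same residual level corresponds to roughly 83% removal.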

Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)

Procedia PDF Downloads 141
795 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of a nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis to convert the temporal lag to a separation distance using Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
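The correlation procedure described above reduces, for a single-point velocity record, to integrating the autocorrelation of the fluctuations and applying Taylor's hypothesis (L = U·T). A minimal sketch under simplifying assumptions (rectangle-rule integration, cut-off at the first zero crossing), not the authors' exact processing:

```python
import numpy as np

def integral_length_scale(u, dt, mean_u):
    """Longitudinal integral length scale from a single-point velocity
    record: integrate the autocorrelation of the fluctuations up to its
    first zero crossing to get the integral time scale T, then convert
    time to distance via Taylor's hypothesis, L = U * T."""
    up = u - u.mean()                      # velocity fluctuations u'
    n = len(up)
    var = np.mean(up * up)
    # autocorrelation coefficient at increasing time lags
    rho = np.array([np.mean(up[:n - k] * up[k:]) / var for k in range(n // 2)])
    # integrate (rectangle rule) up to the first zero crossing
    zero = np.argmax(rho <= 0) if np.any(rho <= 0) else len(rho)
    T = dt * rho[:zero].sum()              # integral time scale
    return mean_u * T                      # Taylor's hypothesis

# Example: a synthetic record; correlated signals yield larger L
rng = np.random.default_rng(1)
u = 10.0 + np.convolve(rng.standard_normal(4000), np.ones(20) / 20.0, "same")
print(integral_length_scale(u, dt=0.1, mean_u=10.0))
```

The same fluctuation series correlated between two heights, rather than two times at one point, gives the two-point length scales of ESDU 86010 discussed in the abstract.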

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 117
794 Contributory Antioxidant Role of Testosterone and Oxidative Stress Biomarkers in Males Exposed to Mixed Chemicals in an Automobile Repair Community

Authors: Saheed A. Adekola, Mabel A. Charles-Davies, Ridwan A. Adekola

Abstract:

Background: Testosterone is a known androgenic and anabolic steroid, primarily secreted in the testes. It plays an important role in the development of the testes and prostate and has a range of biological actions. There is evidence that exposure to mixed chemicals in the workplace leads to the generation of free radicals and inadequate antioxidants, leading to oxidative stress, which may serve as an early indicator of a pathophysiologic state. Based on previous findings, testosterone shows direct antioxidant effects by increasing the activities of antioxidant enzymes like glutathione peroxidase, thus indirectly contributing to antioxidant capacity. Objective: To evaluate the antioxidant role of testosterone as well as the relationship between testosterone and oxidative stress biomarkers in males exposed to mixed chemicals in an automobile repair community. Methods: The study included 43 participants aged 22-60 years exposed to mixed chemicals (EMC) from the automobile repair community. Forty (40) apparently healthy, unexposed, age-matched controls were recruited after informed consent. Demographic, sexual and anthropometric characteristics were obtained from pre-test structured questionnaires using standard methods. Blood samples (10 ml) were collected from each subject into plain bottles, and the sera obtained were used for biochemical analyses. Serum levels of testosterone and luteinizing hormone (LH) were determined by the enzyme immunoassay (EIA) method (Immunometrics UK Ltd). Levels of total antioxidant capacity (TAC), total plasma peroxide (TPP), malondialdehyde (MDA), hydrogen peroxide (H2O2), glutathione peroxidase (GPx), superoxide dismutase (SOD), glutathione-S-transferase (GST), and reduced glutathione (GSH) were determined using spectrophotometric methods. Results obtained were analyzed using the Student’s t-test for quantitative variables and the Chi-square test for qualitative variables.
Multiple regression was used to find associations and relationships between the variables. Results: Significantly higher levels of TPP, MDA, OSI, H2O2 and GST were observed in the EMC group compared with controls (p < 0.001). Within the EMC group, significantly higher levels of testosterone, LH and TAC were observed in eugonadic compared with hypogonadic participants (p < 0.001). Diastolic blood pressure, waist circumference, waist-to-height ratio and waist-to-hip ratio were significantly higher in EMC participants compared with the controls. Sexual history and dietary intake showed that the controls had normal erections during sex and consumed more vegetables in their diet, which may be beneficial. Conclusion: The significantly increased total antioxidant capacity in males exposed to mixed chemicals, despite their exposure, may reflect a contributory antioxidant role of testosterone in preventing oxidative stress.
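The oxidative stress index (OSI) appears in the results but is not defined in the abstract; it is conventionally computed as the ratio of total plasma peroxide to total antioxidant capacity, OSI = TPP/TAC × 100, after expressing both in the same units. A sketch with hypothetical values, not the study's measurements:

```python
def oxidative_stress_index(tpp_umol_l, tac_mmol_l):
    """Conventional OSI (arbitrary units): TPP (umol H2O2 equiv./L)
    divided by TAC (mmol Trolox equiv./L, converted to umol/L), x 100."""
    return tpp_umol_l / (tac_mmol_l * 1000.0) * 100.0

# Hypothetical values: TPP = 30 umol/L, TAC = 1.5 mmol/L
print(oxidative_stress_index(30.0, 1.5))  # 2.0
```

A higher TPP at the same TAC raises the OSI, which is why the exposed group can show both an elevated OSI and an elevated TAC, as reported above.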

Keywords: mixed chemicals, oxidative stress, antioxidant, hypogonadism, testosterone

Procedia PDF Downloads 128