Search results for: specific methanogenic activity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7654

2134 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management

Authors: A. Giannakopoulos, S. B. Buckley

Abstract:

Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. In South Africa, for example, it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the authors' findings and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is little doubt by now that mathematics is involved to a greater or lesser extent in all fields, from symbols to variables, equations, logic, and critical thinking. It therefore stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured. While simple problems can be solved by employees by adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and that diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the authors to view problem solving as a product. The authors argue that using mathematical approaches to knowledge management problem solving, and treating problem solving as a product, will empower the employee through further training to tackle analogical and complex problems. The question the authors asked was: if problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little knowledge management (KM) literature about them, how they are connected, and how they advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation) and knowledge creation, sharing, dissemination, and storage, the five pillars of knowledge management (KM). This model also expands on Gray's framework on KM practices and problem solving and opens the door to a new approach to training employees in general and domain-specific problem areas, which can be adapted to any type of organisation.

Keywords: critical thinking, knowledge management, mathematics, problem solving

Procedia PDF Downloads 595
2133 Mitigating Self-Regulation Issues in the Online Instruction of Math

Authors: Robert Vanderburg, Michael Cowling, Nicholas Gibson

Abstract:

Mathematics is one of the core subjects taught in the Australian K-12 education system and is considered an important component for future studies in areas such as engineering and technology. In addition, Australia has been a world leader in distance education due to the vastness of its geographic landscape. Despite this, research is still needed on distance math instruction. Even though delivery of the curriculum has given way to online studies, and there is a resultant push for computer-based (PC, tablet, smartphone) math instruction, much instruction still involves practice problems similar to those in the original curriculum packs, without the ability for students to self-regulate their learning using the full interactive capabilities of these devices. Given this need, this paper addresses issues students have during online instruction. The study involved 32 students struggling with mathematics who were enrolled in a math tutorial conducted in an online setting. The study used a case study design to understand some of the blockades hindering the students' success. Data were collected by tracking students' practice and quizzes, tracking engagement with the site, recording one-on-one tutorials, and interviewing the students. Results revealed that when students face cognitively straining tasks in an online instructional setting, the first thing to dissipate is their ability to self-regulate. The results also revealed that instructors could ameliorate the situation and provided useful data on strategies that could be used for designing future online tasks. Specifically, instructors could utilize cognitive dissonance strategies to reduce the cognitive drain of the online tasks. They could segment the instruction process to reduce the cognitive demands of the tasks and provide in-depth self-regulatory training, freeing mental capacity for the mathematics content. Finally, instructors could make specific scheduling and assignment structure changes to reduce the amount of student-centered self-regulatory tasks in the class. These findings are discussed in more detail and summarized in a framework that can be used for future work.

Keywords: digital education, distance education, mathematics education, self-regulation

Procedia PDF Downloads 135
2132 Comparison of Zinc Amino Acid Complex and Zinc Sulfate in Diet for Asian Seabass (Lates calcarifer)

Authors: Kanokwan Sansuwan, Orapint Jintasataporn, Srinoy Chumkam

Abstract:

Asian seabass is one of the economically important fish of Thailand and other countries in Southeast Asia. Zinc is an essential trace metal for fish and is vital to various biological processes and functions. It is required for normal growth and is indispensable in the diet. Therefore, the artificial diets offered to intensively cultivated fish must provide the zinc content required by the animal's metabolism for health maintenance and high weight gain rates. However, essential elements must also be in an available form to be utilized by the organism. Thus, this study was designed to evaluate the application of different zinc forms, organic zinc (zinc amino acid complex) and inorganic zinc (zinc sulfate), as feed additives in diets for Asian seabass. Three groups with five replicates of fish (mean weight 22.54 ± 0.80 g) were given a basal diet either unsupplemented (control) or supplemented with 50 mg Zn kg−¹ as zinc sulfate (ZnSO₄) or zinc amino acid complex (ZnAA), respectively. The feeding regimen was initially set at 3% of body weight per day, and the feed amount was then adjusted weekly according to the actual feeding performance. The experiment was conducted for 10 weeks. Fish supplemented with ZnAA had the highest values in all studied growth indicators (weight gain, average daily growth, and specific growth rate), followed by fish fed the ZnSO₄ diet, with the lowest values in fish fed the control diet. Lysozyme and superoxide dismutase (SOD) activities of fish supplemented with ZnAA were significantly higher compared with all other groups (P < 0.05). Fish supplemented with ZnSO₄ exhibited a significant increase in digestive enzyme activities (protease, pepsin, and trypsin) compared with ZnAA and the control (P < 0.05). However, no significant differences were observed for RNA and protein in muscle (P > 0.05). The results of the present work suggest that ZnAA is a better source of trace elements for Asian seabass, based on the growth performance and immunity indices examined in this study.
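For readers unfamiliar with the growth indicators named above (weight gain, average daily growth, specific growth rate), the following is a minimal sketch of how they are conventionally computed in aquaculture feeding trials; the final weight and trial length shown are hypothetical and are not data from this study.

```python
import math

def growth_indicators(w_initial_g, w_final_g, days):
    """Standard aquaculture growth indicators (illustrative only)."""
    weight_gain = w_final_g - w_initial_g                                 # absolute weight gain (g)
    adg = weight_gain / days                                              # average daily growth (g/day)
    sgr = 100.0 * (math.log(w_final_g) - math.log(w_initial_g)) / days    # specific growth rate (%/day)
    return weight_gain, adg, sgr

# Hypothetical example: initial weight close to the study's 22.54 g mean, 70-day (10-week) trial
print(growth_indicators(22.54, 85.0, 70))
```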

Keywords: Asian seabass, growth performance, zinc amino acid complex (ZnAA), zinc sulfate (ZnSO₄)

Procedia PDF Downloads 181
2131 Effect of Collection Technique of Blood on Clinical Pathology

Authors: Marwa Elkalla, E. Ali Abdelfadil, Ali. Mohamed. M. Sami, Ali M. Abdel-Monem

Abstract:

To assess the impact of the blood collection technique on clinical pathology markers and to establish reference intervals, a study was performed using normal, healthy C57BL/6 mice. Both sexes were employed, and the animals were randomly assigned to groups according to the phlebotomy technique used. Blood was drawn in one of four ways: intracardiac (IC), caudal vena cava (VC), caudal vena cava plus a peritoneal collection of any extravasated blood, or retroorbital phlebotomy (RO). Serum biochemistries, including liver function tests, as well as a complete blood count with differentials and a platelet count, were analysed from the blood and serum samples. Red blood cell count, haemoglobin (p > 0.002), hematocrit, alkaline phosphatase, albumin, total protein, and creatinine were all significantly greater in female mice. Platelet counts, specific white blood cell numbers (total, neutrophil, lymphocyte, and eosinophil counts), globulin, amylase, and the BUN/creatinine ratio were all greater in males. The VC approach appeared marginally superior to the IC approach for the characteristics under consideration and was associated with the least variation in both sexes. Transaminase levels showed the greatest variation between study groups. The aspartate aminotransferase (AST) values showed less fluctuation with the VC approach, while the alanine aminotransferase (ALT) values were similar between the IC and VC groups. There was considerable variability and range in transaminase levels in the MC and RO groups. We found that the RO approach, the only one tested that allows repeated sample collection, yielded acceptable ALT readings. The findings show that the test results are significantly affected by the phlebotomy technique and that the VC or IC techniques provide the most reliable data. When planning a study and comparing data to reference ranges, the ranges supplied here by collection method and sex can be used to determine the best approach to data collection. The authors suggest establishing norms based on the procedures used by each individual researcher in his or her own lab.
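As an illustration of how collection-method- and sex-specific reference intervals such as those described above can be derived, here is a minimal sketch using the common non-parametric 2.5th-97.5th percentile convention; the analyte, group labels, and values are hypothetical, not data from this study.

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Non-parametric reference interval (2.5th-97.5th percentile), a common convention."""
    values = np.asarray(values, dtype=float)
    return np.percentile(values, low), np.percentile(values, high)

# Hypothetical ALT values (U/L) grouped by collection method and sex (illustrative only)
alt_by_group = {
    ("VC", "female"): [28, 31, 25, 30, 27, 33, 29, 26],
    ("RO", "female"): [35, 60, 42, 55, 38, 70, 47, 52],
}
for group, values in alt_by_group.items():
    lo, hi = reference_interval(values)
    print(group, f"ALT reference interval: {lo:.1f}-{hi:.1f} U/L")
```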

Keywords: clinical, pathology, blood, effect

Procedia PDF Downloads 95
2130 A Computerized Tool for Predicting Future Reading Abilities in Pre-Readers Children

Authors: Stephanie Ducrot, Marie Vernet, Eve Meiss, Yves Chaix

Abstract:

Learning to read is a key topic of debate today, both in terms of its implications for school failure and illiteracy and regarding which teaching methods are best to develop. It is estimated today that four to six percent of school-age children suffer from specific developmental disorders that impair learning. Findings from people with dyslexia and from typically developing readers suggest that the problems children experience in learning to read are related to the preliteracy skills that they bring with them from kindergarten. Most tools available to professionals are designed for the evaluation of child language problems. In comparison, there are very few tools for assessing the relations between visual skills and the process of learning to read. Recent literature reports that visual-motor skills and visual-spatial attention in preschoolers are important predictors of reading development. The main goal of this study was therefore to improve screening for future reading difficulties in preschool children. We used a prospective, longitudinal approach in which oculomotor processes (assessed with the DiagLECT test) were measured in pre-readers, and the impact of these skills on future reading development was explored. The DiagLECT test specifically measures the time taken to name numbers arranged irregularly in horizontal rows (horizontal time, HT) and the time taken to name numbers arranged in vertical columns (vertical time, VT). A total of 131 preschoolers took part in this study. At Time 0 (kindergarten), the mean VT, HT, and errors were recorded. One year later, at Time 1, the reading level of the same children was evaluated. Firstly, this study allowed us to provide normative data for a standardized evaluation of oculomotor skills in 5- and 6-year-old children. The data also revealed that 25% of our sample of preschoolers showed oculomotor impairments (without any clinical complaints). Finally, the results of this study assessed the validity of the DiagLECT test for predicting reading outcomes; the better a child's oculomotor skills are, the better his/her reading abilities will be.
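The abstract does not specify the statistical model linking the Time 0 oculomotor measures to Time 1 reading level; purely as an illustration of the kind of predictive relation described, the sketch below fits a simple linear regression on hypothetical HT, VT, and error values (all numbers invented).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical kindergarten measures: horizontal time (HT, s), vertical time (VT, s), naming errors
X = np.array([[35.0, 48.0, 1],
              [42.0, 60.0, 3],
              [30.0, 41.0, 0],
              [50.0, 72.0, 5]])
# Hypothetical reading scores one year later (Time 1)
y = np.array([78.0, 62.0, 85.0, 50.0])

model = LinearRegression().fit(X, y)      # fit a predictive model on the Time 0 measures
print(model.coef_, model.intercept_)      # weight of each oculomotor predictor
print(model.predict([[38.0, 52.0, 2]]))   # predicted reading score for a new child
```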

Keywords: vision, attention, oculomotor processes, reading, preschoolers

Procedia PDF Downloads 146
2129 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at the gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared dose conversion coefficients for major fetal organs in the abdominal-pelvic CT scan of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting CT technical parameters as well as the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose in all fetuses in abdominal-pelvic scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates for emergency CT scans as well as for patient dose monitoring.
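The helical-scan approximation described above (summing per-slice, CTDIvol-normalized conversion coefficients over the scan range) can be sketched as follows; the coefficient values, slice positions, and CTDIvol used here are hypothetical and only illustrate the bookkeeping, not the paper's actual coefficient library.

```python
import numpy as np

def fetal_organ_dose(ctdi_vol_mGy, conversion_coeffs, scan_start_cm, scan_end_cm, slice_thickness_cm=1.0):
    """Approximate a helical-scan organ dose as the sum of axial-slice contributions,
    following the approach described in the abstract (illustrative sketch, assumed data layout).
    conversion_coeffs: dict mapping slice position (cm) -> organ dose conversion coefficient (mGy/mGy)."""
    positions = np.arange(scan_start_cm, scan_end_cm, slice_thickness_cm)
    coeff_sum = sum(conversion_coeffs.get(float(z), 0.0) for z in positions)
    return ctdi_vol_mGy * coeff_sum

# Hypothetical coefficients for a few 1 cm slices over a fetal organ in one phantom
coeffs = {10.0: 0.02, 11.0: 0.05, 12.0: 0.07, 13.0: 0.05, 14.0: 0.02}
print(fetal_organ_dose(ctdi_vol_mGy=12.0, conversion_coeffs=coeffs, scan_start_cm=10.0, scan_end_cm=15.0))
```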

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 139
2128 Effect of Anionic Lipid on Zeta Potential Values and Physical Stability of Liposomal Amikacin

Authors: Yulistiani, Muhammad Amin, Fasich

Abstract:

The surface charge of a nanoparticle is a very important consideration in pulmonary drug delivery systems. The zeta potential (ZP) is related to the surface charge, which can predict the stability of nanoparticles such as nebules of liposomal amikacin. An anionic lipid such as 1,2-dipalmitoyl-sn-glycero-3-phosphatidylglycerol (DPPG) is expected to contribute to the physical stability of liposomal amikacin and an optimal ZP value. A suitable ZP can improve drug release profiles at specific sites in the alveoli as well as stability in the dosage form. This study aimed to analyze the effect of DPPG on ZP values and the physical stability of liposomal amikacin. Liposomes were prepared using the reversed-phase evaporation method. Liposomes consisting of DPPG, 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC), cholesterol, and amikacin were formulated in five different compositions, 0/150/5/100, 10/150/5/100, 20/150/5/100, 30/150/5/100, and 40/150/5/100 (w/v), respectively. A chloroform/methanol mixture in the ratio of 1:1 (v/v) was used as the solvent to dissolve the lipids. The systems were adjusted in phosphate buffer at pH 7.4. Nebules of liposomal amikacin were produced using a vibrating nebulizer and then characterized by X-ray diffraction, differential scanning calorimetry, particle size and zeta potential analysis, and scanning electron microscopy. The amikacin concentration from liposome leakage was determined by an immunoassay method. The study revealed that the presence of DPPG could increase the ZP value. The addition of 10 mg DPPG to the composition increased the ZP value to 3.70 mV (negatively charged). The optimum ZP value was reached at -28.780 ± 0.70 mV with a nebule particle size of 461.70 ± 21.79 nm. The nebulizing process altered parameters such as particle size, the conformation of lipid components, and the amount of surface charge of the nanoparticles, which could influence the ZP value. These parameters might have profound effects on the application of nebules in the alveoli; however, negatively charged nanoparticles were not expected to have a high ZP value in this system due to increased macrophage uptake and pulmonary clearance. Therefore, the liposome ratio of 20/150/5/100 (w/v) resulted in the most stable colloidal system and might be applicable to a pulmonary drug delivery system.

Keywords: anionic lipid, dipalmitoylphosphatidylglycerol, liposomal amikacin, stability, zeta potential

Procedia PDF Downloads 338
2127 Effect of Mistranslating tRNA Alanine on Polyglutamine Aggregation

Authors: Sunidhi Syal, Rasangi Tennakoon, Patrick O'Donoghue

Abstract:

Polyglutamine (polyQ) diseases are a group of neurodegenerative diseases caused by expanded repeats in the DNA encoding the amino acid glutamine (Q), which translate into an elongated polyQ tract in the protein. The pathological explanation is that the polyQ tract forms cytotoxic aggregates in neurons, leading to their degeneration. There are no cures or preventative treatments established for these diseases as of today, although their symptoms can be relieved. This study focuses specifically on Huntington's disease, a polyQ disease in which aggregation is caused by extended cytosine, adenine, guanine (CAG) codon repeats in the huntingtin (HTT) gene, which encodes the huntingtin protein. Using this principle, we attempted to create six models, which included mutating the wildtype tRNA alanine variant tRNA-AGC-8-1 to carry the glutamine anticodons CUG and UUG so that alanine is incorporated at glutamine sites in polyQ tracts. In the process, we were successful in obtaining tAla-8-1 CUG mutant clones in HTT exon 1 plasmids with a polyQ tract of 23Q (non-pathogenic model) and 74Q (disease model). These plasmids were transfected into mouse neuroblastoma cells to characterize protein synthesis and aggregation in normal and mistranslating cells and to investigate the effects of replacing glutamines with alanines on the disease phenotype. Notably, we observed no noteworthy differences in mean fluorescence between the CUG mutants for 23Q or 74Q; however, the Triton X-100 assay revealed a significant reduction in insoluble 74Q aggregates. We were unable to create a tAla-8-1 UUG mutant clone, and determining the difference between the effects of the two glutamine anticodons may enrich our understanding of the disease phenotype. In conclusion, by generating structural disruption with the amino acid alanine, it may be possible to find ways to minimize the toxicity of Huntington's disease caused by these polyQ aggregates. Further research is needed to advance knowledge in this field by identifying the cellular and biochemical impact of specific tRNA variants found naturally in human genomes.

Keywords: Huntington's disease, polyQ, tRNA, anticodon, clone, overlap PCR

Procedia PDF Downloads 39
2126 An Advanced Automated Brain Tumor Diagnostics Approach

Authors: Berkan Ural, Arif Eser, Sinan Apaydin

Abstract:

Medical image processing has become a challenging task nowadays, and processing brain MRI images is one of the more difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is essentially a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, intended for early prediction of brain tumors using advanced image processing and probabilistic neural network methods, respectively. In this approach, advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a colored circle by the computer-aided diagnostics program. Then, the tumor is segmented, and morphological processes are applied to increase the visibility of the tumor area. While this process continues, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and advanced classification steps, the tumor area and the type of the tumor are clearly obtained. A further aim of this study is to detect the severity of lesions across classes of brain tumor through advanced multi-class classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the overall results. The preliminary results demonstrate high classification accuracy for the neural network structure. Finally, these results also motivate us to extend this framework to detect and localize tumors in other organs.
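The pipeline above was implemented in MATLAB; as a rough, language-agnostic illustration of the detection and feature-extraction stages (denoising, automatic thresholding, morphological clean-up, shape-based features), here is a minimal Python sketch using scikit-image. It is not the authors' code, and the probabilistic neural network classification stage is omitted.

```python
import numpy as np
from skimage import filters, morphology, measure

def segment_candidate_tumor(mri_slice):
    """Minimal analogue of the described pipeline: denoise -> threshold -> morphology -> features.
    Illustrative only; the original work used MATLAB and a probabilistic neural network classifier."""
    smoothed = filters.gaussian(mri_slice, sigma=1.0)              # noise removal
    mask = smoothed > filters.threshold_otsu(smoothed)             # automatic segmentation
    mask = morphology.binary_opening(mask, morphology.disk(3))     # morphological clean-up
    labels = measure.label(mask)
    regions = measure.regionprops(labels)
    if not regions:
        return None
    tumor = max(regions, key=lambda r: r.area)                     # keep the largest bright region
    # shape-based features that could feed a classifier (area, eccentricity, solidity)
    return {"area": tumor.area, "eccentricity": tumor.eccentricity, "solidity": tumor.solidity}

print(segment_candidate_tumor(np.random.rand(128, 128)))           # dummy image for demonstration
```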

Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition

Procedia PDF Downloads 416
2125 Anticancer Effect of Doxorubicin Using Injectable Hydrogel

Authors: Prasamsha Panta, Da Yeon Kim, Ja Yong Jang, Min Jae Kim, Jae Ho Kim, Moon Suk Kim

Abstract:

Introduction: Among the many anticancer drugs used clinically, doxorubicin (Dox) is one of the most widely used for treating many types of solid tumors, such as liver, colon, breast, or lung tumors. Intratumoral injection of chemotherapeutic agents is a potentially more effective alternative to systemic administration because direct delivery of the anticancer drug to the target may improve both the stability and efficacy of anticancer drugs. Injectable in situ-forming gels have attracted considerable attention because they can achieve site-specific drug delivery, long action periods, and improved patient compliance. Objective: The objective of the present study is to confirm the clinical benefit of intratumoral chemotherapy using an injectable in situ-forming poly(ethylene glycol)-b-polycaprolactone diblock copolymer (MP) and Dox, increasing efficacy and reducing toxicity in patients with cancer. Methods: We prepared a biodegradable MP hydrogel and measured its viscosity to evaluate its thermo-sensitive properties. In vivo antitumor activity was assessed with normal saline, MP only, single free Dox, repeated free Dox, and Dox-loaded MP gel. The remaining amount of Dox was measured using HPLC after the mice were sacrificed. For cytotoxicity studies, the WST-1 assay was performed. Histological analysis was done with H&E and TUNEL staining, respectively. Results: The work in this experiment showed that Dox-loaded MP has biodegradable drug depot properties. Dox-loaded MP gels showed remarkable in vitro cytotoxic activity against cancer cells. Finally, this work indicates that injection of Dox-loaded MP allowed Dox to act effectively in the tumor and induced long-lasting suppression of tumor growth. Conclusion: This work has examined the potential clinical utility of intratumorally injected Dox-loaded MP gel, which shows a significant effect of higher local Dox retention compared with systemically administered Dox.

Keywords: injectable in-situ forming hydrogel, anticancer, doxorubicin, intratumoral injection

Procedia PDF Downloads 407
2124 Pattern of External Injuries Sustained during Bomb Blast Attacks in Karachi, Pakistan from 2000 to 2007

Authors: Arif Anwar Surani, Salman Ali, Asif Surani, Sohaib Zahid, Akbar Shoukat Ali, Zeeshan-Ul-Hassan Usmani, Joseph Varon, Salim Surani

Abstract:

Objective: Terrorism and suicide bomb blast attacks are commonplace in Karachi, Pakistan. During the years 2000 to 2007, there were over 60 bomb explosions resulting in more than 1500 casualties. These explosions produce a wide variety of external injuries. We undertook this study to evaluate the pattern of external injury produced after bomb blast attacks and to compare the injury profile resulting from explosions in open versus semi-confined blast environments. Method: A retrospective, cross-sectional study was conducted to review injuries sustained after bomb blast attacks in Karachi, Pakistan, from January 2000 to October 2007. Emergency medical records and medico-legal certificates of patients presenting to three major public sector hospitals of Karachi were evaluated using a self-designed proforma. Results: Data from 481 victims met the inclusion criteria and were incorporated in the final analysis. Of these, 63.6% were injured in open spaces and 36.4% were injured in semi-confined blast environments. Lacerations were the most commonly encountered external injury (47.7%), followed by penetrating wounds (15.3%). Lower and upper extremities were most commonly affected (38.6% and 19%, respectively). Open and semi-confined blast environments produced specific injury patterns and profiles (p < 0.001). Conclusions: Bomb blast attacks in Karachi produce an external injury pattern consistent with other studies, with the exception of an increased frequency of penetrating wounds. Semi-confined blast environments were associated with severe injuries. Further studies are required to better classify injuries and their severity based on standardized scoring systems. Effective emergency response systems must be designed to cope with mass casualties following bomb explosions.

Keywords: bomb blast attacks, injury pattern, external injury, open space, semi-confined space, blast environment

Procedia PDF Downloads 396
2123 Automatic Processing of Trauma-Related Visual Stimuli in Female Patients Suffering From Post-Traumatic Stress Disorder after Interpersonal Traumatization

Authors: Theresa Slump, Paula Neumeister, Katharina Feldker, Carina Y. Heitmann, Thomas Straube

Abstract:

A characteristic feature of post-traumatic stress disorder (PTSD) is the automatic processing of disorder-specific stimuli, which expresses itself in intrusive symptoms such as intense physical and psychological reactions to trauma-associated stimuli. This automatic processing plays an essential role in the development and maintenance of symptoms. The aim of our study was, therefore, to investigate the behavioral and neural correlates of automatic processing of trauma-related stimuli in PTSD. Although interpersonal traumatization is a form of traumatization that occurs frequently, it has not yet been sufficiently studied. For this reason, our study focused on patients suffering from interpersonal traumatization. While previous imaging studies on PTSD mainly used faces, words, or generally negative visual stimuli, our study presented complex trauma-related and neutral visual scenes. We examined 19 female patients suffering from PTSD and 19 healthy women as a control group. All subjects performed a geometric comparison task while lying in a functional magnetic resonance imaging (fMRI) scanner. Trauma-related and neutral visual scenes that were not relevant to the task were presented while the subjects performed the task. At the behavioral level, there were no significant differences in task performance between the two groups. At the neural level, the PTSD patients showed significant hyperactivation of the hippocampus for task-irrelevant trauma-related stimuli versus neutral stimuli when compared with healthy control subjects. Connectivity analyses also revealed altered connectivity between the hippocampus and other anxiety-related areas in PTSD patients. Overall, these findings suggest that fear-related areas are involved in PTSD patients' processing of trauma-related stimuli even when the stimuli are task-irrelevant.

Keywords: post-traumatic stress disorder, automatic processing, hippocampus, functional magnetic resonance imaging

Procedia PDF Downloads 196
2122 Beyond Information Failure and Misleading Beliefs in Conditional Cash Transfer Programs: A Qualitative Account of Structural Barriers Explaining Why the Poor Do Not Invest in Human Capital in Northern Mexico

Authors: Francisco Fernandez de Castro

Abstract:

The Conditional Cash Transfer (CCT) model gives monetary transfers to beneficiary families on the condition that they take specific education and health actions. According to the economic rationale of CCTs, the poor need incentives to invest in their human capital because they are trapped by a lack of information and misleading beliefs. If left to their own decisions, the poor will not be able to choose what is in their best interests. The basic assumption of the CCT model is that the poor need incentives to take care of their own education and health-nutrition. Because of the incentives (income cash transfers and conditionalities), beneficiary families are supposed to attend doctor visits and health talks, and children are expected to stay in school. These incentivized behaviors would produce outcomes such as better health and a higher level of education, which in turn would reduce poverty. Based on a grounded theory approach and a two-year period of qualitative data collection in northern Mexico, this study shows that this explanation is incomplete. In addition to information failure and inadequate beliefs, there are structural barriers in the everyday life of households that make health-nutrition and education investments difficult. In-depth interviews and observation work showed that the program takes for granted the local conditions in which beneficiary families should fulfill their co-responsibilities. The data challenged the program's assumptions and unveiled local obstacles not contemplated in the program's design. These findings have policy and research implications for the CCT agenda. They provide elements for future programming, given the gap between the CCT strategy as envisioned by policy designers and the program that beneficiary families experience on the ground. As for research consequences, these findings suggest new avenues for scholarly work regarding the causal mechanisms and social processes explaining CCT outcomes.

Keywords: conditional cash transfers, incentives, poverty, structural barriers

Procedia PDF Downloads 113
2121 Proposals of Exposure Limits for Infrasound From Wind Turbines

Authors: M. Pawlaczyk-Łuszczyńska, T. Wszołek, A. Dudarewicz, P. Małecki, M. Kłaczyński, A. Bortkiewicz

Abstract:

Human tolerance to infrasound is defined by the hearing threshold. Infrasound that cannot be heard (or felt) is not annoying and is not thought to have any other adverse or health effects. Recent research has largely confirmed earlier findings. ISO 7196:1995 recommends the use of the G-weighting characteristic for the assessment of infrasound, and there is a strong correlation between G-weighted SPL and perceived annoyance. The aim of this study was to propose exposure limits for infrasound from wind turbines. Only a few countries have set limits for infrasound; these limits are usually no higher than 85-92 dBG, and none of them are specific to wind turbines. Over the years, a number of studies have been carried out to determine hearing thresholds below 20 Hz. It has been recognized that 10% of young people are able to perceive 10 Hz at around 90 dB, and it has also been found that the difference in median hearing thresholds between young adults aged around 20 years and older adults aged over 60 years is around 10 dB, irrespective of frequency. This shows that older people (up to about 60 years of age) retain good hearing in the low-frequency range, while their sensitivity to higher frequencies is often significantly reduced. In terms of exposure limits for infrasound, the average hearing threshold corresponds to a tone with a G-weighted SPL of about 96 dBG, whereas infrasound at Lp,G levels below 85-90 dBG is usually inaudible. The individual hearing threshold can, therefore, be 10-15 dB lower than the average threshold, so the recommended limits for environmental infrasound could be 75 dBG or 80 dBG. It is worth noting that the G86 curve has been taken as the threshold of auditory perception of infrasound reached by 90-95% of the population, so the G75 and G80 curves can be taken as criterion curves for wind turbine infrasound. Finally, two assessment methods and corresponding exposure limit values have been proposed for wind turbine infrasound: Method I, based on G-weighted sound pressure level measurements, and Method II, based on frequency analysis in 1/3-octave bands in the frequency range 4-20 Hz. Separate limit values have been set for outdoor living areas in the open countryside (Area A) and for noise-sensitive areas (Area B). In the case of Method I, infrasound limit values of 80 dBG (for areas A) and 75 dBG (for areas B) have been proposed, while in the case of Method II, criterion curves G80 and G75 have been chosen (for areas A and B, respectively).
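For Method I, the G-weighted level is obtained by applying the ISO 7196 G-weighting corrections to the measured 1/3-octave band levels and energy-summing the result. The sketch below illustrates only that summation; the correction values shown are placeholders (the actual corrections must be taken from ISO 7196), and the band levels are hypothetical.

```python
import math

def g_weighted_level(band_levels_db, g_corrections_db):
    """Energy-sum 1/3-octave band SPLs after applying G-weighting corrections (per ISO 7196).
    The two lists are aligned by band (e.g. the 4-20 Hz bands); the correction values themselves
    must be taken from the standard and are not reproduced here."""
    total = sum(10 ** ((lp + g) / 10.0) for lp, g in zip(band_levels_db, g_corrections_db))
    return 10.0 * math.log10(total)

# Hypothetical measured band levels (dB) for the eight 1/3-octave bands from 4 to 20 Hz,
# and placeholder corrections (dB) that are NOT the ISO 7196 values
levels = [72.0, 70.0, 68.0, 66.0, 64.0, 62.0, 60.0, 58.0]
corrections = [-24.0, -20.0, -16.0, -12.0, -8.0, -4.0, 0.0, 4.0]
print(f"Lp,G = {g_weighted_level(levels, corrections):.1f} dBG")
```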

Keywords: infrasound, exposure limit, hearing thresholds, wind turbines

Procedia PDF Downloads 81
2120 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, can be considered one of the important challenges in developing countries for tourists trying to make the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist-guide service in which tourists can search places of interest based on their requested time of travel. To design the service, a three-tier architecture comprising data, logical processing, and presentation tiers was used. For implementation, open-source software, client- and server-side technologies (such as OpenLayers2, AJAX, and PHP), GeoServer as a map server, and the Web Feature Service (WFS) standard were used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine has been designed to enable tourists to find a tourist place based on province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology has the following effects: current tourists participate in a free data collection and sharing process for future tourists; data are shared and accessed in real time by all; a blind selection of travel destinations is avoided; and, significantly, the cost of providing such services decreases.
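As an illustration of how the tourist interface might query confirmed volunteer reports through GeoServer's WFS endpoint, here is a minimal sketch; the endpoint URL, layer name, and attribute names are assumptions made for the example, not the project's actual schema (GeoServer's CQL_FILTER vendor parameter is used for the attribute filter).

```python
import requests

# Hypothetical GeoServer endpoint; the real service would define its own URL and layers.
WFS_URL = "http://example.org/geoserver/wfs"

def search_places(province, city, visit_month):
    """Spatiotemporal WFS GetFeature query filtered with GeoServer's CQL_FILTER vendor parameter."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": "tourism:volunteer_places",          # assumed layer of confirmed volunteer reports
        "outputFormat": "application/json",
        "CQL_FILTER": (
            f"province='{province}' AND city='{city}' "
            f"AND best_month='{visit_month}' AND confirmed=true"   # assumed attribute names
        ),
    }
    response = requests.get(WFS_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["features"]

# Example: places recommended for travel in April
# print(search_places("Tehran", "Tehran", "April"))
```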

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 96
2119 Morphology Evolution in Titanium Dioxide Nanotubes Arrays Prepared by Electrochemical Anodization

Authors: J. Tirano, H. Zea, C. Luhrs

Abstract:

Photocatalysis has become established as a viable option in the development of processes for the treatment of pollutants and for clean energy production. This option is based on the ability of semiconductors to generate an electron flow through interaction with solar radiation. Owing to its electronic structure, TiO₂ is the most frequently used semiconductor in photocatalysis, although it suffers from high recombination of photogenerated charges and low solar energy absorption. An alternative for reducing these limitations is the use of nanostructured morphologies, which can be produced during the synthesis of TiO₂ nanotubes (TNTs). If vertically oriented nanostructures can be produced, it is possible to generate a greater contact area with the electrolyte and better charge transfer. At present, however, the development of these innovative structures still presents an important challenge for the development of competitive photoelectrochemical devices. This research focuses on establishing correlations between synthesis variables and 1D nanostructure morphology, which has a direct effect on photocatalytic performance. TNTs with controlled morphology were synthesized by two-step potentiostatic anodization of titanium foil. The anodization was carried out at room temperature in an electrolyte composed of ammonium fluoride, deionized water, and ethylene glycol. Subsequent thermal annealing of the as-prepared TNTs was conducted in air between 450 °C and 550 °C. The morphology and crystalline phase of the TNTs were characterized by SEM, EDS, and XRD analysis. As a result, synthesis conditions were established to produce nanostructures with specific morphological characteristics. Anatase was the predominant phase of the TNTs after thermal treatment. Nanotubes 10 μm in length, with 40 nm pore diameter and a surface-to-volume ratio of 50, are important in TiO₂-based photoelectrochemical applications due to their 1D characteristics, high surface-to-volume ratio, reduced radial dimensions, and high oxide/electrolyte interface area. Finally, this knowledge can be used to improve the photocatalytic activity of TNTs by making additional surface modifications with dopants that improve their efficiency.

Keywords: electrochemical anodization, morphology, self-organized nanotubes, TiO₂ nanotubes

Procedia PDF Downloads 156
2118 Safety Profile of Anti-Retroviral Medicine in South Africa Based on Reported Adverse Drug Reactions

Authors: Sarah Gounden, Mukesh Dheda, Boikhutso Tlou, Elizabeth Ojewole, Frasia Oosthuizen

Abstract:

Background: Antiretroviral therapy (ART) has been effective in reducing mortality and has resulted in an improvement in the prognosis of HIV-infected patients. However, treatment with antiretrovirals (ARVs) has led to the development of many adverse drug reactions (ADRs). It is, therefore, necessary to determine the safety profile of these medicines in a South African population in order to ensure safe and optimal medicine use. Objectives: The aim of this study was to quantify ADRs experienced with the different ARVs currently used in South Africa, to determine the safety profile of ARV medicines in South Africa based on reported ADRs, and to determine the ARVs with the lowest risk profile in specific patient populations. Methodology: This was a quantitative study. Individual case safety reports for the period January 2010 to December 2013 were obtained from the National Pharmacovigilance Center; these reports contained information on ADRs, ARV medicines, and patient demographics. Data were analysed to identify associations between the ADRs experienced, the ARV medicines used, and patient demographics. Results: A total of 1916 patient reports were received, of which 1534 met the inclusion criteria for the study. The ARVs with the lowest risk of ADRs were found to be lamivudine (0.51%, n=12), followed by the lopinavir/ritonavir combination (0.8%, n=19) and abacavir (0.64%, n=15). A higher incidence of ADRs was observed in females compared to males. The age group 31-50 years and the weight group 61-80 kg had the highest incidence of reported ADRs. Conclusion: This study found that the safest ARVs to use in a South African population are lamivudine, abacavir, and the lopinavir/ritonavir combination. Gender differences play a significant role in the occurrence of ADRs, and both anatomical and physiological differences account for this. An increased BMI (body mass index) in both men and women was associated with an increase in the incidence of ADRs related to ARV therapy.

Keywords: adverse drug reaction, antiretrovirals, HIV/AIDS, pharmacovigilance, South Africa

Procedia PDF Downloads 350
2117 In Silico Study of Cell Surface Structures of Parabacteroides distasonis Involved in Its Maintain Within the Gut Microbiota and Its Potential Pathogenicity

Authors: Jordan Chamarande, Lisiane Cunat, Corentine Alauzet, Catherine Cailliez-Grimal

Abstract:

The gut microbiota (GM) is now considered a new organ, mainly due to its microorganisms' specific biochemical interactions with their host. Although the mechanisms underlying host-microbiota interactions are not fully described, it is now well established that cell surface molecules and structures of the GM play a key role in this relationship. The study of the surface structures of GM members is also fundamental because of their role in the establishment of species in the versatile and competitive environment of the digestive tract and as potential virulence factors. Among these structures are capsular polysaccharides (CPS), fimbriae, pili, and lipopolysaccharides (LPS), all well described for their central role in microorganism colonization and communication with the host epithelium. The health-promoting Parabacteroides distasonis, which is part of the core microbiome, has recently received a lot of attention, showing beneficial properties for its host and potential as a new biotherapeutic product. However, to the best of the authors' knowledge, the cell surface molecules and structures of P. distasonis that allow it to be maintained within the GM have not been identified. Moreover, although P. distasonis is widely recognized as an intestinal commensal species with benefits for its host, it has also been recognized as an opportunistic pathogen. In this study, we report gene clusters potentially involved in the synthesis of the capsule and of fimbriae-like and pili-like cell surface structures in 26 P. distasonis genomes, and we applied the new RfbA typing classification in order to better understand and characterize the beneficial/pathogenic behaviour associated with P. distasonis strains. In this context, 2 different types of fimbriae, 3 types of pili, and up to 14 capsular polysaccharide loci were identified across the 26 genomes studied. Moreover, the addition of these data to the rfbA-type classification modified the outcome by rearranging rfbA genes and adding a fifth group to the classification. In conclusion, the strain variability in terms of external proteinaceous structures could explain the inter-strain differences previously observed in P. distasonis adhesion capacities and its potential pathogenicity.

Keywords: gut microbiota, Parabacteroides distasonis, capsular polysaccharide, fimbriae, pilus, O-antigen, pathogenicity, probiotic, comparative genomics

Procedia PDF Downloads 101
2116 Clinical Empathy: The Opportunity to Offer Optimal Treatment to People with Serious Illness

Authors: Leonore Robieux, Franck Zenasni, Marc Pocard, Clarisse Eveno

Abstract:

Empirical data from health psychology studies show the necessity of considering doctor-patient communication and its positive impact on outcomes such as patient satisfaction, treatment adherence, and physical and psychological wellbeing. In this line, the present research aims to define the role and determinants of effective doctor-patient communication during the treatment of patients with serious illness (peritoneal carcinomatosis). We carried out a prospective longitudinal study including patients treated for peritoneal carcinomatosis of various origins. From November 2016 to date, data were collected using validated questionnaires at two evaluation times: one month before surgery (T0) and one month after (T1). Patients reported (a) their anxiety and depression levels, (b) their standardized and individualized quality of life, and (c) how they perceived the surgeon's communication, attitude, and empathy. 105 volunteer patients (mean age = 58.18 years, SD = 10.24, 62.2% female) participated in the study. Peritoneal carcinomatosis arose from rare diseases (14%) and from colorectal (38%), eso-gastric (24%), and ovarian (8%) cancer. Three groups were defined according to the severity of the pathology and the treatment offered: (1) major surgical treatment with the goal of healing (53%), (2) repeated palliative surgical treatment (17%), and (3) patients for whom surgical treatment was ruled out and only a palliative approach was offered (30%). Results are presented according to Baron and Kenny's recommendations. The regression analyses show that only depression and anxiety are sensitive to the surgeon's communication and empathy. The main results show that good communication and a high level of empathy at T0 and T1 limit patients' depression and anxiety at T1. Results also indicate that the severity of the disease modulates this positive impact of communication: the better the communication, the lower the patients' levels of depression and anxiety. This effect is stronger for patients treated for more severe disease. These results confirm that, even in the case of severe disease, good communication between patient and physician remains a significant factor in promoting the wellbeing of patients. More specific training needs to be developed to promote empathic care.

Keywords: clinical empathy, determinants, healthcare, psychological wellbeing

Procedia PDF Downloads 121
2115 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing

Authors: Ahmed E. Hodaib, Muhammed A. Hashem

Abstract:

High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate friction with the ground and minimize the aerodynamic drag acting on the vehicle. Due to the complexity and high power requirements of electromagnetic levitation, we make use of the air in front of the capsule, which produces the majority of the drag, compressing it in two phases and injecting a proportion of it through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is drawn into the compressor, where it is diffused to decrease the Mach number (to around 0.8) so that it is suitable for the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed air from the two compressors is used to maintain a constant air pressure in an air tank, which supplies air for levitation. Dividing the total mass flow rate increases the achievable speed (Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are to be used to obtain the optimum values of all the design variables for given design inputs. Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are to be considered. The variation of these variables with capsule velocity and air pressure is to be studied.
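As a worked illustration of the choking constraint behind the Kantrowitz limit mentioned above, the sketch below evaluates the standard isentropic area-Mach relation, which sets how far the bypass passage around the capsule can contract before the bypassed flow chokes; it is only a first-order illustration, not the authors' optimization code.

```python
import math

def area_ratio_isentropic(mach, gamma=1.4):
    """Isentropic area-Mach relation A/A*: local area relative to the sonic throat area
    for a given Mach number. This relation underlies Kantrowitz-limit style estimates of
    how much flow can bypass the capsule before the tube flow chokes. Illustrative only."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * mach ** 2)
    return (1.0 / mach) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

# Example: at an approach Mach number of 0.8, the bypass passage can only contract to
# roughly 1/(A/A*) of the free area around the capsule before the bypass flow chokes.
ratio = area_ratio_isentropic(0.8)
print(f"A/A* at M=0.8: {ratio:.3f}  ->  maximum contraction ~ {1.0/ratio:.3f} of free area")
```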

Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing

Procedia PDF Downloads 329
2114 Amelioration of Lipopolysaccharide Induced Murine Colitis by Cell Wall Contents of Probiotic Lactobacillus Casei: Targeting Immuno-Inflammation and Oxidative Stress

Authors: Vishvas N. Patel, Mehul Chorawala

Abstract:

Currently, to the best of the authors' knowledge, there are few effective therapeutic agents to limit the intestinal mucosal damage associated with inflammatory bowel disease (IBD). Clinical studies have shown beneficial effects of several probiotics in patients with IBD. Probiotics are live organisms that confer a health benefit on the host by modulating immuno-inflammation and oxidative stress. Although probiotics improve disease severity in mice and humans, very little is known about the specific contribution of the cell wall contents of probiotics in IBD. Herein, we investigated the ameliorative potential of cell wall contents of Lactobacillus casei (LC) in lipopolysaccharide (LPS)-induced murine colitis. Methods: Colitis was induced in LPS-sensitized rats by intracolonic instillation of LPS (50 µg/rat) for 14 consecutive days. Concurrently, cell wall contents isolated from 10³, 10⁶, and 10⁹ CFU of LC were given subcutaneously to each rat for 21 days, with sulfasalazine (100 mg/kg, p.o.) as the standard. The severity of colitis was assessed by body weight loss, food intake, stool consistency, rectal bleeding, colon weight/length, spleen weight, and histological analysis. Colonic inflammatory markers (myeloperoxidase (MPO) activity, C-reactive protein, and proinflammatory cytokines) and oxidative stress markers (malondialdehyde, reduced glutathione, and nitric oxide) were also assayed. Results: Cell wall contents isolated from 10⁶ and 10⁹ CFU of LC significantly reduced the severity of colitis, reducing body weight loss and the incidence of diarrhea and bleeding, and improving food intake, colon weight/length, spleen weight, and microscopic damage to the colonic mucosa. The treatment also reduced the levels of inflammatory and oxidative stress markers and boosted antioxidant molecules. However, cell wall contents isolated from 10³ CFU were ineffective. Conclusion: Cell wall contents of LC attenuate LPS-induced colitis by modulating immuno-inflammation and oxidative stress.

Keywords: probiotics, Lactobacillus casei, immuno-inflammation, oxidative stress, lipopolysaccharide, colitis

Procedia PDF Downloads 86
2113 The Influence of Characteristics of Waste Water on Properties of Sewage Sludge

Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu, Catalina Topa

Abstract:

In the field of environmental protection, strict and clear rules are imposed and respected in the EU, and also in Romania. Among these is mandatory municipal wastewater treatment. Our study involved the Municipal Wastewater Treatment Plant (MWWTP) of Galati. The MWWTP began operating at the end of 2011, and its technology is one of the most modern used in the EU. Moreover, to our knowledge, it is the first technology of this kind used in the region. Until commissioning, municipal wastewater was discharged directly into the Danube without any treatment. Besides the benefits of depollution, a new problem has arisen: the accumulation of increasingly large quantities of sewage sludge. Therefore, it is extremely important to find economically feasible and environmentally friendly solutions. One of the most feasible methods of disposing of sewage sludge is its use on agricultural land. Sewage sludge can be used in agriculture if it is monitored in terms of physicochemical properties (pH, nutrients, heavy metals, etc.), in order not to contribute to soil pollution and not to affect chemical and biological balances, which are relatively fragile. In this paper, 16 physico-chemical parameters were monitored. Experimental tests were carried out on wastewater samples, sewage sludge samples, and treated water samples. Testing was conducted with electrochemical methods (pH, conductivity, TDS); the parameters N-total (mg/L), P-total (mg/L), N-NH4 (mg/L), N-NO2 (mg/L), N-NO3 (mg/L), Fe-total (mg/L), Cr-total (mg/L), Cu (mg/L), Zn (mg/L), Cd (mg/L), Pb (mg/L), and Ni (mg/L) were determined by spectrophotometric methods using a NOVA 60 spectrophotometer and specific kits. Analysing the results, we concluded that although the sewage sludges contain heavy metals, these are present in small quantities and will not affect the land on which the sludges will be deposited. Also, the amount of nutrients they contain is appreciable. These features indicate that the sludge can be safely used in agriculture, with the advantage of being a cheap fertilizer. Acknowledgement: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation – UEFISCDI, PNCDI III project, 79BG/2017, Efficiency of the technological process for obtaining of sewage sludge usable in agriculture, Efficient.

Keywords: municipal wastewater, physico-chemical properties, sewage sludge, technology

Procedia PDF Downloads 207
2112 Impact of Preksha Meditation on Academic Anxiety of Female Teenagers

Authors: Neelam Vats, Madhvi Pathak Pillai, Rajender Lal, Indu Dabas

Abstract:

The pressure to score higher marks in order to gain admission to a higher-ranked institution has become a social stigma for school students. It leads to various social and academic pressures on them, causing psychological anxiety. The human mind is always surrounded by desires, emotions, and passions, which usually disturb our mental peace. In such a scenario, we look for a solution that helps remove all the obstacles of the mind and makes us mentally peaceful and strong enough to deal with all kinds of pressure. Preksha meditation is one such technique, which aims to bring about positive changes for an overall transformation of personality. Hence, the present study was undertaken to assess the impact of Preksha Meditation on the academic anxiety of female teenagers. The study was conducted on 120 high school students from the capital city of India. All students were in the age group of 13-15 years and belonged to a similar social and economic status. The sample was equally divided into two groups, i.e., an experimental group (N = 60) and a control group (N = 60). Subjects in the experimental group were given the Preksha Meditation intervention by a trained instructor for one hour per day, six days a week, for three months in the first experimental stage and another three months in the second experimental stage. Subjects in the control group were not assigned any specific activity; rather, they continued their normal routine activities as usual. The Academic Anxiety Scale was used to collect data at multiple stages, i.e., the pre-experimental stage, post-experimental stage phase I, and post-experimental stage phase II. The data were statistically analyzed by computing the two-tailed t-test for inter-group comparisons and Sandler's A test for intra-group comparisons, with alpha = 0.05 (p < 0.05). The study concluded that longer-duration practice of Preksha Meditation brings about very significant and beneficial changes in the pattern of academic anxiety.
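For readers unfamiliar with Sandler's A test mentioned above, the following is a minimal sketch of its computation for paired pre/post scores and of its algebraic relation to the paired t-test; the difference scores used are invented for illustration and are not the study's data.

```python
import math

def sandlers_A(differences):
    """Sandler's A statistic for paired (pre/post) scores: A = sum(D^2) / (sum(D))^2."""
    sum_d = sum(differences)
    sum_d2 = sum(d * d for d in differences)
    return sum_d2 / (sum_d ** 2)

def paired_t_from_A(A, n):
    """Equivalent paired t value: t = sqrt((n - 1) / (n*A - 1))."""
    return math.sqrt((n - 1) / (n * A - 1))

# Hypothetical pre-post academic anxiety differences for a small group (illustrative only)
diffs = [4, 6, 3, 5, 7, 2, 5, 4]
A = sandlers_A(diffs)
print(f"A = {A:.3f}, equivalent t = {paired_t_from_A(A, len(diffs)):.2f}")
```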

Keywords: academic anxiety, academic pressure, Preksha, meditation

Procedia PDF Downloads 130
2111 Relevance Of Cognitive Rehabilitation Amongst Children Having Chronic Illnesses – A Theoretical Analysis

Authors: Pulari C. Milu Maria Anto

Abstract:

Background: Cognitive rehabilitation/retraining (CR) has been used variously in the research literature to represent non-pharmacological interventions that target cognitive impairments with the goal of ameliorating cognitive function and functional behaviors to optimize quality of life. Alongside adults' cognitive impairments, the need to address acquired cognitive impairments (due to chronic illnesses such as CHD, congenital heart disease, or ALL, acute lymphoblastic leukemia) among child populations is inevitable. It also has to be emphasized in the same way as we consider the cognitive impairments seen in children with neurodevelopmental disorders. Methods: All published brain imaging studies (Hermann, B. et al., 2002; Khalil, A. et al., 2004; Follin, C. et al., 2016; etc.) and studies emphasizing cognitive impairments in attention, memory, and/or executive function and behavioral aspects (Henkin, Y. et al., 2007; Bellinger, D. C., & Newburger, J. W., 2010; Cheung, Y. T., et al., 2016) that could be identified were reviewed. Based on a systematic review of the literature from 2000 to 2021 covering different brain imaging studies, the increased risk of neuropsychological and psychosocial impairments is briefly described. The clinical and research gap in the area is discussed. Results: 30 papers, both Indian studies and foreign publications (Sage journals, Delhi Psychiatry Journal, Wiley Online Library, APA PsycNet, Springer, Elsevier, Developmental Medicine and Child Neurology), were identified. Conclusions: In India, a very limited number of brain imaging and neuropsychological studies have been done indicating the cognitive deficits of children who have, or have undergone treatment for, chronic illness. None of the studies has emphasized the relevance or the need of implementing CR among such children, even though it is high time to address this, and it is still not established. This review of the current evidence is intended to give rehabilitation professionals insight into establishing child-specific CR and to publish new findings regarding the implementation of CR among such children. This study also raises awareness of the need to consider the cognitive aspects of children with acquired cognitive deficits (due to chronic illness), especially during their critical developmental period.

Keywords: cognitive rehabilitation, neuropsychological impairments, congenital heart diseases, acute lymphoblastic leukemia, epilepsy, and neuroplasticity

Procedia PDF Downloads 178
2110 Effect of Retained Posterior Horn of Medial Meniscus on Functional Outcome of ACL Reconstructed Knees

Authors: Kevin Syam, Devendra K. Chauhan, Mandeep Singh Dhillon

Abstract:

Background: The posterior horn of the medial meniscus (PHMM) is a secondary stabilizer against anterior translation of the tibia. Cadaveric studies have revealed increased strain on the ACL graft and greater instrumented laxity in posterior horn-deficient knees, and clinical studies have shown a higher prevalence of radiological OA after ACL reconstruction combined with meniscectomy. However, functional outcomes of ACL-reconstructed knees in the absence of the posterior horn are less discussed, and the specific role of the posterior horn is ill-documented. This study evaluated functional and radiological outcomes in posterior horn-preserved and posterior horn-sacrificed ACL-reconstructed knees. Materials: Of the 457 patients who had ACL reconstruction over a 6-year period, 77 cases with a minimum follow-up of 18 months were included in the study after strict exclusion criteria (associated lateral meniscus injury, other ligamentous injuries, significant cartilage degeneration, repeat injury, and contralateral knee injuries were excluded). 41 patients with intact menisci were compared with 36 patients with an absent posterior horn of the medial meniscus. Radiological and clinical tests for instability were conducted, and knees were evaluated using the subjective International Knee Documentation Committee (IKDC) score and the Orthopädische Arbeitsgruppe Knie (OAK) score. Results: We found a trend towards a better overall outcome (OAK) in cases with an intact PHMM at an average follow-up of 43.03 months (p = 0.082). Cases with an intact PHMM had significantly better objective stability (p = 0.004). No significant differences were noted in the subjective IKDC score (p = 0.526) or the functional OAK outcome (category D) (p = 0.363). More cases with an absent posterior horn had evidence of radiological OA (p = 0.022) even at mid-term follow-up. Conclusion: Even though the overall OAK and subjective IKDC scores did not differ significantly between the two subsets, the poorer objective stability and greater radiological OA noted in the absence of the PHMM indicate the importance of preserving this part of the meniscus.

Keywords: ACL, functional outcome, knee, posterior horn of medial meniscus

Procedia PDF Downloads 358
2109 Multiscale Analysis of Shale Heterogeneity in Silurian Longmaxi Formation from South China

Authors: Xianglu Tang, Zhenxue Jiang, Zhuo Li

Abstract:

Characterization of shale heterogeneity at multiple scales is an important step in evaluating the size and spatial distribution of shale gas reservoirs in sedimentary basins. The origin of shale heterogeneity has long been a hot research topic, because it governs both the description of shale characteristics at the micro scale and the prediction of high-quality reservoirs at the macro scale. Multiscale shale heterogeneity was examined based on thin section observation, FIB-SEM, QEMSCAN, TOC, XRD, mercury intrusion porosimetry (MIP), and nitrogen adsorption analysis of 30 core samples from the Silurian Longmaxi Formation. Results show that shale heterogeneity can be characterized by pore structure and mineral composition. Pore heterogeneity is expressed by pores of different sizes at the nm-μm scale. Macropores (pore diameter > 50 nm) account for a larger share of pore volume than mesopores (pore diameter 2-50 nm) and micropores (pore diameter < 2 nm), but they contribute a lower specific surface area. Fractal dimensions of the pores derived from nitrogen adsorption data are higher than 2.7, and those from MIP data are higher than 2.8, indicating an extremely complex pore structure. This complexity is mainly due to organic matter and clay minerals with intricate pore network structures, and diagenesis complicates it further. Mineral heterogeneity is expressed by mineral grains, laminae, and differing lithologies at the nm-km scale across continuously changing horizons. By analyzing the change in mineral composition at each scale, the random arrangement of minerals in roughly equal proportions, seasonal climate changes, large changes in the sedimentary environment, and provenance supply are considered the main causes of mineral heterogeneity from the microscopic to the macroscopic scale. Owing to scale effects, the change in multiscale shale heterogeneity is a discontinuous process, and there is a transition boundary between homogeneous and inhomogeneous behavior. Therefore, a multiscale shale heterogeneity model is established by defining four types of homogeneous unit at different scales, which can be used to guide the prediction of shale gas distribution from the micro scale to the macro scale.
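The pore-size classes used above follow fixed diameter thresholds, so the bookkeeping can be illustrated with a short sketch; the pore-size values and column names below are hypothetical placeholders, not the Longmaxi measurements.

```python
# Illustrative sketch only: classify pore diameters (nm) into the size classes
# used above (micropore < 2 nm, mesopore 2-50 nm, macropore > 50 nm) and sum
# the share of pore volume in each class. Data are hypothetical.
import pandas as pd

def pore_class(diameter_nm: float) -> str:
    if diameter_nm < 2:
        return "micropore"
    elif diameter_nm <= 50:
        return "mesopore"
    return "macropore"

# Hypothetical pore-size distribution, e.g. merged from MIP and N2 adsorption
psd = pd.DataFrame({
    "diameter_nm": [1.2, 3.5, 12.0, 48.0, 75.0, 300.0],
    "incremental_volume_cm3_g": [0.002, 0.004, 0.006, 0.003, 0.010, 0.015],
})
psd["class"] = psd["diameter_nm"].apply(pore_class)
shares = psd.groupby("class")["incremental_volume_cm3_g"].sum()
print((shares / shares.sum()).round(2))  # volume fraction per pore class
```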

Keywords: heterogeneity, homogeneous unit, multiscale, shale

Procedia PDF Downloads 450
2108 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical for evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots across three sites in Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
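A minimal sketch of this kind of site-transfer evaluation is shown below, assuming synthetic data: each of three sites is held out in turn, a feed-forward network (scikit-learn's MLPRegressor as a stand-in for the DNN) and a random forest are trained on the remaining sites, and relative RMSE is compared. The feature matrix, target, and site labels are placeholders, not the Sentinel-1/2 or topoedaphic data from the study.

```python
# Sketch of leave-one-site-out transferability testing for biomass regression.
# Synthetic data stand in for the Sentinel-1/2 and topoedaphic predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(150, 10))                        # 150 plots x 10 predictors
y = 10 + X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 150)   # "biomass"
site = np.repeat([0, 1, 2], 50)                       # three sites (North-South gradient)

models = {
    "DNN (MLP)": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(64, 32),
                                            max_iter=2000, random_state=0)),
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
}

for name, model in models.items():
    rel_rmse = []
    # Hold out one whole site per fold to enforce spatial independence
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        rmse = mean_squared_error(y[test_idx], pred) ** 0.5
        rel_rmse.append(rmse / y[test_idx].mean())    # relative RMSE per held-out site
    print(f"{name}: mean relative RMSE = {np.mean(rel_rmse):.2f}")
```

Holding out a whole site per fold, rather than random plots, is what makes the reported relative RMSE a test of transferability rather than of within-site interpolation.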

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 218
2107 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions, and gross domestic product (GDP) for Turkey using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show strongly significant long-run effects on GDP. The long-run equilibrium in the VECM suggests negative long-run causalities from the consumption of petroleum products and the direct combustion of crude oil, coal, and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. In the short run, there is a bidirectional relationship between electricity consumption and natural gas consumption: a positive unidirectional causality runs from electricity consumption to natural gas consumption, while a negative unidirectional causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run. Overall, the results support the argument that relationships exist among environmental quality, energy use, and economic output, but that the associations differ by energy source in the case of Turkey over the period 1980-2010.
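The unit-root / cointegration / VECM pipeline described here can be sketched with statsmodels, as below, assuming simulated annual series in place of the Turkish data. The paper uses the Phillips-Perron test; the ADF test is used in the sketch as a stand-in (a Phillips-Perron implementation is available in the `arch` package), and all series names and values are hypothetical.

```python
# Sketch of the reported pipeline under assumed, simulated data: unit-root test,
# Johansen cointegration rank selection, then a VECM for short- and long-run
# dynamics. The paper uses the Phillips-Perron test; ADF is a stand-in here.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import select_coint_rank, VECM

rng = np.random.default_rng(1)
n = 31  # annual observations, 1980-2010
trend = np.cumsum(rng.normal(0.03, 0.02, n))          # shared stochastic trend
data = pd.DataFrame({
    "ln_gdp":         trend + rng.normal(0, 0.01, n),
    "ln_co2":         0.8 * trend + rng.normal(0, 0.02, n),
    "ln_electricity": 0.6 * trend + rng.normal(0, 0.02, n),
    "ln_gas":         0.5 * trend + rng.normal(0, 0.03, n),
})

# 1) Stationarity of the levels (unit roots expected for I(1) series)
for col in data:
    p = adfuller(data[col])[1]
    print(f"{col}: ADF p-value = {p:.3f}")

# 2) Johansen procedure: choose the cointegration rank at the 5% level
rank = select_coint_rank(data, det_order=0, k_ar_diff=1, method="trace", signif=0.05)
print("selected cointegration rank:", rank.rank)

# 3) VECM: alpha (adjustment) and beta (long-run) terms carry the causal structure
if rank.rank > 0:
    res = VECM(data, k_ar_diff=1, coint_rank=rank.rank, deterministic="ci").fit()
    print(res.summary())
```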

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 507
2106 A Systematic Review on Orphan Drug Pricing and Pricing Challenges

Authors: Seyran Naghdi

Abstract:

Background: Orphan drug development is limited by very high costs attributable to research and development and to a small market size. How health policymakers address this challenge, considering both the supply and demand sides, needs to be explored in order to direct policies and plans in the right way. Price is an important signal both for pharmaceutical companies' profitability and for patients' access. Objective: This study aims to identify orphan drug price-setting patterns and approaches in health systems through a systematic review of the available evidence. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used. MEDLINE, Embase, and Web of Science were searched via appropriate search strategies. The Medical Subject Headings (MeSH) terms were 'cost and cost analysis' for pricing, and 'orphan drug production' and 'orphan drug' for orphan drugs. Critical appraisal was performed with the Joanna Briggs tool. A Cochrane data extraction form was used to obtain data on the studies' characteristics, results, and conclusions. Results: In total, 1,197 records were found: 640 from Embase, 327 from Web of Science, and 230 from MEDLINE. After removing duplicates, 1,056 studies remained, of which 924 were removed in the primary screening phase; 26 studies were finally included for data extraction. The majority of the studies (>75%) are from developed countries, and of these, approximately 80% are from European countries. Approximately 85% of the evidence has been produced in the last decade. Conclusions: There is huge variation in price-setting among countries, related to the specific pharmaceutical market structure and to the thresholds at which governments choose to intervene in the pricing process. On the other hand, there is some evidence that the very high costs of orphan drug development can be reduced through early agreements between pharmaceutical firms and governments. Further studies should focus on how governments could incentivize companies to agree to provide these drugs at lower prices.

Keywords: orphan drugs, orphan drug production, pricing, costs, cost analysis

Procedia PDF Downloads 162
2105 Embedding Employability in the Curriculum: Experiences from New Zealand

Authors: Narissa Lewis, Susan Geertshuis

Abstract:

The global and national employability agenda is changing the higher education landscape, as academic staff are now responsible for developing employability capabilities and attributes in addition to delivering discipline-specific content and skills. They realise that the shift towards teaching sustainable capabilities means a shift in the way they teach, but what that shift should be, or how they should bring it about, is unclear. As part of a nationally funded project, representatives from several New Zealand (NZ) higher education institutions and the NZ Association of Graduate Employers partnered to discover, trial, and disseminate ways of embedding employability in the curriculum. Findings from four focus groups (n ≈ 75) and individual interviews (n = 20) with staff from several NZ higher education institutions identified factors that enable or hinder embedded employability development within their respective institutions. Participants believed that higher education institutions have a key role in preparing graduates for successful lives and careers; however, this requires a significant shift in culture within their respective institutions. Participants cited three main barriers: lack of strategic direction, support, and guidance; lack of understanding and awareness of employability; and lack of resourcing and staff capability. Without adequate understanding and awareness of employability, participants believed, it is difficult to understand what employability is, let alone how it can be embedded in the curriculum. This presentation describes some of the impacts that the employability agenda has on staff as they try to move from traditional to contemporary forms of teaching to develop students' employability attributes. Changes at the institutional level are required to support contemporary forms of teaching; however, this is often beyond the sphere of influence of teaching staff. The study identified that small changes to teaching practices were necessary, and a simple model to facilitate change from traditional to contemporary forms of teaching was developed. The model provides a framework for identifying small but impactful teaching practices, and exemplar teaching practices were identified. These practices were evaluated for transferability into other contexts to encourage small but impactful changes that embed employability in the curriculum.

Keywords: curriculum design, change management, employability, teaching exemplars

Procedia PDF Downloads 326