2204 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain
Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper
Abstract:
Additive Manufacturing processes are becoming increasingly established in the industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining in industrial importance for several years. The LBM process chain – from material storage to machine set-up and component post-processing – requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized. These operations are often not performed in a standardized manner, but depend on the experience of the machine operator, e.g., levelling of the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of the component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effect of the various metal powders. Faulty execution of the operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all the steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as the adherence to and correct execution of safety-relevant operations. The corresponding lean method 5S will therefore be applied, in order to develop approaches in the form of recommended actions that standardize the work processes. These approaches will then be evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts as well as standardizing the workflow are likely to increase reproducibility. 
Organizing the operational steps and production environment decreases the hazards of material handling and consequently improves work safety.
Keywords: additive manufacturing, lean production, reproducibility, work safety
Procedia PDF Downloads 184
2203 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis
Authors: Syed Asif Hassan, Tabrej Khan
Abstract:
Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). At present, the available therapies either do not halt the progression of the disease or have side effects that limit the long-term use of the current Disease Modifying Therapies (DMTs). Given this treatment failure, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine 1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. Such novel drug-like analog molecules would decrease the frequency of relapses (recurrence of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated approach involving a ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types (USRCAT)) was employed to identify non-toxic drug-like analogs of the approved DMTs. The potency of the drug-like analog molecules to cross the Blood Brain Barrier (BBB) was estimated. In addition, molecular docking and simulation using AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the given LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program to evaluate the binding energy of ligands bound to the ligand binding domain of the Mtb LprG lipoprotein; a ligand with higher hypothetical affinity has a more negative score. Further, non-specific ligands were screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT, Lipinski's values, toxicity and BBB analyses, the drug-like analogs of fingolimod and BG-12, namely RTL and CHEMBL1771640 respectively, are non-toxic and permeable to the BBB.
The successful docking and DSX analysis showed that RTL and CHEMBL1771640 could bind to the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). In this study, we also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein
Procedia PDF Downloads 257
2202 The Feasibility and Usability of Antennas Silence Zone for Localization and Path Finding
Authors: S. Malebary, W. Xu
Abstract:
Antennas are important components that enable transmitting and receiving signals in mid-air (wireless). The radiation pattern of an omni-directional (i.e., dipole) antenna reflects the variation of power radiated by the antenna as a function of direction when transmitting. As the performance of the antenna is the same in transmitting and receiving, it also reflects the sensitivity of the antenna in different directions when receiving. The main observation when dealing with omni-directional antennas, regardless of the application, is that they radiate power equally in all directions in reference to the Equivalent Isotropically Radiated Power (EIRP). Disseminating radio frequency signals in an omni-directional manner forms a doughnut-shaped field with a cone in the middle of the elevation plane (when the antenna is mounted vertically). In this paper, we investigate the existence of this physical phenomenon, namely the silence cone zone (the zone where radiated power is nulled). First, we overview antenna types and the properties that have the major impact on the shape of the electromagnetic field. Then we model various off-the-shelf dipoles in Matlab based on the antennas' features (dimensions, gain, operating frequency, etc.) and compare the resulting radiation patterns. After that, we validate the existence of the null zone in omni-directional antennas by conducting experiments and generating waveforms (using USRP1 and USRP2) at various frequencies, using different types of antennas and gains, indoors and outdoors. We capture the generated waveforms around the antennas' null zone in the reactive, near, and far fields with a spectrum analyzer mounted on a drone, using various off-the-shelf antennas. We analyze the captured signals in RF-Explorer and plot the impact on received power and signal amplitude inside and around the null zone.
Finally, the evaluation and measurements confirm the existence of null zones in omni-directional antennas. We plan to extend this work in the near future to investigate the usability of the null zone for various applications such as localization and path finding.
Keywords: antennas, amplitude, field regions, frequency, FSPL, omni-directional, radiation pattern, RSSI, silence zone cone
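The null cone follows directly from the ideal dipole pattern. As a minimal sketch (in Python rather than the Matlab models used in the abstract, and for an idealized half-wave dipole rather than any specific off-the-shelf antenna), the normalized field strength collapses toward zero as the observation angle approaches the antenna axis:

```python
import math

def halfwave_dipole_pattern(theta_rad):
    """Normalized field pattern of an ideal half-wave dipole:
    F(theta) = cos((pi/2) * cos(theta)) / sin(theta),
    with theta measured from the antenna axis. The pattern nulls
    along the axis (theta -> 0 or pi), i.e. the 'silence cone'."""
    s = math.sin(theta_rad)
    if abs(s) < 1e-9:          # exactly on the axis: perfect null
        return 0.0
    return abs(math.cos(math.pi / 2 * math.cos(theta_rad)) / s)

# Broadside (theta = 90 deg): maximum radiation
print(round(halfwave_dipole_pattern(math.radians(90)), 3))  # 1.0
# Near the axis (theta = 5 deg): deep inside the null zone
print(round(halfwave_dipole_pattern(math.radians(5)), 3))   # 0.069
```

Real antennas only approximate this ideal (finite ground planes and mounting hardware partially fill the null), which is why the drone measurements above are needed to map the zone in practice.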
Procedia PDF Downloads 304
2201 On Cloud Computing: A Review of the Features
Authors: Assem Abdel Hamed Mousa
Abstract:
The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, at all scales (from 1" displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips: it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the Tabs and for the Boards (which are a commercial product now). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer.
With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers will pay monthly fees, similar to a cable bill, for services that feed into their phones.
Keywords: internet, cloud computing, ubiquitous computing, big data
Procedia PDF Downloads 384
2200 Preparation and Properties of Polylactic Acid/MDI Modified Thermoplastic Starch Blends
Authors: Sukhila Krishnan, Smita Mohanty, Sanjay K. Nayak
Abstract:
Polylactide (PLA) and thermoplastic starch (TPS) are among the most promising bio-based materials presently available on the market. Polylactic acid is a versatile biodegradable polyester with a wide range of applications in various fields, and starch is a biopolymer that is renewable, cheap, and extensively available. The expected increase in the cost of petroleum-based commodities over the next decades opens a bright future for these materials, and their biodegradability and compostability are added advantages in applications that are difficult to recycle. Currently, thermoplastic starch (TPS) is used as a substitute for synthetic plastic in several commercial products. However, TPS shows some limitations, mainly due to its brittle and hydrophilic nature, which have to be resolved to widen its application. The objective of the work reported here was to chemically modify TPS, developing a solution process to control its chemical structure and reduce its water sensitivity, and then to blend it with PLA to improve the compatibility between PLA and TPS. The method involves cleavage of the starch amylose and amylopectin chain backbone during plasticization with glycerol and water in a batch mixer; the prepared TPS was then reacted in solution with a diisocyanate, 4,4'-methylenediphenyl diisocyanate (MDI). This diisocyanate has been used before with great success for the chemical modification of TPS surfaces. The reaction forms urethane linkages between the reactive isocyanate groups (–NCO) and the hydroxyl groups (–OH) of starch as well as of glycerol. The newly synthesized polymer shows reduced crystallinity, lower hydrophilicity, and enhanced compatibility with other polymers. The TPS was prepared in a Haake Rheomix 600 batch mixer with roller rotors operating at 50 rpm. The produced material was then refluxed for 5 h with MDI in toluene under constant stirring.
Finally, the modified TPS was melt blended with PLA in different compositions. The blends obtained show improved mechanical properties. The materials produced are characterized by Fourier Transform Infrared Spectroscopy (FTIR), DSC, X-ray diffraction, and mechanical tests.
Keywords: polylactic acid, thermoplastic starch, methylenediphenyl diisocyanate, polylactide (PLA)
Procedia PDF Downloads 386
2199 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review
Authors: Anicet Dansou
Abstract:
Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional Fiber Reinforced Polymer (FRP) composite: it has good mechanical performance and better temperature stability, and it also better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multiscale approaches have emerged as one of the most suitable methods for their simulation. These approaches aim to incorporate information about microscale constituent behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be implicitly introduced in the macroscale model: approaches of this type are called non-classical. A representative volume element (RVE) is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is explicitly introduced in the macroscale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances, and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering, is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete
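The analytical homogenization mentioned in the abstract can be illustrated with the simplest possible case. The sketch below is a hypothetical illustration, not taken from any of the reviewed TRC models, with made-up material values; it computes the classical Voigt (iso-strain) and Reuss (iso-stress) bounds on the effective stiffness of a two-phase RVE:

```python
def voigt_reuss_bounds(e_fiber, e_matrix, vf):
    """Elementary analytical homogenization over an RVE:
    Voigt (upper) and Reuss (lower) bounds on the effective
    Young's modulus of a two-phase composite, where vf is the
    fiber volume fraction. Any real multiscale TRC model refines
    these bounds with geometry and damage information."""
    e_voigt = vf * e_fiber + (1 - vf) * e_matrix          # iso-strain
    e_reuss = 1.0 / (vf / e_fiber + (1 - vf) / e_matrix)  # iso-stress
    return e_voigt, e_reuss

# Illustrative values: glass textile (~72 GPa) in a cementitious
# matrix (~30 GPa) at 10% fiber volume fraction
upper, lower = voigt_reuss_bounds(72.0, 30.0, 0.10)
print(round(upper, 1), round(lower, 1))  # 34.2 31.9
```

The gap between the two bounds is exactly the information a finer-scale (mesoscale or full computational homogenization) model is meant to supply.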
Procedia PDF Downloads 110
2198 Determination of Medians of Biochemical Maternal Serum Markers in Healthy Women Giving Birth to Normal Babies
Authors: Noreen Noreen, Aamir Ijaz, Hamza Akhtar
Abstract:
Background: Screening plays a major role in detecting chromosomal abnormalities, Down syndrome, neural tube defects, and other inborn diseases of the newborn. Serum biomarkers in the second trimester are useful in determining the risk of the most common chromosomal anomalies; these tests include alpha-fetoprotein (AFP), human chorionic gonadotropin (hCG), unconjugated estriol (uE3), and inhibin-A. The quadruple-marker test is valuable in diagnosing congenital pathology during pregnancy, but these procedures do not form part of the routine health care of pregnant women in Pakistan, so median values are lacking for the Pakistani population. Objective: To determine median values of biochemical maternal serum markers in the local population during second-trimester maternal screening. Study settings: Department of Chemical Pathology and Endocrinology, Armed Forces Institute of Pathology (AFIP), Rawalpindi. Methods: Cross-sectional study for the estimation of reference values. By non-probability consecutive sampling, 155 healthy pregnant women, 30-40 years of age, were included; as non-parametric statistics were used, the minimum sample size was 120. Result: In total, 155 women were enrolled in this study. The age of the enrolled women ranged from 30 to 39 years, and 39 percent were younger than 34 years. Mean maternal age was 33.46±2.35 years and mean maternal body weight was 54.98±2.88. Median values of the quadruple markers were calculated for the 15th to 18th weeks of gestation; these will be used for the calculation of MoM in screening for trisomy 21 at these gestational ages.
The median values observed at 15 weeks of gestation were hCG 36650 mIU/ml, AFP 23.3 IU/ml, uE3 3.5 nmol/L, and inhibin-A 198 ng/L; at 16 weeks, hCG 29050 mIU/ml, AFP 35.4 IU/ml, uE3 4.1 nmol/L, and inhibin-A 179 ng/L; at 17 weeks, hCG 28450 mIU/ml, AFP 36.0 IU/ml, uE3 6.7 nmol/L, and inhibin-A 176 ng/L; and at 18 weeks, hCG 25200 mIU/ml, AFP 38.2 IU/ml, uE3 8.2 nmol/L, and inhibin-A 190 ng/L. All comparisons were significant (p-value <0.005) with 95% confidence intervals (CI); the level of significance was set at 5% based on the literature. Conclusion: The median values for these four biomarkers in Pakistani pregnant women can be used to calculate MoM.
Keywords: screening, down syndrome, quadruple test, second trimester, serum biomarkers
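The MoM conversion these medians feed into is a single division: the patient's measured value over the gestational-age-specific median. A sketch using the hCG medians reported above (in practice MoM values are also adjusted for maternal weight and other covariates, which is omitted here):

```python
# Gestational-week-specific hCG medians from the abstract (mIU/ml)
HCG_MEDIANS = {15: 36650, 16: 29050, 17: 28450, 18: 25200}

def hcg_mom(observed_miu_ml, gestational_week):
    """Multiple of the median (MoM): the measured marker value
    divided by the population median for the same gestational week.
    Values near 1.0 are typical; screening algorithms combine the
    MoM of all four markers into a risk estimate."""
    return observed_miu_ml / HCG_MEDIANS[gestational_week]

# A measurement of 58100 mIU/ml at 16 weeks corresponds to 2.0 MoM
print(round(hcg_mom(58100, 16), 2))  # 2.0
```

Using locally derived medians, as this study argues, keeps the MoM scale centered on 1.0 for the local population instead of importing a bias from foreign reference data.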
Procedia PDF Downloads 181
2197 SNP g.1007A>G within the Porcine DNAL4 Gene Affects Sperm Motility Traits
Authors: I. Wiedemann, A. R. Sharifi, A. Mählmeyer, C. Knorr
Abstract:
A requirement for sperm motility is a morphologically intact flagellum with a central axoneme. The flagellar beating is caused by the varying activation and inactivation of dynein molecules located in the axoneme. DNAL4 (dynein, axonemal, light chain 4) is regarded as a possible functional candidate gene encoding a small subunit of the dyneins. In the present study, 5814 bp of the porcine DNAL4 (GenBank Acc. No. AM284696.1, 6097 bp, 4 exons) were comparatively sequenced using three boars with high motility (>68%) and three with low motility (<60%). Primers were self-designed except for those covering exons 1, 2, and 3. Prior to sequencing, the PCR products were purified. Sequencing was performed with an ABI PRISM 3100 Genetic Analyzer using the BigDye Terminator v3.1 Cycle Sequencing Reaction Kit. In total, 23 SNPs were described and genotyped for 82 AI boars representing the breeds Piétrain, German Large White, and German Landrace. The genotypes were used to assess possible associations with standard spermatological parameters (ejaculate volume, density, and sperm motility undiluted (Motud), 24 h (Mot1), and 48 h (Mot2) after semen collection) that were regularly recorded at the AI station. The analysis included a total of 8,833 spermatological data sets, ranging from 2 to 295 sets per boar over five years. Only SNP g.1007A>G had a significant effect.
Finally, the gene substitution effect was calculated using the following statistical model: Y_ijk = µ + α_i + β_j + (αβ)_ij + b1·S_ijk + b2·A_ijk + b3·T_ijk + b4·V_ijk + b5·(α×A)_ijk + b6·(β×A)_ijk + b7·(A×T)_ijk + U_ijk + e_ijk, where Y_ijk is the semen characteristic, µ is the general mean, α is the main effect of breed, β is the main effect of season, S is the effect of SNP g.1007A>G, A is the effect of age at semen collection, V is the effect of diluter, αβ, α×A, β×A, and A×T are interactions between the fixed effects, b1-b7 are regression coefficients between Y and the respective covariates, U is the random effect of repeated observations on an animal, and e is the random error. The results from the single-marker regression analysis revealed highly significant effects (p < 0.0001) of SNP g.1007A>G on Mot1 and Mot2, with marked reductions of 11.4% and 15.4%, respectively. Furthermore, a 4.6% loss in Motud was detected (p < 0.0178). Considering SNP g.1007A>G as a main factor (dominant-recessive model), significant differences exist between genotypes AA and AG as well as between AA and GG for Mot1 and Mot2. For Motud, there was a significant difference between AA and GG.
Keywords: association, DNAL4, porcine, sperm traits
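The single-marker regression idea can be shown in isolation. The sketch below uses hypothetical toy data (not from the study) and fits only the SNP covariate, rather than the full mixed model with breed, season, age, and diluter effects; the allele substitution effect is then simply the least-squares slope of motility on G-allele count:

```python
def allele_substitution_effect(genotypes, phenotypes):
    """Least-squares slope of phenotype on allele count.

    Genotypes are coded by G-allele count (AA=0, AG=1, GG=2), so the
    slope estimates the average change in motility per G allele. This
    is a stripped-down, single-covariate stand-in for the mixed model
    in the abstract."""
    n = len(genotypes)
    mx = sum(genotypes) / n
    my = sum(phenotypes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(genotypes, phenotypes))
    sxx = sum((x - mx) ** 2 for x in genotypes)
    return sxy / sxx

# Toy data: Mot1 (%) decreasing with each G allele
geno = [0, 0, 1, 1, 2, 2]
mot1 = [70.0, 68.0, 64.0, 62.0, 58.0, 56.0]
print(round(allele_substitution_effect(geno, mot1), 1))  # -6.0
```

A negative slope here corresponds to the reported motility reductions associated with the G allele.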
Procedia PDF Downloads 460
2196 Variation in N₂ Fixation and N Contribution by 30 Groundnut (Arachis hypogaea L.) Varieties Grown in Blesbokfontein Mpumalanga Province, South Africa
Authors: Titus Y. Ngmenzuma, Cherian Mathews, Felix D. Dakora
Abstract:
In Africa, poor nutrient availability, particularly of N and P, coupled with low soil moisture due to erratic rainfall, constitutes the major crop production constraint. Although inorganic fertilizers are an option for meeting crop nutrient requirements for increased grain yield, their high cost and scarcity make them inaccessible to resource-poor farmers in Africa. Because crops grown on such nutrient-poor soils are micronutrient deficient, incorporating N₂-fixing legumes into cropping systems can sustainably improve crop yield and nutrient accumulation in the grain. In Africa, groundnut can easily form an effective symbiosis with native soil rhizobia, leading to a marked N contribution in cropping systems. In this study, field experiments were conducted at Blesbokfontein in Mpumalanga Province to assess N₂ fixation and N contribution by 30 groundnut varieties during the 2018/2019 planting season using the ¹⁵N natural abundance technique. The results revealed marked differences in shoot dry matter yield, symbiotic N contribution, soil N uptake, and grain yield among the groundnut varieties. The percent N derived from fixation ranged from 37 to 44% for varieties ICGV131051 and ICGV13984. The amount of N-fixed ranged from 21 to 58 kg/ha for varieties Chinese and IS-07273, soil N uptake from 31 to 80 kg/ha for varieties IS-07947 and IS-07273, and grain yield from 193 to 393 kg/ha for varieties ICGV15033 and ICGV131096, respectively. Compared to earlier studies on groundnut in South Africa, this study has shown low N₂ fixation and N contribution to the cropping systems, possibly due to environmental factors such as low soil moisture.
Because the groundnut varieties differed in their growth, symbiotic performance and grain yield, more field testing is required over a range of differing agro-ecologies to identify genotypes suitable for different cropping environments.
Keywords: ¹⁵N natural abundance, percent N derived from fixation, amount of N-fixed, grain yield
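The two headline quantities of the ¹⁵N natural abundance technique have a compact arithmetic core. The sketch below uses the standard textbook formulas with illustrative values, not data from this study; the B value and δ¹⁵N readings are assumptions for the example:

```python
def percent_ndfa(d15n_ref, d15n_legume, b_value):
    """Percent N derived from atmospheric fixation (%Ndfa) by the
    15N natural abundance method:
        %Ndfa = 100 * (d15N_ref - d15N_legume) / (d15N_ref - B)
    where d15N_ref is the delta-15N of a non-fixing reference plant
    and B is the delta-15N of the legume grown with N2 as sole N
    source. All inputs in per-mil (‰)."""
    return 100.0 * (d15n_ref - d15n_legume) / (d15n_ref - b_value)

def n_fixed_kg_ha(shoot_n_kg_ha, ndfa_percent):
    """Amount of N-fixed: shoot N accumulation scaled by %Ndfa."""
    return shoot_n_kg_ha * ndfa_percent / 100.0

# Illustrative: reference plant at +5.0‰, legume at +2.6‰, B = -1.0‰
ndfa = percent_ndfa(5.0, 2.6, -1.0)
print(round(ndfa, 1), round(n_fixed_kg_ha(120.0, ndfa), 1))  # 40.0 48.0
```

The 40% figure lands inside the 37-44% %Ndfa range the abstract reports, showing how shoot N and %Ndfa combine into the kg/ha N-fixed values.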
Procedia PDF Downloads 190
2195 Experimental Study on Bending and Torsional Strength of Bulk Molding Compound Seat Back Frame Part
Authors: Hee Yong Kang, Hyeon Ho Shin, Jung Cheol Yoo, Il Taek Lee, Sung Mo Yang
Abstract:
Lightweight technology using composites is being developed for vehicle seat structures, and its design must meet the safety requirements. According to the Federal Motor Vehicle Safety Standard (FMVSS) 207 seating systems test procedure, a rearward moment load is applied to the seat back frame structure for the safety evaluation of the vehicle seat. The composite seat back frame is divided into three parts, following the manufacturing process: the upper frame and the left- and right-side frames. When a rear moment load is applied to the seat back frame, the side frame carries the bending load and the torsional load at the same time and is therefore the most heavily loaded part, so strength testing at the component level is required. In this study, a component test method based on the FMVSS 207 seating systems test procedure was proposed for analyzing the bending and torsional strength of an automotive Bulk Molding Compound (BMC) seat back side frame, and the strength was evaluated with and without carbon band reinforcement. The seat back side frame parts used in the tests were manufactured from a BMC composed of a vinyl ester matrix and short carbon fibers; carbon-band-reinforced and non-reinforced parts were then formed through a high-temperature compression molding process. In addition, the fixture used for the component test was constructed by reference to FMVSS 207. The bending load and the torsional load were then applied under displacement control to perform the strength test for four load conditions. The results of each test are shown as load-displacement curves of the specimens, and the effect of the carbon band reinforcement on the failure strength of the parts was analyzed.
Additionally, the fracture characteristics of the parts in the four strength tests were evaluated, and the weak points of the seat back side frame structure were identified for each test condition. Through the bending and torsional strength test methods, we confirmed the strength and fracture characteristics of the BMC seat back side frame according to the carbon band reinforcement, and we proposed a component-level strength test method for vehicle seat back frames that can meet FMVSS 207.
Keywords: seat back frame, bending and torsional strength, BMC (Bulk Molding Compound), FMVSS 207 seating systems
Procedia PDF Downloads 210
2194 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies
Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker
Abstract:
Numerical simulations of selected subchannel tracer (potassium nitrate) experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing, predicting turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocities, and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) was selected as the primary turbulence model, as it has previously been found reasonably accurate for predicting flows inside rod bundles. For comparison, the case was also simulated using the standard k-ε turbulence model that is widely used in industry; despite being an isotropic turbulence model, it has also been used in the modeling of flow in rod bundles and reproduces the lateral velocities of well-mixed coolant fairly well. Both models were solved numerically for fully developed isothermal turbulent flow in a 30º segment of a 54-rod bundle. The numerical simulation studied the natural mixing of a tracer (passive scalar) to characterize the growth of turbulent diffusion in the injected subchannel and, afterwards, the cross-mixing between adjacent subchannels. The mixing with water was studied numerically by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through mass inflows at the three subchannel faces; a turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used at the inlet. The passive scalar (potassium nitrate) is injected at a mass fraction of 5.536 ppm at subchannel 2 (upstream of the mixing section). Flow exits the domain through a pressure outlet boundary (0 Pa), with a reference pressure of 1 atm.
Simulation results have been extracted at different locations in the mixing zone and the downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane, where the distributions look almost identical and the flow is fully developed. On the other hand, the dimensionless mixing scalar distributions change noticeably in quantitative terms, which is visible in the different scales of the colour bars.
Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis
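The dimensionless mixing scalar referred to above is, under the usual normalization (an assumption here, since the abstract does not spell out its definition), just the local tracer mass fraction scaled by the injected value:

```python
def mixing_scalar(c_local, c_injected, c_background=0.0):
    """Dimensionless mixing scalar: local tracer concentration
    normalized so that 1.0 means undiluted injected fluid and
    0.0 means pure background coolant. A standard normalization,
    assumed here rather than quoted from the paper."""
    return (c_local - c_background) / (c_injected - c_background)

# Tracer injected at 5.536 ppm (as in the abstract); a probe reading
# of 2.768 ppm indicates a 50/50 mix with the adjacent subchannels
print(mixing_scalar(2.768, 5.536))  # 0.5
```

Comparing this scalar across turbulence models, rather than raw concentrations, is what makes the colour-bar scale differences noted above directly interpretable as mixing-rate differences.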
Procedia PDF Downloads 208
2193 Evaluation of Triage Performance: Nurse Practice and Problem Classifications
Authors: Atefeh Abdollahi, Maryam Bahreini, Babak Choobi Anzali, Fatemeh Rasooli
Abstract:
Introduction: Triage is a central part of the organization of care in emergency departments (EDs); the term describes the sorting of patients by treatment priority in the ED. Accurate triage of injured patients has reduced fatalities and improved resource usage. Nurses' knowledge and skill are important factors in triage decision-making: the ability to assign an appropriate triage level and recognize the need for intervention is crucial for safe and effective emergency care. Methods: This is a prospective cross-sectional study of emergency nurses working in four public university hospitals. Five triage workshops were conducted, one every three months, for emergency nurses, based on a standard Emergency Severity Index (ESI) version 4 slide set approved by the Iranian Ministry of Health. The items most influential on triage performance were identified through brainstorming in the workshops and then peer reviewed by an expert panel of five emergency physicians and two head registered nurses. These factors, which might distract a nurse's attention from proper decisions, included patients' past medical diseases, the natural tricks of triage, and system failure. After permission had been obtained, emergency nurses participated in the study and were given the structured questionnaire. Data were analysed with SPSS 21.0. Results: 92 emergency nurses enrolled in the study. 30% of nurses reported a past history of chronic disease as the most influential confounding factor in ascertaining the triage level; other important factors were a history of prior admission and past histories of myocardial infarction and heart failure, at 20, 17, and 11%, respectively. Regarding difficulties in triage practice, 54.3% reported that discussion with patients and family members was difficult, and 8.7% declared that it is hard to stay in a single triage room all day.
Among the participants, 45.7 and 26.1% evaluated the triage workshops as moderately and highly effective, respectively. 56.5% reported overcrowding as the most important system-based difficulty. Nurses were mainly doubtful when differentiating between triage levels 2 and 3 of the ESI system. No significant correlation was found between the nurses' triage work record and either the uncertainty in determining the triage level or the reported difficulties. Conclusion: The work record of nurses seemed to have little effect on the triage problems and issues. To correct the deficits, training workshops should be carried out, followed by continuous refresher training and supportive supervision.
Keywords: assessment, education, nurse, triage
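The levels-2-versus-3 ambiguity the nurses reported sits exactly at the decision points of the ESI algorithm. Below is a greatly simplified sketch of those decision points, a paraphrase for illustration only, not the official algorithm, which also covers pediatric vital-sign thresholds, severe pain/distress, and other criteria:

```python
def esi_level(needs_lifesaving_now, high_risk, expected_resources,
              vitals_in_danger_zone=False):
    """Simplified ESI v4 decision points:
    A: requires immediate life-saving intervention  -> level 1
    B: high-risk situation (or confusion, severe pain) -> level 2
    C: otherwise, count expected resources (labs, imaging, IV meds, ...)
    D: >= 2 resources with danger-zone vitals -> up-triage toward 2
    """
    if needs_lifesaving_now:
        return 1
    if high_risk:
        return 2
    if expected_resources >= 2:
        return 2 if vitals_in_danger_zone else 3
    return 4 if expected_resources == 1 else 5

# Stable patient expected to need labs and imaging (2 resources)
print(esi_level(False, False, 2))  # 3
# Same presentation, but vitals beyond the danger-zone cut-off
print(esi_level(False, False, 2, vitals_in_danger_zone=True))  # 2
```

The 2/3 boundary hinges on the subjective "high risk" and vital-sign judgments, which is consistent with it being the hardest distinction for the surveyed nurses.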
Procedia PDF Downloads 235
2192 An Exploratory Study on the Level of Awareness and Common Barriers of Physicians on Overweight and Obesity Management in Bangladesh
Authors: Kamrun Nahar Koly, Saimul Islam
Abstract:
Overweight and obesity are increasing at an alarming rate and are a leading risk factor for morbidity throughout the world. In a country like Bangladesh, undernutrition and overweight co-exist at the same time, yet this issue has remained underexplored. The aim of the present study was to assess the knowledge and attitudes of physicians regarding overweight and obesity management, and to identify the barriers they face, at urban hospitals of Dhaka city in Bangladesh. A simple cross-sectional study was conducted at two selected government and two private hospitals to assess knowledge, attitudes, and common barriers regarding overweight and obesity management among healthcare professionals. One hundred and fifty-five physicians were surveyed. A standard questionnaire was constructed in the local language, and interviews were administered. Of the 155 physicians, the largest group, 53 (34.20%), worked at SMC, 36 (23.20%) at DMC, 33 (21.30%) at SSMC, and the remaining 33 (21.30%) at HFRCMH. The mean age of the study physicians was 31.88±5.92 years. A majority of the physicians, 80 (51.60%), were not able to give the correct prevalence of obesity, although a substantial number, 75 (48.40%), could mark the right answer. Among the physicians, 150 (96.77%) reported BMI as a diagnostic index for overweight and obesity, whereas 43 (27.74%) cited waist circumference, 30 (19.35%) waist-hip ratio, and 26 (16.77%) mid-arm circumference. A substantial proportion of the physicians, 71 (46.70%), thought that they could do little about controlling weight problems in the Bangladesh context, although 42 (27.60%) disagreed and 39 (25.70%) were neutral. The majority, 147 (96.1%), thought that a family-based education program would be beneficial, and 145 (94.8%) mentioned raising awareness among mothers, as the mother is the primary caregiver.
A school-based education program would also support early intervention, as noted by 142 (92.8%) of the physicians, and a community-based education program was appreciated by 136 (89.5%). About 74 (47.7%) of them thought that patients still lack the motivation to maintain their weight properly, while having too many patients to deal with was assumed to be a barrier by 73 (47.1%). Lack of a national policy or management guideline was cited as an obstacle by 60 (38.7%) of the physicians. The relationship between practicing weight management as part of the general examination and chronic disease management was statistically significantly associated (p < 0.05) with physician occupational status. In addition, perceived barriers such as lack of parental support and lack of a national policy were statistically significantly associated (p < 0.05) with physician occupational status. For young physicians, more training programmes will be needed to translate their knowledge and attitudes into practice. However, several important barriers interfere with physicians' treatment efforts and need to be addressed.
Keywords: obesity management, physician, awareness, barriers, Bangladesh
Procedia PDF Downloads 166
2191 Effect of Lifestyle Modification for Two Years on Obesity and Metabolic Syndrome Components in Elementary Students: A Community-Based Trial
Authors: Bita Rabbani, Hossein Chiti, Faranak Sharifi, Saeedeh Mazloomzadeh
Abstract:
Background: Lifestyle modifications, especially improving nutritional patterns and increasing physical activity, are the most important factors in preventing obesity and metabolic syndrome in children and adolescents. For this purpose, the following interventional study was designed to investigate the effects of educational programs for students, as well as changes in diet and physical activity, on obesity and components of the metabolic syndrome. Methods: This study is part of an interventional research project (elementary school) conducted on all students of Sama schools in Zanjan and Abhar at three levels (elementary, middle, and high school), including 1000 individuals in Zanjan (intervention group) and 1000 individuals in Abhar (control group) in 2011. Interventions were based on educating students, teachers, and parents, changes in food services, and physical activity. We primarily measured anthropometric indices, fasting blood sugar, lipid profiles, and blood pressure, and completed standard nutrition and physical activity questionnaires. Blood insulin levels were also measured in a random subset of students. Data analysis was done with SPSS software version 16.0. Results: Overall, 589 individuals (252 male, 337 female) entered the intervention group, and 803 individuals (344 male, 459 female) entered the control group. After two years of intervention, mean waist circumference (63.8 ± 10.9) and diastolic BP (63.8 ± 10.4) were significantly lower, whereas mean systolic BP (101.0 ± 12.5), food score (25.0 ± 5.0), and drinking score (12.1 ± 2.3) were higher in the intervention group (p < 0.001). Comparing components of metabolic syndrome between the second year and the time of recruitment within the intervention group showed that although the numbers of overweight/obese individuals and of individuals with hypertriglyceridemia and high LDL increased, abdominal obesity, high BP, hyperglycemia, and insulin resistance decreased (p < 0.001).
On the other hand, in the control group, the number of individuals with high BP increased significantly. Conclusion: The prevalence of abdominal obesity and hypertension, two major components of metabolic syndrome, was much higher in our study than in other regions of the country. However, interventions to modify diet and increase physical activity are effective in lowering their prevalence.
Keywords: metabolic syndrome, obesity, lifestyle, nutrition, hypertension
Procedia PDF Downloads 68
2190 Emerging Issues of Non-Communicable Diseases among Older Persons in India
Authors: Dhananjay W. Bansod, Santosh Phad
Abstract:
Non-communicable diseases (NCDs) are major contributors to the disease burden in the world as well as in India. The growing proportion of older persons in India gives rise to several challenges. With the advancement of age, the elderly are exposed to various kinds of health problems, more specifically NCDs. Therefore, an effort has been made to examine the prevalence of NCDs among older persons and their treatment-seeking behaviour, and to explore the association between NCDs and the overall well-being of older persons. Data were drawn from the “Building Knowledge Base of Population Ageing Survey” conducted in 2011 in seven states of India. Six chronic non-communicable diseases were considered, namely arthritis, hypertension, cataract, diabetes, asthma, and heart disease. The effect of NCDs on the well-being of the elderly was also examined: subjective well-being was assessed with nine questions, from which a SUBI score for mental health status was generated, ranging from 9 to 27. A lower score indicates better mental health status. The index was further divided into three categories: Better (9-15), Average (16-20), and Worse (21-27). Reliability analysis yielded a Cronbach’s alpha of 0.8884 for the scale. The results show that orthopedic/musculoskeletal ailments, including arthritis, rheumatism, and osteoarthritis, are the most common type of ailment, followed by hypertension. Two-thirds of the elderly reported suffering from at least one chronic ailment. Most chronic illness conditions received some form of treatment, mainly from public health facilities. Financial insecurity is the primary obstruction to seeking treatment for most chronic ailments, which typically require a longer duration of medication and repeated medical consultations, both with significant economic implications.
According to the SUBI index, only 15 per cent of the elderly are in Better mental health status, while one-third fall in the Worse category. Elderly persons with ailments such as cataract, asthma, and arthritis have worse mental health. The burden of disease is thus greater among the elderly and directly affects the overall well-being of older persons.
Keywords: NCD, well-being, older person, India
Procedia PDF Downloads 150
2189 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing total oil production as a function of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods; some have exploited the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms. Our previous proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, given the computational resources demanded by the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we chose a previous work from the literature that faces the problem by means of numerical methods. That contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and translate their mathematical model, to be numerically optimized, into a proper fitness function.
We analyzed the 100 curves they use in their experiment, and similar results were observed; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum amounts of gas they injected into each well. We identified several constraints that would be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models that fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
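The classical, simple genetic algorithm described above can be sketched in a few lines. The quadratic well-performance curves and gas budget below are purely illustrative (hypothetical coefficients, not the field data of the cited work); the sketch only shows the shape of the approach: a fitness function that sums well oil rates under a total-gas constraint, elitist selection, and a sum-preserving mutation.

```python
import random

# Hypothetical gas-lift performance curves: oil rate modeled as a
# quadratic in the gas injected (coefficients are illustrative only).
WELL_CURVES = [(-0.002, 0.8, 10.0), (-0.003, 1.0, 8.0), (-0.0025, 0.9, 12.0)]
TOTAL_GAS = 300.0  # total gas available for injection (arbitrary units)

def oil_rate(gas, coeffs):
    a, b, c = coeffs
    return a * gas * gas + b * gas + c

def fitness(alloc):
    """Total field oil production; infeasible allocations are rejected."""
    if sum(alloc) > TOTAL_GAS + 1e-9 or any(g < 0 for g in alloc):
        return float("-inf")
    return sum(oil_rate(g, c) for g, c in zip(alloc, WELL_CURVES))

def random_alloc():
    """Random split of the gas budget over the wells (stick-breaking)."""
    cuts = sorted(random.uniform(0, TOTAL_GAS) for _ in range(len(WELL_CURVES) - 1))
    bounds = [0.0] + cuts + [TOTAL_GAS]
    return [bounds[i + 1] - bounds[i] for i in range(len(WELL_CURVES))]

def mutate(alloc, step=10.0):
    """Move a small amount of gas from one well to another (sum-preserving)."""
    child = alloc[:]
    i, j = random.sample(range(len(child)), 2)
    delta = random.uniform(0.0, min(step, child[i]))
    child[i] -= delta
    child[j] += delta
    return child

def evolve(generations=200, pop_size=30, seed=42):
    random.seed(seed)
    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # elitist selection
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best], round(fitness(best), 2))
```

In a real system, the fitness function would be replaced by the authors' numerically optimized well model, and the flat allocation vector by the more expressive formal representations mentioned above.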
Procedia PDF Downloads 163
2188 Structural Equation Modeling Exploration for the Multiple College Admission Criteria in Taiwan
Authors: Tzu-Ling Hsieh
Abstract:
When the Taiwan Ministry of Education implemented a new university multiple entrance policy in 2002, most colleges and universities still used test scores as the main admission criterion. With the forthcoming 12-year basic education curriculum, the Ministry of Education has introduced a new college admission policy, to be implemented in 2021. The new policy highlights the importance of holistic education by placing more emphasis on the learning process in senior high school rather than solely on the outcome of academic testing. However, the development of college admission criteria has not followed a thoughtful process, and universities and colleges lack guidance on how to construct suitable multiple admission criteria. Although there are many studies from other countries that have implemented multiple college admission criteria for years, their findings may not represent Taiwanese students, and they are limited by the absence of comparisons between different academic fields. Therefore, this study investigated multiple admission criteria and their relationship with college success. The study analyzed the Taiwan Higher Education Database, with 12,747 samples from 156 universities, and tested a conceptual framework that examines these factors by structural equation modeling (SEM). The conceptual framework was adapted from Pascarella's general causal model and focuses on how different admission criteria predict students' college success. It considers the relationship between admission criteria and college success, and also how motivation (one of the admission criteria) influences college success through the engagement behaviors of student effort and interactions with agents of socialization. After handling missing values and conducting reliability and validity analyses, the study found three indicators that significantly predict students' college success, defined as the average grade of the last semester.
These three indicators are the Chinese language score on the college entrance exam, high school class rank, and the quality of student academic engagement. In addition, motivation significantly predicts the quality of student academic engagement and interactions with agents of socialization. However, the multi-group SEM analysis showed no difference in the prediction of college success between students from the liberal arts and the sciences. Finally, this study provides some suggestions for universities and colleges on developing multiple admission criteria through empirical research on Taiwanese higher education students.
Keywords: college admission, admission criteria, structural equation modeling, higher education, education policy
Procedia PDF Downloads 180
2187 Development of the Maturity Sensor Prototype and Method of Its Placement in the Structure
Authors: Yelbek B. Utepov, Assel S. Tulebekova, Alizhan B. Kazkeyev
Abstract:
Maturity sensors are used to determine concrete strength by a non-destructive method. The method of placement of the maturity sensors determines the number required for a given frame of a monolithic building. Previous studies describe this aspect only weakly, giving little more than logical assumptions. This paper proposes a cheap prototype of an embedded wireless sensor for monitoring concrete structures, as well as an alternative strategy for placing sensors based on the transitional boundaries of the temperature distribution during concrete curing, which were determined by building a heat map of the temperature distribution in which unknown values are calculated by inverse distance weighting. The developed prototype can simultaneously measure temperature and relative humidity over a smartphone-controlled time interval. It implements the maturity method to assess the in-situ strength of concrete, which is considered an alternative to the traditional shock impulse and compression testing methods used in Kazakhstan. The prototype was tested in laboratory and field conditions. The tests were aimed at studying the effect of internal and external temperature and relative humidity on concrete's strength gain. Based on an experimentally poured concrete slab with randomly integrated maturity sensors, it was determined that the transitional boundaries form elliptical shapes. The temperature distribution over the largest diameter of the ellipses was plotted, resulting in upright and inverted parabolas. As a result, the distance between the closest opposite crossing points of the parabolas is accepted as the maximum permissible step for placing the maturity sensors. The proposed placement strategy can be applied to sensors that measure other continuous phenomena, such as relative humidity. Prototype testing also revealed the inconvenience of Bluetooth, owing to its weak signal and the inability to access multiple prototypes simultaneously.
For this reason, further prototype upgrades are planned in future work.
Keywords: heat map, placement strategy, temperature and relative humidity, wireless embedded sensor
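The inverse distance weighting step used to fill unknown values of the heat map can be sketched as follows. The sensor coordinates and temperature readings are hypothetical, chosen only to illustrate the interpolation; the prototype's actual data layout may differ.

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse distance weighting: estimate the value at (x, y) from
    known (xi, yi, value) samples. A sample lying exactly at the query
    point is returned directly to avoid division by zero."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical in-slab temperature readings (x, y, deg C) from
# randomly integrated maturity sensors.
readings = [(0.0, 0.0, 35.2), (4.0, 0.0, 28.1), (0.0, 3.0, 31.4), (4.0, 3.0, 26.8)]

# A coarse heat map over a 5 x 4 grid covering the slab.
heatmap = [[round(idw(x, y, readings), 1) for x in range(5)] for y in range(4)]
for row in heatmap:
    print(row)
```

Because the interpolated value is a convex combination of the sample values, every grid cell stays within the range of the measured temperatures, which is what makes the resulting heat map suitable for locating the transitional boundaries.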
Procedia PDF Downloads 178
2186 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions
Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin
Abstract:
Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhausts of diesel-powered vehicles. The International Agency for Research on Cancer, part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1) in 2012, based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Among the strategies aimed at limiting exhausts in view of the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are complex mixtures of toxic substances that are difficult to study from a toxicological point of view, owing to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds, and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil with or without rapeseed methyl ester, sampled either upstream or downstream of a particle filter, or to a control treatment, rats were sacrificed and their lungs collected. The following indirect genotoxicity parameters were measured: (i) telomerase activity and telomere length, together with rTERT and rTERC gene expression, by RT-qPCR on frozen lungs; (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed, paraffin-embedded (FFPE) lung samples.
These preliminary results will then be associated with the global cellular response analyzed by pan-genomic microarrays, monitoring of oxidative stress, and the quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant in vivo exposure system.
Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomere length
Procedia PDF Downloads 391
2185 History of Pediatric Renal Pathology
Authors: Mostafa Elbaba
Abstract:
Because childhood renal diseases differ substantially from adult diseases, pediatric nephrology was founded as a specialty in 1965. Renal pathology was introduced as a specialty at the London Ciba Symposium in 1961. The history of renal pathology can be divided into two eras: one starting in the 1650s with the invention of the microscope, the second in the 1950s with the implementation of renal biopsy and the availability of electron microscopy and immunofluorescence studies. Prior to the 1950s, the study of diseased human kidneys was restricted to postmortem examination by gross pathology. In 1827, Richard Bright first described his triad of kidney disease, which was confirmed by morbid kidney changes at autopsy. In 1905, Friedrich Mueller coined the term "nephrosis" to distinguish the "degenerative" diseases from the inflammatory ones, and later F. Munk added the term "lipoid nephrosis". The most profound influence on the classification of renal diseases came from the publication of Volhard and Fahr in 1914. In 1899, Carl Max Wilhelm Wilms described Wilms' tumor of the kidneys in children. Chronic pyelonephritis was a popular renal diagnosis and the most commonly attributed cause of uremia until the 1960s. Although kidney biopsy had been used early, in the 1930s, for renal tumors, the earliest reports of its use in the diagnosis of medical kidney disease were by Iversen and Brun in 1951, followed by Alwall in 1952 and Pardo in 1953. The earliest intentional renal biopsies were done in 1944 by Nils Alwall, although he abandoned the procedure after the death of one of the 13 patients he biopsied. In 1950, Antonino Perez-Ara performed renal biopsies, but his results went unnoticed because they were published in a little-known journal. In 1951, Claus Brun and Poul Iversen developed the biopsy procedure using an aspiration technique. The popularization of renal biopsy practice is credited to Robert Kark, who published his landmark work in 1954.
He perfected the technique of renal biopsy in the prone position using the Vim-Silverman needle and used intravenous pyelography to improve localization of the kidney.
Keywords: history, medicine, nephrology, pediatrics, pathology
Procedia PDF Downloads 60
2184 Satellite Images to Determine Levels of Fire Severity in a Native Chilean Forest: Assessing the Responses of Soil Mesofauna Diversity to a Fire Event
Authors: Carolina Morales, Ricardo Castro-Huerta, Enrique A. Mundaca
Abstract:
The edaphic fauna is the main factor involved in the transformation of nutrients and soil decomposition processes. Edaphic organisms are highly sensitive to soil disturbances, which normally cause changes in their composition and abundance. Fire is a known disturbance factor, since it affects the physical, chemical, and biological properties of the soil and the whole ecosystem. During the summer (December-March) of 2017, Chile suffered the largest fire events recorded in its modern history, affecting a vast area and a number of ecosystem types. The objective of this study was first to use remote sensing satellite images and GIS (Geographic Information Systems) to assess and identify levels of fire severity in disturbed areas, and then to compare the responses of soil mesofauna diversity among such areas. We identified four areas (treatments) with ascending levels of severity, namely: mild, medium, and high severity, and free of fire. A non-affected patch of forest was established as a control. Three samples were collected from each treatment in the form of a soil cube (10x10x10 cm). Edaphic mesofauna were extracted from each sample with the Berlese-Tullgren funnel method. Collected specimens were quantified and identified using the RTU (Recognisable Taxonomic Unit) criterion. Diversity was analysed using inferential statistics to compare Simpson and Shannon-Wiener indexes across treatments. As predicted, the unburned forest patch (control) exhibited higher diversity values than the treatments, and significantly higher diversity values were recorded in the treatments subjected to lower fire severity. We conclude that remote sensing zoning is an adequate tool for identifying different levels of fire severity, and that the edaphic mesofauna qualifies as a good bioindicator for monitoring soil recovery after fire events.
Keywords: bioindicator, Chile, fire severity level, soil
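The two diversity indexes compared across treatments can be computed directly from RTU abundance counts, as sketched below. The counts are invented for illustration only; a more even community yields higher values on both indexes.

```python
import math

def shannon(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i) over nonzero counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity expressed as 1 - D, where D = sum(p_i^2)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

# Hypothetical RTU abundance counts for two sampling areas.
unburned = [12, 10, 9, 8, 7, 6]   # even community (control-like)
high_sev = [30, 3, 2, 1]          # community dominated by one RTU

print(shannon(unburned), simpson(unburned))
print(shannon(high_sev), simpson(high_sev))
```

In the study's terms, the control and low-severity treatments would be expected to behave like the first list, and the high-severity treatment like the second.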
Procedia PDF Downloads 162
2183 Implications of Human Cytomegalovirus as a Protective Factor in the Pathogenesis of Breast Cancer
Authors: Marissa Dallara, Amalia Ardeljan, Lexi Frankel, Nadia Obaed, Naureen Rashid, Omar Rashid
Abstract:
Human cytomegalovirus (HCMV) is a ubiquitous virus that remains latent in approximately 60% of individuals in developed countries. Viral load is kept at a minimum by the robust immune response produced in most individuals, who remain asymptomatic. HCMV has recently been implicated in cancer research because it may impose oncomodulatory effects on the tumor cells it infects, which could influence the progression of cancer. HCMV has been implicated in the increased pathogenicity of certain cancers such as gliomas, but it can also exhibit anti-tumor activity. HCMV seropositivity has been recorded in tumor cells, and this may be associated with decreased pathogenesis in certain forms of cancer, such as leukemia, as well as increased pathogenesis in others. This study aimed to investigate the correlation between cytomegalovirus and the incidence of breast cancer. Methods: The data used in this project were extracted from a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, comparing patients infected and not infected with cytomegalovirus using ICD-10 and ICD-9 codes. Permission to utilize the database was given by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Data analysis was conducted using standard statistical methods. Results: The query was analyzed for dates ranging from January 2010 to December 2019, which resulted in 14,309 patients in each of the infected and control groups. The two groups were matched by age range and CCI score. The incidence of breast cancer was 1.642% (235 patients) in the cytomegalovirus group compared to 4.752% (680 patients) in the control group. The difference was statistically significant, with a p-value of less than 2.2 x 10^-16 and an odds ratio of 0.43 (95% CI: 0.40-0.48).
Investigation into the effects of HCMV treatment modalities, including valganciclovir, cidofovir, and foscarnet, on breast cancer in both groups was conducted, but the numbers were insufficient to yield any statistically significant correlations. Conclusion: This study demonstrates a statistically significant correlation between cytomegalovirus and a reduced incidence of breast cancer. If HCMV can exert anti-tumor effects on breast cancer and inhibit growth, it could potentially be used to formulate immunotherapy targeting various types of breast cancer. Further evaluation is warranted to assess the implications of cytomegalovirus in reducing the incidence of breast cancer.
Keywords: human cytomegalovirus, breast cancer, immunotherapy, anti-tumor
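For reference, an unadjusted odds ratio and Wald 95% confidence interval can be computed from the reported 2x2 counts as sketched below. Note that this crude calculation need not reproduce the published estimate of 0.43, which presumably reflects the matched analysis performed within the database platform.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Counts from the abstract: 235/14,309 breast cancer cases in the CMV
# group vs 680/14,309 in the matched control group.
a, b = 235, 14309 - 235
c, d = 680, 14309 - 680
print(odds_ratio_ci(a, b, c, d))
```

An odds ratio below 1 with a confidence interval entirely below 1 is what supports the protective association reported in the conclusion.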
Procedia PDF Downloads 211
2182 Comparisons between Student Leaning Achievements and Their Problem Solving Skills on Stoichiometry Issue with the Think-Pair-Share Model and Stem Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study was to compare instructional design models, the Think-Pair-Share process and conventional learning (the 5E Inquiry Model), in enhancing students' learning achievements and problem solving skills on the stoichiometry issue. The sample consisted of 80 students in 2 classes at the 11th grade level at Chaturaphak Phiman Ratchadaphisek School, selected with the cluster random sampling technique to capture different learning outcomes in chemistry classes. The experimental group of 40 students was taught with the Think-Pair-Share process, and the control group of 40 students with the conventional learning (5E Inquiry Model) method. Five instruments were used: the 5-lesson instructional plans for the Think-Pair-Share and STEM education methods, and assessments of students' learning achievements and problem solving skills with pretest and posttest techniques; students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM education methods were then compared. Statistically significant differences between posttest and pretest scores were found for the whole set of students in these chemistry classes using the paired t-test and F-test. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem solving skills were also found. The use of the two methods in this study reveals that students' learning achievements and problem solving skills differ between the groups, guiding practical improvements in chemistry classrooms to assist teachers in implementing effective approaches for improving instructional methods.
Mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 process efficiencies were 82.56/80.44 and 83.02/81.65, both higher than the 80/80 standard criterion, as validated by the IOC. The predictive efficiency (R²) values indicate that 61% and 67% of the variance in posttest learning achievements, and 63% and 67% of the variance in students' problem solving skills on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM instructional methods.
Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
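The E1/E2 efficiency criterion reported above can be sketched as follows. The scores and full-mark values are hypothetical, shown only to illustrate how an instructional package is checked against the 80/80 standard: E1 is the mean in-process (formative) score as a percentage of full marks, and E2 the mean posttest score as a percentage of full marks.

```python
def efficiency_index(process_scores, posttest_scores, process_max, posttest_max):
    """E1/E2 efficiency of an instructional package:
       E1 = mean formative (during-lesson) score as a percentage,
       E2 = mean summative (posttest) score as a percentage."""
    e1 = 100.0 * sum(process_scores) / (len(process_scores) * process_max)
    e2 = 100.0 * sum(posttest_scores) / (len(posttest_scores) * posttest_max)
    return round(e1, 2), round(e2, 2)

# Hypothetical scores for 5 students (process out of 50, posttest out of 40).
process = [41, 43, 40, 42, 44]
posttest = [32, 33, 31, 34, 32]
e1, e2 = efficiency_index(process, posttest, 50, 40)
print(e1, e2)  # each compared against the 80/80 criterion
```

A package meets the criterion when both E1 and E2 reach the stated threshold, here 80/80, as the reported values of 82.56/80.44 and 83.02/81.65 do.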
Procedia PDF Downloads 249
2181 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry
Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters
Abstract:
Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS not only competes intensively with venous blood sampling but is now widely applied in numerous bioanalytical assays, in particular the screening of inherited metabolic diseases, pharmacokinetic modeling, and therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in "omics" areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma spots were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. Dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by principal component analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried).
Whole blood could be a substitute matrix for plasma in metabolomic profiling studies, and micro-sampling techniques could likewise serve for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm these observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling
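The PCA separation described above can be illustrated with a minimal two-variable sketch, using the closed-form eigendecomposition of the 2x2 covariance matrix. The metabolite intensities below are invented, and the actual study applied R's PCA routines to many more variables; this only shows the mechanics of extracting the direction of maximum variance.

```python
import math

def pca_2d(data):
    """First principal component of 2-variable data via the closed-form
    eigendecomposition of the 2x2 sample covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector: (lam - syy, sxy), normalized to unit length
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    return lam, (vx / norm, vy / norm)

# Hypothetical intensities of two metabolites in 6 samples; the liquid
# and dried protocols would each yield such a matrix for comparison.
samples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (5.0, 9.8), (6.0, 12.2)]
lam, pc1 = pca_2d(samples)
print(round(lam, 3), tuple(round(v, 3) for v in pc1))
```

Projecting each sample onto the leading components and plotting the scores is what reveals the liquid-vs-dried clustering described in the conclusion.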
Procedia PDF Downloads 413
2180 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs
Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira
Abstract:
The establishment of animal models for studies of DENV infections has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to the various areas of dengue research, including immunopathogenesis, drug development, and vaccines. In this scenario, the main objective of this study was to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, 2 months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs collected. Part of the samples was processed by standard techniques for light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue. In mice infected with DENV-3, the analyses revealed thickening of the interalveolar septa with inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, hemorrhagic foci in the interalveolar septa and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and the presence of platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells, and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples.
These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue. Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure
Procedia PDF Downloads 254
2179 Molecular Characterization of Two Thermoplastic Biopolymer-Degrading Fungi Utilizing rRNA-Based Technology
Authors: Nuha Mansour Alhazmi, Magda Mohamed Aly, Fardus M. Bokhari, Ahmed Bahieldin, Sherif Edris
Abstract:
Out of 30 fungal isolates, 2 new isolates were proven to degrade poly-β-hydroxybutyrate (PHB). Enzyme assay for these isolates indicated the optimal environmental conditions required for the depolymerase enzyme to induce the highest level of biopolymer degradation. The two isolates were basically characterized at the morphological level as Trichoderma asperellum (isolate S1) and Aspergillus fumigatus (isolate S2) using standard approaches. The aim of the present study was to characterize these two isolates at the molecular level based on the highly diverged rRNA gene(s). Within this gene, two domains of the ribosome large subunit (LSU), namely the internal transcribed spacer (ITS) and 26S, were utilized in the analysis. The first domain comprises the ITS1/5.8S/ITS2 regions (>500 bp), while the second domain comprises the D1/D2/D3 regions (>1200 bp). Sanger sequencing was conducted at Macrogen (Inc.) for the two isolates using primers ITS1/ITS4 for the first domain and primers LROR/LR7 for the second domain. Sizes of the first domain ranged between 594-602 bp for the S1 isolate and 581-594 bp for the S2 isolate, while those of the second domain ranged between 1228-1238 bp for the S1 isolate and 1156-1291 bp for the S2 isolate. BLAST analysis indicated 99% identities of the first domain of the S1 isolate with T. asperellum isolates XP22 (ID: KX664456.1), CTCCSJ-G-HB40564 (ID: KY750349.1), CTCCSJ-F-ZY40590 (ID: KY750362.1) and TV (ID: KU341015.1). BLAST of the first domain of the S2 isolate indicated 100% identities with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identities with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other T. asperellum and A. fumigatus isolates and strains showed high levels of identity with the S1 and S2 isolates, respectively, based on the diversity of the first domain. BLAST of the second domain of the S1 isolate indicated 99 and 100% identities with only two strains of T. asperellum, namely TR 3 (ID: HM466685.1) and G (ID: KF723005.1), respectively. However, other Trichoderma species (e.g., atroviride, hamatum, deliquescens, harzianum) also showed high levels of identity. BLAST of the second domain of the S2 isolate indicated 100% identities with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identities with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other A. fumigatus isolates and strains showed high levels of identity with the S2 isolate. Overall, the results of molecular characterization based on rRNA diversity for the two isolates of T. asperellum and A. fumigatus matched those obtained by morphological characterization. In addition, the ITS domain proved to be more sensitive than the 26S domain in diversity profiling of fungi at the species level. Keywords: Aspergillus fumigatus, Trichoderma asperellum, PHB, degradation, BLAST, ITS, 26S, rRNA
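The "identities" figures reported above are the fraction of matching positions in a pairwise alignment. As a rough illustration only (not the BLAST algorithm itself, which performs gapped local alignment with scoring matrices), the percentage for two already-aligned sequences of equal length can be computed as:

```python
# Illustrative sketch: percent identity of two pre-aligned sequences.
# Real BLAST additionally handles gaps, local alignment, and E-values.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of aligned positions where the two sequences agree."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two hypothetical 10-bp ITS fragments differing at one position:
print(percent_identity("ACGTACGTAC", "ACGTACGTAA"))  # 90.0
```

A 99% identity over a ~600 bp ITS region, as reported for isolate S1, thus corresponds to roughly six mismatched positions.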
Procedia PDF Downloads 160
2178 The Impact of Environmental Management System ISO 14001 Adoption on Firm Performance
Authors: Raymond Treacy, Paul Humphreys, Ronan McIvor, Trevor Cadden, Alan McKittrick
Abstract:
This study employed event study methodology to examine the role of institutions, resources and dynamic capabilities in the relationship between Environmental Management System ISO 14001 adoption and firm performance. Utilising financial data from 140 ISO 14001 certified firms and 320 non-certified firms, the results of the study suggested that the UK and Irish manufacturers were not implementing ISO 14001 solely to gain legitimacy. In contrast, the results demonstrated that firms were fully integrating the ISO 14001 standard within their operations, as certified firms were able to improve both financial and operating performance when compared to non-certified firms. However, while there were significant and long-lasting improvements in employee productivity, manufacturing cost efficiency, return on assets and sales turnover, the sample firms' operating cycle and fixed asset efficiency displayed evidence of diminishing returns in the long run, underlining the observation that no operating advantage based on incremental improvements can be everlasting. Hence, there is an argument for investing in dynamic capabilities which help renew and refresh the resource base and help the firm adapt to changing environments. Indeed, the results of the regression analysis suggest that dynamic capabilities for innovation acted as a moderator in the relationship between ISO 14001 certification and firm performance. This, in turn, will have a significant and symbiotic influence on sustainability practices within the participating organisations. The study not only provides new and original insights, but demonstrates pragmatically how firms can take advantage of environmental management systems as a moderator to significantly enhance firm performance. However, while it was shown that firm innovation aided both short-term and long-term ROA performance, adaptive market capabilities only aided firms in the short term, at the marketing strategy deployment stage. 
Finally, the results have important implications for firms operating in an economic recession, as they suggest that firms should scale back investment in R&D while operating in an economic downturn. Conversely, under normal trading conditions, consistent and long-term investments in R&D were found to moderate the relationship between ISO 14001 certification and firm performance. Hence, the results of the study have important implications for academics and management alike. Keywords: supply chain management, environmental management systems, quality management, sustainability, firm performance
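The core event-study comparison described above (certified versus non-certified firms on operating metrics such as ROA) can be sketched minimally. The firm figures and the median-difference statistic below are illustrative assumptions, not the authors' actual estimation procedure:

```python
# Hedged sketch of an event-study comparison: change in a metric (e.g. ROA)
# after certification for treated firms, relative to a control group.
import statistics

def abnormal_change(certified, control):
    """Median post-minus-pre change for certified firms minus that of controls.

    Each input is a list of (metric_before, metric_after) tuples per firm.
    A positive result suggests certified firms outperformed the controls.
    """
    cert = statistics.median(after - before for before, after in certified)
    ctrl = statistics.median(after - before for before, after in control)
    return cert - ctrl

# Hypothetical ROA figures for three certified and three control firms:
certified_firms = [(0.05, 0.08), (0.04, 0.06), (0.07, 0.09)]
control_firms = [(0.05, 0.05), (0.06, 0.07), (0.04, 0.04)]
print(abnormal_change(certified_firms, control_firms))
```

In the study itself this comparison is run over multi-year windows and several metrics (productivity, cost efficiency, sales turnover), with regression analysis to test the moderating role of dynamic capabilities.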
Procedia PDF Downloads 311
2177 Ecocentric Principles for the Change of the Anthropocentric Design Within the Other Species Related Fields
Authors: Armando Cuspinera
Abstract:
Humans are nature itself, being, together with non-human species, part of the same ecosystem; yet praxis reflects that not all relations are the same. In fields of design such as Biomimicry, Biodesign, and Biophilic design, different approaches towards nature exist; nevertheless, anthropocentric principles such as domination, objectivization, or exploitation are embedded in them alongside ecocentric principles of the inherent importance of life itself. Anthropocentrism has confronted humanity with pollution of the earth, water, and air, and with the destruction of whole ecosystems through monocultures and the rampant production of useless objects, such that life cannot withstand this unaware rhythm focused only on human benefit. Even if the biosphere is by nature resilient, studies cited in the Paris Agreement explain that humanity will perish if this unconscious praxis continues. This is why it is important to develop a differentiation between anthropocentric and ecocentric principles in the praxis of design; in order to enhance respect, valorization, and positive affectivity towards other life forms, it is necessary to analyze which principles each practice of design reproduces. It is only from the study of the immaterial dimensions of design, such as symbolism, epistemology, and ontology, that the relation towards nature can be redesigned; in order to do so, it must be studied, from the dimension of ontological design, which principles (anthropocentric or ecocentric) the objects enhance or focus in the perception humans have of their surroundings. 'The things we design also design us' is the principle of ontological design, and in order to develop a way of ecological design in which it is possible to consider other species as users, designers, or collaborators, it is important to extend the studies and relation to other living forms from a transdisciplinary perspective of techniques, knowledge, practice, and disciplines in general. 
Materials, technologies, and any kind of knowledge share the character of a tool: neither good nor bad in themselves, their possibilities lie in the way they are used. The collaboration of disciplines and fields of study gives the opportunity to connect principles from other currents, such as Deep Ecology and the Environmental Humanities, in the development of design methodologies that study nature, integrate its strategies into our own species, and consider the life of other species as important as human life; it is only from the study of ontological design that material and immaterial dimensions can be analyzed and imbued with structures that already exist in other fields. Keywords: design, anthropocentrism, ecocentrism, ontological design
Procedia PDF Downloads 157
2176 Teachers' Experience for Improving Fine Motor Skills of Children with Down Syndrome in the Context of Special Education in Southern Province of Sri Lanka
Authors: Sajee A. Gamage, Champa J. Wijesinghe, Patricia Burtner, Ananda R. Wickremasinghe
Abstract:
Background: Teachers working in the context of special education have an enormous responsibility for enhancing the performance skills of children in their classroom settings. Fine Motor Skills (FMS) are essential functional skills for children to gain independence in Activities of Daily Living. Children with Down Syndrome (DS) are predisposed to specific challenges due to deficits in FMS. This study is aimed at determining teachers' experience of improving FMS of children with DS in the context of special education in Southern Province, Sri Lanka. Methodology: A cross-sectional study was conducted among all consenting eligible teachers (n=147) working in the context of special education in government schools of Southern Province of Sri Lanka. A self-administered questionnaire was developed based on literature and expert opinion to assess teachers' experience regarding deficits of FMS, limitations of classroom activity performance and barriers to improving FMS of children with DS. Results: Approximately 93% of the teachers were females, with a mean age (±SD) of 43.1 (±10.1) years. Thirty percent of the teachers had training in special education, and 83% had children with DS in their classrooms. Major deficits of FMS reported were deficits in grasping (n=116; 79%), in-hand manipulation (n=103; 70%) and bilateral hand use (n=99; 67.3%). Paperwork (n=70; 47.6%), painting (n=58; 39.5%), scissor work (n=50; 34.0%), pencil use for writing (n=45; 30.6%) and use of tools in the classroom (n=41; 27.9%) were identified as major classroom performance limitations of children with DS. Parental factors (n=67; 45.6%), disease-specific characteristics (n=58; 39.5%) and classroom factors (n=36; 24.5%) were identified as major barriers to improving FMS in the classroom setting. Lack of resources and standard tools, social stigma and late school admission were also identified as barriers to FMS training. 
Eighty-nine percent of the teachers reported that training fine motor activities in a special education classroom was more successful than in a normal classroom setting. Conclusion: Major areas of FMS deficits were grasping, in-hand manipulation and bilateral hand use; classroom performance limitations included paperwork, painting and scissor work of children with DS. Teachers recommended regular practice of fine motor activities according to individual need. Further research is required to design a culturally specific FMS assessment tool and intervention methods to improve FMS of children with DS in Sri Lanka. Keywords: classroom activities, Down syndrome, experience, fine motor skills, special education, teachers
Procedia PDF Downloads 154
2175 Parameter Estimation of Gumbel Distribution with Maximum Likelihood Based on Broyden-Fletcher-Goldfarb-Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme values in a set of observations can occur due to unusual circumstances. Such data can provide important information that other data cannot, so their existence needs to be further investigated. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called an extreme value distribution; here it is the Gumbel distribution with two parameters. Parameter estimation of the Gumbel distribution with the maximum likelihood (ML) method has no closed-form solution, so a numerical approximation is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to guarantee convergence, since the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by replacing the Hessian with an approximation that is updated in each iteration. Parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. For this method, the gradient vector and Hessian matrix are needed. This research combines theory and application, studying several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and estimates of the Gumbel distribution parameters. 
The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall values decreased. Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
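A minimal sketch of the estimation described above, assuming scipy's BFGS implementation in place of the authors' own algorithm: for the Gumbel (maximum) distribution with location mu and scale beta, the negative log-likelihood is n log(beta) + sum(z_i + exp(-z_i)) with z_i = (x_i - mu)/beta, and BFGS minimises it while building up its Hessian approximation from gradient information. The synthetic sample stands in for the Purworejo rainfall data, which is not reproduced here.

```python
# Sketch (not the authors' implementation): Gumbel ML estimation via
# quasi-Newton BFGS, using scipy.optimize.minimize with method="BFGS".
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, x):
    """Negative log-likelihood of the Gumbel (max) distribution.

    pdf: f(x) = (1/beta) * exp(-(z + exp(-z))), with z = (x - mu)/beta.
    """
    mu, log_beta = params              # optimise log(beta) so beta stays > 0
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

def fit_gumbel_bfgs(x):
    """Estimate (mu, beta) by minimising the NLL with BFGS."""
    start = np.array([np.mean(x), np.log(np.std(x))])   # crude starting point
    result = minimize(gumbel_nll, start, args=(x,), method="BFGS")
    return result.x[0], np.exp(result.x[1])             # (mu_hat, beta_hat)

# Synthetic block-maxima sample with known parameters mu=50, beta=10:
rng = np.random.default_rng(0)
sample = rng.gumbel(loc=50.0, scale=10.0, size=5000)
mu_hat, beta_hat = fit_gumbel_bfgs(sample)
```

Optimising log(beta) instead of beta is a common reparameterisation that keeps the scale positive without constrained optimisation, which matters because BFGS is an unconstrained method.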
Procedia PDF Downloads 326