Search results for: relative rates
1878 Enhanced Mechanical Properties and Corrosion Resistance of Fe-Based Thin Film Metallic Glasses via Pulsed Laser Deposition
Authors: Ali Obeydavi, Majid Rahimi
Abstract:
This study explores the synthesis and characterization of Fe-Cr-Mo-Co-C-B-Si thin film metallic glasses fabricated using the pulsed laser deposition (PLD) technique on silicon wafer and 304 stainless steel substrates. The laser pulse numbers (20,000; 30,000; 40,000) and energies (130, 165, 190 mJ) were systematically varied to investigate their effects on the microstructural, mechanical, and corrosion properties of the deposited films. Comprehensive characterization techniques, including grazing incidence X-ray diffraction, field emission scanning electron microscopy, atomic force microscopy, and transmission electron microscopy with selected area electron diffraction, were used to assess the amorphous structure and surface morphology. Results indicated that increased pulse numbers and laser energies led to higher deposition rates and film thicknesses. Nanoindentation tests demonstrated that the hardness and elastic modulus of the amorphous thin films significantly surpassed those of the 304 stainless steel substrate. Additionally, electrochemical polarization and impedance spectroscopy revealed that the Fe-based metallic glass coatings exhibited superior corrosion resistance compared to the stainless steel substrate. The observed improvements in mechanical and corrosion properties are attributed to the unique amorphous structure achieved through the PLD process, highlighting the potential of these materials as protective coatings in aggressive environments.
Keywords: thin film metallic glasses, pulsed laser deposition, mechanical properties, corrosion resistance
Procedia PDF Downloads 22
1877 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015
Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas
Abstract:
A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), which exists mainly in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to total OC mass concentration in central Los Angeles (CELA) and Riverside (i.e., receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF analysis resolved five factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities, for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentrations significantly decreased from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21% and 19% during the same period. Biomass burning maintained an almost constant relative contribution over the whole period.
Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions
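The PMF step can be illustrated, in spirit, by a plain nonnegative matrix factorization of a samples-by-species matrix into source contributions and source profiles. EPA PMF additionally weights residuals by measurement uncertainties, which this sketch omits; the data below are synthetic, not from the study.

```python
import numpy as np

def nmf(X, k, iters=1000, eps=1e-9):
    """Factor X (samples x species) into nonnegative contributions G and
    profiles F via Lee-Seung multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    G = rng.random((n, k)) + eps
    F = rng.random((k, m)) + eps
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)   # update profiles
        G *= (X @ F.T) / (G @ F @ F.T + eps)   # update contributions
    return G, F

# Synthetic example: 200 samples, 6 chemical species, 2 hidden sources
rng = np.random.default_rng(1)
X = rng.random((200, 2)) @ rng.random((2, 6))
G, F = nmf(X, k=2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)  # relative misfit
```

In a real source-apportionment run, each column of F would be inspected against known source fingerprints (e.g. EC/OC ratios for tailpipe emissions) before labeling the factors.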
Procedia PDF Downloads 198
1876 Comparative Study of Pixel and Object-Based Image Classification Techniques for Extraction of Land Use/Land Cover Information
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use/land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling of earth surface-atmosphere interactions through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change in changing these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, aligned with the respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum temperature decreases at 0.007 °C/year, while the minimum temperature increases at 0.02 °C/year. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and helps explain the observed climate trends. SEB modelling is therefore helpful for understanding LULC change and its impact on climate.
Keywords: remote sensing, GIS, object based, classification
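SEBAL obtains the latent heat flux as the residual of the surface energy balance, Rn = G + H + LE; a minimal sketch with hypothetical flux values (not from the study):

```python
def latent_heat_flux(rn, g, h):
    """Latent heat flux LE (W/m^2) as the residual of the surface energy
    balance Rn = G + H + LE, as in SEBAL."""
    return rn - g - h

# Hypothetical mid-morning values (W/m^2): net radiation, soil heat flux,
# sensible heat flux
le = latent_heat_flux(rn=600.0, g=90.0, h=210.0)

# Evaporative fraction: share of available energy (Rn - G) going to
# evaporation, often used to extrapolate instantaneous fluxes to daily ET
ef = le / (600.0 - 90.0)
```

A rising LE (and EF) over time would be the SEB signature one expects to accompany the increasing precipitation trend reported above.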
Procedia PDF Downloads 130
1875 The Role of Transport Investment and Enhanced Railway Accessibility in Regional Efficiency Improvement in Saudi Arabia: Data Envelopment Analysis
Authors: Saleh Alotaibi, Mohammed Quddus, Craig Morton, Jobair Bin Alam
Abstract:
This paper explores the role of large-scale investment in transport sectors and the impact of increased railway accessibility on the efficiency of regional economic productivity in the Kingdom of Saudi Arabia (KSA). There are considerable differences among the KSA regions in terms of their levels of investment and productivity due to their geographical scale and location, which in turn greatly affect their relative efficiency. The study used a non-parametric linear programming technique, Data Envelopment Analysis (DEA), to measure regional efficiency change over time and determine the drivers of inefficiency and their scope for improvement. In addition, Window DEA analysis is carried out to compare the efficiency performance change over various time periods. The Malmquist index (MI) is also analyzed to identify the sources of productivity change between subsequent years. The analysis involves spatial and temporal panel data collected from 1999 to 2018 for the 13 regions of the country. Outcomes reveal that transport investment and improved railway accessibility, in general, have significantly contributed to regional economic development. Moreover, the endowment of the new railway stations has spill-over effects. The DEA Window analysis confirmed the dynamic improvement in average regional efficiency over the study periods. MI showed that technical efficiency change was the main source of regional productivity improvement. However, there is evidence of investment allocation discrepancy among regions, which could limit the achievement of development goals in the long term. These findings will assist the Saudi government in developing better strategic decisions for future transport investments and their allocation at the regional level.
Keywords: data envelopment analysis, transport investment, railway accessibility, efficiency
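The Malmquist decomposition into efficiency change (catch-up to the frontier) and technical change (frontier shift) can be sketched as follows; the four efficiency scores are hypothetical, not taken from the study:

```python
from math import sqrt

def malmquist(e_t_t, e_t_t1, e_t1_t, e_t1_t1):
    """Malmquist productivity index between periods t and t+1.
    e_a_b = efficiency of period-b data measured against the period-a
    frontier. Returns (MI, efficiency change, technical change)."""
    ec = e_t1_t1 / e_t_t                              # catch-up term
    tc = sqrt((e_t_t1 / e_t1_t1) * (e_t_t / e_t1_t))  # frontier shift
    return ec * tc, ec, tc

# Hypothetical DEA scores for one region in two consecutive years
mi, ec, tc = malmquist(e_t_t=0.80, e_t_t1=0.95, e_t1_t=0.75, e_t1_t1=0.90)
# mi > 1 indicates productivity growth between the two years
```

The study's finding that technical efficiency change drove productivity improvement corresponds to the `ec` term dominating `tc` in this decomposition.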
Procedia PDF Downloads 149
1874 Early Childhood Developmental Delay in 63 Low- and Middle-Income Countries: Prevalence and Inequalities Estimated from National Health Surveys
Authors: Jesus D. Cortes Gil, Fernanda Ewerling, Leonardo Ferreira, Aluisio J. D. Barros
Abstract:
Background: The sustainable development goals call for inclusive, equitable, and quality learning opportunities for all. This is especially important for children, to ensure they all develop to their full potential. We studied the prevalence and inequalities of suspected delay in child development in 63 low- and middle-income countries. Methods and Findings: We used the early child development module from national health surveys, which covers four developmental domains (physical, social-emotional, learning, literacy-numeracy) and provides a combined indicator (early child development index, ECDI) of whether children are on track. We calculated the age-adjusted prevalence of suspected delay at the country level, stratified by wealth, urban/rural residence, sex of the child, and maternal education. We also calculated measures of absolute and relative inequality. We studied 330,613 children from 63 countries. The prevalence of suspected delay on the ECDI ranged from 3% in Barbados to 67% in Chad. For all countries together, 25% of the children were suspected of developmental delay. At the regional level, the prevalence of delay ranged from 10% in Europe and Central Asia to 42% in West and Central Africa. The literacy-numeracy domain was by far the most challenging, with the highest proportions of delay. We observed very large inequalities, most markedly for the literacy-numeracy domain. Conclusions: To date, our study presents the most comprehensive analysis of child development using an instrument especially developed for national health surveys. With a quarter of the children globally suspected of developmental delay, we face an immense challenge. The multifactorial aspect of early child development and the large gaps we found only add to the challenge of not leaving these children behind.
Keywords: child development, inequalities, global health, equity
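Age adjustment by direct standardization weights the age-specific prevalences by a common standard age distribution so that countries with different age mixes are comparable; a sketch with hypothetical prevalences and weights:

```python
def age_adjusted_prevalence(prev_by_age, std_weights):
    """Direct standardization: weight age-specific prevalences by a
    standard age distribution (weights must sum to 1)."""
    assert abs(sum(std_weights) - 1.0) < 1e-9
    return sum(p * w for p, w in zip(prev_by_age, std_weights))

# Hypothetical prevalences of suspected delay at ages 3 and 4,
# standardized to an even two-group age distribution
adj = age_adjusted_prevalence([0.22, 0.28], [0.5, 0.5])
```

The same weights are applied to every country, so differences in the adjusted prevalence reflect development outcomes rather than sampling age structure.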
Procedia PDF Downloads 119
1873 Shallow Water Lidar System in Measuring Erosion Rate of Coarse-Grained Materials
Authors: Ghada S. Ellithy, John W. Murphy, Maureen K. Corcoran
Abstract:
Erosion rate of soils during a levee or dam overtopping event is a major component in risk assessment of breach time and downstream consequences. The mechanism and evolution of dam or levee breach caused by overtopping erosion is a complicated process, and it is difficult to measure during overflow due to accessibility and quickly changing conditions. In this paper, the results of flume erosion tests are presented and discussed. The tests are conducted on a coarse-grained material with a median grain size D50 of 5 mm in a 1-m (3-ft) wide flume under varying flow rates. Each test is performed by compacting the soil mix to near its optimum moisture content and dry density, as determined from the standard Proctor test, in a box embedded in the flume floor. The box measures 0.45 m wide x 1.2 m long x 0.25 m deep. The material is tested several times at varying hydraulic loading to determine the erosion rate after equal time intervals. Water depth and velocity are measured at each hydraulic loading, and the acting bed shear is calculated. A shallow water lidar (SWL) system was utilized to record the progress of soil erodibility and water depth along the scanned profiles of the test box. SWL is a non-contact system that transmits laser pulses from above the water and records the time delay between top and bottom reflections. Results from the SWL scans are compared with before-and-after manual measurements to determine the erosion rate of the soil mix and other erosion parameters.
Keywords: coarse-grained materials, erosion rate, LIDAR system, soil erosion
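One common way to compute the acting bed shear in uniform open-channel flow is the depth-slope product, tau = rho * g * h * S; the abstract does not state which formulation was used, and the depth and slope below are hypothetical:

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def bed_shear_depth_slope(h, s):
    """Bed shear stress (Pa) from the depth-slope product tau = rho*g*h*S,
    valid for wide, uniform open-channel flow (h = depth, s = energy slope)."""
    return RHO * G * h * s

tau = bed_shear_depth_slope(h=0.10, s=0.02)  # 0.10 m deep flow on a 2% slope
```

Plotting measured erosion rate against tau computed this way yields the erodibility relation (detachment coefficient and critical shear) typically reported from such flume tests.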
Procedia PDF Downloads 112
1872 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to determine the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to machine aluminum molds, from which the microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain 1 cP, 5 cP, 10 cP and 15 cP viscosities at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL/hr, and displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The viscometer design is still being optimized for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips enabling real-time observation and control of viscosity changes in biological or chemical reactions.
Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer
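If tip displacement scales roughly linearly with the product of viscosity and flow rate (a drag-force assumption, not stated in the abstract), a simple calibration slope lets viscosity be read back from a measured displacement. The calibration points below are loosely based on the reported displacements and are otherwise hypothetical:

```python
def fit_slope(x, y):
    """Least-squares slope through the origin: y ~ k * x."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Calibration: displacement (um) versus viscosity * flow rate (cP * mL/hr),
# using the reported ~1.0 um at 10 cP and ~1.5 um at 15 cP, both at 60 mL/hr
mu_q = [10 * 60.0, 15 * 60.0]
disp = [1.0, 1.5]
k = fit_slope(mu_q, disp)

# Invert the calibration: an unknown fluid showing 1.2 um at 60 mL/hr
mu_unknown = 1.2 / (k * 60.0)
```

In practice the calibration would use many more points and account for the nonlinearity of the pillar bending response at larger deflections.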
Procedia PDF Downloads 247
1871 Identification and Evaluation of Environmental Concepts in Paulo Coelho's "The Alchemist"
Authors: Tooba Sabir, Asima Jaffar, Namra Sabir, Mohammad Amjad Sabir
Abstract:
Ecocriticism is the study of the relationship between humans and the environment, a relationship that has been represented in literature since the very beginning of the pastoral tradition. However, the analysis of such representation is new compared to other critical approaches like Psychoanalysis, Marxism, Post-colonialism and Modernism. Ecocritics seek out themes such as anthropocentrism, ecocentrism, ecofeminism, eco-Marxism, and the representation of the environment and environmental concepts, among several other topics. In the current study, the representation of environmental concepts was ecocritically analyzed in Paulo Coelho's The Alchemist, one of the most widely read novels in the world, having been translated into many languages. Analysis of the text revealed representations of environmental ideas such as landscapes and tourism, biodiversity, land-sea displacement, environmental disasters and warfare, desert winds and sand dunes. 'This desert was once a sea' throws light on different theories of land-sea displacement, one being plate tectonics, which proposes that Earth's lithosphere is divided into large and small plates continuously moving toward, away from, or parallel to each other, resulting in land-sea displacement. Another is the continental drift theory, which holds that one large landmass, Pangea, broke into smaller pieces of land that moved relative to each other and formed the continents of the present time. The cause of desertification may, however, be natural, i.e. climate change, or artificial, i.e. human activities. Imagery of the environmental concepts is detailed at some instances in the novel and less striking at others, but is still capable of arousing readers' imagination.
The study suggests that ecocritical justification of environmental concepts in the text will increase the interaction between literature and environment, which should be encouraged in order to foster environmental awareness among readers.
Keywords: biodiversity, ecocritical analysis, ecocriticism, environmental disasters, landscapes
Procedia PDF Downloads 264
1870 Removal of Bulk Parameters and Chromophoric Fractions of Natural Organic Matter by Porous Kaolin/Fly Ash Ceramic Membrane at South African Drinking Water Treatment Plants
Authors: Samkeliso S. Ndzimandze, Welldone Moyo, Oranso T. Mahlangu, Adolph A. Muleja, Alex T. Kuvarega, Thabo T. I. Nkambule
Abstract:
The high cost of precursor materials has hindered the commercialization of ceramic membrane technology in water treatment. In this work, a ceramic membrane disc (approximately 50 mm in diameter and 4 mm thick) was prepared from low-cost starting materials, kaolin, and fly ash by pressing at 200 bar and calcining at 900 °C. The fabricated membrane was characterized for various physicochemical properties, natural organic matter (NOM) removal as well as fouling propensity using several techniques. Further, the ceramic membrane was tested on samples collected from four drinking water treatment plants in KwaZulu-Natal, South Africa (named plants 1-4). The membrane achieved 48.6%, 54.6%, 57.4%, and 76.4% bulk UV254 reduction for raw water at plants 1, 2, 3, and 4, respectively. These removal rates were comparable to UV254 reduction achieved by coagulation/flocculation steps at the respective plants. Further, the membrane outperformed sand filtration steps in plants 1-4 in removing disinfection by-product precursors (8%-32%) through size exclusion. Fluorescence excitation-emission matrices (FEEM) studies showed the removal of fluorescent NOM fractions present in the water samples by the membrane. The membrane was fabricated using an up-scalable facile method, and it has the potential for application as a polishing step to complement conventional processes in water treatment for drinking purposes.
Keywords: crossflow filtration, drinking water treatment plants, fluorescence excitation-emission matrices, ultraviolet 254 (UV₂₅₄)
Procedia PDF Downloads 43
1869 Plant Growth, Symbiotic Performance and Grain Yield of 63 Common Bean Genotypes Grown Under Field Conditions at Malkerns Eswatini
Authors: Rotondwa P. Gunununu, Mustapha Mohammed, Felix D. Dakora
Abstract:
Common bean is an important high-protein grain legume grown in Southern Africa for human consumption and income generation. Although common bean can associate with rhizobia to fix N₂ for bacterial use and plant growth, it is reported to be a poor nitrogen fixer compared with other legumes. N₂ fixation can vary with legume species, genotype and rhizobial strain; therefore, screening legume germplasm can reveal rhizobia/genotype combinations with high N₂-fixing efficiency for use by farmers. This study assessed symbiotic performance and N₂ fixation in 63 common bean genotypes under field conditions at Malkerns Station in Eswatini, using the ¹⁵N natural abundance technique. The shoots of common bean genotypes were sampled at the pod-filling stage, oven-dried (65 °C for 72 h), weighed, ground into a fine powder (0.50 mm sieve), and subjected to ¹⁵N/¹⁴N isotopic analysis using mass spectrometry. At maturity, plants from the inner rows were harvested for determination of grain yield. The results revealed significantly higher nodulation (p≤0.05) in genotypes MCA98 and CIM-RM01-97-8 relative to the other genotypes. Shoot N concentration was highest in genotype MCA 98, followed by KAB 10 F2.8-84, with most genotypes showing shoot N concentrations below 2%. Percent N derived from atmospheric N₂ fixation (%Ndfa) differed markedly among genotypes, with CIM-RM01-92-3 and DAB 174 recording the highest values of 66.65% and 66.22% N derived from fixation, respectively. There were also significant differences in grain yield, with CIM-RM02-79-1 producing the highest yield (3618.75 kg/ha). These results represent an important contribution to the profiling of symbiotic functioning of common bean germplasm for improved N₂ fixation.
Keywords: nitrogen fixation, %Ndfa, ¹⁵N natural abundance, grain yield
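The ¹⁵N natural abundance method computes %Ndfa from the δ¹⁵N of the legume shoot, a non-fixing reference plant, and the B value (the δ¹⁵N of the legume when fully dependent on fixation); the δ¹⁵N values below are hypothetical, not measurements from the study:

```python
def pct_ndfa(delta15n_legume, delta15n_ref, b_value):
    """Percent N derived from atmospheric N2 fixation by the 15N natural
    abundance method: %Ndfa = (d_ref - d_legume) / (d_ref - B) * 100."""
    return (delta15n_ref - delta15n_legume) / (delta15n_ref - b_value) * 100.0

# Hypothetical values: non-fixing reference at +4.0 permil, bean shoot at
# +1.5 permil, B value of -1.6 permil
ndfa = pct_ndfa(delta15n_legume=1.5, delta15n_ref=4.0, b_value=-1.6)
```

A legume shoot δ¹⁵N close to the reference plant's value indicates little reliance on fixation, while a value approaching B indicates near-total dependence on atmospheric N₂.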
Procedia PDF Downloads 218
1868 Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity
Authors: Ganiyat Soliu, Glen Bright, Chiemela Onunka
Abstract:
Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment, and thereby in improving productivity. Queuing theory was used to model the performance of a multi-stage production line. Robotics, as a disruptive technology, was introduced into a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queuing model was used to determine the optimum service rate required by robots during the packaging stage of manufacturing to yield an optimum production cost. Different rates of production were assumed in the virtual manufacturing environment, and the cost of packaging was estimated along with the optimum production cost. An equation was generated using queuing theory, and the Newton-Raphson method was adopted for the analysis of the scenario. The queuing analysis presented here determines the number of robots required to regulate waiting time in order to increase output. The product arrival rate was high, showing that the queuing model was effective in minimizing service cost and waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. Overall productivity improved under the assumptions used in the queuing model implemented in the virtual manufacturing scenario.
Keywords: performance optimization, productivity, queuing theory, robotics
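As one concrete instance of this approach (the abstract does not specify the queue type), the M/M/1 station gives a mean queueing time Wq = lambda / (mu * (mu - lambda)), and Newton-Raphson can solve for the service rate that meets a target waiting time; the arrival rate and target below are hypothetical:

```python
def wq_mm1(lam, mu):
    """Mean time in queue for an M/M/1 station (requires mu > lam)."""
    return lam / (mu * (mu - lam))

def service_rate_for_target(lam, w_target, tol=1e-10):
    """Newton-Raphson solve for the service rate mu giving Wq = w_target,
    i.e. the root of f(mu) = lam - w_target * mu * (mu - lam)."""
    mu = 2.0 * lam  # start comfortably above the arrival rate
    for _ in range(100):
        f = lam - w_target * mu * (mu - lam)
        fp = -w_target * (2.0 * mu - lam)   # df/dmu
        step = f / fp
        mu -= step
        if abs(step) < tol:
            break
    return mu

# 30 products/hr arriving; target of 0.05 hr (3 minutes) in queue
mu_star = service_rate_for_target(lam=30.0, w_target=0.05)
```

The required robot count then follows by dividing `mu_star` by the service rate a single robot can sustain, rounding up.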
Procedia PDF Downloads 154
1867 The Construction Technology of Dryer Silo Materials to Grains Made from Webbing Bamboo: A Drying Technology Solutions to Empowerment Farmers in Yogyakarta, Indonesia
Authors: Nursigit Bintoro, Abadi Barus, Catur Setyo Dedi Pamungkas
Abstract:
Indonesia is an agrarian country in which much of the population works as farmers. Popular agricultural commodities in Indonesia include paddy and corn. Production of paddy and corn has increased, but this has not been matched by the development of appropriate technology for farmers. Farmers still dry their crops in the sun. Drying by this method has several drawbacks, such as uneven moisture content of the corn grains, a drying time of around 3 days, and lower product quality. Moreover, sun drying cannot be used when the rainy season arrives, and the product obtained in that season is of lower quality. One solution to the above problems is to create a dryer based on simple technology: a silo dryer made from woven bamboo and wood. This technology can be adopted by farmer groups, as it is quite cheap to build. The experimental material used in this research was corn grain. The equipment comprises a woven bamboo silo 3 meters tall with a capacity of up to 900 kg, a gas burner, a blower, a bucket elevator, thermocouples, and an Arduino Mega 2560 microcontroller. The system automatically records all temperature and relative humidity data. During drying, nine samples are taken every 30 minutes for moisture content measurement with a moisture meter. By using this technology, farmers can save time, energy, and cost in drying their agricultural products. In addition, grain dried with this technology has a good moisture content and a longer shelf life, because the temperature during heating is controlled. This technology is therefore applicable to the public: the materials used to make the dryer are easy to find and cheap, and the dryer is simple to manufacture with good resulting quality.
Keywords: grains, dryer, moisture content, appropriate technology
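The periodic moisture measurements can be expressed on a wet basis, the convention commonly used for grain drying; the sample masses below are hypothetical:

```python
def moisture_content_wb(mass_wet, mass_dry):
    """Wet-basis moisture content (%) from sample mass before and after
    full oven drying: MC = (m_wet - m_dry) / m_wet * 100."""
    return (mass_wet - mass_dry) / mass_wet * 100.0

# Hypothetical 100 g corn sample losing 18 g of water on drying
mc = moisture_content_wb(mass_wet=100.0, mass_dry=82.0)
```

Tracking this value every 30 minutes for the nine samples gives the drying curve used to decide when the grain has reached a safe storage moisture.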
Procedia PDF Downloads 358
1866 Galtung’s Violence Triangle: We Need to Be Thinking Upside Down
Authors: Michael Fusi Ligaliga
Abstract:
Peace and Conflict Studies (PACS), despite being a new pedagogical discipline, is a growing interdisciplinary academic field that has expanded its presence from the traditional lens of war, conflict, and violence to addressing various social issues impacting society. Family and domestic violence (FDV) has seldom been explored through the PACS lens, despite some studies showing that "on average, nearly 20 people per minute are physically abused by an intimate partner in the United States. Over one year, this equates to more than 10 million women and men." In the Pacific, FDV rates are some of the highest in the world. The friction caused by cultural practices reinforcing patriarchy and male impunity, compounded by historical colonial experiences as well as the impact of Christianity on the Pacific region, creates a complex social landscape when thinking about and addressing FDV in the Pacific. This paper seeks to re-examine Johan Galtung's violence triangle (GVT) theory and its application to understanding FDV in the Pacific. Galtung argues that there are three forms of violence: direct, structural, and cultural. Direct violence (DV) consists of behaviors that threaten life itself or diminish a person's ability to meet his or her basic needs. This form of violence is visible because it is manifested in behaviors such as killing, maiming, and sexual assault. Structural violence (SV) exists when people do not get equal access to the goods and services (health, education, justice) that enable them to reach their full potential. When ideologies embedded in cultural norms and practices are used to justify and advocate acts of violence by shifting the moral parameters from wrong to right or acceptable, this, according to Galtung, is referred to as cultural violence (CV).
Keywords: direct violence, cultural violence, structural violence, indigenous peacebuilding, Samoa
Procedia PDF Downloads 77
1865 The Femoral Eversion Endarterectomy Technique with Transection: Safety and Efficacy
Authors: Hansraj Riteesh Bookun, Emily Maree Stevens, Jarryd Leigh Solomon, Anthony Chan
Abstract:
Objective: This was a retrospective cross-sectional study evaluating the safety and efficacy of femoral endarterectomy using the eversion technique with transection, as opposed to the conventional endarterectomy technique with either vein or synthetic patch arterioplasty. Methods: Between 2010 and mid-2017, 19 patients with a mean age of 75.4 years underwent eversion femoral endarterectomy with transection by a single surgeon. There were 13 males (68.4%), and the comorbid burden was as follows: ischaemic heart disease (53.3%), diabetes (43.8%), stage 4 kidney impairment (13.3%) and current or ex-smoking (73.3%). The indications were claudication (45.5%), rest pain (18.2%) and tissue loss (36.3%). Results: The technical success rate was 100%. One patient required a blood transfusion following bleeding from intraoperative losses. Two patients required blood transfusions for low postoperative haemoglobin concentrations, one of them in the context of myelodysplastic syndrome. There were no unexpected returns to theatre. The mean length of stay was 11.5 days, with two patients having inpatient stays of 36 and 50 days respectively due to the need for rehabilitation. There was one death unrelated to the operation. Conclusion: The eversion technique with transection is safe and effective, with low complication rates and a normally expected length of stay. It has the advantage of not requiring a synthetic patch, and features minimal extraneous dissection as there is no need to harvest vein for a patch. Additionally, future endovascular interventions can be performed by puncturing the native vessel, and there is no change to the femoral bifurcation anatomy after this technique. We posit that this is a useful adjunct to the surgeon's panoply of vascular surgical techniques.
Keywords: endarterectomy, eversion, femoral, vascular
Procedia PDF Downloads 199
1864 Possible Modulation of FAS and PTP-1B Signaling in Ameliorative Potential of Bombax ceiba against High Fat Diet Induced Obesity
Authors: Paras Gupta, Rohit Goyal, Yamini Chauhan, Pyare Lal Sharma
Abstract:
Background: Bombax ceiba Linn., commonly called Semal, is used in various gastro-intestinal disturbances. It contains lupeol, which inhibits PTP-1B, adipogenesis, TG synthesis and the accumulation of lipids in adipocytes and adipokines, whereas the flavonoids isolated from B. ceiba have FAS inhibitory activity. The present study was aimed at investigating the ameliorative potential of Bombax ceiba against experimental obesity in Wistar rats, and its possible mechanism of action. Methods: Male Wistar albino rats weighing 180-220 g were employed in the present study. Experimental obesity was induced by feeding a high fat diet (HFD) for 10 weeks. Methanolic extract of B. ceiba at 100, 200 and 400 mg/kg, and gemfibrozil 50 mg/kg as the standard drug, were given orally from the 7th to the 10th week. Results: Induction with HFD for 10 weeks caused a significant (p < 0.05) increase in % body wt, BMI and LEE indices; serum glucose, triglyceride, LDL, VLDL, cholesterol, free fatty acid, ALT and AST; tissue TBARS and nitrate/nitrite levels; different fat pads and relative liver weight; and a significant decrease in food intake (g and kcal), serum HDL and tissue glutathione levels in HFD control rats. Treatment with B. ceiba extract and gemfibrozil significantly attenuated these HFD-induced changes, as compared to HFD control. The effect of B. ceiba at 200 and 400 mg/kg was more pronounced than that of gemfibrozil. Conclusion: On the basis of the results obtained, it may be concluded that the methanolic extract of the stem bark of Bombax ceiba has significant ameliorative potential against HFD-induced obesity in rats, possibly through modulation of FAS and PTP-1B signaling due to the presence of flavonoids and lupeol.
Keywords: obesity, Bombax ceiba, free fatty acid, protein tyrosine phosphatase-1B, fatty acid synthase
Procedia PDF Downloads 400
1863 Corrosion Response of Friction Stir Processed Mg-Zn-Zr-RE Alloy
Authors: Vasanth C. Shunmugasamy, Bilal Mansoor
Abstract:
Magnesium alloys are increasingly being considered for structural systems across different industrial sectors, including precision components of biomedical devices, owing to their high specific strength, stiffness and biodegradability. However, Mg alloys exhibit a high corrosion rate that restricts their application as biomaterials, so for safe use their corrosion rates must be controlled. Mg alloy corrosion is influenced by several factors, such as grain size, precipitates and texture. In Mg alloys, microgalvanic coupling between the α-Mg matrix and secondary precipitates can exist, which results in an increased corrosion rate. The present research addresses this challenge by engineering the microstructure of a biodegradable Mg-Zn-RE-Zr alloy by friction stir processing (FSP), a severe plastic deformation process. The FSP-processed Mg alloy showed improved corrosion resistance and mechanical properties, with refined grains, a strong basal texture, and broken, uniformly distributed secondary precipitates in the stir zone. The Mg alloy base material exposed to an in vitro corrosion medium showed microgalvanic coupling between precipitates and matrix, resulting in an unstable passive layer, whereas the FSP-processed alloy showed uniform corrosion owing to the formation of a stable surface film. The stable surface film is attributed to the refined grains, preferred texture and distribution of precipitates. These results show promising potential for this Mg alloy to be developed as a biomaterial.
Keywords: biomaterials, severe plastic deformation, magnesium alloys, corrosion
Procedia PDF Downloads 43
1862 FRATSAN: A New Software for Fractal Analysis of Signals
Authors: Hamidreza Namazi
Abstract:
Fractal analysis assesses the fractal characteristics of data. It comprises several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion and networks. Fractal analysis is now widely used in all areas of science. An important limitation is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; other essential characteristics have to be considered as well. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures: fractal dimension, Jeffrey's measure and the Hurst exponent. After computing these measures, the software plots a graph for each of them. Besides computing the three measures, the software can classify whether or not a signal is fractal. The software uses a dynamic method of analysis for all measures: a sliding window with a length equal to 10% of the total number of data entries is moved one data entry at a time to obtain each measure. This makes the computation very sensitive to slight changes in the data, giving the user an acute analysis. To test the performance of the software, a set of EEG signals was given as input and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes; for instance, by analyzing the Hurst exponent plot of an EEG signal in patients with epilepsy, the onset of a seizure can be predicted by noticing sudden changes in the plot.
Keywords: EEG signals, fractal analysis, fractal dimension, Hurst exponent, Jeffrey's measure
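The abstract does not disclose FRATSAN's exact algorithms, but the sliding-window Hurst analysis it describes can be sketched in a few lines. The following is an illustrative rescaled-range (R/S) estimator with a 10% sliding window, not the software's actual implementation:

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    the slope of log(R/S) against log(chunk size)."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            # range of the cumulative mean-adjusted series
            cum, lo, hi = 0.0, 0.0, 0.0
            for x in chunk:
                cum += x - mean
                lo, hi = min(lo, cum), max(hi, cum)
            std = math.sqrt(sum((x - mean) ** 2 for x in chunk) / size)
            if std > 0:
                rs_vals.append((hi - lo) / std)
        if rs_vals:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
        size *= 2
    mx = sum(log_sizes) / len(log_sizes)
    my = sum(log_rs) / len(log_rs)
    return (sum((a - mx) * (b - my) for a, b in zip(log_sizes, log_rs))
            / sum((a - mx) ** 2 for a in log_sizes))

def sliding_hurst(series, window_frac=0.10):
    """Hurst exponent in a sliding window of ~10% of the data,
    advanced one sample at a time, as the abstract describes."""
    w = max(64, int(len(series) * window_frac))
    return [hurst_rs(series[i:i + w]) for i in range(len(series) - w + 1)]
```

A persistent series (e.g. a random walk) yields a markedly higher estimate than white noise, which is the property exploited when watching for sudden changes in the Hurst plot of an EEG signal.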
Procedia PDF Downloads 467
1861 Effect of Pozzolanic Additives on the Strength Development of High Performance Concrete
Authors: Laura Dembovska, Diana Bajare, Ina Pundiene, Daira Erdmane
Abstract:
The aim of this research is to estimate the effect of pozzolanic substitutes and their combinations on the hydration heat and final strength of high performance concrete. Ternary cementitious systems with different ratios of ordinary Portland cement, silica fume and calcined clay were investigated. Local illite clay was calcined at 700 °C in a rotary furnace for 20 min. It is well recognized that pozzolanic materials such as silica fume or calcined clay are recommended for high performance concrete to reduce porosity, increase density and, as a consequence, raise the chemical durability of the concrete. It was found that silica fume has a superior influence on the strength development of concrete, whereas calcined clay increases density and decreases the size of the dominating pores. Additionally, the rates of pozzolanic reaction and calcium hydroxide consumption in the silica fume-blended cement pastes are higher than in the illite clay-blended cement pastes; this strongly depends on the amount of pozzolanic substitutes used. When the pozzolanic reaction dominates, the amount of Ca(OH)2 decreases. The identity and amount of the phases present were determined from differential thermal analysis (DTA) data. The hydration temperature of the blended cement pastes was measured during the first 24 hours. Fresh and hardened concrete properties were tested: compressive strength was determined and DTA was conducted on specimens at the ages of 3, 14, 28 and 56 days.
Keywords: high performance concrete, pozzolanic additives, silica fume, ternary systems
Procedia PDF Downloads 375
1860 From Liquid to Solid: Advanced Characterization of Glass Applying Oscillatory Rheometry
Authors: Christopher Giehl, Anja Allabar, Daniela Ehgartner
Abstract:
Rotational rheometry is standard practice for viscosity measurement of molten glass, but it neglects the viscoelastic properties of the material, especially at temperatures approaching the glass transition. Heating and cooling rates and the time-dependent visco-elastic behavior influence the temperature at which materials undergo the glass transition. This study presents quantitative thermo-mechanical visco-elasticity measurements on three samples in the Na-K-Al-Si-O system. The measurements were performed with a Furnace Rheometer System combined with an air-bearing DSR 502 measuring head (Anton Paar) and a Pt90Rh10 measuring geometry. Temperature ramps were conducted in rotation and oscillation, and the (complex) viscosity values were compared to viscosity values calculated from the sample compositions. Furthermore, temperature ramps at different frequencies were conducted, revealing the frequency dependence of the shear loss modulus G'' and the shear storage modulus G': a lower oscillatory frequency results in a lower glass transition temperature, as defined by the G'-G'' crossover point. This contribution demonstrates that oscillatory rheometry serves as a powerful toolbox for glass melt characterization beyond viscosity measurements, as it accounts for the visco-elasticity of glass melts by quantifying the viscous and elastic moduli. Further, it offers a strong definition of Tg beyond the 10^12 Pa·s concept, which cannot be obtained with rotational viscometry data.
Keywords: frequency dependent glass transition, Na-K-Al-Si-O glass melts, oscillatory rheometry, visco-elasticity
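The G'-G'' crossover used here as a Tg definition is found between two sampled temperature points; a minimal sketch locating it by linear interpolation (the synthetic moduli below are illustrative, not the paper's measurements):

```python
def crossover_temperature(temps, g_storage, g_loss):
    """Locate the G'-G'' crossover temperature by linear interpolation
    between adjacent sampled points; returns None if the moduli never cross."""
    for i in range(len(temps) - 1):
        d0 = g_storage[i] - g_loss[i]
        d1 = g_storage[i + 1] - g_loss[i + 1]
        if d0 == 0.0:
            return temps[i]  # exact crossover at a sample point
        if d0 * d1 < 0.0:
            # sign change: interpolate the zero of (G' - G'')
            frac = d0 / (d0 - d1)
            return temps[i] + frac * (temps[i + 1] - temps[i])
    return None

# illustrative data: G' falls below G'' somewhere between 510 and 520 °C
tg = crossover_temperature([500.0, 510.0, 520.0],
                           [10.0, 8.0, 2.0],   # G' (storage)
                           [2.0, 6.0, 10.0])   # G'' (loss)
```

In practice the moduli would come from the oscillatory temperature ramp at one fixed frequency, and repeating the search per frequency exposes the frequency dependence of Tg reported above.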
Procedia PDF Downloads 107
1859 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System
Authors: Tahsin A. H. Nishat, Raquib Ahsan
Abstract:
Sensitivity analysis of design parameters can become a significant factor in the optimization procedure while designing any structural system. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness. For the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project were considered. For the optimization method, cost optimization of this system was carried out using the global optimization methodology 'Evolutionary Operation (EVOP)'; the problem, from which optimum values of the 14 design parameters were obtained, contains 14 explicit constraints and 46 implicit constraints. For both types of design parameters, sensitivity analysis was conducted on the deck slab thickness parameter, which can become too sensitive for the obtained optimum solution. Deviations of slab thickness on both the upper and lower side of its optimum value were considered, reflecting its realistic range of variation during construction, while the remaining parameters were kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints was examined and the variations in cost were estimated. It was found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only 0.3 mm. This result suggests that slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, sensitivity analysis should be conducted for either design procedure of the girder and deck system.
Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system
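The check described above, perturbing slab thickness while holding the other 13 parameters fixed and re-testing constraint compliance and cost, is a one-at-a-time (OAT) sensitivity scan. A generic sketch follows; the `evaluate` function and parameter names are hypothetical stand-ins, not the study's bridge model:

```python
def oat_sensitivity(evaluate, base, param, deltas):
    """One-at-a-time scan: perturb a single design parameter, keep the
    rest fixed, and report (delta, feasible?, cost change) per trial.
    `evaluate(design) -> (cost, feasible)` encapsulates the constraint checks."""
    base_cost, _ = evaluate(base)
    results = []
    for d in deltas:
        trial = dict(base)        # copy; all other parameters unchanged
        trial[param] += d
        cost, feasible = evaluate(trial)
        results.append((d, feasible, cost - base_cost))
    return results

# toy stand-in model: slab thickness feasible only between 200 and 225 mm,
# cost proportional to thickness
def toy_evaluate(design):
    t = design["slab_thickness_mm"]
    return (t * 10.0, 200 <= t <= 225)

scan = oat_sensitivity(toy_evaluate, {"slab_thickness_mm": 200},
                       "slab_thickness_mm", [-5, 10, 30])
```

The scan directly reproduces the logic of the abstract: the largest feasible positive delta is the reported sensitivity margin (25 mm for the conventional design, 0.3 mm for the optimum).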
Procedia PDF Downloads 137
1858 Effect of Climate Change on Groundwater Recharge in a Sub-Humid Sub-Tropical Region of Eastern India
Authors: Suraj Jena, Rabindra Kumar Panda
Abstract:
The study region was in Eastern India, with a sub-humid sub-tropical climate and sandy loam soil. Rainfall in this region has wide temporal and spatial variation. Due to the lack of adequate surface water to meet irrigation and household demands, groundwater is being over-exploited, leading to continuous depletion of the groundwater level. There is therefore an obvious urgency in reversing the depleting groundwater level through induced recharge, which becomes even more critical under climate change scenarios. The major goal of the reported study was to investigate the effects of climate change on groundwater recharge and the subsequent adaptation strategies. Groundwater recharge was modelled using HELP3, a quasi-two-dimensional, deterministic, water-routing model, along with global climate models (GCMs) and three global warming scenarios, to examine the changes in groundwater recharge rates for a 2030 climate under a variety of soil and vegetation covers. The relationship between mean annual recharge and mean annual rainfall was evaluated for every combination of soil and vegetation using sensitivity analysis and was found to be statistically significant (p < 0.05), with a coefficient of determination of 0.81. Vegetation dynamics and water use, affected by the increase in potential evapotranspiration under the large climate variability scenario, led to a significant decrease in recharge, from 49–658 mm to 18–179 mm. Therefore, appropriate conjunctive use, irrigation scheduling and enhanced recharge practices under the climate variability and land use/land cover change scenarios impacting groundwater recharge need to be properly understood for groundwater sustainability.
Keywords: groundwater recharge, climate variability, land use/cover, GCM
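The recharge-rainfall relationship reported above (coefficient of determination 0.81) is an ordinary least-squares fit; a self-contained sketch of that computation, run here on illustrative data rather than the study's simulations:

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares slope/intercept and coefficient of
    determination (R^2) for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# illustrative rainfall (mm) vs. recharge (mm) pairs on an exact line
slope, intercept, r2 = linear_fit_r2([1.0, 2.0, 3.0, 4.0],
                                     [3.0, 5.0, 7.0, 9.0])
```

In the study, each (mean annual rainfall, mean annual recharge) pair would come from one soil-vegetation combination, and the resulting R^2 quantifies how tightly recharge tracks rainfall.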
Procedia PDF Downloads 282
1857 Preference Heterogeneity as a Positive Rather Than Negative Factor towards Acceptable Monitoring Schemes: Co-Management of Artisanal Fishing Communities in Vietnam
Authors: Chi Nguyen Thi Quynh, Steven Schilizzi, Atakelty Hailu, Sayed Iftekhar
Abstract:
Territorial Use Rights for Fisheries (TURFs) have emerged as a promising tool for fisheries conservation and management. However, illegal fishing has undermined the effectiveness of TURFs, profoundly degrading global fish stocks and marine ecosystems. Conservation and management of fisheries therefore largely depend on the effectiveness of enforcing fishing regulations, which requires co-enforcement by fishers. However, fishers tend to resist participating in monitoring, as their views on monitoring scheme design have not received adequate attention. Fishers' acceptance of a monitoring scheme is more likely if there is a mechanism allowing them to engage in the early planning and design stages. This study carried out a choice experiment with 396 fishers in Vietnam to elicit their preferences for monitoring schemes and to estimate the relative importance fishers place on the key design elements. Preference heterogeneity was investigated using a Scale-Adjusted Latent Class Model that accounts for both preference and scale variance. Welfare changes associated with the proposed monitoring schemes were also examined. Five distinct preference classes were found, suggesting that no one-size-fits-all scheme is well suited to all fishers. Although fishers prefer to be compensated more for their participation, compensation is not a driving element in fishers' choices; most fishers place higher value on other elements, such as institutional arrangements and monitoring capacity. Fishers' preferences are driven by their socio-demographic and psychological characteristics. Understanding how changes in design element levels affect the participation of fishers could provide policy makers with insights useful for tailoring monitoring schemes to the needs of different fisher classes.
Keywords: design of monitoring scheme, enforcement, heterogeneity, illegal fishing, territorial use rights for fisheries
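Latent class choice models of this kind are built on conditional logit choice probabilities, with a class-specific scale factor as the "scale-adjusted" element. A minimal sketch of that core probability formula (the utilities below are illustrative, not the study's estimates):

```python
import math

def choice_probabilities(utilities, scale=1.0):
    """Conditional logit: P(i) = exp(scale * V_i) / sum_j exp(scale * V_j).
    The scale factor is what a scale-adjusted latent class model allows
    to differ across respondent classes (higher scale = less noisy choices)."""
    exps = [math.exp(scale * v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# two hypothetical monitoring-scheme alternatives with utilities 1.0 and 0.0
p = choice_probabilities([1.0, 0.0])
```

As the scale tends to zero, choices become pure noise (equal probabilities); estimating class-specific scales prevents that noise from being mistaken for preference heterogeneity.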
Procedia PDF Downloads 324
1856 The Impact of Rising Architectural Façade in Improving Terms of the Physical Urban Ambience Inside the Free Space for Urban Fabric - the Street- Case Study the City of Biskra
Authors: Rami Qaoud, Alkama Djamal
Abstract:
This research asks how the height of the architectural façade improves the physical urban ambience within the free space of the urban fabric, the street, which can be seen as bringing life, cultural values and civic character back to these cities; this is the theme of this study. The relationship between the empty and built-up portions of the urban fabric was studied in terms of construction density and the ratio of façade height to street width. Methodologically, the research adopted a field measurement campaign covering three types of street geometry (H ≥ 2W, H = W, H ≤ 0.5W), in which the values of the physical ambience were recorded along three main axes. The first axis is the thermal ambience, for which air temperature, relative humidity, wind speed and surface temperatures (outer wall and ground) were collected. The second axis is the visual ambience, for which natural daylight levels were measured during the daytime. The third axis is the acoustic ambience, for which sound levels were recorded throughout the day. The campaign lasted three consecutive days and used six measuring stations: one station for each type of street geometry in each of two differently oriented streets. Comparing the values obtained revealed clear differences between the three types of street geometry: an air temperature difference of up to 4 °C, a difference of six hours in the duration of direct natural lighting, and a difference of up to 15 dB in the recorded sound levels. These differences indicate the impact of rising architectural façades in improving the physical urban ambience within the free space of the urban fabric, the street.
Keywords: street, physical urban ambience, rising architectural façade, urban fabric
Procedia PDF Downloads 289
1855 Effect of Varying Diets on Growth, Development and Survival of Queen Bee (Apis mellifera L.) in Captivity
Authors: Muhammad Anjum Aqueel, Zaighum Abbas, Mubasshir Sohail, Muhammad Abubakar, Hafiz Khurram Shurjeel, Abu Bakar Muhammad Raza, Muhammad Afzal, Sami Ullah
Abstract:
In view of the increasing demand, queens of Apis mellifera L. (Hymenoptera: Apidae) were reared artificially in this experiment on varying diets supplemented with royal jelly. Larval duration, pupal duration, and the weight and size of pupae were evaluated for each diet. Queen larvae were raised by the Doolittle grafting method. Four different diets (fructose, sugar, yeast and honey) were each mixed with the same amount of royal jelly and provided to the rearing queen larvae. Larval and pupal durations were longest on yeast (6.15 and 7.5 days, respectively) and shortest on honey (5.05 and 7.02 days, respectively). The heaviest and largest pupae were recorded on yeast (168.14 mg and 1.76 cm, respectively), followed by the diets containing sugar and honey. Because it produced heavier and bigger pupae, yeast was considered the best artificial diet for growing queen larvae. In the second part of the experiment, different amounts of yeast were therefore provided to growing larvae along with a fixed amount (0.5 g) of royal jelly. Survival rates of the larvae and queen bees were 70% and 40% with 4 g of food, 86.7% and 53.3% with 6 g, and 76.7% and 50% with 8 g. The weight of the adult queen bee (1.459 ± 0.191 g) and the number of ovarioles (41.7 ± 21.3) were highest at 8 g of food. The results of this study will help bee-keepers produce fitter queen bees.
Keywords: Apis mellifera L., dietary effect, survival and development, honey bee queen
Procedia PDF Downloads 490
1854 Dual Electrochemical Immunosensor for IL-13Rα2 and E-Cadherin Determination in Cell, Serum and Tissues from Cancer Patients
Authors: Amira ben Hassine, A. Valverde, V. Serafín, C. Muñoz-San Martín, M. Garranzo-Asensio, M. Gamella, R. Barderas, M. Pedrero, N. Raouafi, S. Campuzano, P. Yáñez-Sedeño, J. M. Pingarrón
Abstract:
This work describes the development of a dual electrochemical immunosensing platform for the accurate determination of two target proteins, IL-13 receptor α2 (IL-13Rα2) and E-cadherin (E-cad). The proposed methodology is based on sandwich immunosensing approaches (involving horseradish peroxidase-labeled detector antibodies) implemented on magnetic microbeads (MBs) and amperometric transduction at screen-printed dual carbon electrodes (SPdCEs). The magnetic bioconjugates were captured onto the SPdCEs, and amperometric transduction was performed using the H2O2/hydroquinone (HQ) system. Under optimal experimental conditions, the developed bioplatform demonstrates linear concentration ranges of 1.0–25 and 5.0–100 ng mL-1 and detection limits of 0.28 and 1.04 ng mL-1 for E-cad and IL-13Rα2, respectively, with excellent selectivity against other non-target proteins. The immunoplatform also offers good reproducibility among the amperometric responses provided by nine different sensors constructed in the same manner (relative standard deviation values of 3.1% for E-cad and 4.3% for IL-13Rα2). Moreover, the results confirm the practical applicability of this bioplatform for the accurate determination of the endogenous levels of both extracellular receptors in colon cancer cells (both intact and lysed) with different metastatic potential, as well as in serum and tissues from patients diagnosed with colorectal cancer at different grades. Interesting features in terms of simplicity, speed, portability and the sample amount required to provide quantitative results make this immunoplatform more compatible with clinical diagnosis and prognosis at the point of care than conventional methodologies.
Keywords: electrochemistry, immunosensors, biosensors, E-cadherin, IL-13 receptor α2, colorectal cancer
Procedia PDF Downloads 137
1853 Effect of Gas Boundary Layer on the Stability of a Radially Expanding Liquid Sheet
Authors: Soumya Kedia, Puja Agarwala, Mahesh Tirumkudulu
Abstract:
Linear stability analysis is performed for a radially expanding liquid sheet in the presence of a gas medium. A liquid sheet can break up because of aerodynamic effects as well as its thinning; however, these effects are usually studied separately because the combined formulation becomes complicated and difficult to solve. The present work combines the aerodynamic effect and the thinning effect, ignoring non-linearity in the system. This is done by taking into account the formation of the gas boundary layer while neglecting viscosity in the liquid phase, and axisymmetric flow is assumed for simplicity. The base state analysis results in a Blasius-type system which can be solved numerically. Perturbation theory is then applied to study the stability of the liquid sheet, where the gas-liquid interface is subjected to small deformations. The linear model derived here can be applied to investigate the instability of sinuous as well as varicose modes, where the former represents displacement of the centerline of the sheet and the latter represents modulation of the sheet thickness. Temporal instability analysis is performed for sinuous modes, which are significantly more unstable than varicose modes, at a fixed radial distance, implying local stability analysis. The growth rates, measured at fixed wavenumbers, predicted by the present model are significantly lower than those obtained from the inviscid Kelvin-Helmholtz instability and compare better with experimental results. Thus, the present theory gives better insight into the stability of a thin liquid sheet.
Keywords: boundary layer, gas-liquid interface, linear stability, thin liquid sheet
Procedia PDF Downloads 229
1852 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward back-propagation network. Results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, although learning may not be consummated; this held in all instances considered, including the prediction of extreme flow conditions 1 day and 5 days ahead. In average statistical terms, model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE and MSRE), GDM performed better than the others overall. Based on these findings, the adoption of ANNs for real-time forecasting should employ training algorithms that avoid the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
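The CE statistic used to score the models above is the Nash-Sutcliffe coefficient of efficiency, CE = 1 - Σ(oᵢ - sᵢ)² / Σ(oᵢ - ō)². A self-contained sketch on illustrative flows (not the study's data):

```python
def coefficient_of_efficiency(observed, simulated):
    """Nash-Sutcliffe coefficient of efficiency:
    CE = 1 - sum((o - s)^2) / sum((o - mean(o))^2).
    CE = 1 is a perfect forecast; CE = 0 means no better than the
    mean of the observations; CE < 0 is worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_tot

# illustrative streamflow series: perfect model vs. mean-only model
ce_perfect = coefficient_of_efficiency([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
ce_mean = coefficient_of_efficiency([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
```

Reporting CE for training and validation separately, as the abstract does, exposes overfitting: a large drop from training to validation CE signals a network that memorised rather than generalised.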
Procedia PDF Downloads 152
1851 Evaluation of Machine Learning Algorithms and Ensemble Methods for Prediction of Students’ Graduation
Authors: Soha A. Bahanshal, Vaibhav Verdhan, Bayong Kim
Abstract:
Graduation rates at six-year colleges are becoming a more essential indicator for incoming students and for university rankings. Predicting student graduation is extremely beneficial to schools and has huge potential for targeted intervention. It is important for educational institutions because it enables the development of strategic plans that assist or improve students' performance in achieving their degrees on time (GOT). Machine learning techniques offer a first step and a helping hand in extracting useful information from these data and gaining insights into the prediction of students' progress and performance. Data analysis and visualization techniques were applied to understand and interpret the data, which cover science-major students who graduated within 6 years as of the 2017-2018 academic year; the analysis can be used to predict the graduation of students in the next academic year. Different predictive models, including logistic regression, decision trees, support vector machines, random forest, naive Bayes and KNeighborsClassifier, were applied to predict whether a student will graduate. These classifiers were evaluated with 5-fold cross-validation, and their performance was compared on the basis of accuracy. The results indicated that the ensemble classifier achieves the best accuracy, about 91.12%. This GOT prediction model would hopefully be useful to university administration and academics in developing measures for assisting and boosting students' academic performance and ensuring they graduate on time.
Keywords: prediction, decision trees, machine learning, support vector machine, ensemble model, student graduation, GOT graduate on time
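The 5-fold evaluation loop behind the accuracy comparison can be sketched without any ML library. The "classifier" below is a stand-in majority-class predictor, not one of the paper's models; only the cross-validation protocol is the point:

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Shuffle indices once, then split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(fit, predict, X, y, k=5):
    """Mean accuracy over k train/test splits."""
    folds = k_fold_indices(len(X), k)
    accs = []
    for fold in folds:
        test = set(fold)
        train_X = [X[i] for i in range(len(X)) if i not in test]
        train_y = [y[i] for i in range(len(X)) if i not in test]
        model = fit(train_X, train_y)
        correct = sum(predict(model, X[i]) == y[i] for i in fold)
        accs.append(correct / len(fold))
    return sum(accs) / k

# stand-in classifier: always predicts the training set's majority label
def fit_majority(X, y):
    return max(set(y), key=y.count)

def predict_majority(model, x):
    return model
```

Swapping `fit_majority`/`predict_majority` for real classifiers (logistic regression, random forest, and so on) reproduces the comparison protocol; accuracy averaged over the five held-out folds is the figure the abstract compares.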
Procedia PDF Downloads 72
1850 Impact of Curvatures in the Dike Line on Wave Run-up and Wave Overtopping, ConDike-Project
Authors: Malte Schilling, Mahmoud M. Rabah, Sven Liebisch
Abstract:
Wave run-up and overtopping are the relevant parameters for dimensioning the crest height of dikes. Various experimental as well as numerical studies have investigated these parameters under different boundary conditions (e.g., wave conditions, structure type). Particularly for dike design in Europe, a common approach is used in which wave and structure properties are parameterized. This approach, however, assumes equal run-up heights and overtopping discharges along the longitudinal axis, whereas convex dikes have a heterogeneous crest by definition. Hence, local differences along a convex dike line are expected to cause wave-structure interactions different from those at a straight dike. This study aims to assess both run-up and overtopping at convexly curved dikes. To cast light on the relevance of curved dikes for the design approach mentioned above, physical model tests were conducted in a 3D wave basin of the Ludwig-Franzius-Institute Hannover. A dike with a slope of 1:6 (height over length) was tested under both regular waves and TMA wave spectra. Significant wave heights ranged from 7 to 10 cm and peak periods from 1.06 to 1.79 s. Run-up and overtopping were assessed behind the curved and straight sections of the dike, and both measurements were compared to a dike with a straight line. It was observed that convex curvature in the longitudinal dike line redirects incident waves, concentrating them around the center point. Measurements prove that both run-up heights and overtopping rates are higher than on the straight dike. It can be concluded that deviations from a straight longitudinal dike line have an impact on design parameters and imply uncertainties within the design approach in force. Therefore, it is recommended to consider these influencing factors in such cases.
Keywords: convex dike, longitudinal curvature, overtopping, run-up
Procedia PDF Downloads 2931849 The Value of Serum Procalcitonin in Patients with Acute Musculoskeletal Infections
Authors: Mustafa Al-Yaseen, Haider Mohammed Mahdi, Haider Ali Al–Zahid, Nazar S. Haddad
Abstract:
Background: Early diagnosis of musculoskeletal infections is of vital importance to avoid devastating complications. There is no single laboratory marker that is both sensitive and specific in diagnosing these infections accurately. White blood cell count, erythrocyte sedimentation rate and C-reactive protein are not specific, as they can also be elevated in conditions other than bacterial infections, and culture and sensitivity testing is not a true gold standard owing to its varied positivity rates. Serum procalcitonin (PCT) is one of the newer laboratory markers for pyogenic infections. The objective of this study is to assess the value of PCT in the diagnosis of soft tissue, bone and joint infections. Patients and Methods: Patients of all age groups (seventy-four patients) with a diagnosis of musculoskeletal infection were prospectively included in this study. All patients underwent white blood cell count, erythrocyte sedimentation rate, C-reactive protein and serum procalcitonin measurements. A healthy, non-infected outpatient group (twenty-two subjects) was taken as the control group and underwent the same evaluation steps as the study group. Results: The study group showed a mean procalcitonin level of 1.3 ng/ml. Procalcitonin, at a cut-off of 0.5 ng/ml, was 42.6% sensitive and 95.5% specific in diagnosing musculoskeletal infections, with a positive predictive value of 87.5%, a negative predictive value of 48.3%, a positive likelihood ratio of 9.3 and a negative likelihood ratio of 0.6. Conclusion: Serum procalcitonin, at a cut-off of 0.5 ng/ml, is a specific but not sensitive marker for the diagnosis of musculoskeletal infections; it can be used effectively to rule in infection but not to rule it out.
Keywords: procalcitonin, infection, laboratory markers, musculoskeletal
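All of the indices reported above derive from the standard 2x2 confusion table of test result versus true infection status; a sketch of the computations (the counts in the example are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic accuracy indices from true/false positive/negative counts:
    sensitivity, specificity, predictive values and likelihood ratios."""
    sens = tp / (tp + fn)   # fraction of infected patients detected
    spec = tn / (tn + fp)   # fraction of non-infected correctly negative
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "lr_plus": sens / (1.0 - spec),  # positive likelihood ratio
        "lr_minus": (1.0 - sens) / spec, # negative likelihood ratio
    }

# illustrative counts: 9 true positives, 1 false positive,
# 1 false negative, 9 true negatives
m = diagnostic_metrics(tp=9, fp=1, fn=1, tn=9)
```

A high positive likelihood ratio with a negative likelihood ratio near 1, as reported here for PCT (9.3 and 0.6), is exactly the pattern of a rule-in rather than rule-out test.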
Procedia PDF Downloads 163