Search results for: total error rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16787

10817 A Foodborne Cholera Outbreak in a School Caused by Eating Contaminated Fried Fish: Hoima Municipality, Uganda, February 2018

Authors: Dativa Maria Aliddeki, Fred Monje, Godfrey Nsereko, Benon Kwesiga, Daniel Kadobera, Alex Riolexus Ario

Abstract:

Background: Cholera is a severe gastrointestinal disease caused by Vibrio cholerae and has caused several pandemics. On 26 February 2018, a suspected cholera outbreak, with one death, occurred in School X in Hoima Municipality, western Uganda. We investigated to identify the scope and mode of transmission of the outbreak and to recommend evidence-based control measures. Methods: We defined a suspected case as onset of diarrhea, vomiting, or abdominal pain in a student or staff member of School X or their family members during 14 February–10 March. A confirmed case was a suspected case with V. cholerae cultured from stool. We reviewed medical records at Hoima Hospital and searched for cases at School X. We conducted descriptive epidemiologic analysis and hypothesis-generating interviews of 15 case-patients. In a retrospective cohort study, we compared attack rates between exposed and unexposed persons. Results: We identified 15 cases among 75 students and staff of School X and their family members (attack rate=20%), with onset from 25–28 February. One patient died (case-fatality rate=6.7%). The epidemic curve indicated a point-source exposure. On 24 February, a student brought fried fish from her home in a fishing village where a cholera outbreak was ongoing. Of the 21 persons who ate the fish, 57% developed cholera, compared with 5.6% of the 54 persons who did not (RR=10; 95% CI=3.2–33). None of the 4 persons who recooked the fish before eating developed cholera, compared with 71% of the 17 who did not recook it (RR=0.0; 95% CI (Fisher's exact)=0.0–0.95). Of 12 stool specimens cultured, 6 yielded V. cholerae. Conclusion: This cholera outbreak was caused by eating fried fish that might have been contaminated with V. cholerae in a village with an ongoing outbreak. Insufficient cooking of the fish might have facilitated the outbreak. We recommended thoroughly cooking fish before consumption.
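The reported attack rates and relative risk can be reproduced from the case counts implied by the percentages (12 of 21 persons who ate the fish and 3 of 54 who did not; these counts are an inference from the reported figures, not stated explicitly in the abstract):

```python
import math

def relative_risk(a, n1, c, n2, z=1.96):
    """Relative risk of exposed vs unexposed with a Wald-type 95% CI.
    a/n1: cases/total among exposed; c/n2: cases/total among unexposed."""
    ar_exposed = a / n1            # attack rate among exposed
    ar_unexposed = c / n2          # attack rate among unexposed
    rr = ar_exposed / ar_unexposed
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # standard error of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return ar_exposed, ar_unexposed, rr, (lo, hi)

ar1, ar0, rr, ci = relative_risk(12, 21, 3, 54)
print(f"AR exposed {ar1:.0%}, AR unexposed {ar0:.1%}, "
      f"RR {rr:.1f}, 95% CI {ci[0]:.1f}-{ci[1]:.0f}")
```

Running this yields an RR of about 10 with a 95% CI of roughly 3.2–33, matching the abstract's figures.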

Keywords: cholera, disease outbreak, foodborne, global health security, Uganda

Procedia PDF Downloads 182
10816 Synthesis and Characterization of a Type Oxide Ca1-xSrxMnO3

Authors: A. Guemache, M. Omari

Abstract:

Oxides with the formula Ca1-xSrxMnO3 (0≤x≤0.2) were synthesized using the co-precipitation method. The identification of the obtained phase was carried out using infrared spectroscopy and X-ray diffraction. Thermogravimetric and differential thermal analysis made it possible to characterize the different transformations of the precursors that take place during one heating cycle. The electrochemical behavior was studied by cyclic voltammetry and impedance spectroscopy. The results show that the apparent catalytic activity improved with increasing strontium concentration. Anodic current densities vary from 1.3 to 5.9 mA/cm2 at a scan rate of 20 mV.s-1 and a potential of 0.8 V for oxides with composition x=0 to 0.2.

Keywords: oxide, co-precipitation, thermal analysis, electrochemical properties

Procedia PDF Downloads 345
10815 Determinants of Standard Audit File for Tax Purposes Accounting Legal Obligation Compliance Costs: Empirical Study for Portuguese SMEs of Leiria District

Authors: Isa Raquel Alves Soeiro, Cristina Isabel Branco de Sá

Abstract:

In Portugal, since 2008, there has been a requirement to export the Standard Audit File for Tax Purposes (SAF-T) standard file (in XML format). This file gathers a company's tax-relevant information for a specific taxation period. There are two types of SAF-T files serving different purposes: the SAF-T of revenues and the SAF-T of accounting, which requires taxpayers and accounting firms to invest in adapting their accounting programs to the legal requirements. The implementation of the SAF-T accounting file aims to facilitate the collection of relevant tax data by tax inspectors in support of taxpayers' tax returns, for the analysis of accounting records or other information with tax relevance (Portaria No. 321-A/2007 of March 26 and Portaria No. 302/2016 of December 2). The main objective of this research project is to verify, through quantitative analysis, the compliance cost for Small and Medium Enterprises (SMEs) in the district of Leiria of introducing and implementing the SAF-T accounting tax obligation. The information was collected through a questionnaire sent to a population of companies selected through the SABI Bureau van Dijk database in 2020. Based on the responses obtained, the companies were divided into two groups: Group 1, self-employed professionals and companies whose main activity is accounting services; and Group 2, companies that do not belong to the accounting sector. In general terms, the conclusion is that there are no statistically significant differences in the costs of complying with the accounting SAF-T between Group 1 and Group 2 and that, on average, internal costs represent the largest component of the total compliance cost in both groups.
The results also show that, in both groups, the total costs of complying with the accounting SAF-T are regressive, in line with international studies, although those studies relate to different tax obligations. Additionally, we verified that the variables business volume, software used, number of employees, and legal form explain the differences in the costs of complying with the accounting SAF-T among SMEs in the Leiria district.

Keywords: compliance costs, SAF-T accounting, SME, Portugal

Procedia PDF Downloads 66
10814 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also increased significantly. The number of smartphone users is at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics like VMAF and SSIM are considered among the best performing, but these metrics are predominantly evaluated on professionally produced video content, shot with professional tools and lighting. No study has specifically examined the performance of these metrics on content taken by ordinary users on the most commonly available devices. Datasets containing a huge number of videos from different high-end devices make it difficult to analyze metric performance on content from the most used devices, even when they include content taken in poor lighting with lower-end devices. Such devices suffer many distortions from various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken from the three most commonly used device types: an Android smartphone, an iOS smartphone, and a DSLR. To the videos from each device, the six most common types of distortion that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of the six distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on each distortion were recorded.
Finally, blur proved to be the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor to H.264, on the camera that proved to be the sharpest among the devices. The results show that as resolution increases, metric performance tends to become more accurate. The best performing metric is VQM, with very few inconsistencies and inaccurate results under H.264 compression, while under HEVC compression, SSIM and VMAF performed significantly better.
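Full-reference metrics score a distorted clip against its pristine reference. As a minimal stand-in for the metrics evaluated here (PSNR, the simplest full-reference metric, rather than VMAF, SSIM, or VQM, which need dedicated libraries), assuming 8-bit frames flattened to pixel lists:

```python
import math

def psnr(reference, distorted, peak=255):
    """Peak signal-to-noise ratio between two equally sized 8-bit frames,
    each given as a flat sequence of pixel values. Higher dB = closer match."""
    if len(reference) != len(distorted):
        raise ValueError("frames must have the same size")
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")        # identical frames
    return 10 * math.log10(peak ** 2 / mse)

ref = [10, 50, 90, 130, 170, 210]      # toy "frame"
noisy = [12, 47, 95, 128, 171, 205]    # mildly distorted copy
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```

Real VQA pipelines apply such a score per frame and pool the results over the clip; perceptual metrics like SSIM and VMAF replace the pixelwise error with structure- and perception-aware terms.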

Keywords: distortion, metrics, performance, resolution, video quality assessment

Procedia PDF Downloads 191
10813 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to compute numerical solutions of nonlinear advection-diffusion equations. The primary aims are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. To achieve these objectives, we use the second-order central finite difference to approximate the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization. For the temporal discretization, the second-order backward difference formula is applied. To compute the starting points of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is demonstrated by numerical simulations of Burgers' equations. These simulations show that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy than other existing numerical schemes.
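The backward-tracing idea underlying such schemes can be illustrated in the simplest setting, constant-coefficient linear advection, where the departure point is known exactly and no iteration is ever needed (the paper's actual scheme uses BDF2 in time, B-splines in space, and an error-correction step for the nonlinear characteristics; this sketch uses linear interpolation only):

```python
import math

def semi_lagrangian_step(u, a, dx, dt):
    """One backward semi-Lagrangian step for u_t + a*u_x = 0 on a periodic
    grid: each node is traced back to its departure point x_i - a*dt and the
    solution is interpolated there (linearly, for simplicity)."""
    n = len(u)
    shift = a * dt / dx                    # departure offset in grid cells
    unew = []
    for i in range(n):
        x = i - shift                      # departure point in grid coordinates
        j = math.floor(x)
        w = x - j                          # linear interpolation weight in [0, 1)
        unew.append((1 - w) * u[j % n] + w * u[(j + 1) % n])
    return unew

# Advect a sine wave at CFL number 2.5 -- stable for semi-Lagrangian schemes,
# whereas explicit Eulerian schemes would require CFL <= 1
n, dx, a, dt = 50, 1.0, 1.0, 2.5
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
u1 = semi_lagrangian_step(u, a, dx, dt)
```

The key property on display is unconditional stability with respect to the advective CFL number, which is what makes the method attractive once the iteration for the departure points is removed.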

Keywords: semi-Lagrangian method, iteration-free method, nonlinear advection-diffusion equation, second-order backward difference formula

Procedia PDF Downloads 310
10812 Trusting the Eyes: The Changing Landscape of Eyewitness Testimony

Authors: Manveen Singh

Abstract:

Since the very advent of law enforcement, eyewitness testimony has played a pivotal role in identifying, arresting, and convicting suspects. Heavily reliant on the accuracy of human memory, nothing seems to carry more weight with the judiciary than the testimony of an actual witness. The acceptance of eyewitness testimony as a substantive piece of evidence rests on the assumption that the human mind is adept at recording and storing events. Research, though, has proven otherwise. Having carried out extensive study in the field of eyewitness testimony over the past 40 years, psychologists have concluded that human memory is fragile and needs to be treated carefully. The question that arises, then, is: how reliable is eyewitness testimony? The credibility of eyewitness testimony, simply put, depends on several factors, leaving it reliable at times and unreliable at others. This is further substantiated by scientific research suggesting that over 75 percent of eyewitness testimonies may stand in error, with quite a few of these cases resulting in life sentences. Although the advancement of scientific techniques, especially DNA testing, has helped overturn many eyewitness-testimony-based convictions, eyewitness identifications continue to form the backbone of most police investigations and courtroom decisions to date. What, then, is the solution to this long-standing concern regarding the accuracy of eyewitness accounts? The present paper analyzes the linkage between human memory and eyewitness identification, examines the various factors governing the credibility of eyewitness testimonies, and elaborates upon some best practices developed over the years to help reduce mistaken identifications, in the process tracing the changing landscape of eyewitness testimony amidst the evolution of DNA and trace evidence.

Keywords: DNA, eyewitness, identification, testimony, evidence

Procedia PDF Downloads 316
10811 Developing the Principal Change Leadership Non-Technical Competencies Scale: An Exploratory Factor Analysis

Authors: Tai Mei Kin, Omar Abdull Kareem

Abstract:

In light of globalization, educational reform has become a top priority for many countries. However, the task of leading change effectively requires a multidimensional set of competencies. Over the past two decades, the technical competencies of principal change leadership have been extensively analysed and discussed. Comparatively little research has been conducted in the Malaysian education context on non-technical competencies, popularly known as emotional intelligence, which are equally crucial for the success of change. This article provides a validation of the Principal Change Leadership Non-Technical Competencies (PCLnTC) Scale, a tool that practitioners can easily use to assess school principals' level of the change leadership non-technical competencies that facilitate change and maximize change effectiveness. The overall coherence of the PCLnTC model was constructed by incorporating three theories: (a) change leadership theory, whereby leading change is the fundamental role of a leader; (b) competency theory, in which leadership can be taught and learned; and (c) the concept of emotional intelligence, whereby it can be developed, fostered, and taught. An exploratory factor analysis (EFA) was used to determine the underlying factor structure of the PCLnTC model. Before conducting the EFA, five pilot-test steps were taken to ensure the validity and reliability of the instrument: (a) review by academic colleagues; (b) verification and comments from a panel; (c) evaluation of questionnaire format, syntax, design, and completion time; (d) evaluation of item clarity; and (e) assessment of internal consistency reliability. A total of 335 teachers from 12 High Performing Secondary Schools in Malaysia completed the survey. The PCLnTCS, with a six-point Likert-type scale, was subjected to Principal Components Analysis. The analysis yielded a three-factor solution, namely (a) Interpersonal Sensitivity, (b) Flexibility, and (c) Motivation, explaining a total of 74.326 per cent of the variance.
Based on the results, implications for instrument revisions are discussed and specifications for future confirmatory factor analysis are delineated.
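The extraction step of such an analysis can be sketched with synthetic data: principal components are the eigenvectors of the item correlation matrix, and the Kaiser criterion retains components with eigenvalue above 1 (the data, item counts, and loadings below are illustrative only, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey: two latent competencies driving six questionnaire items
n_resp = 300
latent = rng.standard_normal((n_resp, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = latent @ loadings.T + 0.4 * rng.standard_normal((n_resp, 6))

# Factor extraction: eigen-decomposition of the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
retained = int((eigvals > 1).sum())          # Kaiser criterion: eigenvalue > 1
explained = eigvals[:retained].sum() / eigvals.sum() * 100
print(f"{retained} factors retained, {explained:.1f}% of total variance explained")
```

With a clean two-factor structure like this, two eigenvalues exceed 1 and together they account for most of the variance; a real EFA would follow this with rotation and loading interpretation, as the paper does for its three factors.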

Keywords: exploratory factor analysis, principal change leadership non-technical competencies (PCLnTC), interpersonal sensitivity, flexibility, motivation

Procedia PDF Downloads 410
10810 Cotton Transplantation as a Practice to Escape Infection with Some Soil-Borne Pathogens

Authors: E. M. H. Maggie, M. N. A. Nazmey, M. A. Abdel-Sattar, S. A. Saied

Abstract:

A successful trial of transplanting cotton is reported. Seeds were grown in trays for 4-5 weeks in an easily prepared supporting medium such as peat moss or similar plant waste. Seedlings were then carefully transplanted into the permanent field with their root systems as intact as possible. The practice reduced the incidence of damping-off and allowed full winter crop revenues. Further work is needed to evaluate parameters such as the growth curve, flowering curve, and yield on an economic basis.

Keywords: cotton, transplanting cotton, damping-off diseases, environmental sciences

Procedia PDF Downloads 344
10809 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator

Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani

Abstract:

During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that such an explosion would cause, risk decision-makers often use quantitative risk analysis (QRA). This rigorous and advanced approach requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as geometry effects. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model.
Fireball characteristics (diameter, height, heat flux, and lifetime) from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters such as the injection rate and the radiative fraction on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experiment data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R'Mel gas field (the largest gas field in Algeria).
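The empirical correlations that QRA typically relies on express fireball size and duration as cube-root functions of the released fuel mass. A sketch using commonly quoted Roberts-type coefficients (the coefficients are an assumption of this sketch, not values taken from the paper):

```python
def fireball_roberts(mass_kg):
    """Classic empirical fireball correlations: diameter and duration scale
    with the cube root of the released fuel mass. Coefficients are the
    commonly quoted values for masses below roughly 30,000 kg."""
    d = 5.8 * mass_kg ** (1 / 3)       # maximum fireball diameter [m]
    t = 0.45 * mass_kg ** (1 / 3)      # fireball duration [s]
    return d, t

d, t = fireball_roberts(1000)          # e.g. one tonne of propane
print(f"diameter = {d:.0f} m, duration = {t:.1f} s")
```

Correlations of this form take only the fuel mass as input, which is exactly why they cannot capture geometry or release-condition effects, the gap the paper's CFD approach is meant to close.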

Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA

Procedia PDF Downloads 175
10808 Relationship between Iron-Related Parameters and Soluble Tumor Necrosis Factor-Like Weak Inducer of Apoptosis in Obese Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Iron is physiologically essential. However, it also participates in the catalysis of free-radical-forming reactions, and its deficiency is associated with amplified health risks. This trace element also has links with another physiological process related to cell death, apoptosis: both iron deficiency and iron overload are closely associated with apoptosis. Soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) has the ability to trigger apoptosis and plays a dual role in the physiological versus pathological inflammatory responses of tissues. The aim of this study was to investigate the status of these parameters, as well as the associations among them, in children with obesity, a low-grade inflammatory state. The study was performed on groups of children with normal body mass index (N-BMI) and with obesity, with forty-three children in each group. Based upon the age- and sex-adjusted BMI percentile tables prepared by the World Health Organization, children whose values fell between the 15th and 85th percentiles were included in the N-BMI group. Children whose BMI percentile values were between the 95th and 99th percentiles comprised the obese (OB) group. Institutional ethics committee approval and informed consent forms were obtained prior to the study. Anthropometric measurements (weight, height, waist circumference, hip circumference, head circumference, neck circumference) and blood pressure values (systolic and diastolic) were recorded. Routine biochemical analyses, including serum iron, total iron binding capacity (TIBC), transferrin saturation percentage (Tf Sat %), and ferritin, were performed. sTWEAK levels were determined by enzyme-linked immunosorbent assay. Study data were evaluated using appropriate statistical tests in SPSS. Serum iron levels were 91±34 µg/dL and 75±31 µg/dL in N-BMI and OB children, respectively.
The corresponding values for TIBC, Tf Sat %, and ferritin were 265 µg/dL vs 299 µg/dL, 37.2±19.1% vs 26.7±14.6%, and 41±25 ng/mL vs 44±26 ng/mL in the N-BMI and OB groups, respectively. sTWEAK concentrations were measured as 351 ng/L and 325 ng/L, respectively (p>0.05). Correlation analysis revealed significant associations between sTWEAK levels and the iron-related parameters (p<0.05), except ferritin. In conclusion, iron contributes to apoptosis. Children with iron deficiency have a decreased apoptosis rate in comparison with healthy children, and sTWEAK is an inducer of apoptosis. Obese children had lower levels of both iron and sTWEAK. Low levels of sTWEAK are associated with several types of cancer and poor survival. Although an iron deficiency state was not observed in this study, the correlations detected between decreased sTWEAK and decreased iron as well as Tf Sat % values are valuable findings that point to decreased apoptosis. This may induce a proinflammatory state, potentially leading to malignancies later in the lives of obese children.
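Transferrin saturation ties together two of the other reported iron parameters: it is serum iron divided by TIBC, expressed as a percentage. A quick check against the reported group means (this will not exactly reproduce the paper's Tf Sat % figures, since those average patient-level ratios rather than dividing group means):

```python
def transferrin_saturation(serum_iron, tibc):
    """Transferrin saturation (%) from serum iron and total iron binding
    capacity, both in the same units (here ug/dL)."""
    return serum_iron / tibc * 100

# Group means from the abstract
print(f"N-BMI: {transferrin_saturation(91, 265):.1f}%")
print(f"OB:    {transferrin_saturation(75, 299):.1f}%")
```

Even from the group means, the direction of the finding is clear: lower serum iron and higher TIBC in the obese group both push transferrin saturation down.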

Keywords: apoptosis, children, iron-related parameters, obesity, soluble tumor necrosis factor-like weak inducer of apoptosis

Procedia PDF Downloads 119
10807 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as eVTOL (electric vertical take-off and landing) aircraft, current manufacturing methods cannot efficiently meet these higher-rate demands. This project looks at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study focuses on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study also develops a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4.0 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain, and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
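The report-by-exception principle mentioned above means an IO device pushes a value to the hub only when it changes materially, instead of being polled point-to-point. A minimal sketch (the class, deadband threshold, and callback are illustrative, not from the paper's architecture):

```python
class ExceptionReporter:
    """Publishes a sensor value to the hub only when it changes by more
    than a deadband threshold (report by exception)."""

    def __init__(self, publish, deadband=0.0):
        self.publish = publish         # callback delivering values to the hub
        self.deadband = deadband
        self.last = None               # last value actually published

    def sample(self, value):
        if self.last is None or abs(value - self.last) > self.deadband:
            self.publish(value)
            self.last = value

hub_log = []
sensor = ExceptionReporter(hub_log.append, deadband=0.5)
for reading in [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]:
    sensor.sample(reading)
print(hub_log)   # only significant changes reach the hub
```

Six readings produce only three messages, which is the bandwidth saving that makes a hub-and-spoke topology scale to many devices.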

Keywords: Industry 4.0, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 63
10806 Design of Low Power FSK Receiver

Authors: M. Aeysha Parvin, J. Asha, J. Jenifer

Abstract:

This letter presents a novel frequency-shift keying (FSK) receiver using a PLL-based FSK demodulator, thereby achieving high sensitivity and low power consumption. The proposed receiver comprises a power amplifier, a mixer, a 3-stage ring oscillator, and a PLL-based demodulator. The receiver is fabricated in a 0.12 µm CMOS process and consumes 0.7 mW. Measurement results demonstrate a sensitivity of -93 dBm at a 1 Mbps data rate when receiving a 2.4 GHz FSK signal.
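FSK encodes each bit as one of two tone frequencies. The paper demodulates with a PLL; as a software illustration of the keying principle only, the sketch below uses non-coherent tone-energy detection instead (all frequencies and sample counts are illustrative, chosen so the two tones are orthogonal over a bit period):

```python
import math

def fsk_demodulate(samples, f0, f1, fs, bit_samples):
    """Non-coherent FSK demodulation: for each bit period, correlate the
    signal against both keying tones and pick the one with more energy."""
    bits = []
    for start in range(0, len(samples), bit_samples):
        chunk = samples[start:start + bit_samples]
        energies = []
        for f in (f0, f1):
            i = sum(s * math.cos(2 * math.pi * f * k / fs)
                    for k, s in enumerate(chunk, start))
            q = sum(s * math.sin(2 * math.pi * f * k / fs)
                    for k, s in enumerate(chunk, start))
            energies.append(i * i + q * q)
        bits.append(int(energies[1] > energies[0]))
    return bits

# 1 kHz / 2 kHz tones at 48 kHz sampling, 48 samples (1 ms) per bit
fs, f0, f1, bit_samples = 48_000, 1_000, 2_000, 48
bits = [1, 0, 1, 1, 0]
signal = [math.sin(2 * math.pi * (f1 if bits[k // bit_samples] else f0) * k / fs)
          for k in range(len(bits) * bit_samples)]
print(fsk_demodulate(signal, f0, f1, fs, bit_samples))
```

A PLL-based demodulator such as the paper's instead tracks the instantaneous frequency and reads the bit from the loop's control voltage, which is far cheaper in silicon than banks of correlators.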

Keywords: CMOS FSK receiver, phase locked loop (PLL), 3-stage ring oscillator, FSK signal

Procedia PDF Downloads 482
10805 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model

Authors: Jiachen Wang, Dongxu Ji

Abstract:

Geothermal power generation is a mature technology with zero carbon emissions and stable power output, and it could play a vital role as an optimal base-load substitute in China's future decarbonized energy system. However, the development of geothermal power plants in China has stagnated for a decade due to the underestimation of geothermal energy and insufficient policy support. A lack of understanding of the potential value of base-load technology and its environmental benefits is the critical reason for this weak policy support. This paper proposes an energy-economic model to uncover the potential benefits of developing a geothermal power plant in Puer, including the value of base-load power generation and the environmental and economic benefits. Optimization of the Organic Rankine Cycle (ORC) for maximum power output and minimum levelized cost of electricity was first conducted. This process aimed at finding the optimal working fluid, turbine inlet pressure, pinch point temperature difference, and superheat degree. The optimized ORC model was then fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of geothermal power plants was tested under three scenarios: a carbon trading market, a direct subsidy per unit of electricity generated, and no support. In addition, the reservoir requirements, including the geothermal temperature and mass flow rate needed for the technology to be competitive with other renewables, are listed. The results indicate that the ORC power plant has significant economic and environmental benefits over other renewable power generation technologies under the carbon trading and subsidy scenarios. At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130 °C and a minimum mass flow rate of 50 kg/s to guarantee a profitable project under the no-support scenario.
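The levelized cost of electricity minimized in the optimization is, in its simplest form, the annualized capital cost plus yearly operating cost divided by annual generation. A sketch with purely hypothetical plant numbers (discount rate, lifetime, and costs are assumptions of this sketch, not the paper's inputs):

```python
def lcoe(capex, opex_per_year, annual_mwh, rate=0.08, lifetime=30):
    """Levelized cost of electricity [$ per MWh]: capital cost annualized via
    the capital recovery factor, plus O&M, divided by yearly generation."""
    crf = rate * (1 + rate) ** lifetime / ((1 + rate) ** lifetime - 1)
    return (capex * crf + opex_per_year) / annual_mwh

# Hypothetical 10 MW ORC plant at a 90% capacity factor
annual_mwh = 10 * 8760 * 0.9
print(f"LCOE = {lcoe(40e6, 1.2e6, annual_mwh):.1f} $/MWh")
```

A per-kWh subsidy or a carbon price enters a model like this as a credit against the numerator, which is how the paper's scenarios shift the plant from unprofitable to competitive.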

Keywords: geothermal power generation, optimization, energy model, thermodynamics

Procedia PDF Downloads 54
10804 A Cross Culture Analysis of Medicinal Plants and Phytotherapies: Highly Effective for Gastropathic Disorders among Three Ethnic Communities of South West Pakistan

Authors: Sheikh Z. Ul Abidin, Raees Khan, Rainer W. Bussmann, Mushtaq Ahmad, Shayan Jamshed, Humera Jabeen, Ajmal Khan

Abstract:

Gastropathic disorders are increasing rapidly, and millions of patients are reported every year across the world. Herbal medicines and traditional phytotherapies are very effective for many diseases, including gastropathic ailments, and many communities and regions have their own unique remedies for such diseases. The current study aimed to investigate and document highly valued medicinal plants and folk remedies for different gastropathic disorders among three ethnic groups in three regions of South West Pakistan. A total of 104 semi-structured interviews with experts in traditional knowledge were conducted in 21 localities of the three regions (D.I. Khan, Zhob, and Mianwali). The interviews focused especially on the documentation of folk herbal remedies. The collected data were analyzed using different quantitative methods. The most effective plants from all localities were identified with the help of local interviewees and collected for proper taxonomic identification. A total of 56 medicinal plants and 33 effective recipes for 12 gastropathic diseases were documented from the three ethnic groups in the 21 localities. Fabaceae and Asteraceae were the families most prominently used for gastropathic diseases. Diarrhea, vomiting, and dysentery were the diseases most commonly treated with herbal remedies. It was observed that the three communities shared knowledge about the use of medicinal plants: 35 species were commonly reported from all three areas. However, each community also had its own unique uses of medicinal plants; for example, 23 plant species were used only in Zhob, 20 only in D.I. Khan, and 16 only in Mianwali. The present study reveals that different communities and ethnic groups share some traditional knowledge while also holding their own unique knowledge of plant utilization.
Gastropathic disorders are increasing very rapidly, and traditional cross-cultural knowledge of medicinal plant use can be very effective for their cure.
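Quantitative ethnobotanical analyses of this kind commonly use indices such as the informant consensus factor (ICF). The formula below is the standard one; the counts are hypothetical, not taken from the study:

```python
def informant_consensus_factor(use_reports, taxa):
    """ICF = (Nur - Nt) / (Nur - 1), where Nur is the number of use reports
    for an ailment category and Nt the number of taxa used for it. Values
    near 1 mean informants agree on a small set of plants."""
    return (use_reports - taxa) / (use_reports - 1)

# Hypothetical counts for one category, e.g. diarrhea
print(f"ICF: {informant_consensus_factor(104, 15):.2f}")
```

A high ICF for a category like diarrhea would indicate strong agreement among informants across localities, which is the kind of cross-cultural consensus the study reports for its 35 shared species.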

Keywords: cross cultural, ethnic groups, gastropathy, phytotherapies, South West Pakistan

Procedia PDF Downloads 280
10803 Prevalence and Hypertension Management among the Nomadic Migratory Community of Marsabit County, Kenya: Lessons Learned and Wayforward

Authors: Wesley Too, Christine Chesiror

Abstract:

Hypertension is a public health challenge that globally, with the World Health Organization estimating that by 2025, more than 1.5 billion people would have been diagnosed with it. Kenya’s prevalence of hypertension is estimated at 24.6 percent; however, 55% of the affected have uncontrolled blood pressure, which is worst in some parts of the country with different lifestyle: nomads and migratory communities. Kenyan pastoralists comprise 20% of the nation's population and are constantly on the move for search of water, pasture for their herd, and desertification have driven nomadic populations to the brink, given their unique and dynamic challenges. Nomads face myriad of challenges and barriers towards the management of their health care problems. Nomadic area is predominantly rural, with a low population density and a nomadic population. Health care access and quality are further hampered by poor telecommunications, infrastructure, and security. In Kenya, nomadic communities experience the worst health outcomes, disproportionate health disparities, and inequalities due to unresponsive, culturally sensitive health care system to nomad’s lifestyle and their health care needs. Marsabit covering a surface area of 66,923.1 km2, is the second largest county in Kenya, constituting about 2.3 million people of North-Eastern region, with only 2.3 percent and 1.9 percent of Kenya's total number of doctors and nurses in the country. In Kenya, there are scanty research on hypertension managementin this region and, at best, non-existent study on hypertension among nomads-migratory communities of Northern Kenya. Therefore, the purpose seeks to determine the prevalence of hypertension among nomads and document nomads' practices regarding early detections, management, and levels of control of hypertension in one of the Counties in Kenya with high- hypertensive case load per year. 
Methods: A cross-sectional study design was used to collect data from multiple sites and health facilities. A total of 260 participants were enrolled in the study. The study is currently ongoing; we anticipate having initial findings and recommendations to share at the conference by September.

Keywords: pastoralists, hypertension, health, Kenya

Procedia PDF Downloads 96
10802 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability

Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris

Abstract:

Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance to the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world leading, accredited purity measurements to allow hydrogen refuelling stations to evidence compliance to ISO 14687. Utilising state-of-the-art methods that have been developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. 
As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL are developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
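The single-stage dilution arithmetic behind such a dynamic method can be sketched as follows; the `diluted_fraction` helper, the flow rates, and the 100 nmol/mol parent standard are illustrative assumptions, not NPL's actual facility parameters:

```python
def diluted_fraction(x_parent, q_parent, q_diluent):
    """Amount fraction of an impurity after single-stage dynamic dilution.

    x_parent            -- amount fraction in the parent gas standard
    q_parent, q_diluent -- parent and diluent flow rates (same units)
    """
    return x_parent * q_parent / (q_parent + q_diluent)

# Example: a 100 nmol/mol H2S parent standard diluted with 999 parts of hydrogen
x_out = diluted_fraction(100e-9, 1.0, 999.0)
print(f"{x_out * 1e12:.1f} pmol/mol")  # 1000-fold dilution: 100 nmol/mol -> 100.0 pmol/mol
```

In practice, each flow would be traceably measured, and dilution stages can be cascaded to reach the lowest pmol/mol levels.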

Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector

Procedia PDF Downloads 204
10801 Outcomes of the Gastrocnemius Flap Performed by Orthopaedic Surgeons in Salvage Revision Knee Arthroplasty: A Retrospective Study at a Tertiary Orthopaedic Centre

Authors: Amirul Adlan, Robert McCulloch, Scott Evans, Michael Parry, Jonathan Stevenson, Lee Jeys

Abstract:

Background and Objectives: The gastrocnemius myofascial flap is used to manage soft-tissue defects over the anterior aspect of the knee in the context of a patient presenting with a sinus and periprosthetic joint infection (PJI) or extensor mechanism failure. The aim of this study was twofold: firstly, to evaluate the outcomes of gastrocnemius flaps performed by appropriately trained orthopaedic surgeons in the context of PJI and, secondly, to evaluate the infection-free survival of this patient group. Methods: We retrospectively reviewed 30 patients who underwent gastrocnemius flap reconstruction during staged revision total knee arthroplasty for prosthetic joint infection (PJI). All flaps were performed by an orthopaedic surgeon with orthoplastics training. Patients had a mean age of 68.9 years (range 50–84) and were followed up for a mean of 50.4 months (range 2–128 months). A total of 29 patients (97%) were categorized into Musculoskeletal Infection Society (MSIS) local extremity grade 3 (greater than two compromising factors), and 52% of PJIs were polymicrobial. The primary outcome measure was flap failure, and the secondary outcome measure was a recurrent infection. Results: Flap survival was 100% with no failures or early returns to theatre for flap problems such as necrosis or haematoma. Overall infection-free survival during the study period was 48% (13 of 27 infected cases). Using limb salvage as the outcome, 77% (23 of 30 patients) retained the limb. Infection recurrence occurred in 48% (10 patients) in the type B3 cohort and 67% (4 patients) in the type C3 cohort (p = 0.65). Conclusion: The surgical technique for a gastrocnemius myofascial flap is reliable and reproducible when performed by appropriately trained orthopaedic surgeons, even in high-risk groups. However, the risks of recurrent infection and amputation remain high within our series due to poor host and extremity factors.

Keywords: gastrocnemius flap, limb salvage, revision arthroplasty, outcomes

Procedia PDF Downloads 98
10800 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications

Authors: Gema M. Rodado, Jose M. Olavarrieta

Abstract:

Hydrogen fuel cell technologies have experienced a great boost in the last decades, significantly increasing the production of these devices for both stationary and portable (mainly automotive) applications; this growth is driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity by using hydrogen and oxygen gases as reactive components and obtaining water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of the low emissions they produce (almost zero), their high efficiency, and their low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market requires the development of standardized and validated procedures to test and evaluate their performance in different environmental conditions, including vibrations and freeze-thaw cycles. Vibration and extremely low/high temperatures can affect the physical integrity, or even the operation and performance, of a fuel cell stack placed in a vehicle in circulation or in different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks that can be used in a vehicle in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: A vibration platform (shaker) for vibration test analysis on fuel cells in three axes directions with different vibration profiles. A walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions. 
A test station designed and developed by the CNH2 to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axes directions, defined by accelerations and amplitudes in the frequency range of 7 to 200 Hz, for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles by defining a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station to evaluate the robustness of the stack. The results were very similar, which indicates that the tests did not affect the fuel cell stack structure and performance. The proposed procedures were verified and can be used as an initial point to perform other tests with different fuel cells.
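A before/after robustness check of the kind described above (comparing polarization curves around the vibration and freeze-thaw tests) can be sketched as follows; the stack data points are hypothetical illustrations, not the CNH2 measurements:

```python
def peak_power(curve):
    """Peak power (W) from a polarization curve given as (current A, stack voltage V) pairs."""
    return max(i * v for i, v in curve)

# Hypothetical polarization data for a ~5 kWe-class stack, before and after testing
before = [(0.0, 95.0), (30.0, 80.0), (60.0, 70.0), (90.0, 60.0)]
after = [(0.0, 94.5), (30.0, 79.5), (60.0, 69.5), (90.0, 59.5)]

p0, p1 = peak_power(before), peak_power(after)
change_pct = 100.0 * (p1 - p0) / p0
print(f"peak power before: {p0:.0f} W, after: {p1:.0f} W, change: {change_pct:+.2f}%")
```

A small relative change in peak power (together with an unchanged leak rate) is what would indicate that the test campaign did not degrade the stack.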

Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests

Procedia PDF Downloads 104
10799 Mathematical Modelling of Ultrasound Pre-Treatment in Microwave Dried Strawberry (Fragaria L.) Slices

Authors: Hilal Uslu, Salih Eroglu, Betul Ozkan, Ozcan Bulantekin, Alper Kuscu

Abstract:

In this study, strawberry (Fragaria L.) fruits pretreated with ultrasound (US) were dried in a microwave at 90 W, and different experimental thin-layer models were then fitted to the drying data. The sliced fruits were subjected to ultrasound treatment at a frequency of 40 kHz for 10, 20, and 30 minutes in an ultrasonic water bath, with a fruit-to-water ratio of 1:4. They were then dried in the microwave (90 W). The drying process continued until the product moisture fell below 10%. By analyzing the moisture change of the products over time, eight thin-layer drying models (Newton, Page, modified Page, Midilli, Henderson and Pabis, logarithmic, two-term, and Wang and Singh) were tested against the experimental data. The MATLAB R2015a statistical program was used for the modelling, and the best-fitting model was determined from the R²adj (adjusted coefficient of determination) and root mean square error (RMSE) values. According to the analysis, the model that best describes the drying behavior under both drying conditions was the Midilli model, with high R²adj and low RMSE values. For the control, 10, 20, and 30 min US groups, the R²adj and RMSE values were, respectively, 0.9997 and 0.005298; 0.9998 and 0.004735; 0.9995 and 0.007031; and 0.9917 and 0.02773. In addition, effective diffusion coefficients were calculated for each group as 3.80 × 10⁻⁸, 3.71 × 10⁻⁸, 3.26 × 10⁻⁸, and 3.5 × 10⁻⁸ m²/s, respectively.
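The goodness-of-fit statistics used above for model selection can be computed as in the following sketch; the Midilli parameters and moisture-ratio data here are illustrative assumptions, not the study's fitted values:

```python
import math

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: MR = a * exp(-k * t^n) + b * t."""
    return a * math.exp(-k * t ** n) + b * t

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r2_adj(obs, pred, n_params):
    """Adjusted coefficient of determination for a model with n_params parameters."""
    n = len(obs)
    mean = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

t = [0, 20, 40, 60, 80, 100, 120]  # drying time, min (illustrative)
obs = [1.00, 0.79, 0.61, 0.48, 0.37, 0.28, 0.21]  # measured moisture ratio (illustrative)
pred = [midilli(ti, a=1.0, k=0.012, n=1.0, b=-2e-4) for ti in t]
print(rmse(obs, pred), r2_adj(obs, pred, 4))  # low RMSE, R2adj close to 1
```

The candidate model with the lowest RMSE and the highest R²adj is selected, exactly as done in the abstract.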

Keywords: mathematical modelling, microwave drying, strawberry, ultrasound

Procedia PDF Downloads 144
10798 Polymeric Micelles Based on Block Copolymer α-Tocopherol Succinate-g-Carboxymethyl Chitosan for Tamoxifen Delivery

Authors: Sunil K. Jena, Sanjaya K. Samal, Mahesh Chand, Abhay T. Sangamwar

Abstract:

Tamoxifen (TMX) and its analogues are approved as a first-line therapy for the treatment of estrogen receptor-positive tumors. However, clinical development of TMX has been hampered by its low bioavailability and severe hepatotoxicity. Herein, we attempt to design a new drug delivery vehicle that could enhance the pharmacokinetic performance of TMX. Initially, high-molecular-weight carboxymethyl chitosan was hydrolyzed to low-molecular-weight carboxymethyl chitosan (LMW CMC) with hydrogen peroxide under the catalysis of phosphotungstic acid. Amphiphilic block copolymers of LMW CMC were synthesized via an amidation reaction between the carboxyl group of α-tocopherol succinate (TS) and an amine group of LMW CMC. These amphiphilic block copolymers self-assembled into nanosized core-shell micelles in aqueous medium. The critical micelle concentration (CMC) decreased with increasing substitution of TS on LMW CMC, ranging from 1.58 × 10⁻⁶ to 7.94 × 10⁻⁸ g/mL. Maximum TMX loading of up to 8.08 ± 0.98% was achieved with Cmc-TS4.5 (TMX/Cmc-TS4.5 at a 1:8 weight ratio). Both blank and TMX-loaded polymeric micelles (TMX-PM) of Cmc-TS4.5 exhibited a spherical shape with particle sizes below 200 nm. TMX-PM was found to be stable under gastrointestinal conditions, releasing only 44.5% of the total drug content within the first 72 h in simulated gastric fluid (SGF), pH 1.2. The presence of pepsin did not significantly increase TMX release in SGF, pH 1.2 (only about 46.2% released within the first 72 h), suggesting its inability to cleave the peptide bond. In contrast, the release of TMX from TMX-PM4.5 in SIF, pH 6.8 (without pancreatin) was slow and sustained: only about 10.43% of the total drug content was released within the first 30 min, and only about 12.41% within the first 72 h. The presence of pancreatin in SIF, pH 6.8 led to an improvement in drug release: about 28.09% of the incorporated TMX was released in 72 h. 
A cytotoxicity study demonstrated that TMX-PM exhibited time-delayed cytotoxicity in human MCF-7 breast cancer cells. Pharmacokinetic studies on Sprague-Dawley rats revealed a remarkable increase in oral bioavailability (1.87-fold) with significant (p < 0.0001) enhancement in AUC0-72 h, t1/2 and MRT of TMX-PM4.5 than that of TMX-suspension. Thus, the results suggested that CMC-TS micelles are a promising carrier for TMX delivery.

Keywords: carboxymethyl chitosan, d-α-tocopherol succinate, pharmacokinetic, polymeric micelles, tamoxifen

Procedia PDF Downloads 317
10797 Sensitivity Based Robust Optimization Using 9 Level Orthogonal Array and Stepwise Regression

Authors: K. K. Lee, H. W. Han, H. L. Kang, T. A. Kim, S. H. Han

Abstract:

For the robust optimization of a manufactured product design, there are design objectives that must be achieved, such as minimization of the mean and standard deviation of the objective functions within the required sensitivity constraints. The authors utilized the sensitivity of the objective functions and constraints with respect to the effective design variables to reduce the computational burden associated with evaluating the probabilities. The individual mean and sensitivity values could be estimated easily by using 9-level orthogonal-array-based response surface models optimized by stepwise regression. The present study evaluates the proposed procedure through the robust optimization of rubber domes, which are commonly used for keyboard switching, using the 9-level orthogonal array and stepwise regression along with a desirability function. In addition, a new robust optimization process, the I2GEO (Identify, Integrate, Generate, Explore and Optimize), was proposed on the basis of the robust optimization of rubber domes. The optimized results from the response surface models and the results estimated by finite element analysis were consistent within a small margin of error. The standard deviation of the objective function decreased by 54.17% with the suggested sensitivity-based robust optimization. (Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration in 2017, S2455569)
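A Derringer-type desirability function of the kind mentioned above, which folds the mean and standard deviation of an objective into a single score, can be sketched as follows; the bounds and response values are illustrative assumptions, not the rubber-dome study's data:

```python
def desirability_smaller_is_better(y, target, upper):
    """Derringer-type desirability for a smaller-is-better response.

    Returns 1.0 at or below the target, 0.0 at or above the upper bound,
    and a linear ramp in between.
    """
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

def overall_desirability(ds):
    """Composite desirability: geometric mean of the individual scores."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative: mean and standard deviation of an objective at one candidate design
d_mean = desirability_smaller_is_better(3.2, target=2.0, upper=6.0)  # ~0.7
d_std = desirability_smaller_is_better(0.5, target=0.2, upper=1.0)   # ~0.625
print(overall_desirability([d_mean, d_std]))  # composite score in (0, 1)
```

The optimizer then searches the response surface for the design variables that maximize the composite score.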

Keywords: objective function, orthogonal array, response surface model, robust optimization, stepwise regression

Procedia PDF Downloads 278
10796 Use of Statistical Correlations for the Estimation of Shear Wave Velocity from Standard Penetration Test-N-Values: Case Study of Algiers Area

Authors: Soumia Merat, Lynda Djerbal, Ramdane Bahar, Mohammed Amin Benbouras

Abstract:

Many soil parameters, including shear wave velocity, are associated with the standard penetration test (SPT), a dynamic in situ experiment. SPT-N data and geophysical data do not often exist for the same area, so statistical analysis of the correlation between these parameters is an alternative method to estimate Vₛ conveniently and without additional investigations or data acquisition. Shear wave velocity is a basic engineering parameter required to define the dynamic properties of soils. In many instances, engineers opt for empirical correlations between shear wave velocity (Vₛ) and reliable static field test data, such as standard penetration test (SPT) N-values or cone penetration test (CPT) values, to estimate shear wave velocity or dynamic soil parameters. The relation between Vₛ and SPT-N values for the Algiers area is predicted using the collected data and compared with previously suggested formulas for Vₛ determination by measuring the root mean square error (RMSE) of each model. The Algiers area is situated in a high seismic zone (Zone III [RPA 2003: réglement parasismique algerien]); the study is therefore important for this region. The principal aim of this paper is to compare the field measurements from down-hole tests with the empirical models to show which of these proposed formulas is applicable for predicting shear wave velocity values.
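A typical comparison of an empirical power-law correlation against down-hole measurements via RMSE can be sketched as follows; the paired data are hypothetical, and the coefficients shown are merely one example of the published Vₛ–N power-law form, not the Algiers-specific fit:

```python
def vs_powerlaw(n_spt, a, b):
    """Generic empirical correlation of the form Vs = a * N^b (m/s)."""
    return a * n_spt ** b

def rmse(measured, predicted):
    """Root mean square error between measured and predicted Vs values."""
    return (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / len(measured)) ** 0.5

# Hypothetical down-hole Vs measurements paired with SPT-N values
n_values = [5, 10, 20, 30, 50]
vs_measured = [160.0, 210.0, 270.0, 310.0, 370.0]  # m/s, illustrative

# Coefficients of the widely cited Imai and Tonouchi (1982) form, used here as an example
vs_pred = [vs_powerlaw(n, a=97.0, b=0.314) for n in n_values]
print(rmse(vs_measured, vs_pred))  # lower RMSE = better-suited correlation
```

Repeating this for each candidate formula and ranking by RMSE is exactly the comparison the abstract describes.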

Keywords: empirical models, RMSE, shear wave velocity, standard penetration test

Procedia PDF Downloads 323
10795 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry

Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu

Abstract:

The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated by particle number, while the mass concentration of MPs and especially nanoplastics (NPs) remains unclear. In this study, microfiltration, ultrafiltration and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes of two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types (i.e., polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA)). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET and PE were the dominant polymer types in wastewater, while PMMA, PS and PA accounted for only a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than high-density PET in both plants. Since smaller particles pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of the larger MPs. Based on annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs were released into the river each year. 
Overall, this study investigated the mass concentration of MPs and NPs over a wide size range of 0.01−1000 μm in wastewater, providing valuable information on the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce contamination. Additionally, the proposed method caused loss of MPs, and especially NPs, which can lead to underestimation. Further studies are recommended to address these challenges in quantifying MPs/NPs in wastewater.
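The removal rate and annual load figures above follow from simple mass-balance arithmetic; in this sketch the influent/effluent concentrations are taken from the abstract, while the daily flow is an assumed placeholder (the actual plant flows are not given here):

```python
def removal_pct(c_in, c_out):
    """Mass-based removal rate (%) between influent and effluent concentrations."""
    return 100.0 * (c_in - c_out) / c_in

def annual_load_tons(c_ug_per_l, flow_m3_per_day):
    """Annual mass load (metric tons) from a concentration in ug/L and a daily flow in m3/d."""
    # 1 m3 = 1000 L; 1 metric ton = 1e12 ug
    return c_ug_per_l * 1000.0 * flow_m3_per_day * 365.0 / 1e12

# Total MPs + NPs, influent vs effluent (values from the abstract, ug/L)
print(removal_pct(26.23 + 11.28, 1.75 + 0.71))  # ~93.4% overall removal
# Effluent load for an ASSUMED flow of 100,000 m3/d (placeholder, not the plants' real flow)
print(annual_load_tons(1.75 + 0.71, 100_000))
```

The same arithmetic, applied with each plant's true effluent flow, yields the tons-per-year discharge estimates quoted in the abstract.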

Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS

Procedia PDF Downloads 264
10794 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology

Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam

Abstract:

The identification of victims of human trafficking, and the consequent provision of services, is characterized by a significant disconnect between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources, and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person’s life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant’s profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts as qualifying as human trafficking cases or not. The participants were placed in three conditions: business as usual, utilization of the IHTC without training, and utilization of the IHTC with training. The results revealed a statistically significant level of agreement between the experts’ determinations and the application of the IHTC, with an improvement of 40% in identification when compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference had a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerability as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking. 
In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human-services settings is discussed as pivotal to helping victims restore their sense of dignity and to advocating for legal, physical, and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancing victim well-being, restoration engagement, and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.

Keywords: exploitation, human trafficking, measurement, vulnerability, screening

Procedia PDF Downloads 318
10793 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method

Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan

Abstract:

The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of crudely at dumping sites. The addition of e-waste, which often contains toxic heavy metals, to its waste stream has made the situation more difficult and challenging. Assessment of e-waste generation is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, thus facilitating sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) have been collected from different sources. 
Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected electrical and electronic products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products’ annual sales/import data, past and projected future sales, and average life span. From the model outputs, it is estimated that e-waste generation in Bangladesh in 2018 was 0.40 million tons, and that by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven are increasing, whereas only one product, the CFL bulb, showed a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enter the recycling business in Bangladesh, an amount that may increase in the near future.
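The end-of-life (EOL) logic of the MFA model can be sketched in a few lines; the sales figures, the 10-year lifespan, and the 20 kg unit mass below are illustrative assumptions for one product category, not the study's actual inputs:

```python
def eol_waste_units(sales_by_year, lifespan_years, target_year):
    """End-of-life estimate: units sold `lifespan_years` ago become waste in `target_year`."""
    return sales_by_year.get(target_year - lifespan_years, 0)

# Hypothetical TV sales (units/year) and an assumed 10-year average lifespan
tv_sales = {2008: 500_000, 2009: 550_000, 2010: 620_000}
units_2018 = eol_waste_units(tv_sales, lifespan_years=10, target_year=2018)

unit_mass_kg = 20.0  # assumed average mass of one TV set
tons_2018 = units_2018 * unit_mass_kg / 1000.0
print(units_2018, tons_2018)  # units reaching end-of-life in 2018 and their mass in tons
```

Summing this quantity over all eight product categories, year by year, reproduces the structure of the projection described in the abstract; real MFA models often also spread retirements over a lifespan distribution rather than a single fixed lifespan.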

Keywords: Bangladesh, end of life, e-waste, material flow analysis

Procedia PDF Downloads 177
10792 Intervention To Prevent Infections And Reinfections With Intestinal Parasites In People Living With Human Immunodeficiency Virus In Some Parts Of Eastern Cape, South Africa

Authors: Ifeoma Anozie, Teka Apalata, Dominic Abaver

Abstract:

Introduction: Despite the use of antiretroviral therapy to reduce the incidence of opportunistic infections among HIV/AIDS patients, rapid episodes of re-infection after deworming remain common because pharmaceutical intervention alone does not prevent reinfection. Unsafe water, inadequate personal hygiene, and parasitic infections are widely expected to accelerate the progression of HIV infection, because the chronic immunosuppression of HIV infection encourages susceptibility to opportunistic (including parasitic) infections, which is linked to a CD4+ cell count of <200 cells/μl. Intestinal parasites such as G. intestinalis and Entamoeba spp. are ubiquitous protozoa that remain infectious in the environment over a long time and show resistance to standard disinfection. To control re-infection, the social factors that underpin prevention need to be addressed. This study aims at the prevention of intestinal parasites in people living with HIV/AIDS by using a treatment, hygiene education and sanitation (THEdS) bundle approach. Methods: This study was conducted in four clinics (Ngangelizwe health centre, Tsolo gateway clinic, Idutywa health centre and Nqamakwe health centre) across the seven districts of the Eastern Cape, South Africa. The four clinics were divided into two groups, experimental and control, for the purpose of the intervention. Data were collected from March 2019 to February 2020. Six hundred participants were screened for intestinal parasitic infections. Stool samples were collected and analysed twice: before (pre-test infection screening) and after (post-test re-infection) the THEdS bundle intervention. The experimental clinics received the full intervention package, which included therapeutic treatment, health education on personal hygiene, and sanitation training, while the control clinics received only therapeutic treatment for those found with intestinal parasitic infections. 
Results: Baseline screening isolated 12 intestinal parasite species with an overall frequency of 65, Ascaris lumbricoides being the most frequent (44.6%). The intervention had a cure rate of 60%, with an odds ratio of 1.42, indicating that the intervention group was 1.42 times more likely to clear parasites than the control group. The relative risk of 1.17 signifies that participants were 1.17 times more likely to clear intestinal parasites with the intervention than without it. Discussion and conclusion: Infection with multiple parasites can cause health defects, especially among HIV/AIDS patients. The efficiency of some HIV vaccines in HIV/AIDS patients is affected because treatment after re-infection amplifies drug resistance, reduces the efficacy of front-line drugs, and still permits transmission. In South Africa, treatment of intestinal parasites is usually offered to clinic-attending HIV/AIDS patients upon suspicion but is not mandated for patients being initiated into the antiretroviral (ART) program. The effectiveness of the THEdS bundle advocates for the inclusion of mandatory, regular screening for intestinal parasitic infections among attendees of HIV/AIDS clinics.
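The odds ratio and relative risk quoted above are standard 2×2-table statistics; this sketch uses illustrative counts chosen for clarity, not the study's actual table (which the abstract does not report):

```python
def odds_ratio(a, b, c, d):
    """2x2 table: a = intervention/cleared, b = intervention/not cleared,
    c = control/cleared, d = control/not cleared."""
    return (a / b) / (c / d)

def relative_risk(a, b, c, d):
    """Risk of clearing in the intervention group divided by risk in the control group."""
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts: 18/30 cleared with the intervention, 15/30 in the control arm
print(odds_ratio(18, 12, 15, 15))     # 1.5
print(relative_risk(18, 12, 15, 15))  # 1.2
```

With the study's real counts, the same two formulas yield the reported OR of 1.42 and RR of 1.17.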

Keywords: cure rate, HIV/AIDS patients, intestinal parasites, intervention studies, reinfection rate

Procedia PDF Downloads 63
10791 Predictors of Motor and Cognitive Domains of Functional Performance after Rehabilitation of Individuals with Acute Stroke

Authors: A. F. Jaber, E. Dean, M. Liu, J. He, D. Sabata, J. Radel

Abstract:

Background: Stroke is a serious health care concern and a major cause of disability in the United States. This condition impacts the individual’s functional ability to perform daily activities. Predicting the functional performance of people with stroke assists health care professionals in optimizing the delivery of health services to the affected individuals. The purpose of this study was to identify significant predictors of Motor FIM and Cognitive FIM subscores among individuals with stroke after discharge from inpatient rehabilitation (typically 4-6 weeks after stroke onset). A second purpose was to explore the relationship among personal characteristics, health status, and functional performance of daily activities within 2 weeks of stroke onset. Methods: This study used a retrospective chart review to conduct a secondary analysis of data obtained from the Healthcare Enterprise Repository for Ontological Narration (HERON) database. The HERON database integrates de-identified clinical data from seven different regional sources, including the hospital electronic medical record systems of the University of Kansas Health System. The initial HERON data extract encompassed 1192 records, and the final sample consisted of 207 participants who were mostly white (74%) males (55%) with a diagnosis of ischemic stroke (77%). The outcome measures collected from HERON included performance scores on the National Institutes of Health Stroke Scale (NIHSS), the Glasgow Coma Scale (GCS), and the Functional Independence Measure (FIM). The data analysis plan included descriptive statistics, Pearson correlation analysis, and stepwise regression analysis. Results: Significant predictors of discharge Motor FIM subscores included age, baseline Motor FIM subscores, discharge NIHSS scores, and comorbid electrolyte disorder (R² = 0.57, p < 0.026). Significant predictors of discharge Cognitive FIM subscores were age, baseline Cognitive FIM subscores, client cooperative behavior, comorbid obesity, and the total number of comorbidities (R² = 0.67, p < 0.020). Functional performance on admission was significantly associated with age (p < 0.01), stroke severity (p < 0.01), and length of hospital stay (p < 0.05). Conclusions: Our findings show that younger age, good motor and cognitive abilities on admission, mild stroke severity, fewer comorbidities, and a positive client attitude all predict favorable functional outcomes after inpatient stroke rehabilitation. This study provides health care professionals with evidence to evaluate predictors of favorable functional outcomes early in stroke rehabilitation, to tailor individualized interventions based on their client’s anticipated prognosis, and to educate clients about the benefits of making lifestyle changes to improve their anticipated rate of functional recovery.
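The stepwise selection named in the data analysis plan can be sketched as follows. This is a minimal illustration with hypothetical predictor names and data (not the actual HERON fields); in practice a statistics package such as statsmodels would be used, and entry/exit criteria are often based on p-values rather than adjusted R² alone.

```python
# Forward stepwise regression sketch: greedily add the predictor that most
# improves adjusted R^2, stopping when no candidate improves it.

def ols_r2(columns, y):
    """Ordinary least squares via the normal equations; returns R^2."""
    n = len(y)
    X = [[1.0] + [col[i] for col in columns] for i in range(n)]  # intercept column
    k = len(X[0])
    # Normal equations: (X^T X) b = X^T y
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    c = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][j] - f * A[i][j] for j in range(k)]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in range(k - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    y_bar = sum(y) / n
    ss_res = sum((y[r] - sum(b[j] * X[r][j] for j in range(k))) ** 2 for r in range(n))
    ss_tot = sum((v - y_bar) ** 2 for v in y)
    return 1.0 - ss_res / ss_tot

def adj_r2(r2, n, p):
    """Adjusted R^2 penalizes adding predictors that explain little."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def forward_stepwise(predictors, y):
    """Greedily add the predictor giving the largest gain in adjusted R^2."""
    selected, best = [], float("-inf")
    improved = True
    while improved:
        improved = False
        scores = [
            (adj_r2(ols_r2([predictors[s] for s in selected] + [col], y),
                    len(y), len(selected) + 1), name)
            for name, col in predictors.items() if name not in selected
        ]
        if scores:
            score, name = max(scores)
            if score > best:
                best, improved = score, True
                selected.append(name)
    return selected, best

# Hypothetical cohort: the outcome depends on age and baseline Motor FIM only.
predictors = {
    "age":                [60, 72, 65, 80, 55, 75, 62, 68],
    "baseline_motor_fim": [50, 40, 45, 30, 55, 38, 48, 40],
    "noise":              [3, 1, 4, 1, 5, 9, 2, 6],
}
y = [100 - 0.5 * a + 0.8 * b
     for a, b in zip(predictors["age"], predictors["baseline_motor_fim"])]
selected, score = forward_stepwise(predictors, y)
```

With this toy data the two informative predictors are selected and the irrelevant `noise` column offers no adjusted-R² gain.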

Keywords: functional performance, predictors, stroke, recovery

Procedia PDF Downloads 131
10790 Polycyclic Aromatic Hydrocarbons: Pollution and Ecological Risk Assessment in Surface Soil of the Tezpur Town, on the North Bank of the Brahmaputra River, Assam, India

Authors: Kali Prasad Sarma, Nibedita Baul, Jinu Deka

Abstract:

In the present study, the pollution level of polycyclic aromatic hydrocarbons (PAHs) in the surface soil of the historic Tezpur town, located on the north bank of the River Brahmaputra, was evaluated. In order to determine the seasonal distribution and concentration levels of the 16 USEPA priority PAHs, surface soil samples were collected from 12 different sampling sites with various land-use types. The total concentration of the 16 PAHs (∑16 PAHs) varied from 242.68 µg kg⁻¹ to 7901.89 µg kg⁻¹. The concentration of total probable carcinogenic PAHs ranged between 7.285 µg kg⁻¹ and 479.184 µg kg⁻¹ in different seasons. However, the concentration of BaP, the most carcinogenic PAH, was in the range of BDL to 50.01 µg kg⁻¹. The composition profiles of PAHs in the three seasons were characterized by two different ring types: in the monsoon season, 4-ring PAHs contributed the highest percentage of total PAHs (43.75%), while in the pre- and post-monsoon seasons, 3-ring compounds dominated the PAH profile, contributing 65.58% and 74.41%, respectively. A high PAH concentration with significant seasonality and a high abundance of LMW PAHs was observed in Tezpur town. Soil PAH toxicity was evaluated using toxic equivalency factors (TEFs), which quantify the carcinogenic potential of other PAHs relative to BaP, to estimate the benzo[a]pyrene-equivalent concentration (BaPeq). The calculated BaPeq values signify considerable risk from contact with soil PAHs. We applied cluster analysis and principal component analysis (PCA) with multivariate linear regression (MLR) to apportion the sources of PAHs in the surface soil of Tezpur town, based on the measured PAH concentrations. The results indicate that petrogenic and pyrogenic sources are both important sources of PAHs. A combination of chemometric methods and molecular indices was used to identify the sources of PAHs, which could be attributed to vehicle emissions, a mixed source input, natural gas combustion, wood or biomass burning, and coal combustion. Source apportionment using absolute principal component scores–multiple linear regression showed that the main sources of PAHs are mixed sources comprising diesel and biomass combustion and petroleum spills (22.3%), vehicle emissions (13.55%), diesel and natural gas burning (9.15%), wood and biomass burning (38.05%), and coal combustion (16.95%). Pyrogenic inputs were found to dominate the origin of PAHs, with a larger contribution from vehicular exhaust. PAHs have often been found to be co-emitted with other environmental pollutants, such as heavy metals, due to a common source of origin. A positive correlation was observed between PAHs and Cr and Pb (r² = 0.54 and 0.55, respectively) in the monsoon season, and between PAHs and Cd and Pb (r² = 0.54 and 0.61, respectively), indicating their common source. A strong correlation was observed between PAHs and OC during the pre- and post-monsoon seasons (r² = 0.46 and r² = 0.65, respectively), whereas during the monsoon season no significant correlation was observed (r² = 0.24).
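The TEF-weighted BaPeq estimate used in the risk assessment can be sketched as follows. The TEF values shown follow the widely cited Nisbet and LaGoy (1992) scheme (TEF schemes differ between studies), and the sample concentrations are hypothetical, not the measured Tezpur values.

```python
# BaPeq = sum over PAHs of (concentration x TEF), expressing total soil PAH
# toxicity as an equivalent concentration of benzo[a]pyrene.

# TEFs relative to BaP (= 1.0); values per Nisbet & LaGoy (1992), which some
# studies replace with alternative schemes (e.g. DahA is sometimes set to 1).
TEF = {
    "NaP": 0.001, "AcY": 0.001, "AcE": 0.001, "Fle": 0.001, "Phe": 0.001,
    "Ant": 0.01, "Fla": 0.001, "Pyr": 0.001, "BaA": 0.1, "Chr": 0.01,
    "BbF": 0.1, "BkF": 0.1, "BaP": 1.0, "IcdP": 0.1, "DahA": 5.0, "BghiP": 0.01,
}

def bap_eq(concentrations_ug_per_kg):
    """Benzo[a]pyrene-equivalent concentration (same units as the input)."""
    return sum(c * TEF[pah] for pah, c in concentrations_ug_per_kg.items())

# Hypothetical soil sample (ug/kg)
sample = {"BaP": 50.0, "BaA": 120.0, "Chr": 200.0, "Phe": 900.0}
total_bapeq = bap_eq(sample)  # 50*1 + 120*0.1 + 200*0.01 + 900*0.001 = 64.9
```

The computed BaPeq can then be compared against a soil screening value to judge carcinogenic risk, as done in the study.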

Keywords: polycyclic aromatic hydrocarbon, Tezpur town, chemometric analysis, ecological risk assessment, pollution

Procedia PDF Downloads 195
10789 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and it enables the retrieval of historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and scanned images of reports into electronic documents. The converted disaster data is then organized as disaster information according to a disaster code system and stored in a disaster database system. Gathering and creating disaster information based on OCR for unstructured data is an important element of smart disaster management. In this paper, a character recognition rate of over 90% for Korean characters was achieved using an upgraded OCR engine. The recognition rate depends on the font, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected with the disaster code system, which allows the structured information to be stored and retrieved across the entire disaster cycle, including historical disaster progress, damage, response, and recovery. This research is expected to support smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
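As a sketch of how a character recognition rate such as the 90% figure above can be scored against ground truth, one common definition is 1 minus the character edit distance divided by the reference length. The text samples below are hypothetical.

```python
# Character recognition rate via Levenshtein (edit) distance between the
# OCR output and a hand-transcribed reference string.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def recognition_rate(reference: str, ocr_output: str) -> float:
    """1 - (character errors / reference length); 1.0 is a perfect read."""
    return 1.0 - levenshtein(reference, ocr_output) / len(reference)
```

For example, `recognition_rate("disaster report", "disaster reprot")` penalizes the two transposed characters, giving 13/15 ≈ 0.867.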

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 112
10788 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters

Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng

Abstract:

Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties; it is widely used in the coating industry for its visible transparency and high electronic conductivity (up to 4600 S/cm), as a transparent non-metallic electrode and in organic light-emitting diodes (OLEDs). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique to produce uniform coatings with controlled thicknesses ranging from ≈400 nm to 2 µm. Blade coating gives good control over deposit thickness and uniformity through the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc.). This liquid coating technique is a well-known, inexpensive way to produce thin-film coatings on various substrates. For coatings on glass substrates intended for solar insulation applications, the ideal material would transmit the whole visible range while reflecting the NIR range perfectly, but materials approaching these properties still have unsatisfactory opacity in the visible (for example, titanium dioxide nanoparticles). NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. While they screen some NIR radiation, they also generate heat, which is conducted into the substrate; the substrate then re-emits this energy thermally in every direction. In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (the sum of the transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin-film/glass-substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Finally, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on the performance is also studied, as the former can drastically alter the IR absorption properties of PEDOT:PSS and the latter can increase the apparent optical path of light within the thin-film material.
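The SHGC estimate described above can be sketched as follows, under the assumption that the black-painted Peltier sensor reading approximates the total inward flux (directly transmitted plus inward re-emitted). All flux values are hypothetical.

```python
# Solar Heat Gain Coefficient: the fraction of incident solar flux that ends
# up on the room side of the glazing, either transmitted directly or absorbed
# by the coating/substrate and re-emitted inward.

def shgc(q_transmitted_w_m2: float,
         q_reemitted_inward_w_m2: float,
         q_incident_w_m2: float) -> float:
    """SHGC = (transmitted + inward re-emitted flux) / incident flux."""
    return (q_transmitted_w_m2 + q_reemitted_inward_w_m2) / q_incident_w_m2

# Example: 1000 W/m2 incident, 550 W/m2 transmitted, 120 W/m2 re-emitted inward
g = shgc(550.0, 120.0, 1000.0)  # 0.67
```

A lower SHGC means the coated glazing admits less solar heat, which is the figure of merit for the solar insulation application discussed here.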

Keywords: PEDOT:PSS, blade coating, heat, thin film, solar spectrum

Procedia PDF Downloads 146