Search results for: standard of competition
85 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft Tissue Sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm in size. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), with a long history of treating bulky tumors as a non-mainstream technique, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to have a positive correlation with overall survival. In an effort to consistently increase the pathological necrosis rate to over 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were delivered in a preliminary clinical setting. With favorable results of over 95% necrosis rate in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein, the design of the clinical study is presented. In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving a dose ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Due to the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
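To illustrate the safety-monitoring rule mentioned above, the following minimal Python sketch computes a Pocock-style early stopping boundary, in which the same nominal significance level is applied at every interim look to flag excess toxicity. The acceptable toxicity rate, significance level and look schedule below are illustrative assumptions, not parameters of the proposed trial.

```python
# Pocock-style stopping rule sketch: apply the same nominal alpha at each look
# and stop if the observed number of toxicities is improbably high under the
# assumed acceptable toxicity rate. All numeric values are assumptions.
from scipy.stats import binom

P_TOX_ACCEPTABLE = 0.15   # assumed highest acceptable toxicity rate
ALPHA_NOMINAL = 0.05      # same nominal level at every look (Pocock-style)
LOOKS = [5, 10, 15, 20]   # assumed interim sample sizes

def stopping_boundary(n, p0=P_TOX_ACCEPTABLE, alpha=ALPHA_NOMINAL):
    """Smallest toxicity count r such that P(X >= r | n, p0) <= alpha."""
    for r in range(n + 1):
        if binom.sf(r - 1, n, p0) <= alpha:   # sf(r-1) = P(X >= r)
            return r
    return n + 1  # boundary not reachable within n patients

for n in LOOKS:
    print(f"After {n} patients: stop if >= {stopping_boundary(n)} toxicities")
```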
Procedia PDF Downloads 60
84 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. In this context, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used for every single patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, which was manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics and texture features. After selecting the features that achieved the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in process.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
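As an illustration of the classification workflow described above (the study itself uses the WEKA and MATLAB platforms), the sketch below compares three of the four classifiers after univariate feature selection in Python; the feature matrix, labels and number of selected features are placeholder assumptions, and the convolutional neural network branch is omitted.

```python
# Hypothetical stand-in for the study's feature table: 80 patients (20 per
# class) with 40 candidate shape/intensity/texture features plus age.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))      # placeholder feature matrix
y = np.repeat([0, 1, 2, 3], 20)    # ependymoma / astrocytoma / medulloblastoma / healthy

classifiers = {
    "k-Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
    "Support-Vector Machine": SVC(kernel="rbf"),
    "C4.5-like Decision Tree": DecisionTreeClassifier(criterion="entropy"),
}
for name, clf in classifiers.items():
    # scale, keep the 10 most discriminative features, then classify
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.2f}")
```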
Procedia PDF Downloads 149
83 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged approach via hydrogenation was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250 °C and 300 °C and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50 °C higher than the respective first-stage tests. The catalysts were activated at 300 °C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). Internal standard quantification methods for the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkanes and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, an increased coal-to-liquid conversion was favoured by a lower operating temperature of 250 °C, a reaction time of 60 minutes and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity towards long- and branched-chain alkanes, NiMo performed better than CoMo, whereas CoMo was more selective towards cyclohexane. For 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the said process to cover the demand for low-sulphur crude diesel and solvents through the production of a high-value hydrocarbon liquid is thus demonstrated.
Keywords: catalyst, coal, liquefaction, temperature-staged
Procedia PDF Downloads 648
82 Isolation and Probiotic Characterization of Lactobacillus plantarum and Lactococcus lactis from Gut Microbiome of Rohu (Labeo rohita)
Authors: Prem Kumar, Anuj Tyagi, Harsh Panwar, Vaneet Inder Kaur
Abstract:
Though aquaculture started as a livelihood occupation for poor and marginal farmers, it has now grown into one of the biggest industries producing live protein in the form of aquatic organisms. Industrialization of the aquaculture sector has led to intensification, resulting in stress on aquatic organisms and frequent disease outbreaks with huge economic impacts. Indiscriminate use of antibiotics as growth promoters and prophylactic agents in aquaculture has resulted in the rapid emergence and spread of antibiotic resistance in bacterial pathogens. Over the past few years, the use of probiotics (as an alternative to antibiotics) in aquaculture has gained attention due to their immunostimulant and growth-promoting properties. It is now well known that, after administration, a probiotic bacterium has to compete and establish itself against the native microbiota to show its eventual beneficial properties. Due to their non-fish origin, commercial probiotics may sometimes display poor probiotic functionality and antagonistic effects. Thus, isolation and characterization of probiotic bacteria from the same fish host is very much necessary. In this study, attempts were made to isolate potent probiotic lactic acid bacteria (LAB) from the intestinal microflora of rohu fish. Twenty-five experimental rohu fishes (mean weight 400 ± 20 g, mean standard length 20 ± 3 cm) were used in the study to collect fish gut after dissection under sterile conditions. A total of 150 tentative LAB isolates from selective agar media (de Man-Rogosa-Sharpe (MRS)) were screened for their antimicrobial activity against Aeromonas hydrophila and Micrococcus luteus. A total of 17 isolates, identified as Lactobacillus plantarum and Lactococcus lactis by biochemical tests and PCR amplification and sequencing of the 16S rRNA gene fragment, displayed promising antimicrobial activity against both pathogens. Two isolates from each species (FLB1, FLB2 from L. plantarum; and FLC1, FLC2 from L. lactis) were subjected to downstream characterization of probiotic potential. These isolates were compared in vitro for their hemolytic activity, acid and bile tolerance (growth kinetics), auto-aggregation, cell-surface hydrophobicity against xylene and chloroform, tolerance to phenol, cell adhesion, and safety parameters (by intraperitoneal and intramuscular injections). None of the tested isolates showed any hemolytic activity, indicating their potential safety. Moreover, these isolates were tolerant to 0.3% bile (75-82% survival) and phenol stress (96-99% survival), with 100% viability at pH 3 over a period of 3 h. The antibiotic sensitivity test revealed that all the tested LAB isolates were resistant to vancomycin, gentamicin, streptomycin, and erythromycin and sensitive to erythromycin, chloramphenicol, ampicillin, trimethoprim, and nitrofurantoin. Tetracycline resistance was found in L. plantarum (FLB1 and FLB2 isolates), whereas L. lactis was susceptible to it. Intramuscular and intraperitoneal challenges to fingerlings of rohu fish (5 ± 1 g weight) with FLB1 showed no pathogenicity or occurrence of disease symptoms in fishes over an observation period of 7 days. The results revealed FLB1 as a potential probiotic candidate for aquaculture application among the isolates tested.
Keywords: aquaculture, Lactobacillus plantarum, Lactococcus lactis, probiotics
Procedia PDF Downloads 136
81 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry’s evolution because of the tremendous technical challenges present at all stages in the supply chain from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity and phase behaviour, as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble-points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble-points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely-used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
Procedia PDF Downloads 274
80 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring the face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement at dense points using 3D facial and skull models in open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed and densely calculated FSTT database is crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized to show the Hausdorff distance value of each vertex. Basic descriptive statistics (mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI and birthplace. Statistical methods employed included multiple regression analysis, ANOVA and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in regions such as the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between the north and the south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that the open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
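For readers unfamiliar with the dense measurement step, the following Python sketch reproduces the idea behind MeshLab's per-vertex Hausdorff distance filter: for every vertex of the face model, the distance to the closest point of the skull model is taken as the local FSTT. The vertex arrays are random placeholders for the surface models reconstructed in 3D Slicer.

```python
# Per-vertex face-to-skull distance (FSTT) sketch using a k-d tree.
# Real inputs would be the reconstructed face and skull surface meshes.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
face_vertices = rng.random((5000, 3)) * 100.0    # placeholder face mesh (mm)
skull_vertices = rng.random((8000, 3)) * 100.0   # placeholder skull mesh (mm)

tree = cKDTree(skull_vertices)
fstt, _ = tree.query(face_vertices)              # nearest skull point per face vertex

print(f"mean FSTT: {fstt.mean():.2f} mm, max: {fstt.max():.2f} mm, SD: {fstt.std(ddof=1):.2f} mm")
# depth histogram in 5 mm increments, analogous to the subdivided histogram above
counts, edges = np.histogram(fstt, bins=np.arange(0.0, fstt.max() + 5.0, 5.0))
```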
Procedia PDF Downloads 58
79 Soil Composition in Different Agricultural Crops under Application of Swine Wastewater
Authors: Ana Paula Almeida Castaldelli Maciel, Gabriela Medeiros, Amanda de Souza Machado, Maria Clara Pilatti, Ralpho Rinaldo dos Reis, Silvio Cesar Sampaio
Abstract:
Sustainable agricultural systems are crucial to ensuring global food security and the long-term production of nutritious food. Comprehensive soil and water management practices, including nutrient management, balanced fertilizer use, and appropriate waste management, are essential for sustainable agriculture. Swine wastewater (SWW) treatment has become a significant focus due to environmental concerns related to heavy metals, antibiotics, resistant pathogens, and nutrients. In South America, small farms use soil to dispose of animal waste, a practice that is expected to increase with global pork production. The potential of SWW as a nutrient source is promising, contributing to global food security, nutrient cycling, and mineral fertilizer reduction. Short- and long-term studies have evaluated the effects of SWW on soil and plant parameters, such as nutrients, heavy metals, organic matter (OM), cation exchange capacity (CEC), and pH. Although promising results have been observed in short- and medium-term applications, long-term applications require more attention due to heavy metal concentrations. Organic soil amendment strategies, due to their economic and ecological benefits, are commonly used to reduce the bioavailability of heavy metals. However, the rate of degradation and initial levels of OM must be monitored to avoid changes in soil pH and release of metals. The study aimed to evaluate the long-term effects of SWW application on soil fertility parameters, focusing on calcium (Ca), magnesium (Mg), and potassium (K), in addition to CEC and OM. Experiments were conducted at the Universidade Estadual do Oeste do Paraná, Brazil, using 24 drainage lysimeters over nine years, with different application rates of SWW and mineral fertilization. Principal Component Analysis (PCA) was then conducted to summarize the composite variables, known as principal components (PC), and limit the dimensionality to be evaluated. The retained PCs were then correlated with the original variables to identify the level of association between each variable and each PC. Data were interpreted using Analysis of Variance (ANOVA) for general linear models (GLM). As OM was not measured in the 2007 soybean experiment, it was assessed separately from the PCA to avoid loss of information. The PCA and ANOVA indicated that crop type, SWW, and mineral fertilization significantly influenced soil nutrient levels. Soybeans presented higher concentrations of Ca and Mg and higher CEC. The application of SWW influenced K levels, with higher concentrations observed for SWW from biodigesters and for higher doses of swine manure. Variability in nutrient concentrations in SWW due to factors such as animal age and feed composition makes standard recommendations challenging. OM levels increased in SWW-treated soils, improving soil fertility and structure. In conclusion, the application of SWW can increase soil fertility and crop productivity while reducing environmental risks. However, careful management and long-term monitoring are essential to optimize benefits and minimize adverse effects.
Keywords: contamination, water research, biodigester, nutrients
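The analysis pipeline described above can be summarised in a short Python sketch: standardise the soil variables, extract principal components, correlate the retained components with the original variables, and test treatment effects by ANOVA on the component scores. The data-frame layout and values below are assumed placeholders, not the study's data.

```python
# PCA followed by ANOVA on component scores (illustrative, with fake data).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Ca": rng.random(24), "Mg": rng.random(24),
    "K": rng.random(24), "CEC": rng.random(24),
    "treatment": np.tile(["SWW_low", "SWW_high", "mineral"], 8),  # assumed labels
})
soil_vars = ["Ca", "Mg", "K", "CEC"]

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(df[soil_vars]))
df["PC1"], df["PC2"] = scores[:, 0], scores[:, 1]

# association between each original variable and each retained PC
print(df[soil_vars + ["PC1", "PC2"]].corr().loc[soil_vars, ["PC1", "PC2"]])

# one-way ANOVA of PC1 scores across treatments
groups = [g["PC1"].to_numpy() for _, g in df.groupby("treatment")]
print(stats.f_oneway(*groups))
```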
Procedia PDF Downloads 59
78 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Against the background of global climate change and the rapid development of modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou, the continuous stages of flood resilience are determined, and the urban flood stages are divided. The flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of the resilience curves. The flood resilience of the urban unit grid was evaluated based on economy, population, road network, hospital distribution and land use type. Firstly, rainfall data from meteorological stations near Zhengzhou and remote sensing rainfall data from July 17 to 22, 2021 were collected. The Kriging interpolation method was used to expand the rainfall data over Zhengzhou. According to the rainfall data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. Based on the results of the inundation range and inundation depth in different areas, the flood process was divided into four stages (absorption, resistance, overload and recovery) based on the once-in-50-years rainfall standard. At the same time, based on the levels of slope, GDP, population, hospital-affected area, land use type, road network density and other aspects, the resilience curve was applied to evaluate the urban flood resilience of different regional units, and the differences in the flood processes produced by the different precipitation events of the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the once-in-1,000-years level, most areas quickly entered the overload stage. The influence of each factor differs between areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, the recovery stage needs a shorter time. Some low-lying areas or special terrain, such as tunnels, will enter the overload stage faster in the case of heavy rainfall. As a result, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and the road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals can have positive effects on medical treatment, they also bring negative effects such as population density and asset density when floods occur. A separate comparison of the unit grids containing hospitals shows that the resilience of these areas is low when they encounter floods. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase the flood response capacity of cities. Changes in these influencing factors can further improve urban flood resilience, for example by raising design standards, providing temporary water storage areas when floods occur, training emergency personnel to respond faster and adjusting emergency support equipment.
Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
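The rainfall-expansion step can be illustrated with a small Python sketch in which Kriging is approximated by Gaussian-process regression with a squared-exponential kernel; the station coordinates and event totals below are invented placeholders, not the observed "720" data.

```python
# Kriging-like interpolation of station rainfall onto a regular grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

stations_xy = np.array([[113.6, 34.7], [113.8, 34.8], [113.5, 34.9],
                        [113.7, 34.6], [113.9, 34.7]])     # lon/lat, assumed
rain_mm = np.array([620.0, 580.0, 540.0, 600.0, 570.0])    # event totals, assumed

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-2),
    normalize_y=True,
).fit(stations_xy, rain_mm)

# interpolate onto a grid covering the study area
lon, lat = np.meshgrid(np.linspace(113.4, 114.0, 50), np.linspace(34.5, 35.0, 50))
rain_grid = gp.predict(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
print(rain_grid.min(), rain_grid.max())
```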
Procedia PDF Downloads 40
77 Impact of Air Pressure and Outlet Temperature on Physicochemical and Functional Properties of Spray-dried Skim Milk Powder
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders reduce transportation costs and increase shelf life, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of the powders were investigated by a design-of-experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out by means of a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shearing tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in particles with a lower moisture level, which are less sticky; this can explain the increase in spray-drying yield and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to get wet rapidly if they are not well stored. On the other hand, a high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powder and the identification of correlations with air pressure and outlet temperature. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.
Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
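To make the modelling and optimization strategy concrete, the sketch below fits a quadratic response-surface model of one powder property as a function of air pressure (1-3 bar) and outlet temperature (75-95 °C) and then searches the operating window with an evolutionary optimizer (differential evolution standing in for the genetic algorithm). The design points and response values are invented for illustration.

```python
# Quadratic response surface + evolutionary optimization (illustrative data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import differential_evolution

# assumed design points: (air pressure in bar, outlet temperature in °C)
X = np.array([[1, 75], [1, 95], [3, 75], [3, 95], [2, 85],
              [1, 85], [3, 85], [2, 75], [2, 95]], dtype=float)
y = np.array([82.0, 88.0, 85.0, 90.0, 91.0, 86.0, 89.0, 84.0, 87.0])  # e.g. solubility index

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# maximize the modelled property inside the experimental domain
result = differential_evolution(lambda x: -model.predict(np.atleast_2d(x))[0],
                                bounds=[(1.0, 3.0), (75.0, 95.0)], seed=0)
print("optimal (pressure, outlet temperature):", result.x)
```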
Procedia PDF Downloads 92
76 In-Process Integration of Resistance-Based, Fiber Sensors during the Braiding Process for Strain Monitoring of Carbon Fiber Reinforced Composite Materials
Authors: Oscar Bareiro, Johannes Sackmann, Thomas Gries
Abstract:
Carbon fiber reinforced polymer composites (CFRP) are used in a wide variety of applications due to their advantageous properties and design versatility. The braiding process enables the manufacture of components with good toughness and fatigue strength. However, the failure mechanisms of CFRPs are complex and still present challenges associated with their maintenance and repair. Within the broad scope of structural health monitoring (SHM), strain monitoring can be applied to composite materials to improve reliability, reduce maintenance costs and safely exhaust service life. Traditional SHM systems employ e.g. fiber optics or piezoelectrics as sensors, which are often expensive, time-consuming and complicated to implement. A cost-efficient alternative can be the exploitation of the conductive properties of fiber-based sensors such as carbon, copper, or constantan (a copper-nickel alloy), which can be utilized as sensors within composite structures to achieve strain monitoring. This allows the structure to provide feedback to a user via electrical signals, which are essential for evaluating its structural condition. This work presents a strategy for the in-process integration of resistance-based sensors (Elektrisola Feindraht AG, CuNi23Mn, Ø = 0.05 mm) into textile preforms during their manufacture via the braiding process (Herzog RF-64/120) to achieve strain monitoring of braided composites. For this, flat samples of instrumented composite laminates of carbon fibers (Toho Tenax HTS40 F13 24K, 1600 tex) and epoxy resin (Epikote RIMR 426) were manufactured via vacuum-assisted resin infusion. These flat samples were later cut into test specimens, and the integrated sensors were wired to the measurement equipment (National Instruments, VB-8012) for data acquisition during the execution of mechanical tests. Quasi-static tests (tensile and 3-point bending) were performed following standard protocols (DIN EN ISO 527-1 & 4, DIN EN ISO 14132); additionally, dynamic tensile tests were executed. These tests were carried out to assess the sensor response under different loading conditions and to evaluate the influence of the sensor presence on the mechanical properties of the material. Several orientations of the sensor with regard to the applied loading, and several sensor placements inside the laminate, were tested. Strain measurements from the integrated sensors were made by programming a data acquisition code (LabVIEW) written for the measurement equipment. Strain measurements from the integrated sensors were then correlated to the strain/stress state of the tested samples. From the assessment of the sensor integration approach, it can be concluded that it allows a seamless sensor integration into the textile preform. No damage to the sensor or negative effect on its electrical properties was detected during inspection after integration. From the assessment of the mechanical tests of instrumented samples, it can be concluded that the presence of the sensors does not significantly alter the mechanical properties of the material. It was found that there is a good correlation between the resistance measurements from the integrated sensors and the applied strain. It can be concluded that the correlation is of sufficient accuracy to determine the strain state of a composite laminate based solely on the resistance measurements from the integrated sensors.
Keywords: braiding process, in-process sensor integration, instrumented composite material, resistance-based sensor, strain monitoring
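The basic conversion from sensor resistance to strain can be sketched as follows: for a metallic wire, the relative resistance change is proportional to strain through a gauge factor. The zero-load resistance and gauge factor below are assumptions for illustration, not values reported in this work.

```python
# Resistance-to-strain conversion for an embedded wire sensor (illustrative).
def strain_from_resistance(r_measured, r_zero_load, gauge_factor=2.0):
    """Engineering strain estimated from the measured sensor resistance."""
    return (r_measured - r_zero_load) / (r_zero_load * gauge_factor)

R0 = 125.0                       # ohm, assumed zero-load resistance of the CuNi wire
for r in (125.0, 125.5, 126.2):  # assumed readings during a tensile test
    print(f"R = {r:5.1f} ohm -> strain = {strain_from_resistance(r, R0):.4%}")
```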
Procedia PDF Downloads 106
75 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by the laboratory personnel, to ease the adjustability of the instrumentation within a glove-box environment and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in less than one second, making it a more time-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip in which the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
Procedia PDF Downloads 387
74 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis
Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma
Abstract:
According to the UNAIDS 2013 estimate, nearly 52% of all individuals living with HIV are women of reproductive age (15–44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was therefore thought worthwhile to integrate HIV prevention and reproductive health services, including protection against unintended pregnancy, for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but on the contrary it increased HIV susceptibility due to its surfactant action. Thus, to fulfil the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities along with safety profiling against cervicovaginal cells. To further enhance the scope of the above study, benzenepropanamine was hybridized with thiourea so as to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities as well as their safety against vaginal flora and cervical cells. Simulated vaginal fluid (SVF) stability and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into systemic circulation through the vaginal wall and subsequent exposure in blood plasma. The study resulted in the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), exhibiting a better activity profile than N-9, as it showed RT inhibition (72.30%), anti-Trichomonas activity (MIC, 46.72 µM against the MTZ-susceptible strain and MIC, 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%) and antifungal activity (MIC, 3.12–50 µg/mL) against four fungal strains. The high safety against vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability and minimal vaginal absorption supported its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), with HIV-1 reverse transcriptase. The docking study revealed that compound (29) interacted with HIV-1 RT similarly to the standard drug Nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may result in effective vaginal microbicides having spermicidal, anti-Trichomonas, antifungal and anti-HIV potential altogether, with enhanced safety to cervicovaginal cells in comparison to nonoxynol-9.
Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide
Procedia PDF Downloads 344
73 An Integrated Solid Waste Management Strategy for Semi-Urban and Rural Areas of Pakistan
Authors: Z. Zaman Asam, M. Ajmal, R. Saeed, H. Miraj, M. Muhammad Ahtisham, B. Hameed, A. -Sattar Nizami
Abstract:
In Pakistan, environmental degradation and the consequent deterioration of human health have rapidly accelerated in the past decade due to solid waste mismanagement. As the situation worsens with time, the establishment of proper waste management practices is urgently needed, especially in the semi-urban and rural areas of Pakistan. This study uses the concept of a Waste Bank, which involves a transfer station for the collection of sorted waste fractions and their delivery to targeted markets such as recycling industries, biogas plants, composting facilities etc. The management efficiency and effectiveness of a Waste Bank depend strongly on proficient sorting and collection of solid waste fractions at the household level. However, the social attitude towards such a solution in semi-urban/rural areas of Pakistan demands certain prerequisites to make it workable. Considering these factors, the objectives of this study are to: [A] Obtain reliable data about the quantity and characteristics of the generated waste to define the feasibility of the business and the design factors, such as required storage area, retention time, transportation frequency of the system etc. [B] Analyze the effects of various social factors on waste generation to foresee future projections. [C] Quantify the improvement in waste sorting efficiency after an awareness campaign. We selected Gujrat city of the Central Punjab province of Pakistan, as it is semi-urban adjoined by rural areas. A total of 60 houses (20 from each of the three selected colonies), belonging to different social statuses, were selected. Awareness sessions about waste segregation were given through brochures and individual lectures in each selected household. Sampling of the waste that households had attempted to sort was then carried out using the three colored bags that were provided as part of the awareness campaign. Finally, refined waste sorting, weighing of the various fractions and measurement of dry mass were performed in an environmental laboratory using standard methods. It was calculated that the sorting efficiency of waste improved from 0 to 52% as a result of the awareness campaign. The generation of waste (dry mass basis) on average from one household was 460 kg/year, whereas the per capita generation was 68 kg/year. Extrapolating these values to Gujrat Tehsil, the total waste generation per year is calculated to be 101,921 tons dry mass (DM). The characteristics found in the waste were (i) organic decomposables (29.2%, 29,710 tons/year DM), (ii) recyclables (37.0%, 37,726 tons/year DM), which included plastic, paper, metal and glass, and (iii) trash (33.8%, 34,485 tons/year DM), which mainly comprised polythene bags, medicine packaging, pampers and wrappers. Waste generation was higher in colonies with comparatively higher income and better living standards. In the future, data collection for all four seasons and the improvements due to the expansion of the awareness campaign to educational institutes will be quantified. This waste management system can potentially fulfill vital sustainable development goals (e.g. clean water and sanitation), reduce the need to harvest fresh resources from the ecosystem, create business and job opportunities and consequently solve one of the most pressing environmental issues of the country.
Keywords: integrated solid waste management, waste segregation, waste bank, community development
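The extrapolation reported above can be checked with a few lines of Python using the abstract's own figures; the implied tehsil population is back-calculated from the reported total and is therefore an assumption rather than a reported value.

```python
# Worked check of the waste-generation extrapolation and fraction split.
PER_CAPITA_KG_YR = 68.0          # reported per capita generation (dry mass)
TOTAL_TONS_YR = 101_921.0        # reported total for Gujrat Tehsil (dry mass)

implied_population = TOTAL_TONS_YR * 1000.0 / PER_CAPITA_KG_YR
print(f"implied tehsil population ~ {implied_population:,.0f}")

fractions = {"organic decomposables": 0.292, "recyclables": 0.370, "trash": 0.338}
for name, share in fractions.items():
    print(f"{name}: {TOTAL_TONS_YR * share:,.0f} tons/year DM")
```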
Procedia PDF Downloads 141
72 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications
Authors: Deon Bezuidenhout
Abstract:
Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to levels obtained with autografts, but is limited to a single centre due to resource, time and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing and the lack of transmural ingrowth owing to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium and large interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®), or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows the grafts to be anastomotically and/or ablumenally isolated to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large-porosity scaffolds was significantly increased by more than 70% by heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 w) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4-week test period. Studies on the effect of tissue ingrowth, with and without concomitant degradation of the scaffolds, are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). The calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth from 33±7 to 77±20 and 98±1% after 28 days. Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.
Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft
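The reported linear relation between porosity and fibre fraction for the dual-spun scaffolds can be used directly as a small design calculation; the target porosities below simply mirror the values evaluated in vivo, and the inversion of the fitted equation is an illustration rather than part of the original analysis.

```python
# Porosity/fibre-fraction relation for the dual-spun scaffolds: P = -0.18*FF + 93.5
def porosity_from_fibre_fraction(ff_percent):
    return -0.18 * ff_percent + 93.5

def fibre_fraction_for_porosity(p_percent):
    return (93.5 - p_percent) / 0.18

for target_p in (76.0, 82.0, 90.0):   # porosities evaluated in the rat model
    ff = fibre_fraction_for_porosity(target_p)
    print(f"target porosity {target_p:.0f}% -> fibre fraction ~ {ff:.0f}%")
```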
Procedia PDF Downloads 296
71 Development & Standardization of a Literacy Free Cognitive Rehabilitation Program for Patients Post Traumatic Brain Injury
Authors: Sakshi Chopra, Ashima Nehra, Sumit Sinha, Harsimarpreet Kaur, Ravindra Mohan Pandey
Abstract:
Background: Cognitive rehabilitation aims to retrain brain-injured individuals with cognitive deficits to restore or compensate for lost functions. As illiterates and people with low literacy levels represent a significant proportion of the world's population, specific rehabilitation modules for such populations are indispensable. Literacy is significantly associated with all neuropsychological measures, and retraining programs widely use written or spoken techniques which essentially require the patient to read or write. So, the aim of the study was to develop and standardize a literacy-free neuropsychological rehabilitation program for improving cognitive functioning in patients with mild and moderate Traumatic Brain Injury (TBI). Several studies have pointed to the impairments seen in memory, executive functioning, and attention and concentration post-TBI, so the rehabilitation program focussed on these domains. Visual item memorization, stick constructions, symbol cancellations, and colouring techniques were used to construct the retraining program. Methodology: The development of the program consisted of planning, preparing, analyzing, and revising the different modules. The construction focussed on retraining immediate and delayed visual memory, planning ability, focused and divided attention, concentration, and response inhibition (to control irritability and aggression). A total of 98 home-based retraining modules were prepared across the 4 domains (42 for memory, 42 for executive functioning, 7 for attention and concentration, and 7 for response inhibition). The standardization was done on 20 healthy controls to review, select and edit items. For each module, the time taken, errors made and errors per second were noted down to establish the difficulty level of each module, and the modules were arranged in increasing order of difficulty over a period of 6 weeks. The retraining tasks were then administered to 11 brain-injured individuals (5 after mild TBI and 6 after moderate TBI). These patients were referred from the Trauma Centre to the Clinical Neuropsychology OPD, All India Institute of Medical Sciences, New Delhi, India. Results: The time taken, errors made and errors per second were analysed for all domains. Education levels were divided into illiterates, up to 10 years, 10 years to graduation, and graduation and above. Means and standard deviations were calculated. Between-group and within-group analyses were done using the t-test. The performance of the 20 healthy controls was analyzed: a significant difference was observed only in the time taken for the attention tasks, while all other domains showed non-significant differences in performance between different education levels. Comparing the errors and time taken between the patient and control groups, there was a significant difference in all the domains at the 0.01 level except the errors made on executive functioning, indicating that the tool can successfully differentiate between healthy controls and patient groups. Conclusions: Apart from the time taken for symbol cancellations, the entire cognitive rehabilitation program is literacy-free. As it taps the major areas of impairment post-TBI, it could be a useful tool to rehabilitate patient populations with low literacy levels across the world. The next step is already underway to test its efficacy in improving cognitive functioning in a randomized controlled clinical trial.
Keywords: cognitive rehabilitation, illiterates, India, traumatic brain injury
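The between-group comparison described above amounts to independent two-sample t-tests of time and errors for each domain, flagged at the 0.01 level; a minimal Python sketch with placeholder score arrays (not the study data) is shown below.

```python
# Illustrative between-group t-tests per domain (fake scores).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
domains = ["memory", "executive functioning", "attention", "response inhibition"]
for domain in domains:
    controls = rng.normal(60.0, 10.0, 20)   # e.g. seconds taken by 20 healthy controls
    patients = rng.normal(75.0, 15.0, 11)   # 11 brain-injured participants
    t, p = stats.ttest_ind(controls, patients, equal_var=False)
    flag = "significant" if p < 0.01 else "not significant"
    print(f"{domain}: t = {t:.2f}, p = {p:.4f} ({flag} at the 0.01 level)")
```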
Procedia PDF Downloads 333
70 Construction of an Assessment Tool for Early Childhood Development in the World of DiscoveryTM Curriculum
Authors: Divya Palaniappan
Abstract:
Early childhood assessment tools must measure the quality and the appropriateness of a curriculum with respect to the culture and age of the children. Preschool assessment tools lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with the judgmental bias of observers. The World of DiscoveryTM curriculum focuses on accelerating the physical, cognitive, language, social and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's Theory of Multiple Intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft and counting corner, and hence children with one of these dominant intelligences (musical, bodily-kinesthetic or logical-mathematical) can grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas: physical, cognitive, language, social and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale that measures these developmental aspects: cognitive, language, physical, social and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner, the child's perceptual reasoning, pre-math abilities, hand-eye co-ordination and fine motor skills can be observed and evaluated. The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities in real time and objectively. A pilot study of the tool was done with sample data from 100 children in the age group of 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has also been observed among children in a study conducted by Sibley and Etnier. In children, the 'Comprehension' ability was found to be greater than 'Reasoning' and pre-math abilities, as indicated by the preoperational stage of Piaget's theory of cognitive development. The average scores of the various parameters obtained through the tool corroborate psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities could be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies could also be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
Procedia PDF Downloads 36269 Study of Operating Conditions Impact on Physicochemical and Functional Properties of Dairy Powder Produced by Spray-drying
Authors: Adeline Meriaux, Claire Gaiani, Jennifer Burgain, Frantz Fournier, Lionel Muniglia, Jérémy Petit
Abstract:
The spray-drying process is widely used for the production of dairy powders for the food and pharmaceutical industries. It involves the atomization of a liquid feed into fine droplets, which are subsequently dried through contact with a hot air flow. The resulting powders permit a reduction in transportation costs and an increase in shelf life, but can also exhibit various interesting functionalities (flowability, solubility, protein modification or acid gelation), depending on operating conditions and milk composition. Indeed, particle porosity, surface composition, lactose crystallization, protein denaturation, protein association or crust formation may change. Links between spray-drying conditions and the physicochemical and functional properties of powders were investigated by a design of experiments methodology and analyzed by principal component analysis. Quadratic models were developed, and multicriteria optimization was carried out by the use of a genetic algorithm. At the time of abstract submission, verification spray-drying trials are ongoing. To perform the experiments, milk from a dairy farm was collected, skimmed, frozen and spray-dried at different air pressures (between 1 and 3 bar) and outlet temperatures (between 75 and 95 °C). Dry matter, mineral content and protein content were determined by standard methods. Solubility index, absorption index and hygroscopicity were determined by methods found in the literature. Particle size distributions were obtained by laser diffraction granulometry. The location of the powder color in the CIELAB color space and the water activity were characterized by a colorimeter and an aw-value meter, respectively. Flow properties were characterized with an FT4 powder rheometer; in particular, compressibility and shear tests were performed. Air pressure and outlet temperature are key factors that directly impact the drying kinetics and powder characteristics during the spray-drying process. It was shown that the air pressure affects the particle size distribution by impacting the size of the droplets exiting the nozzle. Moreover, small particles lead to a more cohesive powder and a less saturated powder color. A higher outlet temperature results in lower-moisture particles, which are less sticky; this can explain an increase in spray-drying yield and the higher cohesiveness. It also leads to particles with low water activity because of the intense evaporation rate. However, it induces a high hygroscopicity; thus, powders tend to take up moisture rapidly if they are not stored properly. On the other hand, a high temperature provokes a decrease in native serum proteins, which is positively correlated with gelation properties (gel point and firmness). Partial denaturation of serum proteins can improve the functional properties of the powder. The control of air pressure and outlet temperature during the spray-drying process significantly affects the physicochemical and functional properties of the powder. This study permitted a better understanding of the links between the physicochemical and functional properties of the powders and the identification of correlations with air pressure and outlet temperature. Therefore, mathematical models have been developed, and the use of a genetic algorithm will allow the optimization of powder functionalities.Keywords: dairy powders, spray-drying, powders functionalities, design of experiment
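A minimal sketch of the quadratic (second-order) response-surface modelling mentioned above, fitted by ordinary least squares in Python; the design points and response values are invented placeholders, not the study's measurements, and the fitted surface is the kind of model a genetic algorithm could subsequently optimize.

import numpy as np

# Hypothetical design-of-experiments data: air pressure (bar), outlet temperature (°C)
# and one measured response (e.g., solubility index); values are illustrative only.
P = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.5, 2.5])
T = np.array([75.0, 95.0, 75.0, 95.0, 75.0, 95.0, 85.0, 85.0, 85.0])
y = np.array([78.0, 83.0, 85.0, 90.0, 82.0, 88.0, 91.0, 87.0, 89.0])

# Full quadratic model: y = b0 + b1*P + b2*T + b3*P*T + b4*P^2 + b5*T^2
X = np.column_stack([np.ones_like(P), P, T, P * T, P**2, T**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(p, t):
    return coeffs @ np.array([1.0, p, t, p * t, p**2, t**2])

print("fitted coefficients:", np.round(coeffs, 3))
print("predicted response at 2 bar / 90 °C:", round(predict(2.0, 90.0), 2))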
Procedia PDF Downloads 6568 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity is associated with significant personal impacts on health and has a substantial economic burden on payers due to increased healthcare use. A contemporary estimate of the healthcare costs associated with obesity at the population level are lacking. This evidence may provide further rationale for weight management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprised of hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs); potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6-months before the observation period in the partial adjustment, and also the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases. Results: The investigational cohort (n=220,190) was older (mean age: 53 standard deviation [SD]±17 vs 50 SD±17 years), had more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and were less socioeconomically well-off (material/social deprivation was lower [14%/14%] in the most well-off quintile vs 20%/19%) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77-times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased, but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource contributed to the higher cost ratio. Among urban-dwelling 50-year old females who previously had non-operative procedures, no procedures performed within 6-months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. 
Results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence
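As a sketch of the cost modelling described above, the snippet below fits a generalized linear model with a log link and gamma distribution to simulated cost data; the variable names, simulated values and coefficients are assumptions for illustration only, not the study data. Exponentiating the obesity coefficient yields an adjusted cost ratio of the kind reported in the abstract.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort: obesity indicator, age, sex and right-skewed annual healthcare costs
df = pd.DataFrame({
    "obesity": rng.integers(0, 2, n),
    "age": rng.normal(52, 17, n),
    "female": rng.integers(0, 2, n),
})
mu = np.exp(7.0 + 0.45 * df["obesity"] + 0.01 * df["age"] + 0.05 * df["female"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

# Gamma GLM with log link: coefficients are log cost ratios
model = smf.glm("cost ~ obesity + age + female", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print("adjusted cost ratio for obesity:", round(np.exp(model.params["obesity"]), 2))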
Procedia PDF Downloads 10867 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi
Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev
Abstract:
Fusarium fungi are among the most devastating plant pathogens distributed all over the world. The significant reduction of grain yield and quality caused by Fusarium leads to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color and appearance on specific culture media is often very complicated due to the high similarity of these features for closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS) region, which is considered to be the universal DNA barcode for Fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species. Multilocus phylogenetic analysis based on partial sequences of TEF1α, FXN, HSP90, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB) and phosphate permease (PHO) genes, has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers, suitable for evolutionary and taxonomic studies of the genus Fusarium. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than the other markers, including TEF1α (12% and 9%, respectively), when used for elucidating phylogenetic relationships between F. avenaceum and its closest relatives – F. tricinctum, F. acuminatum, and F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed the presence of a separate cluster of four F. avenaceum strains which were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used for the first-time development of specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy
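The percentages of variable and parsimony-informative characters quoted for the PHO and TEF1α markers can be computed directly from an alignment; the toy alignment below is a made-up placeholder used only to show the bookkeeping, not Fusarium data.

from collections import Counter

# Toy aligned sequences of equal length; gaps '-' are ignored in each column
alignment = [
    "ATGCTAGCTAG",
    "ATGCTGGCTAG",
    "ATGTTAGCTAA",
    "ATGCTGGCTAA",
]

variable = 0
parsimony_informative = 0
for column in zip(*alignment):
    counts = Counter(base for base in column if base != "-")
    if len(counts) > 1:
        variable += 1
        # informative: at least two character states, each present in at least two sequences
        if sum(1 for c in counts.values() if c >= 2) >= 2:
            parsimony_informative += 1

length = len(alignment[0])
print(f"variable sites: {variable}/{length} ({100 * variable / length:.1f}%)")
print(f"parsimony-informative sites: {parsimony_informative}/{length} ({100 * parsimony_informative / length:.1f}%)")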
Procedia PDF Downloads 32466 A Critical Evaluation of Occupational Safety and Health Management Systems' Implementation: Case of Mutare Urban Timber Processing Factories, Zimbabwe
Authors: Johanes Mandowa
Abstract:
The study evaluated the status of Occupational Safety and Health Management Systems’ (OSHMSs) implementation by Mutare urban timber processing factories. A descriptive cross sectional survey method was utilized in the study. Questionnaires, interviews and direct observations were the techniques employed to extract primary data from the respondents. Secondary data was acquired from OSH encyclopedia, OSH journals, newspaper articles, internet, past research papers, African Newsletter on OSH and NSSA On-guard magazines among others. Analysis of data collected was conducted using statistical and descriptive methods. Results revealed an unpleasant low uptake rate (16%) of OSH Management Systems by Mutare urban timber processing factories. On a comparative basis, low implementation levels were more pronounced in small timber processing factories than in large factories. The low uptake rate of OSH Management Systems revealed by the study validates the Government of Zimbabwe and its social partners’ observation that the dismal Zimbabwe OSH performance was largely due to non implementation of safety systems at most workplaces. The results exhibited a relationship between availability of a SHE practitioner in Mutare urban timber processing factories and OSHMS implementation. All respondents and interviewees’ agreed that OSH Management Systems are handy in curbing occupational injuries and diseases. It emerged from the study that the top barriers to implementation of safety systems are lack of adequate financial resources, lack of top management commitment and lack of OSHMS implementation expertise. Key motivators for OSHMSs establishment were cited as provision of adequate resources (76%), strong employee involvement (64%) and strong senior management commitment and involvement (60%). Study results demonstrated that both OSHMSs implementation barriers and motivators affect all Mutare urban timber processing factories irrespective of size. The study recommends enactment of a law by Ministry of Public Service, Labour and Social Welfare in consultation with NSSA to make availability of an OSHMS and qualified SHE practitioner mandatory at every workplace. More so, the enacted law should prescribe minimum educational qualification required for one to practice as a SHE practitioner. Ministry of Public Service, Labour and Social Welfare and NSSA should also devise incentives such as reduced WCIF premiums for good OSH performance to cushion Mutare urban timber processing factories from OSHMS implementation costs. The study recommends the incorporation of an OSH module in the academic curriculums of all programmes offered at tertiary institutions so as to ensure that graduates who later end up assuming influential management positions in Mutare urban timber processing factories are abreast with the necessity of OSHMSs in preventing occupational injuries and diseases. In the quest to further boost management’s awareness on the importance of OSHMSs, NSSA and SAZ are urged by the study to conduct OSHMSs awareness breakfast meetings targeting executive management on a periodic basis. 
The Government of Zimbabwe through the Ministry of Public Service, Labour and Social Welfare should also engage ILO Country Office for Zimbabwe to solicit for ILO’s technical assistance so as to enhance the effectiveness of NSSA’s and SAZ’s OSHMSs promotional programmes.Keywords: occupational safety health management system, national social security authority, standard association of Zimbabwe, Mutare urban timber processing factories, ministry of public service, labour and social welfare
Procedia PDF Downloads 33765 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring
Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist
Abstract:
Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution and so to decrease the risk of environmental pollution efficiently. One of the most common methodologies used in such detection systems is a colorimetric process, in which a container of fixed volume is filled with target ions and reagents that combine to form a colorimetric dye. The colorimetric ions can sensitively absorb a radiation beam of a specific wavelength, and the absorbance is proportional to the concentration of the fully developed product, indicating the concentration of target nutrients in the pre-mixed water samples. In order to achieve a precise and rapid detection effect, channels with dimensions of the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidic technology largely reduces the surface-to-volume ratios and decreases sample/reagent consumption significantly. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes. Thus, the flow is in an extremely laminar state, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing effect and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of the mixing process between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments. The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, is set as a laminar finite-rate chemical reaction in the model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square wave and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight mixing channel device was modeled and the reaction time validated with experimental data. Further simulations have been done to compare the reaction time to effective mixing within straight, square wave and serpentine geometries. Results show that square wave channels can significantly improve the mixing effect and provide low standard deviations of the concentrations of nitrite and reagent, while for straight-channel microfluidic patterns the corresponding values are 2-3 orders of magnitude greater and the streams are consequently less efficiently mixed. This has allowed us to design novel channel patterns of micro-mixers with more effective mixing that can be used to detect and monitor levels of nutrients present in water samples, in particular nitrite. Future generations of water quality monitoring and diagnostic devices will easily exploit this technology.Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect
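The laminar, diffusion-dominated regime invoked above can be verified with order-of-magnitude numbers; the channel dimensions, flow rate and diffusivity in this short sketch are assumed illustrative values, not those of the actual device.

# Order-of-magnitude check of the flow regime in a rectangular microchannel
width = 200e-6        # channel width (m), assumed
height = 100e-6       # channel height (m), assumed
flow_rate = 1e-9      # volumetric flow rate (m^3/s), i.e. 1 microlitre/s, assumed
rho = 1000.0          # water density (kg/m^3)
mu = 1e-3             # dynamic viscosity of water (Pa·s)
D = 1e-9              # diffusivity of a small ion in water (m^2/s)

area = width * height
velocity = flow_rate / area
hydraulic_diameter = 2 * width * height / (width + height)

reynolds = rho * velocity * hydraulic_diameter / mu   # far below 2000 -> laminar flow
peclet = velocity * hydraulic_diameter / D            # much greater than 1 -> mixing limited by diffusion

print(f"mean velocity: {velocity:.3f} m/s")
print(f"Re = {reynolds:.1f} (laminar), Pe = {peclet:.0f} (diffusion-limited mixing)")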
Procedia PDF Downloads 20264 Enhancing the Implementation Strategy of Simultaneous Operations (SIMOPS) for the Major Turnaround at Pertamina Plaju Refinery
Authors: Fahrur Rozi, Daniswara Krisna Prabatha, Latief Zulfikar Chusaini
Abstract:
Amidst the backdrop of Pertamina Plaju Refinery, which stands as the oldest and historically the least technologically advanced among Pertamina's refineries, lies a unique challenge. Originally integrating facilities established by Shell in 1904 and Stanvac (originally Standard Oil) in 1926, Plaju Refinery faces a primary challenge that does not solely revolve around complexity; instead, it lies in ensuring reliability, considering its operational history of over a century. In all that time, Plaju Refinery has never undergone a comprehensive major turnaround encompassing all its units. The usual practice involves partial turnarounds that are sequentially conducted across its primary, secondary, and tertiary units (utilities and offsite). However, a significant shift is on the horizon. In Q-IV of 2023, the refinery embarks on its first-ever major turnaround since its establishment. This decision was driven by the alignment of maintenance timelines across various units. Plaju Refinery's major turnaround was scheduled for October-November 2023, spanning 45 calendar days, with the objective of enhancing the operational reliability of all refinery units. The extensive job list for this turnaround encompasses 1583 tasks across 18 units/areas, involving approximately 9000 contracted workers. In this context, the strategy for Simultaneous Operations (SIMOPS) execution emerges as a pivotal tool to optimize time efficiency and ensure safety. A Hazard Effect Management Process (HEMP) has been employed to assess the risk ratings of each task within the turnaround. Out of the tasks assessed, 22 are deemed high-risk and necessitate mitigation. The SIMOPS approach serves as a preventive measure against potential incidents. It is noteworthy that every turnaround period at Pertamina Plaju Refinery involves SIMOPS-related tasks. In this context, enhancing the implementation strategy of "Simultaneous Operations (SIMOPS)" becomes imperative to minimize the occurrence of incidents. At least four improvements have been introduced in the enhancement process for the major turnaround at Plaju Refinery. The first improvement involves conducting systematic risk assessment and potential hazard mitigation studies for SIMOPS tasks before task execution, as opposed to the previous on-site approach. The second improvement includes the completion of SIMOPS Job Mitigation and Work Matrices Sheets, which was often neglected in the past. The third improvement emphasizes comprehensive awareness among workers/contractors regarding potential hazards and mitigation strategies for SIMOPS tasks before and during the major turnaround. The final improvement is the introduction of a daily program for inspecting and observing work in progress on SIMOPS tasks. Prior to these improvements, there was no established program for monitoring ongoing activities related to SIMOPS tasks during the turnaround. This study elucidates the steps taken to enhance SIMOPS within Pertamina, drawing from the experience of Plaju Refinery as a guide. A real case study from our experience in the operational unit is provided. In conclusion, these efforts are essential for the success of the first-ever major turnaround at Plaju Refinery, with the SIMOPS strategy serving as a central component. Based on these experiences, enhancements have been made to Pertamina's official Internal Guidelines for Executing SIMOPS Risk Mitigation, benefiting all Pertamina units.Keywords: process safety management, turn around, oil refinery, risk assessment
Procedia PDF Downloads 7463 Sensitivity and Specificity of Some Serological Tests Used for Diagnosis of Bovine Brucellosis in Egypt on Bacteriological and Molecular Basis
Authors: Hosein I. Hosein, Ragab Azzam, Ahmed M. S. Menshawy, Sherin Rouby, Khaled Hendy, Ayman Mahrous, Hany Hussien
Abstract:
Brucellosis is a highly contagious bacterial zoonotic disease of worldwide spread and has different names: infectious or enzootic abortion and Bang's disease in animals, and Mediterranean or Malta fever, undulant fever and rock fever in humans. It is caused by the different species of the genus Brucella, a Gram-negative, aerobic, non-spore-forming, facultative intracellular bacterium. Brucella affects a wide range of mammals including bovines, small ruminants, pigs, equines, rodents and marine mammals, as well as humans, resulting in serious economic losses in animal populations. In humans, Brucella causes a severe illness that represents a great public health problem. The disease was reported in Egypt for the first time in 1939; since then, it has remained endemic at high levels among cattle, buffaloes, sheep and goats and still represents a public health hazard. The economic losses due to brucellosis were estimated at about 60 million Egyptian pounds yearly, but actual estimates are still missing despite almost 30 years of implementation of the Egyptian control programme. Despite being the gold standard, bacterial isolation has been reported to show poor sensitivity for samples with low levels of Brucella and is impractical for regular screening of large populations. Thus, serological tests still remain the cornerstone for routine diagnosis of brucellosis, especially in developing countries. In the present study, a total of 1533 cows (256 from Beni-Suef Governorate, 445 from Al-Fayoum Governorate and 832 from Damietta Governorate) were employed for estimation of the relative sensitivity, relative specificity, positive predictive value and negative predictive value of the buffered acidified plate antigen test (BPAT), rose bengal test (RBT) and complement fixation test (CFT). The overall seroprevalence of brucellosis was 19.63%. The relative sensitivity, relative specificity, positive predictive value and negative predictive value of BPAT, RBT and CFT were estimated as (96.27%, 96.76%, 87.65% and 99.10%), (93.42%, 96.27%, 90.16% and 98.35%) and (89.30%, 98.60%, 94.35% and 97.24%), respectively. BPAT showed the highest sensitivity among the three employed serological tests. RBT was less specific than BPAT. CFT showed the least sensitivity (89.30%) among the three tests but the highest specificity. Different tissue specimens from 22 seropositive cows (spleen, retropharyngeal, udder and supra-mammary lymph nodes) were subjected to bacteriological studies for the isolation and identification of Brucella organisms. Brucella melitensis biovar 3 could be recovered from 12 (54.55%) cows. Bacteriological examinations failed to classify 10 cases (45.45%), which were culture-negative. Bruce-ladder PCR was carried out for molecular identification of the 12 Brucella isolates at the species level. Three fragments of 587 bp, 1071 bp and 1682 bp were amplified, indicating Brucella melitensis. The results indicated the importance of using several procedures to overcome the problem of some infected animals escaping diagnosis. Bruce-ladder PCR is an important tool for diagnosis and epidemiologic studies, providing relevant information for the identification of Brucella spp.Keywords: brucellosis, relative sensitivity, relative specificity, Bruce-ladder, Egypt
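The relative sensitivity, specificity and predictive values quoted above follow from a simple 2x2 comparison of each test against the reference classification; the counts in this sketch are hypothetical and chosen only to show the arithmetic.

# Hypothetical 2x2 table comparing one serological test against the reference result
tp, fp = 258, 35    # test-positive animals: true positives, false positives
fn, tn = 10, 1230   # test-negative animals: false negatives, true negatives

sensitivity = tp / (tp + fn)   # proportion of infected animals detected
specificity = tn / (tn + fp)   # proportion of non-infected animals correctly cleared
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {100 * value:.2f} %")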
Procedia PDF Downloads 35562 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, so this reduction is the main objective of the TSAA application. The major difference between the original conflict detection application and the TSAA application is that conflict detection is focused on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases the flight crew's traffic situation awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort has been spent on the design process and code generation in order to maximize efficiency and performance in terms of computational load and memory consumption reduction. The TSAA architecture is divided into two high-level systems: the “Threats database” and the “Conflict detector”. The first receives traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future loss of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent possible undesirable behaviors of the alert flag. In order to reduce the computational load, a pre-check evaluation module is used. This pre-check is only a computational optimization, so the performance of the conflict detector system is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship. This allows greater accuracy and avoids step-by-step propagation, which requires a greater computational load. Furthermore, the pre-check permits the exclusion of targets that are certainly not threats, using an efficient analytical geometrical approach, in order to decrease the computational load for the following modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of this enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard.
The computational load reduction also allows the installation of the TSAA application on devices hosting multiple applications and/or with limited available memory and computational capabilities.Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
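A minimal sketch of the kind of analytical pre-check described above: assuming straight-line, constant-velocity trajectories, the time and distance of closest approach between ownship and a target have a closed-form solution, so targets that cannot violate a protected radius within the look-ahead window are discarded without step-by-step propagation. The look-ahead time and protection radius below are assumptions for illustration, not values from the TSAA specification.

import numpy as np

def closest_approach(rel_pos, rel_vel, lookahead):
    # Closed-form time and distance of closest approach for straight-line motion
    speed_sq = float(np.dot(rel_vel, rel_vel))
    t_cpa = 0.0 if speed_sq == 0.0 else -float(np.dot(rel_pos, rel_vel)) / speed_sq
    t_cpa = min(max(t_cpa, 0.0), lookahead)          # clamp to the look-ahead window
    d_cpa = float(np.linalg.norm(rel_pos + rel_vel * t_cpa))
    return t_cpa, d_cpa

def precheck_is_potential_threat(own_pos, own_vel, tgt_pos, tgt_vel,
                                 lookahead=60.0, protected_radius=1000.0):
    # Cheap analytical filter: only targets that may come within the protected
    # radius during the look-ahead time are passed to the full conflict detector
    _, d_cpa = closest_approach(tgt_pos - own_pos, tgt_vel - own_vel, lookahead)
    return d_cpa < protected_radius

# Example: target 5 km ahead and slightly offset, closing at 100 m/s (illustrative values)
own_pos, own_vel = np.array([0.0, 0.0]), np.array([60.0, 0.0])
tgt_pos, tgt_vel = np.array([5000.0, 300.0]), np.array([-40.0, 0.0])
print(precheck_is_potential_threat(own_pos, own_vel, tgt_pos, tgt_vel))  # True -> keep for full check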
Procedia PDF Downloads 28561 Effects of Gym-Based and Audio-Visual Guided Home-Based Exercise Programmes on Some Anthropometric and Cardiovascular Parameters Among Overweight and Obese College Students
Authors: Abiodun Afolabi, Rufus Adesoji Adedoyin
Abstract:
This study investigated and compared the effects of a gym-based exercise programme (GBEP) and an audio-visual guided home-based exercise programme (AVGHBEP) on selected anthropometric variables (Weight (W), Body Mass Index (BMI), Waist Circumference (WC), Hip Circumference (HC), Thigh Circumference (TC), Waist-Hip-Ratio (WHR), Waist-Height-Ratio (WHtR), Waist-Thigh-Ratio (WTR), Biceps Skinfold Thickness (BSFT), Triceps Skinfold Thickness (TSFT), Suprailliac Skinfold Thickness (SISFT), Subscapular Skinfold Thickness (SSSFT) and Percent Body Fat (PBF)) and cardiovascular variables (Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP) and Heart Rate (HR)) of overweight and obese students of the Federal College of Education (Special), Oyo, Oyo State, Nigeria, with a view to providing information and evidence on GBEP and AVGHBEP for reducing overweight and obesity and promoting cardiovascular fitness. Eighty overweight and obese students (BMI ≥ 25 kg/m²) were involved in this pretest-posttest quasi-experimental study. Participants were randomly assigned into GBEP (n = 40) and AVGHBEP (n = 40) groups. Anthropometric and cardiovascular variables were measured using a weighing scale, height meter, tape measure, skinfold caliper and electronic sphygmomanometer following standard protocols. GBEP and AVGHBEP were implemented following a circuit training (aerobic and resistance training) pattern with a duration of 40-60 minutes, thrice weekly for twelve weeks. GBEP consisted of a gymnasium-supervised exercise programme, while AVGHBEP was a visual-display-guided exercise programme conducted in the home setting. Data were analyzed by descriptive and inferential statistics. The mean ages of the participants were 22.55 ± 2.55 and 23.65 ± 2.89 years for the GBEP group and AVGHBEP group, respectively. Findings showed that in the GBEP group, there were significant reductions in the anthropometric variables and adiposity measures of Weight, BMI, BSFT, TSFT, SISFT, SSSFT, WC, HC, TC, WHtR, and PBF at week 12 of the study. Similarly, in the AVGHBEP group, there were significant reductions in Weight, BMI, BSFT, TSFT, SISFT, SSSFT, WC, HC, TC, WHtR and PBF at the 12th week of intervention. Comparison of the effects of GBEP and AVGHBEP on anthropometric variables and measures of adiposity showed that there was no significant difference between the two groups in Weight, BMI, BSFT, TSFT, SISFT, SSSFT, WC, HC, TC, WHR, WHtR, WTR and PBF at week 12 of the study. Furthermore, findings on the effects of the exercise programmes on cardiovascular variables revealed that significant reductions in SBP occurred in both the GBEP and AVGHBEP groups. Comparison of the effects of GBEP and AVGHBEP on cardiovascular variables showed that there was no significant difference in SBP, DBP and HR between the two groups at week 12 of the study. It was concluded that the Audio-Visual Guided Home-Based Exercise Programme was as effective as the Gym-Based Exercise Programme in causing a significant reduction in anthropometric variables and body fat among college students who are overweight and obese over a period of twelve weeks. Both the Gym-Based Exercise Programme and the Audio-Visual Guided Home-Based Exercise Programme led to significant reductions in Systolic Blood Pressure over the twelve-week period. 
Audio-Visual Guided Home-Based Exercise Programme can, therefore, be used as an alternative therapy in the non-pharmacological management of people who are overweight and obese.Keywords: gym-based exercises, audio-visual guided home-based exercises, anthropometric parameters, cardiovascular parameters, overweight students, obese students
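The BMI ≥ 25 kg/m² inclusion criterion used above can be illustrated with a short sketch; the category cut-offs follow the usual WHO convention and the sample measurement is hypothetical.

def bmi(weight_kg, height_m):
    # Body Mass Index in kg/m^2
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

# Hypothetical participant: 82 kg, 1.68 m tall
value = bmi(82, 1.68)
print(f"BMI = {value:.1f} kg/m^2 -> {bmi_category(value)}")  # about 29.1 -> overweight, eligible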
Procedia PDF Downloads 3760 A Multiple Freezing/Thawing Cycles Influence Internal Structure and Mechanical Properties of Achilles Tendon
Authors: Martyna Ekiert, Natalia Grzechnik, Joanna Karbowniczek, Urszula Stachewicz, Andrzej Mlyniec
Abstract:
Tendon grafting is a common procedure performed to treat tendon rupture. Before the surgical procedure, tissues intended for grafts (i.e., the Achilles tendon) are stored at ultra-low temperatures for a long time and may also be subjected to unfavorable conditions, such as repetitive freezing (F) and thawing (T). Such storage protocols may strongly influence the graft's mechanical properties, decrease its functionality and thus increase the risk of complications during the transplant procedure. Literature reports on the influence of multiple F/T cycles on the internal structure and mechanical properties of tendons remain inconclusive, simultaneously confirming and denying the negative influence of multiple F/T cycles. Inconsistent research methodology and the lack of a clear limit on the number of F/T cycles that disqualifies tissue for surgical graft purposes encouraged us to investigate the issue of multiple F/T cycles by means of biomechanical tensile tests supported by Scanning Electron Microscope (SEM) imaging. The study was conducted on male bovine Achilles tendons derived from a local abattoir. Fresh tendons were cleaned of excessive membranes and then sectioned to obtain fascicle bundles. Collected samples were randomly assigned to 6 groups subjected to 1, 2, 4, 6, 8 and 12 cycles of freezing-thawing (F/T), respectively. Each F/T cycle included deep freezing at -80°C, followed by thawing at room temperature. After the final thawing, thin slices of the side part of samples subjected to 1, 4, 8 and 12 F/T cycles were collected for SEM imaging. Then, the width and thickness of all samples were measured to calculate the cross-sectional area. Biomechanical tests were performed using a universal testing machine (model Instron 8872, INSTRON®, Norwood, Massachusetts, USA) with a load cell with a maximum capacity of 250 kN, under standard atmospheric conditions. Both ends of each fascicle bundle were manually clamped in grasping clamps using abrasive paper and wet cellulose wadding swabs to prevent tissue slippage during clamping and testing. Samples were subjected to a testing procedure including pre-loading, pre-cycling, loading, holding and unloading steps to obtain stress-strain curves representing tendon stretching and relaxation. The stiffness of the AT fascicle bundle samples was evaluated in terms of the modulus of elasticity (Young's modulus), calculated from the slope of the linear region of the stress-strain curves. SEM imaging was preceded by chemical sample preparation including 24-hr fixation in 3% glutaraldehyde buffered with 0.1 M phosphate buffer, washing with 0.1 M phosphate buffer solution and dehydration in a graded ethanol solution. SEM images (Merlin Gemini II microscope, ZEISS®) were taken at 30,000x magnification, which allowed the diameter of collagen fibrils to be measured. The results show a decrease in fascicle bundle Young's modulus as well as a decrease in the diameter of collagen fibrils. These results confirm the negative influence of multiple F/T cycles on the mechanical properties of tendon tissue.Keywords: biomechanics, collagen, fascicle bundles, soft tissue
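A minimal sketch of how Young's modulus is obtained from the slope of the linear region of a stress-strain curve, as described above; the synthetic curve and the strain window chosen for the linear fit are assumptions for illustration, not the study's measurements.

import numpy as np

# Synthetic stress-strain data (strain dimensionless, stress in MPa): a toe region
# followed by an approximately linear region, purely illustrative
strain = np.linspace(0.0, 0.06, 61)
stress = 400.0 * strain**2 / (strain + 0.02)

# Straight-line fit over the (assumed) linear region of the curve
linear = (strain >= 0.03) & (strain <= 0.05)
slope, intercept = np.polyfit(strain[linear], stress[linear], 1)

print(f"Young's modulus (slope of the linear region): {slope:.0f} MPa")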
Procedia PDF Downloads 12559 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator
Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra
Abstract:
We have developed a CFD coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations can be solved implicitly for mass, momentum, energy transfer along with aerosol dynamics. The computationally efficient framework can simulate temporal behavior of total number concentration and number size distribution. This formulation uniquely couples standard K-Epsilon scheme with boundary layer model with detailed aerosol dynamics through residence time. This model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data apart from other usual inputs for simulations. The model predictions show that bulk fluid motion and local heat distribution can significantly affect the aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy generated turbulence was found to be affecting parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. Condensation particle counter (CPC) and scanning mobility particle sizer (SMPS) were used for measurement of total number concentration and number size distribution at the outlet of reactor cell during these experiments. Our model-predicted results were found to be in reasonable agreement with observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications in the context of the behavior of aerosol particles generated from glowing wire technique and in general for other similar large scale domains. Incorporation of CFD in aerosol microphysics framework provides a realistic platform to study natural convection driven systems/ applications. Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with Navier Stokes equations modified to include buoyancy coupled K-Epsilon turbulence model. Coupled flow-aerosol dynamics equation was solved numerically and in the implicit scheme. Wire composition and temperature (wire surface and cell domain) were obtained/measured, to be used as input for the model simulations. Model simulations showed a significant effect of fluid properties on the dynamics of aerosol particles. The role of buoyancy was highlighted by observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against measured temporal evolution, total number concentration and size distribution at the outlet of hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. Steady-state number size distribution matched very well for sub 10 nm particle diameters while reasonable differences were noticed for higher size ranges. Although tuned specifically for the present context (i.e., aerosol generation from hotwire generator), the model can also be used for diverse applications, e.g., emission of particles from hot zones (chimneys, exhaust), fires and atmospheric cloud dynamics.Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics
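One of the aerosol dynamics sub-modules mentioned above, coagulation, can be sketched in its simplest monodisperse form, dN/dt = -K·N², advanced with an implicit (backward Euler) step in the spirit of the fully implicit scheme; the coagulation coefficient, initial concentration and time step are assumed illustrative values, not parameters of the actual model.

import math

K = 5e-16     # coagulation coefficient (m^3/s), assumed illustrative value
N = 1e15      # initial number concentration (#/m^3), assumed
dt = 0.1      # time step (s)

def implicit_coagulation_step(n_old, k, dt):
    # Backward Euler step for dN/dt = -k*N^2: k*dt*N_new^2 + N_new - n_old = 0,
    # solved exactly by taking the positive root of the quadratic
    return (-1.0 + math.sqrt(1.0 + 4.0 * k * dt * n_old)) / (2.0 * k * dt)

for step in range(1, 11):
    N = implicit_coagulation_step(N, K, dt)
    print(f"t = {step * dt:.1f} s, N = {N:.3e} #/m^3")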
Procedia PDF Downloads 14358 National Accreditation Board for Hospitals and Healthcare Reaccreditation, the Challenges and Advantages: A Qualitative Case Study
Authors: Narottam Puri, Gurvinder Kaur
Abstract:
Background: The National Accreditation Board for Hospitals & Healthcare Providers (NABH) is India's apex standard-setting accreditation body in health care, which evaluates and accredits healthcare organizations. NABH requires accredited organizations to become reaccredited every three years. It is often thought that once the initial accreditation is complete, the foundation is set and reaccreditation is a much simpler process. Fortis Hospital, Shalimar Bagh, a part of the Fortis Healthcare group, is a 262-bed, multi-specialty tertiary care hospital. The hospital was successfully accredited in the year 2012. On completion of its first cycle, the hospital underwent a reaccreditation assessment in the year 2015. This paper aims to gain a better understanding of the challenges that accredited hospitals face when preparing for a renewal of their accreditations. Methods: The study was conducted using a cross-sectional mixed methods approach; semi-structured interviews were conducted with the senior leadership team and staff members, including doctors and nurses. Documents collated by the QA team while preparing for the reassessment, such as data on quality indicators (the method of collection, analysis, trending and continual incremental improvements made over time), minutes of meetings, amendments made to existing policies and new policies drafted, were reviewed to understand the challenges. Results: The senior leadership had concerns about the cost of accreditation and its impact on the quality of health care services, considering the staff effort and time it consumed. The management was, however, in favor of continuing with the accreditation since it offered a competitive advantage and strengthened community confidence, besides better pay rates from payors. The clinicians regarded it as an increased non-clinical workload. Doctors felt accountable within a professional framework, to themselves, the patient and family, their peers and their profession, but not to accreditation bodies, and raised concerns about how the quality indicators were measured. The departmental leaders had a positive perception of accreditation. They agreed that it ensured high standards of care and improved management of their functional areas. However, they were reluctant to spare people for QA activities due to staffing issues. With staff turnover, a lot of work was lost as sticky knowledge and had to be redone. Listing the continual quality improvement initiatives over the last 3 years was a challenge in itself. Conclusion: The success of any quality assurance reaccreditation program depends almost entirely on the commitment and interest of the administrators, nurses, paramedical staff, and clinicians. The leader of the quality movement is critical in propelling and building momentum. Leaders need to recognize skepticism and resistance and consider ways in which staff can become positively engaged. Involvement of all the functional owners is the starting point towards building ownership and accountability for standards compliance. Creativity plays a very valuable role. Communication through mail series, WhatsApp groups, quizzes, events, and any and every other form helps. Leaders must be able to generate interest and commitment without burdening clinical and administrative staff with an activity they neither understand nor believe in.Keywords: NABH, reaccreditation, quality assurance, quality indicators
Procedia PDF Downloads 22457 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology
Authors: Michael Josef Schwerer
Abstract:
Background: Analyzing any aircraft accident is mandatory based on the regulations of the International Civil Aviation Organization and the respective country's criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot's aeromedical health status before the event. As a result of the frequently tremendous blunt and sharp force trauma accompanying the impact of the aircraft with the ground, subsequent blast or fire exposure of the occupants, or putrefaction of the bodies in cases of delayed recovery, relevant findings can be masked or destroyed and are therefore inaccessible in standard pathology practice comprising just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription and posttranscriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of molecular genetic procedures when working with archived sample material were necessary. Standards for the proper interpretation of the respective findings had to be established. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses can potentially explain, for example, a pilot's sudden incapacitation resulting from cardiac failure or myocardial arrhythmia. In contrast, negative results for infective agents help rule out concerns about an accident pilot's fitness to fly and the aeromedical examiner's earlier decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting, for instance, from repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression will likely be misinterpreted as genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and can reduce the number of unsolved events in the future.Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology
Procedia PDF Downloads 4556 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program
Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner
Abstract:
Introductory Statement: The onset of the COVID-19 pandemic demanded an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were incredibly limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of basic methodologies: 18 Patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that because she never tested positive she was shielded from her anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations- 1Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY 2Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NYKeywords: accessibility, COVID-19, recovery, testing
Procedia PDF Downloads 193