Search results for: gas steam combined cycle (GSCC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4741

1081 Project Stakeholders' Perceptions of Sustainability: A Case Example From the Turkish Construction Industry

Authors: F. Heyecan Giritli, Gizem Akgül

Abstract:

Because of the world's growing population, the need for houses, buildings, and infrastructure is increasing rapidly, and energy consumption, water consumption, and waste production continue to rise. If resources continue to be consumed at this rate, there will be a significant loss for future generations. Therefore, many studies and solutions have been developed around the world, and some countries have collected sustainability criteria into certification systems that serve the construction industry. The scope of the sustainable building production process requires a different path from the traditional building production process. Moreover, a key objective of sustainable buildings is that the process covers the whole life cycle, from the project decision to its end; the project team is therefore needed from the beginning, as in the integrated project delivery model. Furthermore, defining the project team at the beginning of the project enables communication among team members and establishes defined problem-solving and decision-making methods. This research reviews certification systems around the world to understand their headings and assessment criteria; from this review, it is understood that most green building schemes have broadly the same content. The aim of this research is to assess sustainable project stakeholders' perceptions in the Turkish construction industry from the point of view of occupation, job title, and years of experience. Therefore, a survey was conducted to assess the perceptions of each participant. In Turkey, sustainability criteria are not clearly defined, although some regulations, such as those on waste management and energy efficiency, have been issued by legal agencies. LEED is the most popular certification system attended and certified in Turkey; official LEED data show that 308 projects are registered in Turkey. Therefore, LEED sustainability criteria were used in the survey.
The headings of the LEED certification criteria (sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, innovation, and regional priority) were used to assess the perceptions of the survey participants. Moreover, since surveying the criteria alone is not enough, the associated equipment, methods, risks, and benefits were also considered.

Keywords: LEED, sustainability, perceptions, stakeholders, construction, Turkey, risk, benefit

Procedia PDF Downloads 287
1080 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. High-performance, integrative software that enables near-real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for the successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 139
1079 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines

Authors: Arturo Daag

Abstract:

The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using screw driving testing (SDS) and geophysical data. The target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to conduct non-destructive geotechnical studies using SDS combined with geophysical methods such as refraction microtremor arrays (ReMi), 3-component microtremor horizontal-to-vertical spectral ratio (HVSR) measurements, and ground penetrating radar (GPR). Initial tests were conducted in areas affected by liquefaction during the Mw 6.1 earthquake of April 22, 2019 in Central Luzon, Province of Pampanga. Numerous liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS-estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment that is more efficient in terms of cost and time than SPT and CPT. Drilling boreholes can be difficult in highly urbanized areas, so non-destructive geophysical equipment was used to extend or extrapolate the SPT borehole data. The 3-component microtremor survey yields a subsurface velocity model as a 1-D seismic shear-wave velocity profile of the upper 30 meters (Vs30). For ReMi, surveys with a 12-geophone array at 6- to 8-meter spacing were conducted. Liquefaction potential was then evaluated through the Factor of Safety, which is the quotient of the Cyclic Resistance Ratio (CRR) and the Cyclic Stress Ratio (CSR). Complementary GPR was used to infer subsurface structures and groundwater conditions.
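The screening described above reduces each soil layer to a Factor of Safety, FS = CRR / CSR, with FS < 1 indicating likely liquefaction. A minimal sketch follows; the layer names and CRR/CSR values are hypothetical, not taken from the study:

```python
def factor_of_safety(crr, csr):
    """Liquefaction Factor of Safety: FS = CRR / CSR.
    FS < 1 indicates the layer is likely to liquefy."""
    if csr <= 0:
        raise ValueError("CSR must be positive")
    return crr / csr

# Hypothetical layer values for illustration only
layers = {"silty sand (5 m)": (0.18, 0.25), "dense sand (12 m)": (0.40, 0.22)}
for name, (crr, csr) in layers.items():
    fs = factor_of_safety(crr, csr)
    verdict = "liquefiable" if fs < 1 else "non-liquefiable"
    print(f"{name}: FS = {fs:.2f} -> {verdict}")
```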

Keywords: screw driving testing, microtremor, ground penetrating radar, liquefaction

Procedia PDF Downloads 183
1078 Isolation and Molecular Characterization of Lytic Bacteriophage against Carbapenem Resistant Klebsiella pneumoniae

Authors: Guna Raj Dhungana, Roshan Nepal, Apshara Parajuli, Archana Maharjan, Shyam K. Mishra, Pramod Aryal, Rajani Malla

Abstract:

Introduction: Klebsiella pneumoniae is a well-known opportunistic human pathogen, primarily causing healthcare-associated infections. The global emergence of carbapenemase-producing K. pneumoniae, which is often extensively multidrug resistant, is a major public health burden. Because these 'superbugs' are so difficult to treat, with some heralding an 'apocalypse' of the post-antibiotic era, an alternative approach to controlling this pathogen is prudent; one such approach is phage-mediated control and/or treatment. Objective: In this study, we aimed to isolate novel bacteriophages against carbapenemase-producing K. pneumoniae and characterize them for potential use in phage therapy. Material and Methods: Twenty lytic phages were isolated from river water using the double layer agar assay and purified. Biological features, physiochemical characters, burst size, host specificity, and activity spectrum of the phages were determined. The most potent phage, TU_Kle10O, was selected and characterized by electron microscopy. The whole genome sequence of the phage was analyzed for the presence/absence of virulence factors and other lysin genes. Results: The novel phage TU_Kle10O showed a multiple host range within its own genus and did not induce any bacteriophage-insensitive mutants (BIMs) up to the 5th generation of the host's life cycle. Electron microscopy confirmed that the phage was tailed and belonged to the Caudovirales. Next-generation sequencing revealed its genome to be 166.2 kb. Bioinformatic analysis further confirmed that the phage genome did not contain any bacterial genes, which ruled out the concern of virulence gene transfer. A specific lysin enzyme that could be used as an antibacterial agent was identified in the phage. Conclusion: Extensively multidrug resistant bacteria like carbapenemase-producing K. pneumoniae could be treated efficiently by phages. The absence of virulence genes of bacterial origin and the presence of lysin proteins within the phage genome make phages an excellent candidate for therapeutics.

Keywords: bacteriophage, Klebsiella pneumoniae, MDR, phage therapy, carbapenemase

Procedia PDF Downloads 171
1077 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The proposed model detects seven categories of litter from a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that litter can be cleaned up in time by clean-up organizations using the system's warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used along with multiple template matching between HOG maps of images and HOG maps of templates to improve the predicted masks obtained via Mask R-CNN training. The system intends to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified masks of debris objects with different illuminations, shapes, and viewpoints, and of occluded litter with poor visibility.
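As a minimal illustration of the histogram-of-gradients idea used as a feature extractor above, the sketch below computes one HOG cell descriptor from scratch in NumPy: gradient magnitudes are accumulated into unsigned orientation bins and L2-normalized. This is a from-scratch teaching sketch, not the paper's HOGShape network or its LIBSVM-trained extractor:

```python
import numpy as np

def hog_cell_histogram(img, n_bins=9):
    """Minimal histogram of oriented gradients for a single cell:
    gradient magnitudes accumulated into orientation bins over 0-180 deg."""
    gy, gx = np.gradient(img.astype(float))      # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins)
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for b, m in zip(bin_idx.ravel(), mag.ravel()):
        hist[b] += m
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge produces horizontal gradients, so bin 0 (near 0 deg) dominates
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_cell_histogram(img)
print(h.argmax())
```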

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 89
1076 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation

Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri

Abstract:

In recent years, non-invasive focused ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation, and the process is called histotripsy. The ability to fractionate tissue from outside the body has many clinical applications, including the destruction of tumor masses. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to sub-micron-size particles. The liquefied tissue is eventually absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (<1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low-frequency signal modulates the amplitude of a higher-frequency FU wave. The cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The amplitude modulation technique can operate in both continuous wave (CW) and pulsed wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0% (thermal effect) to 100% (cavitation effect), thus allowing a range of ablating effects from hyperthermia to histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (chalk) material. The technique, when combined with MR or ultrasound imaging, will present a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
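The modulation scheme described above can be sketched numerically. The frequencies below (1 MHz carrier, 10 kHz modulator) follow the ranges mentioned in the abstract, but the sampling rate and modulation index are illustrative choices, not the authors' device parameters:

```python
import numpy as np

# Illustrative parameters: 1 MHz FU carrier modulated by a 10 kHz signal
fs = 50e6                      # sampling rate, Hz (assumed for this sketch)
t = np.arange(0, 1e-3, 1 / fs)  # 1 ms of signal
fc, fm, m = 1e6, 10e3, 0.8      # carrier, modulator, modulation index

# AM: the slow envelope (1 + m*sin) scales the fast FU carrier.
# m = 0 gives a pure carrier (thermal effect); m = 1 gives full modulation.
signal = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

peak = np.max(np.abs(signal))
print(round(peak, 2))  # peak amplitude approaches 1 + m
```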

Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation

Procedia PDF Downloads 303
1075 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique

Authors: Amessalu Atenafu Gelaw, Nele Rath

Abstract:

Currently, the laser hardening process is becoming one of the most efficient and effective hardening techniques due to its significant advantages: localized heat generation, the absence of cooling media, a self-quenching property, low distortion thanks to the localized heat input, environmentally friendly behavior, and short operation times. Today, a variety of injection machines are used in the plastic, textile, electrical, and mechanical industries. With the fast growth of composite technology, fiber-reinforced polymer matrices have become a common option in these industries. Due to the abrasive effect of fiber-reinforced polymer matrix composites on injection components, many parts wear out before their design life. Niko, a company specializing in injection molded products, suffers from the short lifetime of the injection nozzles of its molds due to the use of fiber-reinforced and therefore more abrasive polymer matrices. To prolong the lifetime of these molds, hardening susceptible components such as the injection nozzles was a must. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the nozzle material were investigated on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of an inclination on the maximum temperature was analyzed, both by experimental testing and by validation through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening of the nozzle to reach a hardness of at least 720 HV for the investigated material.
It has been proven that the nozzle can be laser hardened to over 900 HV, with even higher results possible when more precise positioning of the laser can be assured.

Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser

Procedia PDF Downloads 144
1074 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region-of-interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat, and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and >90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phases showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies.
This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
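A minimal sketch of the paired ROI comparison reported above (mean difference ± SD and the share of comparisons within 15 HU). The HU values below are invented for illustration only; they are not the study's measurements:

```python
import numpy as np

# Hypothetical paired ROI attenuation values (HU) for illustration only:
# each index is one ROI measured on true (TNC) and virtual (VNC) non-contrast
tnc = np.array([45.0, 52.0, 8.0, 30.0, 55.0, -90.0])
vnc = np.array([41.0, 49.0, 12.0, 27.0, 50.0, -95.0])

diff = tnc - vnc
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)                       # sample standard deviation
within_15 = np.mean(np.abs(diff) < 15) * 100     # % of ROIs within 15 HU

print(f"mean difference: {mean_diff:.1f} +/- {sd_diff:.1f} HU")
print(f"within 15 HU: {within_15:.0f}%")
```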

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 129
1073 Sustainable Manufacturing of Concentrated Latex and Ribbed Smoked Sheets in Sri Lanka

Authors: Pasan Dunuwila, V. H. L. Rodrigo, Naohiro Goto

Abstract:

Sri Lanka is one of the largest natural rubber (NR) producers in the world, and the NR industry is a major foreign exchange earner. Among locally manufactured NR products, concentrated latex (CL) and ribbed smoked sheets (RSS) hold a significant position. Furthermore, these products are the foundation for many goods utilized by people all over the world (e.g., gloves, condoms, tires, etc.). Processing of CL and RSS consumes a significant amount of material, energy, and labor. Against this background, both manufacturing lines have been seriously challenged by waste, low productivity, lack of cost efficiency, rising production costs, and many environmental issues. To face these challenges, the adoption of sustainable manufacturing measures that use less energy, water, and materials and produce less waste is imperative. However, these sectors lack comprehensive studies that shed light on such measures and thoroughly discuss their improvement potential from both environmental and economic points of view. Therefore, based on a study of three CL and three RSS mills in Sri Lanka, this study deploys sustainable manufacturing techniques and tools to uncover the underlying potential to improve performance in the CL and RSS processing sectors. The study comprises three steps: 1. quantification of average material waste, economic losses, and greenhouse gas (GHG) emissions via material flow analysis (MFA), material flow cost accounting (MFCA), and life cycle assessment (LCA) in each manufacturing process; 2. identification of improvement options with the help of Pareto and what-if analyses, field interviews, and the existing literature; and 3. validation of the identified improvement options via re-execution of MFA, MFCA, and LCA. With the help of this methodology, the economic and environmental hotspots and the degrees of improvement in both systems could be identified.
Results highlighted that each process could be improved to generate less waste and lower monetary losses, manufacturing costs, and GHG emissions. In conclusion, the study's methodology and findings are believed to be beneficial for assuring sustainable growth, not only in the Sri Lankan NR processing sector itself but also in the NR or other industries rooted in other developing countries.

Keywords: concentrated latex, natural rubber, ribbed smoked sheets, Sri Lanka

Procedia PDF Downloads 248
1072 A New Binder Mineral for Cement-Stabilized Road Pavement Soils

Authors: Aydın Kavak, Özkan Coruk, Adnan Aydıner

Abstract:

Long-term performance of pavement structures is significantly impacted by the stability of the underlying soils. In situ subgrades often do not provide the support required to achieve acceptable performance under traffic loading and environmental demands. NovoCrete® is a powder binder mineral for cement-stabilized road pavement soils. NovoCrete® combined with Portland cement at optimum water content increases crystal formation during the hydration process, resulting in higher strength, neutralized pH levels, and water impermeability. These changes in soil properties may transform existing unsuitable in-situ materials into suitable fill materials. The main features of NovoCrete® are that it is applicable to all types of soil, reduces premature cracking, and improves soil properties, creating base and subbase course layers with high bearing capacity while reducing the use of hazardous materials. It can also be used for the stabilization of recyclable aggregates, old asphalt pavement aggregate, etc. There have been many applications in Germany, Turkey, India, etc.; in this paper, a few field applications in Turkey are discussed. In road construction works, this binder material is used for cement stabilization. In these applications, 120-180 kg of cement is used per 1 m³ of soil together with 2% NovoCrete® binder for the stabilization. The results of a plate loading test at a road construction site show only 1 mm of deformation under 7 kg/cm² loading, which is very small. The modulus of subgrade reaction increased from 611 MN/m³ to 3673 MN/m³, and the soaked CBR values for stabilized soils increased from 10-20% to 150-200%. According to these data, weak subgrade soil can be used as a base or subbase after modification. The potential reduction in the need for quarried materials will help conserve natural resources.
The use of on-site or nearby materials in fills will significantly reduce transportation costs and provide both economic and environmental benefits.

Keywords: soil, stabilization, cement, binder, Novocrete, additive

Procedia PDF Downloads 208
1071 Determinants of Repeated Abortion among Women of Reproductive Age Attending Health Facilities in Northern Ethiopia: A Case-Control Study

Authors: Henok Yebyo Henok, Araya Abrha Araya, Alemayehu Bayray Alemayehu, Gelila Goba Gelila

Abstract:

Background: Every year, an estimated 19-20 million unsafe abortions take place, almost all in developing countries, leading to 68,000 deaths and millions more injured, many permanently. Many women throughout the world experience more than one abortion in their lifetimes. Repeat abortion is an indicator of the larger problem of unintended pregnancy. This study aimed to identify determinants of repeat abortion in Tigray Region, Ethiopia. Methods: An unmatched case-control study was conducted in hospitals in Tigray Region, Northern Ethiopia, from November 2014 to June 2015. The sample included 105 cases and 204 controls, recruited from among women seeking abortion care at public hospitals. Clients who had had two or more abortions ("repeat abortion") were taken as cases, and those who had had only one abortion were taken as controls ("single abortion"). Cases were selected consecutively based on proportional-to-size allocation, while systematic sampling was employed for controls. Data were analyzed using SPSS version 20.0. Binary and multivariable logistic regression analyses were performed with 95% CIs. Results: The mean age of cases was 24 years (±6.85) and of controls 22 years (±6.25). 79.0% of cases had their sexual debut before 18 years of age, compared to 57% of controls. 42.2% of controls and 23.8% of cases cited rape as the reason for having an abortion.
Not understanding one's fertility cycle and when conception is most likely after menstruation (adjusted odds ratio [AOR] = 2.0, 95% confidence interval [CI]: 1.1-3.7), having had a previous abortion using medication (AOR = 3.3, CI: 1.83-6.11), having multiple sexual partners in the preceding 12 months (AOR = 4.4, CI: 2.39-8.45), perceiving that the abortion procedure is not painful (AOR = 2.3, CI: 1.31-4.26), initiating sexual intercourse before the age of 18 years (AOR = 2.7, CI: 1.49-5.23), and disclosure to a third party about terminating the pregnancy (AOR = 2.1, CI: 1.2-3.83) were independent predictors of repeat abortion. Conclusion: This study identified several factors correlated with women having repeat abortions. It may be helpful for the Government of Ethiopia to encourage women to delay sexual debut and decrease their number of sexual partners, including by promoting discussion within families about sexuality, to decrease the occurrence of repeat abortion.
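As a hedged illustration of the kind of association measure reported above, a crude (unadjusted) odds ratio with Woolf's log-OR confidence interval can be computed from a 2x2 table. The counts below are roughly reconstructed from the reported percentages for sexual debut before age 18 and are illustrative only; the paper's AORs come from multivariable logistic regression, which this sketch does not reproduce:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def or_ci_95(a, b, c, d):
    """95% CI via the standard error of log(OR) (Woolf's method)."""
    or_ = odds_ratio(a, b, c, d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Illustrative counts: ~79% of 105 cases and ~57% of 204 controls
# had sexual debut before age 18 (reconstructed, not the raw study data)
print(or_ci_95(83, 105 - 83, 116, 204 - 116))
```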

Keywords: abortion, Ethiopia, repeated abortion, single abortion

Procedia PDF Downloads 256
1070 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability

Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto

Abstract:

Availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. The reliability-centered maintenance (RCM) technique is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure in its implementation, but does not deal with the further risks associated with failures, such as downtime, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failures, obtaining maintenance tasks with cost effectiveness in mind. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring, which allows maintenance or other actions to be planned and scheduled so that the risk of failure is avoided ahead of time-based maintenance. Implementing RCM, RBM, or CBM alone, or combining RCM with RBM or RCM with CBM, are maintenance approaches used in thermal power plants. Implementing these three techniques in an integrated manner will increase the availability of thermal power plants compared with using the techniques individually or in combinations of two. This study uses reliability-, risk-, and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates the MPI (Priority Maintenance Index), obtained by multiplying the RPN (Risk Priority Number) by the RI (Risk Index), and the FDT (Failure Defense Task), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and the FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance, monitoring, and condition-assessment plans and schedules, and ultimately to perform an availability analysis.
The results of this study indicate that reliability-, risk-, and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
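The prioritization step above can be sketched numerically. The sketch assumes the standard FMEA definition RPN = severity × occurrence × detection and the paper's MPI = RPN × RI; the component names, ratings, and risk indices are hypothetical, not taken from the study:

```python
def rpn(severity, occurrence, detection):
    """FMEA Risk Priority Number: RPN = S * O * D (each typically rated 1-10)."""
    return severity * occurrence * detection

def mpi(rpn_value, risk_index):
    """Priority Maintenance Index as described in the abstract: MPI = RPN * RI."""
    return rpn_value * risk_index

# Hypothetical ratings (S, O, D, RI) for two plant components, for illustration
components = {"boiler feed pump": (8, 4, 6, 1.5), "cooling fan": (4, 3, 3, 1.0)}

# Rank components by MPI, highest priority first
ranked = sorted(
    ((name, mpi(rpn(s, o, d), ri)) for name, (s, o, d, ri) in components.items()),
    key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: MPI = {score}")
```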

Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT

Procedia PDF Downloads 778
1069 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects that are crucial for the realistic modeling of DNA-PEI polyplexes, such as the treatment of electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern.
The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes in DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
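The Boltzmann inversion step mentioned above can be sketched in its direct form, U(r) = -k_B T ln g(r), which turns a distribution sampled in AA simulations into a CG pair potential (iterative refinement, as used in practice, is omitted here). The radial distribution values below are made up for illustration:

```python
import numpy as np

KB_T = 2.494  # k_B * T * N_A in kJ/mol at ~300 K

def boltzmann_inversion(g_r):
    """Direct Boltzmann inversion: U(r) = -kT ln g(r).
    Derives a CG pair potential from a sampled distribution; g(r) = 0
    (never-visited separations) maps to an infinite repulsion."""
    g = np.asarray(g_r, dtype=float)
    u = np.full_like(g, np.inf)
    mask = g > 0
    u[mask] = -KB_T * np.log(g[mask])
    return u

# Hypothetical RDF with a single solvation peak, for illustration only
r = np.linspace(0.3, 1.0, 8)                      # separations in nm
g = np.array([0.0, 0.4, 1.8, 1.2, 0.9, 1.0, 1.0, 1.0])
print(boltzmann_inversion(g).round(2))            # kJ/mol per r bin
```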

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 103
1068 The Relationship of Aromatase Activity and Being Very Overweight in East Indian Women with or Without Polycystic Ovary Disease

Authors: Dipanshu Sur, Ratnabali Chakravorty, Rimi Pal, Siddhartha Chatterjee, Joyshree Chaterjee, Amal Mallik

Abstract:

Background: Women with polycystic ovary disease (PCOD) frequently suffer from metabolic disturbances. PCOD is a common ovulatory disorder in young women, which affects 5-10% of the population and results in infertility due to anovulation. Importantly, aromatase in ovarian granulosa and luteinized granulosa cells plays an important role for women of reproductive age. Generation and metabolism of androgen is directly related to aromatase activity. The E2/T ratio provides important information about aromatase activity because conversion of androgens to estrogens is mediated by CYP19, suggesting that the E2/T ratio may be a direct marker of aromatase activity. The nature of the interaction between ovarian aromatase activity and PCOD in women has been controversial, and the impact of weight gain on aromatase activity as well as E2 levels is unknown. Aim: The objective of this study was to investigate the association and relation between aromatase activity and levels of body mass index (BMI) from a reproductive hormone perspective in a group of women with or without PCOD. Methods: We designed a cohort study which included 200 individuals. It enrolled 100 cases of PCOD based on 2006 Rotterdam criteria and 100 ovulatory normal- non PCOD, healthy, age-matched controls. Plasma sex hormones viz. estradiol (E2), testosterone (T), follicle stimulating hormone (FSH), and luteinizing hormone (LH) were measured by ELISA on the second day of the menstrual cycle, together with BMI and E2/T were calculated. Aromatase activity in PCOD patients with different BMI, T and E2 levels were compared. Results: PCOD patients showed significantly increased levels of BMI, E2 (P=0.004), T and LH, while their E2/T (P= <0.001), FSH and FSH/LH values were decreased compared with the control group. Higher E2 levels correlated with a relatively enhanced E2/T as well as T and LH levels but reduced BMI, FSH and FSH/LH levels in women with PCOD. 
Hyperandrogenic PCOD patients had increased E2 levels, but their aromatase activity was markedly inhibited, independently of their BMI values. Conclusions: We found a significant decrease in ovarian aromatase activity in women with PCOD compared to controls, and this decrease was independent of BMI. Enhancing aromatase activity may become an optimized strategy for developing therapies for women with PCOD, especially those with obesity.
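The E2/T ratio used above as an aromatase-activity marker is simple arithmetic; a minimal sketch, with hypothetical hormone values and an assumed pg/mL-vs-ng/mL unit convention (not the study's data):

```python
# Hedged sketch: E2/T as a proxy for aromatase activity. The values and the
# pg/mL (E2) vs ng/mL (T) unit convention are hypothetical assumptions.

def e2_t_ratio(e2_pg_ml, t_ng_ml):
    """Unitless E2/T ratio; testosterone is converted to pg/mL first."""
    return e2_pg_ml / (t_ng_ml * 1000.0)

# Hypothetical day-2 measurements for one PCOD patient and one control
pcod_ratio = e2_t_ratio(e2_pg_ml=55.0, t_ng_ml=0.8)
control_ratio = e2_t_ratio(e2_pg_ml=40.0, t_ng_ml=0.3)
# A lower E2/T in the PCOD patient is consistent with reduced aromatase activity
assert pcod_ratio < control_ratio
```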

Keywords: aromatase activity, polycystic ovary disease, obesity, body mass index

Procedia PDF Downloads 206
1067 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population

Authors: Michael Zaidman, Naum Simanovsky

Abstract:

Introduction. Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of a co-existing foot deformity may be required. The objective of the study was to analyze the results of combining tarsal coalition resection with arthroereisis. Patients and methods. We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between the years 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet), a calcaneonavicular coalition was diagnosed; two patients (two feet) had a talocalcaneal coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, the lateral talar-first metatarsal (Meary's) angle, and the talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results. The mean follow-up was 28 months. The mean talonavicular coverage angle improved from 17.75° preoperatively to 5.4° postoperatively. The calcaneal pitch angle improved from a mean of 6.8° to 16.4°. The mean preoperative Meary's angle of -11.3° improved to a mean of 2.8°. The mean AOFAS score improved from 54.7 points preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet, it was good (80-90 points). Six patients (ten feet) clearly improved their subtalar range of motion. Conclusion. For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and clinical and functional improvement with good patient satisfaction, and is likely to be more effective than either procedure in isolation.

Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome

Procedia PDF Downloads 50
1066 Florida’s Groundwater and Surface Water System Reliability in Terms of Climate Change and Sea-Level Rise

Authors: Rahman Davtalab

Abstract:

Florida is one of the most vulnerable states to natural disasters among the 50 states of the USA. The state is exposed to tropical storms, hurricanes, storm surge, landslides, and other hazards. Beyond these natural phenomena, global warming, sea-level rise, and other anthropogenic environmental changes create a very complicated and unpredictable system for decision-makers. In this study, we highlight the effects of climate change and sea-level rise on surface water and groundwater systems at three different geographical locations in Florida: the Main Canal of Jacksonville Beach (in the northeast of Florida, adjacent to the Atlantic Ocean); Grace Lake in central Florida, far from the surrounding coastline; and Mc Dill, adjacent to Tampa Bay and the Gulf of Mexico. An integrated hydrologic and hydraulic model was developed and simulated for all three cases, covering surface water, groundwater, or a combination of both. For the Main Canal-Jacksonville Beach case, the investigation showed that a 76 cm sea-level rise by the 2060 time horizon could increase the flow velocity of the tide cycle at the main canal's outlet and headwater. This case also revealed how sea-level rise could change the tide duration, potentially affecting the coastal ecosystem. As expected, sea-level rise can raise the groundwater level. Therefore, for the Mc Dill case, the effect of groundwater rise on soil storage and the performance of stormwater retention ponds was investigated. The study showed that sea-level rise increased the pond's seasonal high water by up to 40 cm by the 2060 time horizon. The reliability of the retention pond drops from 99% under the current condition to 54% in the future. The results also showed that the retention pond could not retain and infiltrate the designed treatment volume within 72 hours, which is a significant indication of increasing pollutants in the future.
The Grace Lake case study investigates the effects of climate change on groundwater recharge. Using dynamically downscaled climate data, this study showed that groundwater recharge can decline by up to 24% by the mid-21st century.
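The reliability figures quoted above can be read as the fraction of simulated time the pond stage stays at or below its treatment control elevation. A minimal sketch of that reading, with hypothetical stage data and an assumed 0.4 m groundwater-driven stage rise (not the study's model output):

```python
# Hedged sketch: one plausible definition of retention-pond reliability.
# All stages, the control elevation, and the 0.4 m rise are hypothetical.

def pond_reliability(stages_m, control_elevation_m):
    """Fraction of time steps the pond stage is at or below the control elevation."""
    ok = sum(1 for s in stages_m if s <= control_elevation_m)
    return ok / len(stages_m)

current_stages = [1.0, 1.1, 1.3, 1.2, 1.0, 1.4, 1.1, 1.2]  # m (hypothetical)
control_elev = 1.45                                         # m (hypothetical)

r_now = pond_reliability(current_stages, control_elev)
# Sea-level rise lifts the seasonal high groundwater, and pond stages with it:
r_future = pond_reliability([s + 0.4 for s in current_stages], control_elev)
assert r_future <= r_now
```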

Keywords: groundwater, surface water, Florida, retention pond, tide, sea level rise

Procedia PDF Downloads 170
1065 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models and reliably forecasts season-based variations in air quality levels. The top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
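The averaging of the three top-performing models described above amounts to soft voting over class probabilities. A minimal sketch, with hypothetical probabilities standing in for fitted logistic regression, random forest, and neural network models:

```python
# Hedged sketch of soft-voting ensembling: average per-class probabilities
# across models and pick the class with the highest mean. The three models'
# outputs below are hypothetical stand-ins, not the study's fitted models.

def soft_vote(prob_lists):
    """Average per-class probabilities across models for one sample."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]

# Hypothetical probabilities for air quality classes (Good, Moderate, Unhealthy)
logreg = [0.70, 0.20, 0.10]
forest = [0.60, 0.30, 0.10]
neural = [0.80, 0.15, 0.05]

avg = soft_vote([logreg, forest, neural])
prediction = max(range(len(avg)), key=avg.__getitem__)  # index of highest mean probability
```

Averaging smooths out the idiosyncratic errors of any single model, which is the stated rationale for combining the three top performers.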

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 103
1064 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS are employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Since it improves network throughput, spatial diversity is one efficient signaling scheme; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus, the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the relay-node operations at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.
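The idea of moving the conjugation and time-reversal work away from the relays can be illustrated with a classic orthogonal space-time block (an Alamouti-style construction, used here purely as an illustration and not as the paper's exact combining rule): because the rows of the block are orthogonal, relay nodes can simply amplify-and-forward without any decoding.

```python
# Hedged sketch: an Alamouti-style 2x2 block built entirely at the source.
# This is an illustrative stand-in for the paper's source-side combining.

def stbc_block(s1, s2):
    """Two transmit blocks: [s1, s2] followed by [-conj(s2), conj(s1)]."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def row_inner(a, b):
    """Hermitian inner product of two rows."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

C = stbc_block(1 + 1j, 2 - 1j)
# The rows are orthogonal, so no decoding is needed at the relay nodes
assert abs(row_inner(C[0], C[1])) < 1e-12
```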

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 269
1063 Development of a Two-Step 'Green' Process for (-) Ambrafuran Production

Authors: Lucia Steenkamp, Chris V. D. Westhuyzen, Kgama Mathiba

Abstract:

Ambergris, and more specifically its oxidation product (–)-ambrafuran, is a scarce, valuable, and sought-after perfumery ingredient. The material is used as a fixative agent to stabilise perfumes in formulations by reducing the evaporation rate of volatile substances. Ambergris is a metabolic product of the sperm whale (Physeter macrocephalus L.), resulting from intestinal irritation. Chemically, (–)-ambrafuran is produced from the natural product sclareol in eight synthetic steps, using harsh and often toxic chemicals in the process. An overall yield of no more than 76% can be achieved in some routes, but generally it is lower. A new 'green' route has been developed in our laboratory in which sclareol, extracted from the Clary sage plant, is converted to (–)-ambrafuran in two steps with an overall yield in excess of 80%. The first step uses a microorganism, Hyphozyma roseoniger, to bioconvert sclareol to an intermediate diol at substrate concentrations up to 50 g/L. The yield varies between 67% and 90%, depending on the substrate concentration used. The purity of the diol product is 95%, and the diol is used without further purification in the next step. The intermediate diol is then cyclodehydrated to the final product (–)-ambrafuran using a zeolite, which is not harmful to the environment and is readily recycled. The yield of this step is 96%, and following a single recrystallization, the purity of the product is >99.5%. A preliminary LC-MS study of the bioconversion identified several intermediates produced in the fermentation broth under oxygen-restricted conditions. Initially, a short-lived ketone is produced in equilibrium with a more stable pyranol, a key intermediate in the process. The latter is oxidised under Norrish type I cleavage conditions to yield an acetate, which is hydrolysed either chemically or by lipase action to afford the primary fermentation product, an intermediate diol.
All the intermediates identified point to CYP450 as the likely key enzyme(s) in the mechanism. This invention is an exceptional example of how the power of biocatalysis, combined with a mild, benign chemical step, can be deployed to replace a total chemical synthesis of a specific chiral antipode of a commercially relevant material.

Keywords: ambrafuran, biocatalysis, fragrance, microorganism

Procedia PDF Downloads 193
1062 Effects of Branched-Chain Amino Acid Supplementation on Sarcopenic Patients with Liver Cirrhosis

Authors: Deepak Nathiya, Ramesh Roop Rai, Pratima Singh, Preeti Raj, Supriya Suman, Balvir Singh Tomar

Abstract:

Background: Sarcopenia is a catabolic state in liver cirrhosis (LC), accelerated by the breakdown of skeletal muscle to release amino acids, which adversely affects survival, health-related quality of life, and response to any underlying disease. The primary objective of the study was to investigate the long-term effect of branched-chain amino acid (BCAA) supplementation on parameters associated with improved prognosis in sarcopenic patients with LC, as well as to evaluate its impact on cirrhosis-related events. Methods: We carried out a 24-week, single-center, randomized, open-label, controlled, two-cohort parallel-group intervention trial comparing the efficacy of BCAA against lactoalbumin (L-ALB) in 106 sarcopenic liver cirrhotics. The BCAA (intervention) group was treated with 7.2 g of BCAA per day, whereas the L-ALB group was given 6.3 g of L-albumin. The primary outcome was to assess the impact of BCAA on the parameters of sarcopenia: muscle mass, muscle strength, and physical performance. The secondary outcomes were combined survival and maintenance of liver function, along with changes in laboratory and clinical markers, over the six-month duration. Results: Treatment with BCAA led to significant improvement in the sarcopenic parameters: muscle strength, muscle function, and muscle mass. Cirrhosis-related complications occurred less frequently, and cumulative event-free survival was higher, in the BCAA group than in the L-ALB group. Prognostic markers also improved significantly in the study. Conclusion: This clinical trial demonstrated that long-term BCAA supplementation improved sarcopenia and prognostic markers in patients with advanced liver cirrhosis.

Keywords: sarcopenia, liver cirrhosis, BCAA, quality of life

Procedia PDF Downloads 123
1061 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: The aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially small and medium-sized enterprises (SMEs), plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six. The second round contains the feedback from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the best strategy among Basic SC, Lean, Agile, and Leagile SC, tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: The problematic issue facing many companies is deciding which SC model or strategy to incorporate, given the many models and definitions developed over the years.
This research simplifies the choice by putting most definitions in a template and most models in the matrix of Era Six. The research is original in that the division of SC into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and heuristic learning in the AMM of Era Seven provide a synergy of tools not combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.
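The Fuzzy-Delphi rounds described above are typically scored by mapping Likert ratings to triangular fuzzy numbers, aggregating across the expert panel, and defuzzifying the result. A minimal sketch under assumed scale values and an assumed consensus threshold (neither taken from the study):

```python
# Hedged Fuzzy-Delphi sketch. The Likert-to-fuzzy scale, the aggregation
# rule (min lower, mean mode, max upper), the centroid defuzzification and
# the 0.7 acceptance threshold are all common but assumed conventions.

SCALE = {  # Likert rating -> triangular fuzzy number (l, m, u)
    1: (0.0, 0.0, 0.25),
    2: (0.0, 0.25, 0.5),
    3: (0.25, 0.5, 0.75),
    4: (0.5, 0.75, 1.0),
    5: (0.75, 1.0, 1.0),
}

def aggregate(ratings):
    """Panel aggregate: min of lowers, mean of modes, max of uppers."""
    fuzz = [SCALE[r] for r in ratings]
    l = min(f[0] for f in fuzz)
    m = sum(f[1] for f in fuzz) / len(fuzz)
    u = max(f[2] for f in fuzz)
    return l, m, u

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

ratings = [4, 5, 4, 3, 5]          # hypothetical panel responses to one statement
score = defuzzify(aggregate(ratings))
accept = score >= 0.7              # consensus threshold (assumption)
```

Statements accepted in one round would then seed the fuzzy rules fed back to the panel in the next round.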

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 377
1060 Processing and Characterization of Aluminum Matrix Composite Reinforced with Amorphous Zr₃₇.₅Cu₁₈.₆₇Al₄₃.₉₈ Phase

Authors: P. Abachi, S. Karami, K. Purazrang

Abstract:

Amorphous reinforcements (metallic glasses) can be considered promising options for reinforcing lightweight aluminum and its alloys. By using the proper type of reinforcement, one can overcome drawbacks such as interfacial de-cohesion and the undesirable reactions that can occur at the ceramic particle-metallic matrix interface. In this work, a Zr-based amorphous phase was produced via mechanical milling of elemental powders. Based on the Miedema semi-empirical model and diagrams of formation enthalpies and/or Gibbs free energies of the Zr-Cu amorphous phase in comparison with the crystalline phase, the glass formability range was predicted. The composite was produced from a powder mixture of aluminum and metallic glass by spark plasma sintering (SPS) at a temperature slightly above the glass transition temperature (Tg) of the metallic glass particles. The selected temperature and rapid sintering route were suitable for consolidating the aluminum matrix without crystallization of the amorphous phase. To characterize amorphous phase formation, X-ray diffraction (XRD) phase analyses were performed on the powder mixture after specified intervals of milling. The microstructure of the composite was studied by optical and scanning electron microscopy (SEM). Uniaxial compression tests were carried out on composite specimens 4 mm long with a cross-section of 2 × 2 mm². The micrographs indicated an appropriate reinforcement distribution in the metallic matrix. The comparison of the compressive stress-strain curves of the consolidated composite and the non-reinforced Al matrix alloy showed that the enhancement of yield strength and mechanical strength is combined with an appreciable plastic strain at fracture. It can be concluded that metallic glasses (amorphous phases) are alternative reinforcement materials for lightweight metal matrix composites, capable of producing high strength and adequate ductility.
However, this comes at the expense of a minor increase in density.

Keywords: aluminum matrix composite, amorphous phase, mechanical alloying, spark plasma sintering

Procedia PDF Downloads 346
1059 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India

Authors: Zafar Beg, Kumar Gaurav

Abstract:

The lost Saraswati is described as a large perennial river that was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River, and the demise of the civilization is considered to be due to the desiccation of the river. Today, in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River carried the combined flow, or some part of it, from the Yamuna, Sutlej, and Ghaggar catchments. We first established regime relationships between drainage area and channel width and between catchment area and discharge for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas.
Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna, and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we sum the discharge and width estimates obtained from the individual catchments to estimate the paleo-discharge and paleo-width, respectively, of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
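The regime relationship described above is a power law fitted in log-log space. A minimal sketch with illustrative numbers (not the study's 29-river dataset or its palaeo-catchment areas):

```python
# Hedged sketch of the regime-curve approach: fit Q = a * A**b (a straight
# line in log-log space) to area-discharge pairs from modern rivers, then
# apply the curve to the contributing palaeo-catchments and sum. All
# numerical values below are illustrative assumptions.
import math

def fit_power_law(areas_km2, discharges_m3s):
    """Least-squares fit of log Q = log a + b * log A."""
    xs = [math.log(a) for a in areas_km2]
    ys = [math.log(q) for q in discharges_m3s]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Hypothetical modern rivers: catchment area (km^2) vs mean discharge (m^3/s)
areas = [1e3, 1e4, 1e5]
discharges = [50.0, 300.0, 1800.0]
a, b = fit_power_law(areas, discharges)

# Palaeo-Saraswati: sum the contributions of the three catchments
# (Sutlej, Yamuna, Ghaggar; areas are hypothetical placeholders)
palaeo_q = sum(a * A ** b for A in (6e4, 1e4, 8e3))
```

The same fitting step, applied to width-area pairs, yields the paleo-width estimate.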

Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River

Procedia PDF Downloads 167
1058 Comparison of Regional and Local Indwelling Catheter Techniques to Prolong Analgesia in Total Knee Arthroplasty Procedures: Continuous Peripheral Nerve Block and Continuous Periarticular Infiltration

Authors: Jared Cheves, Amanda DeChent, Joyce Pan

Abstract:

Total knee arthroplasties (TKAs) are among the most common, but most painful, surgical procedures performed in the United States. Currently, the gold standard for postoperative pain management is the utilization of opioids. However, in the wake of the opioid epidemic, the healthcare system is attempting to reduce opioid consumption by trialing innovative opioid-sparing analgesic techniques such as continuous peripheral nerve blocks (CPNB) and continuous periarticular infiltration (CPAI). The alleviation of pain, particularly during the first 72 hours postoperatively, is of utmost importance due to its association with delayed recovery, impaired rehabilitation, immunosuppression, the development of chronic pain, the development of rebound pain, and decreased patient satisfaction. While both CPNB and CPAI are in use today, there is limited evidence comparing the two to the current standard of care or to each other. An extensive literature review was performed to explore the safety profiles and effectiveness of CPNB and CPAI in reducing reported pain scores and decreasing opioid consumption. The literature revealed that the use of CPNB contributed to lower pain scores and decreased opioid use when compared to opioid-only control groups. Additionally, CPAI did not improve pain scores or decrease opioid consumption when combined with a multimodal analgesic (MMA) regimen. When comparing CPNB and CPAI to each other, neither consistently lowered pain scores to a greater degree, but the literature indicates that CPNB decreased opioid consumption more than CPAI. More research is needed to further cement the efficacy of CPNB and CPAI as standard components of MMA in TKA procedures. In addition, future research could focus on novel catheter-free applications to reduce the complications of continuous catheter analgesia.

Keywords: total knee arthroplasty, continuous peripheral nerve blocks, continuous periarticular infiltration, opioid, multimodal analgesia

Procedia PDF Downloads 74
1057 Approaching the Spatial Multi-Objective Land Use Planning Problems at Mountain Areas by a Hybrid Meta-Heuristic Optimization Technique

Authors: Konstantinos Tolidis

Abstract:

Mountains are amongst the most fragile environments in the world. The world's mountain areas cover 24% of the Earth's land surface and are home to 12% of the global population, and a further 14% of the global population is estimated to live in their vicinity. As urbanization continues to increase worldwide, mountains are also key centers for recreation and tourism; their attraction is often heightened by their remarkably high levels of biodiversity. Because the features of mountain areas vary spatially (degree of development, human geography, socio-economic reality, relations of dependency on and interaction with other regions), spatial planning in these areas is a crucial process for preserving the natural, cultural, and human environment and is one of the major processes of an integrated spatial policy. This research focuses on the spatial decision problem of land use allocation optimization, a common planning problem in mountain areas. Such decisions must be made not only on what to do and how much to do, but also on where to do it, which adds a whole extra class of decision variables to the problem once spatial optimization is considered. The utility of optimization as a normative tool for spatial problems is widely recognized. However, it is very difficult for planners to quantify the weights of the objectives, especially when these relate to mountain areas. Furthermore, land use allocation optimization problems in mountain areas must be addressed by taking into account not only the general development objectives but also the spatial objectives (e.g., compactness, compatibility, and accessibility). Therefore, the main research objective was to approach the land use allocation problem by utilizing a hybrid meta-heuristic optimization technique tailored to the spatial characteristics of mountain areas.
The results indicate that the proposed methodological approach is very promising and useful, both for generating land use alternatives for further consideration in land use allocation decision-making and for supporting spatial management plans in mountain areas.
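The abstract does not disclose the hybrid meta-heuristic itself, so the following is only an illustrative sketch of the general idea: a simulated-annealing-style search that allocates land uses on a small grid, trading a hypothetical per-cell suitability score against the compactness spatial objective mentioned above. The grid size, number of uses, suitability values and objective weight are all assumptions for demonstration.

```python
import math
import random

random.seed(0)
SIZE, USES = 6, 3  # illustrative 6x6 grid with 3 candidate land uses

# Hypothetical per-cell suitability for each use (a stand-in for terrain,
# accessibility and development criteria).
suit = [[[random.random() for _ in range(USES)] for _ in range(SIZE)]
        for _ in range(SIZE)]

def compactness(grid):
    """Number of adjacent cell pairs sharing the same use (higher = more compact)."""
    same = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if c + 1 < SIZE and grid[r][c] == grid[r][c + 1]:
                same += 1
            if r + 1 < SIZE and grid[r][c] == grid[r + 1][c]:
                same += 1
    return same

def score(grid, w_compact=0.1):
    """Weighted sum of total suitability and the compactness spatial objective."""
    s = sum(suit[r][c][grid[r][c]] for r in range(SIZE) for c in range(SIZE))
    return s + w_compact * compactness(grid)

grid = [[random.randrange(USES) for _ in range(SIZE)] for _ in range(SIZE)]
cur, temp = score(grid), 1.0
for _ in range(5000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    old = grid[r][c]
    grid[r][c] = random.randrange(USES)
    new = score(grid)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the temperature cools.
    if new >= cur or random.random() < math.exp((new - cur) / temp):
        cur = new
    else:
        grid[r][c] = old
    temp = max(temp * 0.999, 1e-3)
```

Runs of such a search with different objective weights are one common way of generating the alternative allocations a planner can then compare.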

Keywords: multiobjective land use allocation, mountain areas, spatial planning, spatial decision making, meta-heuristic methods

Procedia PDF Downloads 316
1056 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control

Authors: Sung-Jun Yoo, Kazuhide Ito

Abstract:

In this study, a comprehensive computer simulated person (CSP) that integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway) was applied to the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model, for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only contaminant inhalation but also the constant interaction in contaminant transfer between indoor spaces, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone for health risk assessment. This study focused on the use of the CSP as an indoor air/thermal quality sensor, i.e., the application of the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room measuring 3 m × 3 m × 3 m. Formaldehyde generated from the floor material was assumed as the target contaminant, and flow field, sensible/latent heat and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.
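The feedback idea can be illustrated far more simply than the full CFD/CSP coupling: a well-mixed-room mass balance in which the predicted exposure acts as the sensor signal, and the ventilation rate is stepped up until a floor-emitted contaminant stays below a target. The emission rate and concentration limit below are assumed values, not figures from the study.

```python
V = 27.0        # room volume, m^3 (the 3 m x 3 m x 3 m room from the abstract)
E = 0.5         # assumed formaldehyde emission rate, mg/h
TARGET = 0.08   # assumed concentration limit, mg/m^3

def steady_concentration(Q):
    """Steady-state concentration (mg/m^3) at ventilation rate Q (m^3/h):
    at steady state, emission E balances removal Q*C, so C = E/Q."""
    return E / Q

# Step the ventilation rate up until the predicted exposure is acceptable,
# mimicking a controller driven by the simulated person's exposure estimate.
Q = 1.0
while steady_concentration(Q) > TARGET:
    Q += 0.5

ach = Q / V  # resulting air change rate, 1/h
```

The comprehensive model replaces the crude `C = E/Q` balance with spatially resolved CFD and the CSP's inhaled dose, but the control logic it feeds is of this form.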

Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality

Procedia PDF Downloads 350
1055 Study of the Relationship between the Civil Engineering Parameters and the Floating of Buoy Model Which Made from Expanded Polystyrene-Mortar

Authors: Panarat Saengpanya

Abstract:

There were five objectives in this study: the study of housing types in water environments, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between civil engineering parameters and the floating of the buoy. The buoy samples were made from Expanded Polystyrene (EPS) covered by a 5 mm thickness of mortar, with equal thickness on each side. Specimens were 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships was ASTM C 109, to provide comparative results. The results identified three types of housing in water environments: stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the mortar strength was found to be 72 times that of EPS. One advantage of a composite is that two or more materials can be combined to take advantage of the good characteristics of each. The strength of the buoy is influenced by the mortar, while the floating is influenced by the EPS. Results showed that the buoy samples compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength reduced while the strain continued. It was observed that the failure strength reduced with increasing total volume of the samples. For buoy samples with the same plan area, an increase in failure strength was found when the height was increased. The results showed the relationships among five parameters: the floating level, the bearing capacity, the volume, the height and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength. The total volume and the unit weight were related to the bearing capacity of the buoy.
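The flotation side of those parameter relationships follows Archimedes’ principle, which can be sketched as below. The unit weight, height and plan area used in the example are illustrative assumptions, not the paper’s test values.

```python
WATER_UNIT_WEIGHT = 9.81  # kN/m^3 (fresh water)

def draft(unit_weight, height):
    """Submerged depth (m) of a prismatic buoy floating upright; it floats
    only while unit_weight < WATER_UNIT_WEIGHT."""
    return height * unit_weight / WATER_UNIT_WEIGHT

def spare_bearing_capacity(unit_weight, height, plan_area):
    """Extra load (kN) the buoy can carry before the deck reaches the water:
    freeboard times plan area times the unit weight of water."""
    freeboard = height - draft(unit_weight, height)
    return freeboard * plan_area * WATER_UNIT_WEIGHT

# Assumed EPS-mortar composite: unit weight 3 kN/m^3, 0.5 m high, 1 m^2 plan.
d = draft(3.0, 0.5)                          # floating level
cap = spare_bearing_capacity(3.0, 0.5, 1.0)  # bearing-capacity reserve
```

This makes the reported relationships plausible on first principles: a lower unit weight reduces the draft, and a larger height or volume at a given unit weight increases the load reserve before submersion.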

Keywords: floating house, buoy, floating structure, EPS

Procedia PDF Downloads 126
1054 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) method with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget required to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
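The Generalized Delta Rule named in the abstract is, at its core, gradient descent with a learning rate plus a momentum term carrying over part of the previous update. The toy sketch below applies it to a hypothetical linear pavement-deterioration trend (condition vs. age); it is far simpler than the authors’ multi-layer BPN, and the data, learning rate and momentum values are assumptions for illustration.

```python
import random

random.seed(1)
# Synthetic noisy observations: condition index falling ~1.5 points per year.
data = [(age, 100.0 - 1.5 * age + random.gauss(0, 1.0)) for age in range(30)]

w, b = 0.0, 0.0            # slope and intercept of the fitted trend
lr, momentum = 0.001, 0.9  # GDR hyperparameters (assumed values)
dw_prev = db_prev = 0.0
for _ in range(3000):
    gw = gb = 0.0
    for age, cond in data:
        err = (w * age + b) - cond   # prediction error on one observation
        gw += err * age
        gb += err
    gw /= len(data)
    gb /= len(data)
    # GDR update: step = -learning_rate * gradient + momentum * previous step
    dw = -lr * gw + momentum * dw_prev
    db = -lr * gb + momentum * db_prev
    w, b = w + dw, b + db
    dw_prev, db_prev = dw, db
```

The averaging over many noisy observations is also what gives the fitted model some robustness to measurement error, the property the study relies on at full network scale.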

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 446
1053 Efficiency Validation of Hybrid Geothermal and Radiant Cooling System Implementation in Hot and Humid Climate Houses of Saudi Arabia

Authors: Jamil Hijazi, Stirling Howieson

Abstract:

Over one-quarter of the Kingdom of Saudi Arabia’s total oil production (2.8 million barrels a day) is used for electricity generation. The built environment is estimated to consume 77% of the total energy production; of this amount, air conditioning systems consume about 80%. Apart from considerations surrounding global warming and CO2 production, it has to be recognised that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start to consider a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing carbon emissions while providing all-year-round thermal comfort in a typical Saudi Arabian urban housing block. At the outset, air and soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (Design Builder) that utilised the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (a typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful ‘ecological design’ combined with hybrid radiant and ground pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable deprivation of amenity.

Keywords: energy efficiency, ground pipe, hybrid cooling, radiative cooling, thermal comfort

Procedia PDF Downloads 243
1052 Gamma Irradiated Sodium Alginate and Phosphorus Fertilizer Enhances Seed Trigonelline Content, Biochemical Parameters and Yield Attributes of Fenugreek (Trigonella foenum-graecum L.)

Authors: Tariq Ahmad Dar, Moinuddin, M. Masroor A. Khan

Abstract:

There is a considerable need to enhance the content and yield of active constituents of medicinal plants, keeping in view their massive demand worldwide. Different strategies have been employed to enhance the active constituents of medicinal plants, and the use of phytohormones has proved effective in this regard. Gamma-irradiated sodium alginate (ISA) is known to elicit an array of plant defense responses and biological activities in plants. Considering its medicinal importance, a pot experiment was conducted to explore the effect of ISA and phosphorus on the growth, yield and quality of fenugreek (Trigonella foenum-graecum L.). ISA spray treatments (0, 40, 80 and 120 mg L-1) were applied alone and in combination with 40 kg P ha-1 (P40). Crop performance was assessed in terms of plant growth characteristics, physiological attributes, seed yield and the content of seed trigonelline. Of the ten treatments, P40 + 80 mg L−1 of ISA proved the best. The results showed that foliar spray of ISA, alone or in combination with P40, augmented the vegetative growth, enzymatic activities, trigonelline content, trigonelline yield and economic yield of fenugreek. Application of 80 mg L−1 of ISA with P40 gave the best results for almost all the parameters studied compared to the control or to 80 mg L−1 of ISA applied alone. This treatment increased the total content of chlorophyll, carotenoids, leaf N, P and K, and trigonelline compared to the control by 24.85 and 27.40%, 15 and 23.52%, 18.70 and 16.84%, 15.88 and 18.92%, and 12 and 14.44%, at 60 and 90 DAS respectively. The combined application of 80 mg L−1 of ISA with P40 resulted in the maximum increases in seed yield, trigonelline content and trigonelline yield, by 146, 34 and 232.41%, respectively, over the control. 
Gel permeation chromatography revealed the formation of low-molecular-weight fractions in the ISA samples, containing oligomers of molecular weight even below 20,000, which might be responsible for the plant growth promotion observed in this study. Trigonelline content was determined by reverse-phase high performance liquid chromatography (HPLC) with a C-18 column.

Keywords: gamma-irradiated sodium alginate, phosphorus, gel permeation chromatography, HPLC, trigonelline content, yield

Procedia PDF Downloads 308