Search results for: fractures of transient plasticity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 970


130 The Effect of Subsurface Dam on Saltwater Intrusion in Heterogeneous Coastal Aquifers

Authors: Antoifi Abdoulhalik, Ashraf Ahmed

Abstract:

Saltwater intrusion (SWI) in coastal aquifers has become a growing threat for many countries around the world. While various control measures have been suggested to mitigate SWI, the construction of subsurface physical barriers remains one of the most effective solutions to this problem. In this work, we used laboratory experiments and numerical simulations to investigate the effectiveness of subsurface dams in heterogeneous layered coastal aquifers with different layering patterns. Four different cases were investigated, including a homogeneous case (case H) and three heterogeneous cases in which a low-permeability (K) layer was set in the top part of the system (case LH), in the middle part of the system (case HLH), and in the bottom part of the system (case HL). An automated image analysis technique was implemented to quantify the main SWI parameters at high spatial and temporal resolution. The method also provides transient salt concentration maps, allowing for the first time clear visualization of the spillage of saline water over the dam (advancing wedge condition) as well as the flushing of residual saline water from the freshwater area (receding wedge condition). The SEAWAT code was adopted for the numerical simulations. The results show that the presence of an overlying layer of low permeability enhanced the ability of the dam to retain the saline water. In such conditions, the rate of saline water spillage and inland extension may be considerably reduced. Conversely, the presence of an underlying low-K layer led to a faster increase of saltwater volume on the seaward side of the wall, thereby considerably facilitating the spillage. The results showed that complete removal of the residual saline water eventually occurred in all the investigated scenarios, with a rate of removal strongly affected by the hydraulic conductivity of the lower part of the aquifer.
The data showed that the addition of the underlying low-K layer in case HL caused the complete flushing to take almost twice as long as in the homogeneous scenario.
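As background for the wedge dynamics discussed above, the classical Ghyben-Herzberg relation gives a first-order, sharp-interface estimate of where the saline wedge sits; the study itself uses the variable-density SEAWAT code, so the following is only an illustrative sketch with assumed densities:

```python
# Illustrative sharp-interface estimate of the fresh/saline interface
# depth using the Ghyben-Herzberg relation. This is classical background,
# not the SEAWAT variable-density model used in the study.

RHO_FRESH = 1000.0  # freshwater density, kg/m^3 (assumed)
RHO_SALT = 1025.0   # seawater density, kg/m^3 (assumed)

def ghyben_herzberg_depth(freshwater_head_m: float) -> float:
    """Depth of the interface below sea level for a given freshwater
    head above sea level: z = rho_f / (rho_s - rho_f) * h."""
    return RHO_FRESH / (RHO_SALT - RHO_FRESH) * freshwater_head_m

# With these densities the factor is 40, so a 0.5 m head gives ~20 m.
print(ghyben_herzberg_depth(0.5))  # 20.0
```

The factor of roughly 40 explains why small changes in inland freshwater head (for example, those induced by a subsurface dam) can shift the wedge position substantially.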

Keywords: heterogeneous coastal aquifers, laboratory experiments, physical barriers, seawater intrusion control

Procedia PDF Downloads 230
129 Effect of Sulphur Concentration on Microbial Population and Performance of a Methane Biofilter

Authors: Sonya Barzgar, J. Patrick, A. Hettiaratchi

Abstract:

Methane (CH₄) is the second largest contributor to the greenhouse effect, with a global warming potential (GWP) of 34 relative to carbon dioxide (CO₂) over the 100-year horizon, so there is a growing interest in reducing the emissions of this gas. Methane biofiltration (MBF) is a cost-effective technology for reducing low-volume point source emissions of methane. In this technique, microbial oxidation of methane is carried out by methane-oxidizing bacteria (methanotrophs), which use methane as a carbon and energy source. MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting methane to carbon dioxide (CO₂) and water (H₂O). Even though the biofiltration technique has been shown to be an efficient, practical and viable technology, the design and operational parameters, as well as the relevant microbial processes, have not been investigated in depth. In particular, limited research has been done on the effects of sulphur on methane bio-oxidation. Since bacteria require a variety of nutrients for growth, to improve the performance of methane biofiltration it is important to establish the input quantities of nutrients to be provided to the biofilter, ensuring that nutrients are available to sustain the process. The study described in this paper was conducted with the aim of determining the influence of sulphur on methane elimination in a biofilter. In this study, a set of experimental measurements was carried out to explore how the conversion of elemental sulphur could affect methane oxidation in terms of methanotroph growth and system pH. Batch experiments with different concentrations of sulphur were performed while keeping the other parameters, i.e., moisture content, methane concentration, oxygen level and compost, at their optimum levels.
The study revealed the tolerable limit of sulphur that does not interfere with methane oxidation, as well as the particular sulphur concentration leading to the greatest methane elimination capacity. Owing to sulphur oxidation, the pH varies in a transient way, which affects microbial growth behavior. Methanotrophs are incapable of growth at pH values below 5.0 and are thus apparently unable to oxidize methane under such conditions. Herein, the pH for the optimal growth of methanotrophic bacteria is determined. Finally, methane concentration monitored over time in the presence of sulphur is also presented for laboratory-scale biofilters.
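The GWP figure quoted above implies a simple conversion from a methane emission to its CO₂-equivalent; a minimal sketch (the function name is illustrative):

```python
# Converting a methane emission to CO2-equivalents using the 100-year
# GWP of 34 cited in the abstract.

GWP_CH4_100YR = 34  # from the abstract (100-year horizon)

def co2_equivalent(methane_mass_kg: float) -> float:
    """CO2-equivalent mass of a methane emission over 100 years."""
    return methane_mass_kg * GWP_CH4_100YR

print(co2_equivalent(10.0))  # 340.0 kg CO2-eq
```

This is why oxidizing methane to CO₂ in a biofilter is a net climate win even though CO₂ is itself a greenhouse gas.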

Keywords: global warming, methane biofiltration (MBF), methane oxidation, methanotrophs, pH, sulphur

Procedia PDF Downloads 217
128 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar

Authors: Gary Peach, Furqan Hameed

Abstract:

Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the seabed, were also encountered during mining. In addition to pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electric resistivity tomography (ERT) and seismic reflection surveys were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system installed on the TBM. The Boring Electric Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to 3 tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken during tunnel face inspections and from excavated material produced by the TBM. The BEAM data was continuously monitored to check the variations in resistivity and percentage frequency effect (PFE) of the ground.
This system provided information about rock mass conditions, potential karst risk, and potential water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with greater confidence and less geotechnical risk. The approaches used to predict rock mass conditions in the Geotechnical Interpretative Report (GIR), the seismic reflection surveys, and the electric resistivity tomography (ERT) surveys were concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
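The percentage frequency effect (PFE) monitored alongside resistivity is conventionally defined, in induced-polarization surveying, as the relative drop in apparent resistivity between a low and a high measurement frequency; a minimal sketch of that standard definition (not the proprietary BEAM computation):

```python
# Standard induced-polarization definition of the percentage frequency
# effect (PFE): the relative decrease in apparent resistivity as the
# measurement frequency increases. Higher PFE suggests more polarizable
# (e.g. water-bearing, fractured) ground.

def percent_frequency_effect(rho_low_freq: float, rho_high_freq: float) -> float:
    """PFE = 100 * (rho_low - rho_high) / rho_high, resistivities in ohm-m."""
    return 100.0 * (rho_low_freq - rho_high_freq) / rho_high_freq

# Apparent resistivity falling from 105 to 100 ohm-m gives a PFE of 5%.
print(percent_frequency_effect(105.0, 100.0))  # 5.0
```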

Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey

Procedia PDF Downloads 211
127 An Unusual Case of Wrist Pain: Idiopathic Avascular Necrosis of the Scaphoid, Preiser’s Disease

Authors: Adae Amoako, Daniel Montero, Peter Murray, George Pujalte

Abstract:

We present a case of a 42-year-old, right-handed Caucasian male who presented to a medical orthopedics clinic with left wrist pain. The patient indicated that the pain had started two months prior to the visit. He could only remember helping a friend move furniture prior to the onset of pain. Examination of the left wrist showed limited extension compared to the right. There was clicking with flexion and extension of the wrist on the dorsal aspect. Mild tenderness was noticed over the distal radioulnar joint. There was ulnar and radial deviation on provocation. Initial 4-view x-rays of the left wrist showed mild radiocarpal and scapho-trapezium-trapezoid (ST-T) osteoarthritis, with subchondral cysts seen in the lunate and scaphoid, and no obvious fractures. The patient was initially put in a wrist brace, and diclofenac topical gel was prescribed for pain control, as the patient could not take non-steroidal anti-inflammatory drugs (NSAIDs) due to gastritis. Despite diclofenac topical gel use and bracing, symptoms remained, and a steroid injection with 1 mL of lidocaine and 10 mg of triamcinolone acetonide was performed under fluoroscopy. He obtained some relief, but after 3 months the injection had to be repeated. At the 2-month follow-up after the initial evaluation, symptoms persisted. Magnetic resonance imaging (MRI) was obtained, which showed an abnormal T1 hypointense signal involving the proximal pole of the scaphoid and articular collapse of the proximal scaphoid, with marked irregularity of the overlying cartilage, suggesting a remote injury; these findings are consistent with avascular necrosis of the proximal pole of the scaphoid. A month after that, the patient had the left proximal pole of the scaphoid debrided and underwent an intercompartmental supraretinacular artery vascularized pedicle bone graft reconstruction of the proximal pole of the left scaphoid. A non-vascularized autograft from the left radius was also applied.
He was put in a thumb spica cast with the interphalangeal joint free for 6 weeks. At the 6-week follow-up after surgery, the patient was healing well and could make a composite fist with his left hand. The diagnosis of Preiser's disease is primarily based on radiological findings. Because the necrosis develops over a period of time, most cases of AVN are diagnosed at the late stages of the disease. There appear to be no specific guidelines on the management of AVN of the scaphoid. In the past, immobilization and arthroscopic debridement have been used. Radial osteotomy has also been tried. Vascularized bone grafts have also been used to treat Preiser's disease. In our patient, we used three of these treatment modalities, starting with conservative management with topical NSAIDs and immobilization, followed by debridement with vascularized bone grafts.

Keywords: wrist pain, avascular necrosis of the scaphoid, Preiser’s disease, vascularized bone grafts

Procedia PDF Downloads 277
126 Bone Mineralization in Children with Wilson’s Disease

Authors: Shiamaa Eltantawy, Gihan Sobhy, Alif Alaam

Abstract:

Wilson disease, or hepatolenticular degeneration, is an autosomal recessive disease that results in excess copper buildup in the body. It primarily affects the liver and the basal ganglia of the brain, but it can affect other organ systems. Musculoskeletal abnormalities, including premature osteoarthritis, skeletal deformity, and pathological bone fractures, can occasionally be found in WD patients with a hepatic or neurologic type. The aim was to assess the prevalence of osteoporosis and osteopenia in Wilson's disease patients. This case-control study was conducted on ninety children, aged 1 to 18 years (49 males, 41 females), recruited from the inpatient ward and outpatient clinic of the Paediatric Hepatology, Gastroenterology, and Nutrition department of the National Liver Institute at Menofia University. The children were divided into three groups: Group I consisted of thirty patients with WD; Group II consisted of thirty patients with chronic liver disease other than WD; Group III consisted of thirty age- and sex-matched healthy controls. The exclusion criteria were patients with hyperparathyroidism, hyperthyroidism, renal failure, or Cushing's syndrome, and patients on certain drugs such as chemotherapy, anticonvulsants, or steroids. All patients were subjected to the following: 1- Full history-taking and clinical examination. 2- Laboratory investigations: FBC, ALT, AST, serum albumin, total protein, total serum bilirubin, direct bilirubin, alkaline phosphatase, prothrombin time, serum creatinine, parathyroid hormone, serum calcium, and serum phosphorus. 3- Bone mineral density (BMD, gm/cm²) values measured by dual-energy X-ray absorptiometry (DEXA). The results revealed a highly statistically significant difference between the three groups regarding the DEXA scan; there was no statistically significant difference between groups I and II, but the WD group had the lowest bone mineral density.
The WD group had a large number of cases of osteopenia and osteoporosis, but there was no statistically significant difference from group II, while a highly statistically significant difference was found when compared to group III. In the WD group, there were 20 patients with osteopenia, 4 patients with osteoporosis, and 6 patients who were normal; the percentages were 66.7%, 13.3%, and 20%, respectively. Therefore, the largest number of cases in the WD group had osteopenia. No statistically significant difference was found between WD patients on different treatment regimens regarding DEXA scan results (Z-score). There was no statistically significant difference between patients in the WD group (normal, osteopenic, or osteoporotic) regarding phosphorus (mg/dL), but there was a highly statistically significant difference between them regarding ionised Ca (mmol/L). Thus, bone mineral density decreased as the Ca level decreased. In summary, Wilson disease is associated with bone demineralization. The largest number of cases in the WD group in our study had osteopenia (66.7%). Different treatment regimens (zinc monotherapy, Artamin, and zinc) as well as different laboratory parameters had no effect on bone mineralization in WD cases. Decreased ionised Ca is associated with low BMD in WD patients. Children with WD should be investigated for BMD.
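The DEXA Z-score compared between treatment regimens above is simply the number of standard deviations a patient's BMD lies from the age- and sex-matched reference mean; a small sketch with hypothetical reference values (not the study's reference database):

```python
# DEXA Z-score: standard deviations of a measured BMD from the age- and
# sex-matched reference mean. The reference values below are hypothetical
# illustration only.

def bmd_z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    """Z = (BMD - reference mean) / reference SD."""
    return (bmd - age_matched_mean) / age_matched_sd

# Example: 0.70 g/cm^2 against a hypothetical mean of 0.82 and SD of 0.06.
print(round(bmd_z_score(0.70, 0.82, 0.06), 2))  # -2.0
```

In paediatric practice a Z-score at or below about -2.0 is the usual threshold for "low bone mineral density for age", which is why the Z-score rather than the adult T-score is reported for children.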

Keywords: Wilson disease, bone mineral density, liver disease, osteoporosis

Procedia PDF Downloads 39
125 Evaluation of Some Serum Proteins as Markers for Myeloma Bone Disease

Authors: V. T. Gerov, D. I. Gerova, I. D. Micheva, N. F. Nazifova-Tasinova, M. N. Nikolova, M. G. Pasheva, B. T. Galunska

Abstract:

Multiple myeloma (MM) is the most frequent plasma cell (PC) dyscrasia that involves the skeleton. Myeloma bone disease (MBD) is characterized by osteolytic bone lesions resulting from increased osteoclast activity that is not followed by reactive bone formation, due to osteoblast suppression. Skeletal complications cause significant adverse effects on quality of life and lead to increased morbidity and mortality. Studies in the last decade revealed the implication of different proteins in osteoclast activation and osteoblast inhibition. The aim of the present study was to determine serum levels of periostin, sRANKL and osteopontin and to evaluate their role as bone markers in MBD. Materials and methods: Thirty-two newly diagnosed MM patients (mean age: 62.2 ± 10.7 years) and 33 healthy controls (mean age: 58.9 ± 7.5 years) were enrolled in the study. According to the IMWG criteria, 28 patients were with symptomatic MM and 4 with monoclonal gammopathy of undetermined significance (MGUS). With respect to their bone involvement, all symptomatic patients were divided into two groups (G): 9 patients with 0-3 osteolytic lesions (G1) and 19 patients with >3 osteolytic lesions and/or pathologic fractures (G2). Blood samples were drawn for routine laboratory analysis and for measurement of periostin, sRANKL and osteopontin serum levels by ELISA kits (Shanghai Sunred Biological Technology, China). Descriptive analysis, the Mann-Whitney test for assessment of differences between groups, and non-parametric correlation analysis were performed using GraphPad Prism v8.01. Results: The median serum levels of periostin, sRANKL and osteopontin of MM patients were significantly higher compared to controls (554.7 pg/ml (IQR=424.0-720.6) vs 396.9 pg/ml (IQR=308.6-471.9), p=0.0001; 8.9 pg/ml (IQR=7.1-10.5) vs 5.6 pg/ml (IQR=5.1-6.4), p<0.0001; and 514.0 ng/ml (IQR=469.3-754.0) vs 387.0 ng/ml (IQR=335.9-441.9), p<0.0001, respectively).
Statistical significance was found for all tested bone markers between symptomatic MM patients and controls: G1 vs controls (p<0.03), G2 vs controls (p<0.0001) for periostin; G1 vs controls (p<0.0001), G2 vs controls (p<0.0001) for sRANKL; G1 vs controls (p=0.002), G2 vs controls (p<0.0001) for osteopontin; as well as between symptomatic MM patients and MGUS patients: G1 vs MGUS (p<0.003), G2 vs MGUS (p=0.003) for periostin; G1 vs MGUS (p<0.05), G2 vs MGUS (p<0.001) for sRANKL; G1 vs MGUS (p=0.011), G2 vs MGUS (p=0.0001) for osteopontin. No differences were detected between MGUS and controls or between patients in the G1 and G2 groups. Spearman correlation analysis revealed a moderate positive correlation between periostin and beta-2-microglobulin (r=0.416, p=0.018), the percentage of bone marrow myeloma PC (r=0.432, p=0.014), and serum total protein (r=0.427, p=0.015). Osteopontin levels were also positively related to beta-2-microglobulin (r=0.540, p=0.0014), the percentage of bone marrow myeloma PC (r=0.423, p=0.016), and serum total protein (r=0.413, p=0.019). Serum sRANKL was related only to beta-2-microglobulin levels (r=0.398, p=0.024). Conclusion: In the present study, serum levels of periostin, sRANKL and osteopontin in newly diagnosed MM patients were evaluated. They gradually increased from MGUS to more advanced stages of MM, reflecting the severity of bone destruction. These results support the idea that some new protein markers could be used in monitoring MBD as the most severe complication of MM.
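The Spearman coefficients reported above come from GraphPad Prism, but the statistic itself is easy to sketch; a pure-Python version of the textbook formula rho = 1 - 6*sum(d²)/(n(n²-1)), assuming no tied values:

```python
# Pure-Python Spearman rank correlation (no-ties formula), illustrating
# the non-parametric correlation used in the study. For real data with
# ties, a library implementation should be used instead.

def rank(values):
    """1-based ranks of the values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman_rho(x, y):
    """rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone data gives rho = 1; perfectly reversed gives -1.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
print(spearman_rho([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0
```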

Keywords: myeloma bone disease, periostin, sRANKL, osteopontin

Procedia PDF Downloads 40
124 Empirical Analysis of the Effect of Cloud Movement in a Basic Off-Grid Photovoltaic System: Case Study Using Transient Response of DC-DC Converters

Authors: Asowata Osamede, Christo Pienaar, Johan Bekker

Abstract:

Mismatches in electrical energy (power) supply and outages from commercial providers generally hold back development in the public and private sectors and limit the development of industries. A well-structured photovoltaic (PV) system is therefore important for efficient and cost-effective monitoring. The major renewable energy potential on earth is provided by solar radiation, and solar photovoltaics (PV) are considered a promising technological solution to support the global transformation to a low-carbon economy and to reduce the dependence on fossil fuels. Solar arrays, which consist of multiple PV modules, should be operated at the maximum power point in order to reduce the overall cost of the system, so power regulation and conditioning circuits should be incorporated in the set-up of a PV system. Power regulation circuits used in PV systems include maximum power point trackers, DC-DC converters and solar chargers. An inappropriate choice of power conditioning device in a basic off-grid PV system can contribute to power loss; hence, choosing the right power conditioning device for the system is essential. This paper presents the design and implementation of power conditioning devices in order to improve the overall yield from the available solar energy and the system's total efficiency. The power conditioning devices considered in the project include buck and boost DC-DC converters as well as solar chargers with MPPT. A logging interface circuit (LIC) is designed and incorporated into the system. The LIC is built on a printed circuit board and is based on DC current sensors, specifically the LTS 6-NP. The LIC is required to scale the voltages in the system (the PV voltage and the power conditioning device voltage) so that they can be accommodated by the data logger.
Preliminary results, including the availability of power, power loss in the system, and efficiency, will be presented and used to draw the final conclusions.
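For reference, the buck and boost converters considered above have simple ideal steady-state transfer functions (lossless, continuous-conduction assumptions; not the authors' measured hardware):

```python
# Ideal steady-state transfer functions of the two DC-DC converter
# topologies considered in the paper, under lossless continuous-
# conduction assumptions.

def buck_vout(vin: float, duty: float) -> float:
    """Buck (step-down): Vout = D * Vin, with duty cycle 0 <= D <= 1."""
    return duty * vin

def boost_vout(vin: float, duty: float) -> float:
    """Boost (step-up): Vout = Vin / (1 - D), with 0 <= D < 1."""
    return vin / (1.0 - duty)

# An 18 V panel at 50% duty: buck halves, boost doubles the voltage.
print(buck_vout(18.0, 0.5))   # 9.0
print(boost_vout(18.0, 0.5))  # 36.0
```

A maximum power point tracker effectively sweeps the duty cycle so that the converter's input impedance matches the panel's maximum-power operating point under changing irradiance, such as during cloud movement.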

Keywords: tilt and orientation angles, solar chargers, PV panels, storage devices, direct solar radiation

Procedia PDF Downloads 113
123 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1 carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver to include geometric changes in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
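The difference between the one-way and two-way methodologies can be sketched as a loop over solver steps; the `fds_step` and `fea_step` functions below are toy placeholders, not the FDS or ABAQUS APIs:

```python
# Toy sketch of one-way vs two-way CFD-FEA coupling. The solver steps
# are placeholders: fds_step returns a fake thermal field, fea_step a
# fake deflection. Only the data flow between them is the point.

def fds_step(geometry, t):
    # Placeholder CFD step: thermal result depends on current geometry.
    return {"temp": 20.0 + 50.0 * t + geometry["deflection"]}

def fea_step(thermal, t):
    # Placeholder FEA step: deflection driven by the thermal field.
    return {"deflection": 0.001 * thermal["temp"]}

def couple(n_steps, two_way=True):
    geometry = {"deflection": 0.0}
    history = []
    for t in range(n_steps):
        thermal = fds_step(geometry, t)          # CFD -> thermal data
        new_geometry = fea_step(thermal, t)      # thermal -> FEA
        if two_way:
            # Two-way only: deformed geometry feeds the next CFD step.
            geometry = new_geometry
        history.append((thermal["temp"], new_geometry["deflection"]))
    return history

one_way = couple(3, two_way=False)
two_way = couple(3, two_way=True)
print(one_way[-1], two_way[-1])  # the trajectories diverge
```

Even in this toy version the two modes diverge, because the two-way loop lets mechanical deformation alter the thermal field at the next step, which is exactly the effect the full coupling is meant to capture.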

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 69
122 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d'Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton's laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran's capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes the first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations. The material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
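The DEM idea described above, applying Newton's second law to every particle with contact forces, can be illustrated with a toy 1-D time step (a linear-spring contact sketch, not the SiGran implementation):

```python
# Toy 1-D DEM time step: Newton's second law per particle, with a
# linear-spring repulsive force when two particles overlap, and explicit
# Euler integration. Illustrative only; real DEM codes use damping,
# friction, and more robust integrators.

def dem_step(positions, velocities, masses, radius, k, dt, gravity=-9.81):
    n = len(positions)
    forces = [m * gravity for m in masses]  # body force on each particle
    # Pairwise linear-spring repulsion on overlap.
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(positions[i] - positions[j]) - 2 * radius
            if gap < 0:  # overlap -> repulsive contact force
                direction = 1.0 if positions[i] > positions[j] else -1.0
                forces[i] += -k * gap * direction
                forces[j] -= -k * gap * direction
    # Newton's second law, integrated per particle.
    for i in range(n):
        velocities[i] += forces[i] / masses[i] * dt
        positions[i] += velocities[i] * dt
    return positions, velocities

# Two overlapping particles (spacing 0.015 m, diameter 0.02 m) push apart.
pos, vel = dem_step([0.0, 0.015], [0.0, 0.0], [0.1, 0.1],
                    radius=0.01, k=1000.0, dt=0.001)
print(pos[1] - pos[0] > 0.015)  # True
```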

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 281
121 Neonatal Subcutaneous Fat Necrosis with Severe Hypercalcemia: Case Report

Authors: Atitallah Sofien, Bouyahia Olfa, Krifi Farah, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir

Abstract:

Introduction: Subcutaneous fat necrosis of the newborn (SCFN) is a rare acute hypodermatitis characterized by skin lesions in the form of infiltrated, hard plaques and subcutaneous nodules with a purplish-red color, occurring between the first and sixth week of life. SCFN is generally a benign condition that regresses spontaneously without sequelae, but it can be complicated by severe hypercalcemia. Methodology: This is a retrospective case report of neonatal subcutaneous fat necrosis complicated by severe hypercalcemia and nephrocalcinosis. Results: This is a case of a female newborn with a family history of a hypothyroid mother on Levothyrox, born to non-consanguineous parents and from a well-monitored pregnancy. The newborn was delivered by cesarean section at 39 weeks gestation due to severe preeclampsia. She was admitted to the Neonatal Intensive Care Unit at 1 hour of life for the management of grade 1 perinatal asphyxia and immediate transient neonatal respiratory distress. Hospitalization was complicated by a healthcare-associated infection requiring intravenous antibiotics for ten days, with a good clinical and biological response. On the 20th day of life, she developed skin lesions in the form of indurated purplish-red nodules on the back and on both arms, and SCFN was suspected. A serum calcium test returned a result of 3 mmol/L. The rest of the phosphocalcic assessment was normal, with early signs of nephrocalcinosis observed on renal ultrasound. The diagnosis of SCFN complicated by nephrocalcinosis associated with severe hypercalcemia was made, and the condition improved with intravenous hydration and corticosteroid therapy. Conclusion: SCFN is a rare and generally benign hypodermatitis in newborns with an etiology that is still poorly understood. Despite its benign nature, SCFN can be complicated by hypercalcemia, which can sometimes be life-threatening.
Therefore, it is important to conduct a thorough skin examination of newborns, especially those with risk factors, to detect and correct any potential hypercalcemia.

Keywords: subcutaneous fat necrosis, newborn, hypercalcemia, nephrocalcinosis

Procedia PDF Downloads 41
120 Cyclic Stress and Masing Behaviour of Modified 9Cr-1Mo at RT and 300 °C

Authors: Preeti Verma, P. Chellapandi, N.C. Santhi Srinivas, Vakil Singh

Abstract:

Modified 9Cr-1Mo steel is widely used for structural components such as heat exchangers, pressure vessels and steam generators in nuclear reactors. It is also a candidate material for future metallic-fuel sodium-cooled fast breeder reactors because of its high thermal conductivity, lower thermal expansion coefficient, microstructural stability, high resistance to irradiation void swelling, and higher resistance to stress corrosion cracking in water-steam systems compared to austenitic stainless steels. The components of steam generators that operate at elevated temperatures are often subjected to repeated thermal stresses as a result of temperature gradients which occur on heating and cooling during start-ups and shutdowns, or during variations in the operating conditions of a reactor. These transient thermal stresses give rise to LCF damage. In the present investigation, strain-controlled low cycle fatigue tests were conducted at room temperature and 300 °C in the normalized and tempered condition, using total strain amplitudes in the range from ±0.25% to ±0.5% at a strain rate of 10⁻² s⁻¹. The cyclic stress response at high strain amplitudes (±0.31% to ±0.5%) showed initial softening followed by hardening up to a few cycles and subsequent softening till failure. The extent of softening increased with increase in strain amplitude and temperature. Depending on the strain amplitude of the test, the stress-strain hysteresis loops displayed Masing behaviour at higher strain amplitudes and non-Masing behaviour at lower strain amplitudes at both temperatures. This is quite opposite to the usual Masing and non-Masing behaviour reported earlier for different materials. Low cycle fatigue damage was evaluated in terms of the plastic strain and plastic strain energy approaches at room temperature and 300 °C. The plastic strain energy approach was found to match the experimental fatigue lives more closely, particularly at 300 °C, where dynamic strain aging was observed.
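The plastic strain approach mentioned above is typically the Coffin-Manson relation Δεₚ/2 = ε'f(2N_f)^c; a sketch that inverts it for fatigue life, with placeholder coefficients rather than fitted values for modified 9Cr-1Mo:

```python
# Coffin-Manson life estimate from plastic strain amplitude:
#   delta_eps_p / 2 = eps_f' * (2 * Nf)^c
# The fatigue ductility coefficient eps_f' and exponent c below are
# generic placeholder values, not fitted parameters for this steel.

def coffin_manson_life(plastic_strain_amplitude, eps_f=0.35, c=-0.55):
    """Cycles to failure Nf solved from the Coffin-Manson relation."""
    two_nf = (plastic_strain_amplitude / eps_f) ** (1.0 / c)
    return two_nf / 2.0

# Higher plastic strain amplitude -> shorter life.
print(coffin_manson_life(0.002) > coffin_manson_life(0.004))  # True
```

The plastic strain energy approach favoured by the authors replaces the strain amplitude with the hysteresis loop area per cycle, which captures the extra damage from dynamic strain aging better than strain amplitude alone.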

Keywords: modified 9Cr-1Mo steel, low cycle fatigue, Masing behavior, cyclic softening

Procedia PDF Downloads 427
119 Exposure of Pacu, Piaractus mesopotamicus Gill Tissue to a High Stocking Density: An Ion Regulatory and Microscopy Study

Authors: Wiolene Montanari Nordi, Debora Botequio Moretti, Mariana Caroline Pontin, Jessica Pampolini, Raul Machado-Neto

Abstract:

Gills are the organs responsible for respiration and osmoregulation between the fish's internal environment and the water. Under stress conditions, the oxidative response and gill plasticity in the attempt to increase the gas exchange area are noteworthy, compromising physiological processes and therefore fish health. Colostrum is a dietary source of nutrients, immunoglobulins, antioxidants and bioactive molecules, essential for immunological protection and development of the gastrointestinal epithelium. The hypothesis of this work is that antioxidant factors present in colostrum, tested in gills for the first time, can minimize or reduce the alteration of the epithelium structure in juvenile pacu (Piaractus mesopotamicus) subjected to a high stocking density. The histological changes in the gill architecture were characterized by the frequency, incidence and severity of tissue alteration and by ionic status. Juveniles (50 kg fish/m³) were fed pelleted diets containing 0, 10, 20 or 30% lyophilized bovine colostrum (LBC), and at 30 experimental days, gill and blood samples were collected from eight fish per treatment. The study revealed differences in the type, frequency and severity (histological alteration index, HAI) of tissue alterations among the treatments; however, no distinct differences in the incidence of alteration (mean alteration value, MAV) were observed. The main histological changes in the gill were elevation of the lamellar epithelium; excessive cell proliferation of the filament and lamellar epithelium, causing total or partial fusion of the lamellae; hyperplasia and hypertrophy of the lamellar and filament epithelium; uncontrolled thickening of filament and lamellar tissues; presence of mucous and chloride cells in the lamella; aneurysms; vascular congestion; and presence of parasites.
The MAV obtained per treatment was 2.0, 2.5, 1.8 and 2.5 for fish fed diets containing 0, 10, 20 and 30% LBC, respectively, classifying the incidence of gill alterations as slight to moderate. The severity of alteration in individual fish of treatments 0, 10 and 20% LBC ranged from 5 to 40 (HAI means of 20.1, 17.5 and 17.6, respectively, P > 0.05) and differed from 30% LBC, which ranged from 6 to 129 (HAI mean of 77.2, P < 0.05). The HAI values in treatments 0, 10 and 20% LBC reveal gill tissue with injuries classified as slight to moderate, while in 30% LBC they were moderate to severe, a consequence of the onset of necrosis in the tissue of two fish, which compromises the normal functioning of the organ. Regarding the frequency of gill alterations, evaluated from absence of alterations (0) to highly frequent (+++), histological alterations were observed in all evaluated fish, with a trend toward higher frequency in 0% LBC. The concentrations of Na+, Cl-, K+ and Ca2+ did not change across treatments (P > 0.05), indicating similar ion exchange capacity. The concentrations of bovine colostrum used in the diets of the present study did not mitigate the alterations observed in the gills of juvenile pacu.

Keywords: histological alterations of gill tissue, ionic status, lyophilized bovine colostrum, optical microscopy

Procedia PDF Downloads 277
118 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP), Malaysia, reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to regulate the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered unsatisfactory. However, there is still potential to improve the current performance by developing a next-generation, novel design for nuclear core power control. In this paper, the dual-mode prediction proposed in modelling Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which deals with the summation of the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and of OMPC is compared by numerical simulations. In conclusion, the proposed OMPC has shown significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
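The LQR used for the terminal mode is not detailed in the abstract; as an illustrative sketch only, a discrete-time LQR gain can be computed by iterating the Riccati difference equation to convergence. The two-state system and the weights below are arbitrary placeholders, not the RTP neutronic or thermal-hydraulic model:

```python
import numpy as np

# Placeholder 2-state model (NOT the RTP reactor model):
# x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

# Iterate the discrete Riccati difference equation until P converges.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# The closed-loop matrix A - B K should have spectral radius < 1.
A_cl = A - B @ K
print(np.max(np.abs(np.linalg.eigvals(A_cl))))
```

In the dual-mode setting, a gain of this kind governs the terminal mode, while the transient mode optimizes the first few control moves explicitly.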

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 143
117 Antiproliferative and Apoptotic Effects of an Enantiomerically Pure β-Dipeptide Derivative through PI3K/Akt-Dependent and -Independent Pathways in Human Hormone-Refractory Prostate Cancer Cells

Authors: Mei-Ling Chan, Jin-Ming Wu, Konstantin V. Kudryavtsev, Jih-Hwa Guh

Abstract:

Prostate cancer is one of the most common malignant diseases in men. KUD983 is an enantiomerically pure β-dipeptide derivative, which may have anti-cancer effects. In the present study, KUD983 exhibited powerful activity against hormone-refractory prostate cancer (HRPC) PC-3 and DU145 cells. The IC50 values of KUD983 in PC-3 and DU145 cells were 0.56±0.07 µM and 0.50±0.04 µM, respectively. KUD983 induced G1 arrest of the cell cycle and subsequent apoptosis, associated with the down-regulation of several related proteins, including cyclin D1, cyclin E and Cdk4, and the de-phosphorylation of RB. The expression of nuclear and total c-Myc protein, which is able to regulate the expression of both cyclin D1 and cyclin E, was significantly suppressed by KUD983. Phosphoinositide 3-kinase (PI3K)/Akt/mammalian target of rapamycin (mTOR) is an important signaling pathway that influences the energy metabolism, cell cycle, proliferation, survival and apoptosis of cells, and is associated with numerous other signaling pathways. The Western blot data revealed that KUD983 inhibited the PI3K/Akt and mTOR/p70S6K/4E-BP1 pathways. The transient transfection of constitutively active myristylated Akt (myr-Akt) cDNA significantly reversed KUD983-induced caspase activation but did not abolish the suppression of the mTOR/p70S6K/4E-BP1 signaling cascade, indicating the presence of both Akt-dependent and -independent pathways. Moreover, the KUD983-induced effect was accompanied by the down-regulation of anti-apoptotic Bcl-2 family members (e.g., Bcl-2 and Mcl-1) and IAP family members (e.g., survivin). Furthermore, KUD983 was shown to induce autophagic cell death, as assessed by confocal microscopic examination, by the level of conversion of LC3-I to LC3-II, and by flow cytometric detection of AVO-positive cells. Taken together, the data suggest that KUD983 is an anticancer β-dipeptide against HRPC, acting through the inhibition of cell proliferation and the induction of apoptotic and autophagic cell death.
The suppression of the signaling pathways mediated by c-Myc, PI3K/Akt and mTOR/p70S6K/4E-BP1, together with the down-regulation of Mcl-1 and survivin, may indicate the mechanism of KUD983 against HRPC.

Keywords: β-dipeptide, hormone-refractory prostate cancer, mTOR, PI3K/Akt

Procedia PDF Downloads 264
116 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall prediction of acute coagulopathy of trauma score was 118.7±58.5, and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison with the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be better suited to predicting mortality than to detecting early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.
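The abstract does not state how the ROC cut-offs (e.g. INR ≥ 1.19) were selected; one common approach, shown here as a sketch on synthetic values rather than the study's data, is to choose the threshold maximizing Youden's J = sensitivity + specificity − 1:

```python
# Synthetic INR values (illustrative only, not the study data).
coagulopathic = [1.3, 1.5, 1.25, 1.9, 1.4, 2.1]
non_coagulopathic = [1.0, 1.1, 1.05, 1.15, 1.2, 0.95]

def youden_cutoff(pos, neg):
    """Pick the threshold maximizing sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in sorted(set(pos + neg)):
        sens = sum(v >= t for v in pos) / len(pos)  # true positive rate
        spec = sum(v < t for v in neg) / len(neg)   # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

cutoff, j = youden_cutoff(coagulopathic, non_coagulopathic)
print(cutoff, j)
```

Other criteria (e.g. a fixed specificity, or the point closest to the top-left ROC corner) are equally plausible; the paper does not specify which was used.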

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 160
115 Beneficial Effect of Micropropagation Coupled with Mycorrhization on Enhancement of Growth Performance of Medicinal Plants

Authors: D. H. Tejavathi

Abstract:

Medicinal plants are globally valuable sources of herbal products. Wild populations of many medicinal plants face the threat of extinction because of their narrow distribution, endemicity, and the degradation of their specific habitats. Micropropagation is an established in vitro technique by which a large number of clones can be obtained from a small piece of explant in a short span of time within a limited space. Mycorrhization can minimize the transient transplantation shock experienced by micropropagated plants when they are transferred from lab to land. AM fungal association improves the physiological status of the host plants through better uptake of water and nutrients, particularly phosphorus. Consequently, the growth performance and the biosynthesis of active principles are significantly enhanced in AM fungal-treated plants. Bacopa monnieri, Andrographis paniculata, Agave vera-cruz, Drymaria cordata and Majorana hortensis, important medicinal plants used in various indigenous systems of medicine, were selected for the present study; they form the main constituents of many herbal formulations. Standard in vitro techniques were followed to obtain the micropropagated plants. Shoot tips and nodal segments were used as explants. Explants were cultured on Murashige and Skoog, and Phillips and Collins media supplemented with various combinations of growth regulators. Multiple shoots were obtained on media containing both auxins and cytokinins at various concentrations and combinations, and were then transferred to rooting media containing auxins for root induction. The in vitro regenerated plants thus obtained were subjected to brief acclimatization before being transferred to land. One-month-old in vitro plants were treated with AM fungi, and the symbiotic effect on the overall growth parameters was analyzed.
It was found that micropropagation coupled with mycorrhization had a significant effect on the enhancement of biomass and the biosynthesis of active principles in these selected medicinal plants. In vitro techniques coupled with mycorrhization have opened the possibility of obtaining better clones with respect to the enhancement of biomass and the biosynthesis of active principles. The beneficial effects of AM fungal association with medicinal plants are discussed.

Keywords: cultivation, medicinal plants, micropropagation, mycorrhization

Procedia PDF Downloads 152
114 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator

Authors: J. Ritonja

Abstract:

Available commercial applications of power system stabilizers assure optimal damping of the synchronous generator's oscillations only in a small part of the operating range. The parameters of the power system stabilizer are usually tuned for a selected operating point. Extensive variations of the synchronous generator's operation result in changed dynamic characteristics. This is the reason that a power system stabilizer tuned for the nominal operating point does not provide the preferred damping over the whole operating area. The small-signal stability and the transient stability of synchronous generators have represented an attractive problem for testing different concepts of modern control theory. Of all the methods, adaptive control has proved to be the most suitable for the design of power system stabilizers, and it has been used here in order to assure optimal damping through the entire operating range of the synchronous generator. The use of adaptive control is possible because the loading variations, and consequently the variations of the synchronous generator's dynamic characteristics, are in most cases essentially slower than the adaptation mechanism. The paper shows the development and application of a self-tuning power system stabilizer based on the recursive least square identification method and a linear quadratic regulator. The identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator's mathematical model, the synthesis of the linear quadratic regulator is carried out. The identification and the synthesis are implemented on-line. In this way, the self-tuning power system stabilizer adapts to different operating conditions. A purpose of this paper is to contribute to the development of more effective power system stabilizers, which would replace the currently used linear stabilizers.
The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures damping improvement in the complete operating range. The results of simulations and experiments show essential improvement of the synchronous generator’s damping and power system stability.
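The on-line identification at the heart of such a stabilizer follows the standard recursive least square gain/covariance update. The sketch below runs it on a toy first-order ARX model with assumed parameters, an illustrative stand-in rather than the Heffron-Phillips model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ARX model y[k] = a*y[k-1] + b*u[k-1] + noise
# (illustrative stand-in for the Heffron-Phillips parameter estimation).
a_true, b_true = 0.8, 0.5

theta = np.zeros(2)      # parameter estimate [a, b]
P = 1e3 * np.eye(2)      # covariance matrix, large initial uncertainty
lam = 0.99               # forgetting factor (tracks slow parameter drift)

y_prev, u_prev = 0.0, 0.0
for _ in range(500):
    u = rng.standard_normal()                        # persistent excitation
    y = a_true * y_prev + b_true * u_prev + 1e-3 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])                 # regressor
    k = P @ phi / (lam + phi @ P @ phi)              # RLS gain
    theta = theta + k * (y - phi @ theta)            # parameter update
    P = (P - np.outer(k, phi) @ P) / lam             # covariance update
    y_prev, u_prev = y, u

print(theta)  # should approach [0.8, 0.5]
```

In the stabilizer, estimates like these would be refreshed at each sample and fed into the LQR synthesis, which is what makes the scheme self-tuning.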

Keywords: adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification

Procedia PDF Downloads 222
113 Exploring the History of Chinese Music Acoustic Technology through Data Fluctuations

Authors: Yang Yang, Lu Xin

Abstract:

The study of extant musical sites can provide a complementary picture of historical ethnomusicological information. In their data collection on Chinese opera houses, the researchers found that one Ming Dynasty opera house reached a width of nearly 18 meters, while all opera houses of the same period and after it were far from such a width, being significantly smaller than 18 meters. This transient historical fluctuation in the width data, occurring despite the absence of construction-scale constraints, piqued the researchers' interest: why does the width vary, and what factors prevented the further widening of theatres? To address this question, this study used a comparative approach, conducting a venue experiment between this theatre stage and another theatre stage used for non-heritage opera performances, collecting the subjective perceptions of performers and audiences at the different stages, and using the BK Connect platform software to measure data such as echo and delay. From the subjective and objective results, it is inferred that, by exploring the effect of stage width on musical performance and on the listening state during appreciation, the Chinese ancients discovered and understood the acoustical phenomenon now known as the Haas effect during the Ming Dynasty, and utilized this discovery to serve music in subsequent stage construction. This discovery marked a node in the evolution of Chinese architectural acoustics technology driven by musical demands. It is also instructive to note that, in contrast to many of the world's "unsuccessful civilizations", China can use a combination of tangible-heritage and intangible-cultural research to chart a clear, demand-driven course for the evolution of human music technology.
The findings of such research will help complete the record of human exploration of musical acoustics, and this practical experience can be applied to the exploration and understanding of other musical heritage base data.
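For orientation, the Haas (precedence) effect means a reflection arriving within roughly 30-40 ms of the direct sound is fused with it rather than heard as a separate echo. A back-of-envelope estimate, using an assumed and deliberately simplified geometry rather than the surveyed theatre, shows why a stage approaching 18 m in width sits near this perceptual limit:

```python
import math

# Rough estimate: delay between the direct sound and one side-wall
# reflection for a listener on the stage axis. The geometry below is
# an assumption for illustration, not the Ming Dynasty theatre survey.
SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C

def reflection_delay_ms(direct_m, reflected_m):
    """Delay (ms) of a reflection arriving after the direct sound."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND * 1e3

direct = 10.0                        # source-to-listener distance, m
# Mirror-image path for a side wall 9 m off-axis (an 18 m wide stage):
reflected = math.hypot(direct, 2 * 9.0)
delay = reflection_delay_ms(direct, reflected)
print(round(delay, 1))  # ~30.9 ms, right at the Haas threshold
```

Under these assumed positions, widening the stage beyond 18 m pushes the lateral-reflection delay past the fusion threshold, which is consistent with the paper's inference that builders stopped short of that width.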

Keywords: Haas effect, musical acoustics, history of acoustical technology, Chinese opera stage, structure

Procedia PDF Downloads 165
112 Current Applications of Artificial Intelligence (AI) in Chest Radiology

Authors: Angelis P. Barlampas

Abstract:

Learning Objectives: The purpose of this study is to briefly inform the reader about the applications of AI in chest radiology. Background: Currently, there are 190 FDA-approved radiology AI applications, with 42 (22%) pertaining specifically to thoracic radiology. Imaging findings or procedure details: AI in chest radiology detects and segments pulmonary nodules. It subtracts bone to provide an unobstructed view of the underlying lung parenchyma and provides further information on nodule characteristics, such as nodule location, two-dimensional size or three-dimensional (3D) volume, change in nodule size over time, attenuation data (i.e., mean, minimum, and/or maximum Hounsfield units [HU]), morphological assessments, or combinations of the above. It reclassifies indeterminate pulmonary nodules as low or high risk with higher accuracy than conventional risk models. It detects pleural effusion and differentiates tension pneumothorax from non-tension pneumothorax. It detects cardiomegaly, calcification, consolidation, mediastinal widening, atelectasis, fibrosis and pneumoperitoneum. It automatically localises vertebral segments, labels ribs and detects rib fractures. It measures the distance from the tube tip to the carina and localizes both endotracheal tubes and central vascular lines. It detects consolidation and the progression of parenchymal diseases such as pulmonary fibrosis or chronic obstructive pulmonary disease (COPD), and can evaluate lobar volumes. It identifies and labels pulmonary bronchi and vasculature, quantifies air-trapping, and offers emphysema evaluation. It provides functional respiratory imaging, whereby high-resolution CT images are post-processed to quantify airflow by lung region and may be used to quantify key biomarkers such as airway resistance, air-trapping, ventilation mapping, lung and lobar volume, and blood vessel and airway volume. It assesses the lung parenchyma by way of density evaluation.
It provides percentages of tissues within defined attenuation (HU) ranges, besides furnishing automated lung segmentation and lung volume information. It improves image quality for noisy images with a built-in denoising function. It detects emphysema, a common condition seen in patients with a history of smoking, and hyperdense or opacified regions, thereby aiding in the diagnosis of certain pathologies, such as COVID-19 pneumonia. It aids in cardiac segmentation and calcium detection, aorta segmentation and diameter measurements, and vertebral body segmentation and density measurements. Conclusion: The future is yet to come, but AI is already a helpful tool for daily practice in radiology. It is expected that the continuing progress of computerized systems and improvements in software algorithms will render AI the second hand of the radiologist.

Keywords: artificial intelligence, chest imaging, nodule detection, automated diagnoses

Procedia PDF Downloads 48
111 Climate Change Law and Transnational Corporations

Authors: Manuel Jose Oyson

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must "both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts." The IPCC observed "with high confidence" a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which "were the highest in human history"; if left unchecked, this will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, however, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve Transnational Corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with the acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional "state-centric" response to climate change. Shifting the focus on GHG emissions away from states recognises that the activities of TNCs "are not bound by national borders" and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or a legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international law or municipal law.
Instead, recourse to "soft law" and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help in addressing climate change. Various studies have noted the positive effects of voluntary approaches. TNCs have also in recent decades voluntarily committed to "soft law" international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of "small, offbeat companies", it has now become part of the mainstream. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and to helping address climate change as part of their CSR. One, as a serious "global commons problem", climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Even assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.

Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations

Procedia PDF Downloads 332
110 Comparison of the Efficacy of Ketamine-Propofol versus Thiopental Sodium-Fentanyl in Procedural Sedation in the Emergency Department: A Randomized Double-Blind Clinical Trial

Authors: Maryam Bahreini, Mostafa Talebi Garekani, Fatemeh Rasooli, Atefeh Abdollahi

Abstract:

Introduction: Procedural sedation and analgesia are desirable for handling painful procedures. The search for an agent with greater efficacy and fewer complications remains open; thus, many sedative regimens have been studied. This study assessed the effectiveness and adverse effects of thiopental sodium-fentanyl against the established combination ketamine-propofol for procedural sedation in the emergency department. Methods: Consenting patients were enrolled in this randomized double-blind trial to receive either 1:1 ketamine-propofol (KP) or 1:1 thiopental-fentanyl (TF) in a mg:mg proportion on a weight-based dosing basis, to reach the sedation level of American Society of Anesthesiologists class III/IV. The respiratory and hemodynamic complications, nausea and vomiting, recovery agitation, patient recall and satisfaction, provider satisfaction, and recovery time were compared. The study was registered in the Iranian Randomized Control Trial Registry (code: IRCT2015111325025N1). Results: 96 adult patients were included and randomized, 47 in the KP group and 49 in the TF group. Transient hypoxia occurred in 2.1% of the KP group and 8.1% of the TF group, leading to airway maneuvers in 4.2% and 8.1% of the two groups, respectively; however, no statistically significant difference was observed between the two combinations, and there was no report of endotracheal tube placement or further admission. Patient and physician satisfaction were significantly higher in the KP group. There was no difference between the groups in respiratory, gastrointestinal, cardiovascular or psychiatric adverse events, recovery time, or patient recall of the procedure. The efficacy and complications were not related to the type of procedure or to patients' smoking or addiction status. Conclusion: The ketamine-propofol and thiopental-fentanyl combinations were comparably effective, although KP resulted in higher patient and provider satisfaction.
The thiopental-fentanyl combination is estimated to be as potent and efficacious as ketofol, with a relatively similar incidence of adverse events in procedural sedation.

Keywords: adverse effects, conscious sedation, fentanyl, propofol, ketamine, safety, thiopental

Procedia PDF Downloads 195
109 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment

Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen

Abstract:

Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed in petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within cement plugs. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly in the case of well sites that have the potential for future gas storage or CO2 injection. The Danish oil and gas industry has promising potential as a prospective candidate for future carbon dioxide (CO2) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component consists of chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected through a low-permeability reservoir and afterward to undergo a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out by radical polymerization in the reaction flask.
Subsequently, through the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its reaction with the crosslinker and enabling the formation of a hydrogel in the subsequent stage. The organic crosslinker glutaraldehyde was employed to facilitate gel formation, which occurred when the polymeric solution was subjected to heat within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time is contingent upon the starting concentration and ranges from 4 to 20 hours, allowing it to be tuned to accommodate diverse injection strategies. Moreover, the findings indicate that the gel may be generated in environments characterized by acidity and high salinity. This property ensures the suitability of this substance for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with a somewhat elevated yield stress prior to solidification.
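The Herschel-Bulkley model referred to combines a yield stress with power-law behaviour, tau = tau0 + K * gamma_dot**n. The following minimal sketch evaluates it with illustrative parameter values, not the measured rheology of this hydrogel:

```python
def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Shear stress (Pa) for shear rate gamma_dot (1/s).
    tau0: yield stress, K: consistency index, n: flow index."""
    if gamma_dot <= 0:
        return tau0  # at or below yield: no flow, stress capped at tau0
    return tau0 + K * gamma_dot ** n

# Illustrative parameters (assumed, not the paper's measurements):
tau0, K, n = 5.0, 2.0, 0.6   # n < 1 -> shear-thinning behaviour

for rate in (0.1, 1.0, 10.0, 100.0):
    print(rate, herschel_bulkley_stress(rate, tau0, K, n))
```

The yield stress tau0 is the property relevant to plugging: below it the gel does not flow, which is why an elevated yield stress prior to solidification is desirable here.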

Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time

Procedia PDF Downloads 60
108 Comparison of the Chest X-Ray and Computerized Tomography Scans Requested from the Emergency Department

Authors: Sahabettin Mete, Abdullah C. Hocagil, Hilal Hocagil, Volkan Ulker, Hasan C. Taskin

Abstract:

Objectives and Goals: An emergency department is a place where people can come for a multitude of reasons, 24 hours a day, and it remains easily accessible thanks to the self-sacrificing people who work there. However, the workload and overcrowding of emergency departments are increasing day by day. Under these circumstances, it is important to choose a quick, easily accessible and effective test for diagnosis; laboratory and imaging tests account for more than 40% of all emergency department costs. Despite all of the technological advances in imaging methods and the availability of computerized tomography (CT), chest X-ray, the older imaging method, has not lost its appeal and effectiveness for nearly all emergency physicians. Progress in imaging methods is very convenient, but physicians should consider radiation dose, cost, and effectiveness, and imaging methods should be carefully selected and used. The aim of the study was to investigate the effectiveness of chest X-ray for immediate diagnosis against the advancing technology by comparing the chest X-ray and chest CT scan results of patients in the emergency department. Methods: Patients who presented to the emergency department of Bulent Ecevit University Faculty of Medicine between 1 September 2014 and 28 February 2015 were investigated retrospectively. Data were obtained via MIAMED (Clear Canvas Image Server v6.2, Toronto, Canada), the information management system in which patients’ files are saved electronically in the clinic, and were retrospectively scanned. The study included 199 patients who were 18 or older and had both chest X-ray and chest CT imaging. Chest X-ray images were evaluated by the emergency medicine senior assistant in the emergency department, and the findings were saved to the study form. CT findings were obtained from data already reported by the radiology department of the clinic. Chest X-ray was evaluated with seven questions in terms of technique and dose adequacy.
Patients’ age, gender, presenting complaints, comorbid diseases, vital signs, physical examination findings, diagnoses, chest X-ray findings and chest CT findings were evaluated. Data were saved and statistical analyses performed using SPSS 19.0 for Windows, with p < 0.05 accepted as statistically significant. Results: 199 patients were included in the study. Pneumonia, diagnosed in 38.2% (n=76) of all patients, was the most common diagnosis. The chest X-ray imaging technique was appropriate in 31% (n=62) of all patients. There was no statistically significant difference (p > 0.05) between the two imaging methods (chest X-ray and chest CT) in the rates of detection of displacement of the trachea, pneumothorax, parenchymal consolidation, increased cardiothoracic ratio, lymphadenopathy, diaphragmatic hernia, free air levels in the abdomen (in sections included in the image), pleural thickening, parenchymal cyst, parenchymal mass, parenchymal cavity, parenchymal atelectasis and bone fractures. Conclusions: When imaging findings of cases that needed to be diagnosed quickly were investigated, chest X-ray and chest CT findings matched at a high rate in patients imaged with an appropriate technique. However, chest X-rays evaluated in the emergency department were frequently taken with an inappropriate technique.
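Since each patient contributed a pair of readings (chest X-ray and chest CT), comparisons of detection rates between the two methods are naturally framed as paired tests. As an illustration only, with made-up counts rather than the study's data, an exact McNemar test on the discordant pairs for a single finding might look like:

```python
from math import comb

# Hypothetical 2x2 agreement table for one finding (e.g. pneumothorax):
# b = cases detected on CT but not X-ray, c = detected on X-ray but not CT.
# These counts are illustrative, not taken from the study.
b, c = 4, 2

# Exact McNemar test: under H0, the discordant pairs split 50/50 between b and c.
n = b + c
p_value = sum(comb(n, k) for k in range(min(b, c) + 1)) / 2 ** (n - 1)
p_value = min(p_value, 1.0)
print(p_value > 0.05)  # no significant disagreement at this sample size
```

With so few discordant pairs, the two-sided exact p-value is 22/32 = 0.6875, consistent with the kind of "p > 0.05" agreement the abstract reports.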

Keywords: chest x-ray, chest computerized tomography, chest imaging, emergency department

Procedia PDF Downloads 167
107 LTE Modelling of a DC Arc Ignition on Cold Electrodes

Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov

Abstract:

The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magnetohydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial, since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used, based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach that considers secondary electron emission, while the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts.
Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also shows a marked increase in temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. After a few seconds, the results are similar to those reported for thermionic cathodes.

Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes

Procedia PDF Downloads 100
106 The Effects of Adding Vibrotactile Feedback to Upper Limb Performance during Dual-Tasking and Response to Misleading Visual Feedback

Authors: Sigal Portnoy, Jason Friedman, Eitan Raveh

Abstract:

Introduction: Sensory substitution is possible due to the capacity of our brain to adapt to information transmitted by a synthetic receptor via an alternative sensory system. Practical sensory substitution systems are being developed in order to increase the functionality of individuals with sensory loss, e.g. amputees. For upper limb prosthesis users, the loss of tactile feedback compels them to allocate visual attention to their prosthesis. The effect of adding vibrotactile feedback (VTF) to the applied force has been studied; however, its effect on the allocation of visual attention during dual-tasking and on the response to misleading visual feedback has not. We hypothesized that VTF would improve performance and reduce visual attention during dual-task assignments in healthy individuals using a robotic hand, and would improve performance in a standardized functional test despite the presence of misleading visual feedback. Methods: For the dual-task paradigm, twenty healthy subjects were instructed to toggle two keyboard arrow keys with the left hand to keep a moving virtual car on a road on a screen. During the game, instructions for various activities, e.g. mix the sugar in the glass with a spoon, appeared on the screen. The subject performed these tasks with a robotic hand attached to the right hand. The robotic hand was controlled by the activity of the flexors and extensors of the right wrist, recorded using surface EMG electrodes. Pressure sensors were attached at the tips of the robotic hand and induced VTF via vibrotactile actuators attached to the right arm of the subject. An eye-tracking system tracked the visual attention of the subject during the trials. The trials were repeated twice, with and without the VTF. Additionally, the subjects performed the modified Box and Blocks Test, hidden from eyesight, in a motion laboratory.
A misleading visual feedback was presented virtually on a screen, so that twice during the trial the virtual block fell while the physical block was still held by the subject. Results: This is an ongoing study, whose current results are detailed below. We are continuing these trials with transradial myoelectric prosthesis users. In the healthy group, the VTF did not reduce visual attention or improve performance during dual-tasking for tasks of the transfer-to-target type, e.g. place the eraser on the shelf. An improvement was observed for other tasks. For example, the average±standard deviation of the time to complete the sugar-mixing task was 13.7±17.2 s and 19.3±9.1 s with and without the VTF, respectively. Also, the number of gaze shifts from the screen to the hand during this task was 15.5±23.7 and 20.0±11.6 with and without the VTF, respectively. The response of the subjects to the misleading visual feedback did not differ between the two conditions, i.e. with and without VTF. Conclusions: Our interim results suggest that the performance of certain activities of daily living may be improved by VTF. The substitution of visual sensory input by tactile feedback might require a long training period, so that brain plasticity can occur and allow adaptation to the new condition.

Keywords: prosthetics, rehabilitation, sensory substitution, upper limb amputation

Procedia PDF Downloads 317
105 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion.
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and are instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory with a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seems psychologically appropriate for reasoning systems, it may actually be incurring difficulties that are theoretically analogous to those demonstrated by older, storage-based models of human memory. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
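The attractor idea described above can be made concrete with a classical Hopfield network, the simplest member of this family: stored patterns are minima of an energy function, and recall is relaxation toward the nearest minimum. The sketch below is illustrative only; the models the abstract describes use engineered energy functions rather than plain Hebbian storage.

```python
import numpy as np

def train(patterns):
    # Hebbian (outer-product) weights: each pattern becomes a low-energy attractor.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def energy(W, s):
    # Hopfield energy; recall dynamics never increase this quantity.
    return -0.5 * s @ W @ s

def recall(W, s, steps=20):
    s = s.copy()
    for _ in range(steps):  # iterate updates until a fixed point is reached
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store one +/-1 pattern and recover it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
restored = recall(W, noisy)
print(np.array_equal(restored, pattern))
```

The remembered pattern is never looked up; it re-emerges as the stable equilibrium the dynamics settle into, which is the sense in which such models avoid storage and retrieval.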

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 250
104 Giving Children with Osteogenesis Imperfecta a Voice: Overview of a Participatory Approach for the Development of an Interactive Communication Tool

Authors: M. Siedlikowski, F. Rauch, A. Tsimicalis

Abstract:

Osteogenesis Imperfecta (OI) is a genetic disorder of childhood onset that causes frequent fractures after minimal physical stress. To date, OI research has focused on medically- and surgically-oriented outcomes, with little attention to the perspective of the affected child. Eliciting the child’s voice in health care, that is, their own perspective on their symptoms, is a challenge, but software development offers a way forward. Sisom (Norwegian acronym derived from ‘Si det som det er’ meaning ‘Tell it as it is’) is an award-winning, rigorously tested, interactive, computerized tool that helps children with chronic illnesses express their symptoms to their clinicians. The successful Sisom software tool, which addresses the child directly, has not yet been adapted to attend to symptoms unique to children with OI. The purpose of this study was to develop a Sisom paper prototype for children with OI by seeking the perspectives of end users, particularly children with OI and clinicians. Our descriptive qualitative study was conducted at Shriners Hospitals for Children® – Canada, which follows the largest cohort of children with OI in North America. Purposive sampling was used to recruit 12 children with OI over three cycles. Nine clinicians oversaw the development process, which involved determining the relevance of current Sisom symptoms, vignettes, and avatars, as well as generating new Sisom OI components. Data, including field notes, transcribed audio-recordings, and drawings, were deductively analyzed using content analysis techniques. Guided by the following framework, data pertaining to symptoms, vignettes, and avatars were coded into five categories: a) Relevant; b) Irrelevant; c) To modify; d) To add; e) Unsure. Overall, 70.8% of Sisom symptoms were deemed relevant for inclusion, with 49.4% directly incorporated and 21.3% incorporated with changes to syntax, and/or vignette, and/or location. Three additions were made to the ‘Avatar’ island.
This allowed children to celebrate their uniqueness: ‘Makes you feel like you’re not like everybody else.’ One new island, ‘About Me’, was added to capture children’s worldviews. One new sub-island, ‘Getting Around’, was added to reflect accessibility issues. These issues were related to the children’s independence, their social lives, as well as the perceptions of others. In being consulted as experts throughout the co-creation of the Sisom OI paper prototype, children coded the Sisom symptoms and provided sound rationales for their chosen codes. In rationalizing their codes, all children shared personal stories about themselves and their relationships, insights about their OI, and an understanding of the strengths and challenges they experience on a day-to-day basis. The child’s perspective on their health is a basic right, and allowing it to be heard is the next frontier in the care of children with genetic diseases. Sisom OI, a methodological breakthrough within OI research, will offer clinicians an innovative and child-centered approach to capture this neglected perspective. It will provide a tool for the delivery of health care in the center that established the worldwide standard of care for children with OI.

Keywords: child health, interactive computerized communication tool, participatory approach, symptom management

Procedia PDF Downloads 137
103 School Accidents in Educational Establishment in Tunisia: A Five Years Retrospective Survey in the Governorate of Mahdia

Authors: Lamia Bouzgarrou, Amira Omrane, Leila Mrabet, Taoufik Khalfallah

Abstract:

Background and aims: School accidents are one of the leading causes of morbidity and mortality among pupils and students. Indeed, they may induce a large number of lost school days, heavy emotional and physical disabilities, and financial costs for the victims and their families. This study aims to evaluate the annual incidence of school accidents in the central Tunisian governorate of Mahdia and to identify the epidemiological profile of the victims and the risk factors of these accidents. Methods: A retrospective study was conducted over a period of 5 school years, focusing on school accidents that occurred in public educational institutions (primary, basic, secondary and university) in the governorate of Mahdia (area = 2 966 km²; 410 812 inhabitants in 2014). All accidents declared to the only official insurer for this type of injury (MASU: Mutual School and University Accidents) and initially managed at the University Hospital of Mahdia were included. Data were collected from the MASU reporting forms and from the medical records of the emergency and other specialized hospital departments. Results: With 3248 identified victims, the annual incidence of school accidents was 0.69 per 100 pupils and students per year. The average age of the victims was 14.51 ± 0.059 years and the sex ratio was 1.58. Pupils aged between 12 and 15 years accounted for 46.7% of the identified accidents. The practice of sports was the most frequent circumstance of these accidents (76.2%). In 56.58% of cases, falls were the leading mechanism. Bruises and fractures were the most frequent lesions (32.43% and 30.51%, respectively). Serious school accidents were noted in 28% of cases, with hospitalization in 2.27% of them. The average number of lost school days was 12.23±1.73. Accidents occurring during sports or leisure activities were significantly more serious (p = 0.021). Furthermore, the frequency of hospitalization was significantly higher among boys (2.81% vs.
1.43%; p = 0.035), among students ≤ 11 years (p = 0.008), and following crush trauma (p < 0.001). In addition, surgical interventions were statistically more frequent among male victims (p < 0.001), in accidents occurring during physical education sessions (p < 0.001), in those associated with falls (p < 0.001) and crush mechanisms (p = 0.002), and for injuries affecting the lower limbs (p < 0.001). A multivariate analysis concluded that the severity of a school accident is correlated with the activity practiced at the time of the trauma and with the geographical location of the school. Conclusion: Children and adolescents are among the groups most vulnerable to accidents, with a risk of permanent disability mainly related to the disruption of the growth process and to physiological limitations. Our five-year study revealed a high incidence of school accidents among children and adolescents, with a considerable rate of severe injuries. In any community, the promotion of adolescents’ and children’s health is an important indicator of the public health level. Thus, it is important to develop a multidisciplinary prevention strategy for school accidents, based on safety and security rules and adapted to the specificity of our context.
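As a quick sanity check on the reported figures (not part of the study itself), the incidence of 0.69 per 100 per year, combined with 3248 victims over 5 years, implies an average school population on the order of 94,000 pupils and students, which can be backed out directly:

```python
# Back-of-envelope check of the reported incidence rate.
victims = 3248            # victims identified over the whole study period
years = 5                 # study duration in school years
incidence_per_100 = 0.69  # reported annual incidence per 100 pupils/students

# Implied average school population (not stated in the abstract):
population = victims / (years * incidence_per_100 / 100)
print(round(population))
```

This is only a consistency check; the actual denominator used by the authors is not given in the abstract.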

Keywords: children and adolescents, children health, injuries and disability, school accident

Procedia PDF Downloads 99
102 Self-Healing Coatings and Electrospun Fibers

Authors: M. Grandcolas, N. Rival, H. Bu, S. Jahren, R. Schmid, H. Johnsen

Abstract:

The concept of an autonomic self-healing material, where the initiation of repair is integrated into the material, is now being considered for engineering applications and is a hot topic in the literature. Among several concepts/techniques, two are most interesting: i) Capsules: The integration of microcapsules in or at the surface of coatings or fibre-like structures has recently gained much attention. Upon damage-induced cracking, the microcapsules are broken by the propagating crack fronts, resulting in the release of an active chemical (healing agent) by capillary action, which subsequently repairs the crack and prevents further crack growth. ii) Self-healing polymers: Interestingly, the introduction of dynamic covalent bonds into polymer networks has also recently been used as a powerful approach towards the design of various intrinsically self-healing polymer systems. The idea behind this is to reconnect the chemical crosslinks which are broken when a material fractures, restoring the integrity of the material and thereby prolonging its lifetime. We propose here to integrate both self-healing concepts (capsules, self-healing polymers) into electrospun fibres and coatings. Different capsule preparation approaches have been investigated at SINTEF. The most advanced method to produce capsules is based on emulsification to create a water-in-oil emulsion before polymerisation. The healing agent is a polyurethane-based dispersion that was encapsulated in shell materials consisting of urea-benzaldehyde resins. Results showed the successful preparation of microcapsules and the release of the agent when the capsules break. Since the capsules are produced in water-in-oil systems, we mainly investigated organic-solvent-based coatings, while a major challenge resides in the incorporation of capsules into water-based coatings. We also focused on developing more robust microcapsules to prevent premature rupture of the capsules.
The capsules have been characterized in terms of size; encapsulation and release can be visualized by incorporating fluorescent dyes and examining the capsules by microscopy techniques. Alternatively, electrospinning is an innovative technique that has attracted enormous attention due to the unique properties of the produced nano-to-micro fibers, the ease of fabrication and functionalization, and the versatility in controlling parameters. Roll-to-roll electrospinning in particular is a unique method which has been used in industry to produce nanofibers continuously. Electrospun nanofibers can usually reach a diameter down to 100 nm, depending on the polymer used, which is of interest for the concept based on self-healing polymer systems. In this work, we proved the feasibility of fabricating POSS-based (POSS: polyhedral oligomeric silsesquioxanes, trade name FunzioNano™) nanofibers via electrospinning. Two different formulations, based on aqueous or organic solvents, yielded nanofibres with diameters between 200 and 450 nm and few defects. The addition of FunzioNano™ to the polymer blend also showed enhanced properties in terms of wettability, promising for, e.g., membrane technology. The self-healing polymer systems developed here are POSS-based materials synthesized to develop dynamic soft brushes.

Keywords: capsules, coatings, electrospinning, fibers

Procedia PDF Downloads 241
101 Investigation of Wind Farm Interaction with Ethiopian Electric Power’s Grid: A Case Study at Ashegoda Wind Farm

Authors: Fikremariam Beyene, Getachew Bekele

Abstract:

Ethiopia is currently on the move with various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. As is well known, wind power is currently embraced globally as one of the most important sources of energy, mainly for its environmentally friendly characteristics and because, once installed, it is available free of charge. However, integrating a wind power plant with an existing network poses many challenges that need to be given serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of wind farms is planned for the near future. Ashegoda Wind Farm (13.2°, 39.6°), the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the 120 MW project has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short-circuit fault, as well as the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, a detailed dynamic model of a 1 MW variable-speed wind turbine running with a squirrel-cage induction generator and full-scale power electronic converters is built and analyzed using the simulation software DIgSILENT PowerFactory. On the Ethiopian Electric Power Corporation side, after sufficient data had been collected for the analysis, the grid network was modeled.
In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit at the grid terminal near the wind farm. The results show that the Ashegoda wind farm can ride through the voltage dip within a short time, and the active and reactive power performance of the wind farm is also promising.
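The FRT assessment described here amounts to checking a simulated voltage trace against a grid-code ride-through envelope: the turbine must stay connected as long as the terminal voltage remains above a time-dependent limit. The sketch below uses a purely hypothetical envelope; the actual limits applied in the study are not given in the abstract.

```python
def lvrt_limit(t):
    """Hypothetical low-voltage ride-through limit (pu) at time t (s) after fault
    onset. Values are illustrative, not taken from any real grid code."""
    if t < 0.15:            # deep dip tolerated for the first 150 ms
        return 0.15
    if t < 1.5:             # linear recovery ramp back up to 0.9 pu
        return 0.15 + (0.9 - 0.15) * (t - 0.15) / (1.5 - 0.15)
    return 0.9              # normal-operation band afterwards

def must_stay_connected(trace):
    """trace: list of (time_s, voltage_pu) samples after fault onset.
    Returns True if every sample lies on or above the ride-through envelope."""
    return all(v >= lvrt_limit(t) for t, v in trace)

# A simulated dip to 0.2 pu that recovers within about 1 s satisfies this envelope.
dip = [(0.0, 0.2), (0.1, 0.2), (0.3, 0.5), (0.6, 0.8), (1.0, 0.95), (2.0, 1.0)]
print(must_stay_connected(dip))
```

In practice the envelope and the voltage trace would both come from the grid code and the PowerFactory simulation, respectively; the check itself is this simple pointwise comparison.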

Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit

Procedia PDF Downloads 145