29 Tele-Rehabilitation for Multiple Sclerosis: A Case Study
Authors: Sharon Harel, Rachel Kizony, Yoram Feldman, Gabi Zeilig, Mordechai Shani
Abstract:
Multiple Sclerosis (MS) is a neurological disease that may restrict the participation of young adults in daily activities. Main symptoms include fatigue, weakness and cognitive decline. The appearance of symptoms, their severity and their rate of deterioration vary between patients. The challenge for health services is to provide long-term rehabilitation services to people with MS. The objective of this presentation is to describe a course of tele-rehabilitation of a woman with MS. Methods: R. is a 48-year-old woman, diagnosed with MS when she was 22. She began to suffer from weakness of her non-dominant left upper extremity about ten years after the diagnosis. She was referred to the tele-rehabilitation service by her rehabilitation team, 16 years after diagnosis. Her goal was to improve her ability to use the affected upper extremity in daily activities. On admission, her score in the Mini-Mental State Exam was 30/30. Her Fugl-Meyer Assessment (FMA) score for the left upper extremity was 48/60, indicating mild weakness, and her shoulder abduction was limited to 90 degrees. In addition, she reported little use of her arm in daily activities, as shown by her responses to the Motor Activity Log (MAL): 1.25/5 for amount of use and 1.37/5 for quality of use. R. received two 30-minute online sessions per week in the tele-rehabilitation service, with the CogniMotion system, complemented by self-practice with the system. The CogniMotion system provides a hybrid (synchronous-asynchronous), home-based tele-rehabilitation program to improve the motor, cognitive and functional status of people with neurological deficits. The system consists of a computer, a large monitor, and Microsoft's Kinect 3D sensor. This equipment is located in the client's home and connected via Wi-Fi to a clinician's computer in a remote clinic.
The client sits in front of the monitor and uses body movements to interact with games and tasks presented on it. The system provides feedback in the form of 'knowledge of results' (e.g., success in a game) and 'knowledge of performance' (e.g., alerts for compensatory movements) to enhance motor learning. The games and tasks were adapted to R.'s motor abilities, and the level of difficulty was gradually increased accordingly. The results of her second assessment (after 35 online sessions) showed improvement in her FMA score to 52 and in shoulder abduction to 140 degrees. Moreover, her responses to the MAL indicated an increased amount (2.4) and quality (2.2) of use of her left upper extremity in daily activities. She reported a high level of enjoyment of the treatments (5/5), specifically the combination of cognitive challenges while moving her body. In addition, she found the system easy to use, as reflected in her responses to the System Usability Scale (85/100). To date, R. continues to receive treatments in the tele-rehabilitation service. To conclude, this case report shows the potential of tele-rehabilitation for people with MS, both to provide strategies that enhance the use of the upper extremity in daily activities and to maintain motor function.
Keywords: motor function, multiple-sclerosis, tele-rehabilitation, daily activities
Procedia PDF Downloads 182
28 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy, and virtually on any surface, has been designed and built. One of the major shortcomings of existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time. Depending on the techniques used, it usually takes at least two hours to print small objects and several hours for larger objects. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer's workable area; these are then glued, bonded or otherwise attached to create the required object. A further shortcoming is material constraints and the need to fabricate a single part from different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS has the ability to 3D print virtually anywhere in six degrees of freedom with very high accuracy; it is equipped with an ABB IRB 120 robot to achieve this level of accuracy. To convert computer-aided design (CAD) files into a digital format acceptable to the robot, Hypertherm Robotic Software Inc.'s state-of-the-art slicing software, "ADDMAN", is used. ADDMAN is capable of converting any CAD file into RAPID code (the programming language for ABB robots), and the robot uses the generated code to perform the 3D printing. To control the entire process, an NI CompactRIO (cRIO-9074) is connected to and communicates with the robot and a purpose-designed and fabricated feeding mechanism.
The feeding mechanism consists of two major parts: the cold-end and the hot-end. The cold-end consists of what is conventionally known as an extruder. Typically, a stepper motor is used to control the push on the material; however, for optimum control, a DC motor is used instead. The hot-end consists of a melt-zone, a nozzle, and a heat-break. The melt-zone ensures a thorough melting effect and consistent output from the nozzle. Nozzles are made of brass for its thermal conductivity, while the melt-zone comprises a heating block and a ceramic heating cartridge that transfers heat to the block. The heat-break ensures that there is no heat creep-up effect, as this would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW, which uses adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which increases or decreases the heating based on the system output. Since different materials have different melting points, the system allows the temperature to be adjusted and the material to be varied.
Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
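The temperature loop described above, in which thermistor feedback drives the heating cartridge, can be sketched with a minimal discrete PID. This is a hypothetical Python illustration with made-up gains and a toy first-order heating plant; the actual controller is an adaptive PID implemented in NI LabVIEW on the cRIO.

```python
class PID:
    """Minimal discrete PID controller (fixed gains, for illustration)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Return heater power demand from thermistor feedback."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(setpoint_c=210.0, steps=4000, dt=0.05):
    """Toy first-order melt-zone: heating from the cartridge vs. losses
    to ambient. Plant constants are invented for the sketch."""
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    temp = 25.0  # start at ambient
    for _ in range(steps):
        power = max(0.0, pid.update(setpoint_c, temp))  # heater cannot cool
        temp += dt * (0.05 * power - 0.01 * (temp - 25.0))
    return temp
```

With integral action, the loop settles at the setpoint despite the steady heat loss term, which is the property the adaptive PID exploits when switching between materials with different melting points.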
Procedia PDF Downloads 218
27 PARP1 Links Transcription of a Subset of RBL2-Dependent Genes with Cell Cycle Progression
Authors: Ewelina Wisnik, Zsolt Regdon, Kinga Chmielewska, Laszlo Virag, Agnieszka Robaszkiewicz
Abstract:
Apart from protecting the genome, PARP1 has been documented to regulate many intracellular processes, inter alia gene transcription, by physically interacting with chromatin-bound proteins and by their ADP-ribosylation. Our recent findings indicate that expression of PARP1 decreases during the differentiation of human CD34+ hematopoietic stem cells to monocytes as a consequence of differentiation-associated cell growth arrest and the formation of an E2F4-RBL2-HDAC1-SWI/SNF repressive complex at the promoter of this gene. Since RBL2 complexes repress genes in an E2F-dependent manner and are widespread in the genome of G0-arrested cells, we asked (a) whether RBL2 directly contributes to defining monocyte phenotype and function by targeting gene promoters and (b) whether RBL2 controls gene transcription indirectly by repressing PARP1. To identify genes controlled by RBL2 and/or PARP1, we used primer libraries for surface receptors and TLR signaling mediators; genes were silenced by siRNA or shRNA; gene promoter occupation by selected proteins was analyzed by ChIP-qPCR; statistical analysis was performed in GraphPad Prism 5 and STATISTICA; and ChIP-Seq data were analyzed in Galaxy 2.5.0.0. On the list of 28 genes regulated by RBL2, we identified only four solely repressed by the RBL2-E2F4-HDAC1-BRM complex. Surprisingly, 24 of the 28 genes controlled by RBL2 were co-regulated by PARP1 in six different ways. In one mode of RBL2/PARP1 co-operation, represented by MAP2K6 and MAPK3, PARP1 was found to associate with gene promoters upon RBL2 silencing, which was previously shown to restore PARP1 expression in monocytes. The PARP1 effect on gene transcription was observed only in the presence of active EP300, which acetylated gene promoters and activated transcription.
Further analysis revealed that PARP1 binding to the MAP2K6 and MAPK3 promoters enabled recruitment of EP300 in monocytes, while in proliferating cancer cell lines, which actively transcribe PARP1, this protein maintained EP300 at the promoters of MAP2K6 and MAPK3. Genome-wide analysis revealed a similar distribution of PARP1 and EP300 around transcription start sites and the co-occupancy of some gene promoters by PARP1 and EP300 in cancer cells. Here, we describe a new RBL2/PARP1/EP300 axis which controls gene transcription regardless of cell type. In this model, cell cycle-dependent transcription of PARP1 regulates the expression of some genes repressed by RBL2 upon cell cycle limitation. Thus, RBL2 may indirectly regulate the transcription of some genes by controlling the expression of EP300-recruiting PARP1. Acknowledgement: This work was financed by Polish National Science Centre grants no. DEC-2013/11/D/NZ2/00033 and DEC-2015/19/N/NZ2/01735. L.V. is funded by the National Research, Development and Innovation Office grants GINOP-2.3.2-15-2016-00020 TUMORDNS, GINOP-2.3.2-15-2016-00048-STAYALIVE and OTKA K112336. A.R. is supported by Polish Ministry of Science and Higher Education grant 776/STYP/11/2016.
Keywords: retinoblastoma transcriptional co-repressor like 2 (RBL2), poly(ADP-ribose) polymerase 1 (PARP1), E1A binding protein p300 (EP300), monocytes
Procedia PDF Downloads 210
26 Improvement of Autism Diagnostic Observation Schedule Scores after Comprehensive Intensive Early Interventions in a Clinical Setting
Authors: Nils Haglund, Svenolof Dahlgren, Maria Rastam, Peik Gustafsson, Karin Kalien
Abstract:
In Sweden, as in most developed countries, there is a substantial increase in children diagnosed with autism and other conditions within the autism spectrum (ASD). The rapid increase in ASD rates stresses the importance of developing care programs that provide support and comprehensive interventions for affected families. The current observational study was conducted to evaluate an ongoing Comprehensive Intensive Early Intervention (CIEI) program for children with autism in southern Sweden. The change in autism symptoms among children participating in CIEI (intervention group, n=67) was compared with children who received traditional habilitation services only (comparison group, n=27). Children whose parents accepted the offered CIEI program constituted the intervention group, whereas children whose parents were not interested in the program constituted the comparison group. The CIEI program was individualized to each child by experienced applied behavior analysis (ABA) specialists with backgrounds as psychologists, speech pathologists or special education teachers, in cooperation with parents and preschool staff. Owing to this individualization, the intervention could vary in intensity and technique; the intensity was calculated at 15-25 hours each week at home and in preschool combined. Each child was assigned one 'trainer', who was often employed as a preschool teacher but could have another educational background. An agreement between supervisors, parents and preschool staff was reached to confirm the intensity and content of the CIEI program over an approximately two-year intervention period. Symptom change was measured as the evaluation ADOS-2 score (total and severity) minus the corresponding baseline score, divided by the time between baseline and evaluation. The difference between the study groups regarding change in ADOS-2 scores was estimated using ANCOVA.
In the current study, children in the CIEI group improved their ADOS-2 total scores between baseline and evaluation (-0.8 points per year; 95% CI: -1.2 to -0.4), whereas no such improvement was detected in the comparison group (+0.1 points per year; 95% CI: -0.7 to +0.9). The difference in change (CIEI group vs. comparison group) was statistically significant, both crude and after adjusting for possible confounders (-1.1; 95% CI: -1.9 to -0.4). Children in the CIEI group also significantly improved their ADOS calibrated severity scores, but not significantly more than the comparison group. The results of the current study indicate that the CIEI program significantly improves social and communicative skills among children with autism and that children with developmental delay can benefit to a similar degree as other children. The results support earlier studies reporting improvement of autism symptoms after early intensive interventions. Results from observational studies are difficult to interpret, but it is nevertheless of utmost importance to evaluate costly autism intervention programs, as such results may be of immediate importance to healthcare organizations when allocating already strained resources to different patient groups. Despite the obvious limitations of the current naturalistic study, the results support previous positive studies and indicate that children with autism benefit from participating in early comprehensive, intensive programs and that investments in these programs may be highly justifiable.
Keywords: autism symptoms, ADOS-scores, evaluation, intervention program
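The change-score computation used in this study (evaluation score minus baseline score, divided by the elapsed time) can be made concrete with a small sketch; the numbers below are hypothetical and purely illustrative, not taken from the study.

```python
def annual_change(baseline_score, evaluation_score, years_between):
    """Yearly ADOS-2 change rate: (evaluation - baseline) / elapsed years.
    Negative values indicate improvement (fewer autism symptoms)."""
    return (evaluation_score - baseline_score) / years_between

# Hypothetical child assessed two years apart:
rate = annual_change(baseline_score=14, evaluation_score=12, years_between=2.0)
# rate is -1.0, i.e. the ADOS-2 total improved by one point per year
```

Dividing by the elapsed time normalizes for children whose baseline-to-evaluation intervals differ, which is what allows the per-year group rates to be compared by ANCOVA.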
Procedia PDF Downloads 145
25 Effects of the In-Situ Upgrading Project in Afghanistan: A Case Study on the Formally and Informally Developed Areas in Kabul
Authors: Maisam Rafiee, Chikashi Deguchi, Akio Odake, Minoru Matsui, Takanori Sata
Abstract:
Cities in Afghanistan have urbanized rapidly; however, many parts of these cities have been developed with no detailed land use plan or infrastructure. In other words, they have been developed informally, without government leadership. The new government started the In-situ Upgrading Project in Kabul in 2002, with the financial support of international agencies, to upgrade roads, the water supply network, and the surface water drainage system on the existing street layout. This project is an appropriate emergency improvement of daily life, but not a fundamental improvement of living conditions and infrastructure, because the life expectancies of the improved facilities are as short as 10-15 years and residents cannot obtain land tenure in the unplanned areas. The Land Readjustment System (LRS) conducted in Japan has the advantage of rearranging irregularly shaped land lots and developing infrastructure effectively. This study investigates the effects of the In-situ Upgrading Project on private investment, land prices, and residents' satisfaction in Kart-e-Char, where properties are registered, and in Afshar-e-Silo Lot 1, where properties are unregistered; these areas are located 5 km and 7 km, respectively, from the CBD of Kabul. This study discusses whether LRS should be applied to the unplanned area, based on questionnaire and interview responses of experts experienced in the In-situ Upgrading Project who have knowledge of LRS. The analysis reveals that, in Kart-e-Char, considerable private investment has been made in the construction of medium-rise (five- to nine-story) buildings for commercial and residential purposes.
Land values have also increased incrementally since the project, and residents are generally satisfied with the road pavement, drainage systems, and water supply, but dissatisfied with the poor delivery of electricity and the lack of public facilities (e.g., parks and sports facilities). In Afshar-e-Silo Lot 1, basic infrastructure such as paved roads and surface water drainage has been improved by the project. After the project, a few four- and five-story residential buildings were built with very low levels of private investment, but no significant increase in land prices was evident. The residents are satisfied with the contribution ratio, the drainage system, and the small increase in land price, but there is still no drinking water supply system or tenure security; moreover, the paved roads are substandard and public facilities, such as parks, sports facilities, mosques and schools, are lacking. The questionnaire results and the interviews with the four engineers highlight the problems that remain to be solved in the unplanned areas if LRS is applied: namely, differences in land use, the types and conditions of infrastructure still to be installed by the project, and the time needed to build positive consensus among the residents, given the project's budget limitation.
Keywords: in-situ upgrading, Kabul city, land readjustment, land value, planned area, private investment, residents' satisfaction, unplanned area
Procedia PDF Downloads 205
24 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated from highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among these, the Soft Pneumatic Actuator (SPA) remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. At present, however, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in the direct 3D printing of soft materials can significantly reduce this repetitive manual work, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse the Soft Pneumatic Actuator (SPA) using both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different compressor pressures to ensure uniform bending without failure. For direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration and strain distribution of different hyperelastic materials after pressurization. The FEM analysis is carried out in Ansys Workbench with a Yeoh 2nd-order hyperelastic material model. The FEM accounts for large deformation, contact between surfaces, and the influence of gravity. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure elements are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary is applied to the baseplate, and static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated FEM results are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed air source with a pressure gauge. A performance comparison is made between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper, which has been tested for handling delicate objects.
Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
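As a concrete illustration of the material model named above, the second-order Yeoh strain energy density can be evaluated directly. This Python sketch uses invented coefficients for illustration only; the actual constants must be fitted to Ecoflex test data, e.g. within the Ansys curve-fitting tools.

```python
def yeoh2_energy(i1, c10, c20):
    """Second-order Yeoh strain energy density:
    W = C10*(I1 - 3) + C20*(I1 - 3)**2,
    where I1 is the first invariant of the left Cauchy-Green tensor."""
    return c10 * (i1 - 3.0) + c20 * (i1 - 3.0) ** 2

def uniaxial_i1(stretch):
    """First invariant for an incompressible uniaxial stretch lam:
    I1 = lam**2 + 2/lam (so I1 = 3 in the undeformed state)."""
    return stretch**2 + 2.0 / stretch

# Illustrative (made-up) coefficients in MPa:
w = yeoh2_energy(uniaxial_i1(1.5), c10=0.02, c20=0.001)
```

The stored energy is zero in the undeformed state and grows with stretch, which is the behaviour the FEM solver integrates over the pressurized chamber walls.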
Procedia PDF Downloads 90
23 Mesenchymal Stem Cells (MSC)-Derived Exosomes Could Alleviate Neuronal Damage and Neuroinflammation in Alzheimer’s Disease (AD) as Potential Therapy-Carrier Dual Roles
Authors: Huan Peng, Chenye Zeng, Zhao Wang
Abstract:
Alzheimer’s disease (AD) is an age-related neurodegenerative disease that is a leading cause of dementia syndromes and has become a huge burden on society and families. The main pathological features of AD involve excessive deposition of β-amyloid (Aβ) and Tau proteins in the brain, resulting in loss of neurons, expansion of neuroinflammation, and cognitive dysfunction in patients. Researchers have found effective drugs to clear the brain of erroneously accumulating proteins or to slow the loss of neurons, but their direct administration faces key bottlenecks such as single-drug limitation, rapid blood clearance, the impenetrable blood-brain barrier (BBB), and poor ability to target tissues and cells. We are therefore committed to seeking a suitable and efficient delivery system. Inspired by the possibility that exosomes may be involved in the secretion and transport of many signaling molecules and proteins in the brain, exosomes have attracted extensive attention as natural nanoscale drug carriers. We selected exosomes derived from bone marrow mesenchymal stem cells (MSC-EXO), which have low immunogenicity, and exosomes derived from hippocampal neurons (HT22-EXO), which may have excellent homing ability, to overcome the deficiencies of the oral and injectable routes and to bypass the BBB through nasal administration, and we evaluated their delivery ability and effect on AD. First, MSC-EXO and HT22 cells were isolated and cultured, and MSCs were identified by microimaging and flow cytometry. MSC-EXO and HT22-EXO were then obtained by gradient centrifugation and qEV SEC separation columns, and physicochemical characterization was performed by transmission electron microscopy, western blotting, nanoparticle tracking analysis and dynamic light scattering. Next, exosomes labeled with a lipophilic fluorescent dye were administered to WT mice and APP/PS1 mice to obtain fluorescence images of various organs at different times.
Finally, APP/PS1 mice were administered the two exosome preparations intranasally 20 times over 40 days, 20 μL each time. Behavioral analysis and pathological section analysis of the hippocampus were performed after the experiment. The results showed that MSC-EXO and HT22-EXO were successfully isolated and characterized and had good biocompatibility. MSC-EXO showed excellent brain enrichment in APP/PS1 mice after intranasal administration and could ameliorate neuronal damage and reduce inflammation levels in the hippocampus of APP/PS1 mice; this improvement was significantly better than with HT22-EXO. Intranasal administration of the two exosomes did not cause depressive or anxiety-like phenotypes in APP/PS1 mice; however, it neither significantly improved the short-term or spatial learning and memory of APP/PS1 mice nor had a significant effect on the content of Aβ plaques in the hippocampus. This suggests that MSC-EXO could exploit their own advantages in combination with other drugs to clear Aβ plaques. The possibility of realizing highly effective, non-invasive synergistic treatment for AD provides new strategies and ideas for clinical research.
Keywords: Alzheimer’s disease, exosomes derived from mesenchymal stem cell, intranasal administration, therapy-carrier dual roles
Procedia PDF Downloads 63
22 Force Sensor for Robotic Graspers in Minimally Invasive Surgery
Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy
Abstract:
Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world during the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of the surgeon, providing 3D vision, motion scaling and hand-eye coordination, decreasing tremor, and reducing x-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site and perceive tactile information, because the robots are controlled remotely. The literature survey identified the lack of force feedback as the riskiest limitation of the existing technology: without perception of the tool-tissue contact force, the surgeon might apply an excessive force, causing tissue laceration, or an insufficient force, causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time, in situ. The design of a tactile sensor is subject to a set of requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic force. In this study, a planar optical fiber-based sensor to be mounted on the surgical grasper was proposed. It was developed based on the light intensity modulation principle. The deflectable part of the sensor was a beam modeled as a cantilever Euler-Bernoulli beam on rigid substrates. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. An external force on the sensor caused deflection of the beam and the optical fiber simultaneously; the micro-bending of the optical fiber consequently resulted in light power loss. The sensor was simulated and studied using finite element methods. A laser beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber, and the output power was measured using a photodetector.
The voltage from the photodetector was calibrated against the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle, and the sensor demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable; it is postulated that this phenomenon is attributable to the critical angle of the optical fiber for total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible, and easy-to-use solution that can be miniaturized and integrated at the tip of robotic graspers. The geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. The design is intrinsically scalable and meets all the design requirements; therefore, it has significant potential for industrialization and mass production.
Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor
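The voltage-to-force calibration and linearity check described above amount to a least-squares line fit with an R² statistic. The following sketch illustrates the computation on hypothetical data points (the force/voltage pairs are invented, not the study's measurements).

```python
def linear_fit(x, y):
    """Least-squares line y = a*x + b and coefficient of determination R²,
    as used to calibrate photodetector voltage against applied force."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                 # slope (V/N)
    b = my - a * mx               # offset (V)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Hypothetical calibration points: force (N) vs. photodetector voltage (V).
forces = [0.0, 0.5, 1.0, 1.5, 2.0]
volts = [0.02, 0.26, 0.51, 0.74, 1.00]
slope, offset, r2 = linear_fit(forces, volts)
```

For a sensor with acceptable linearity, r2 lands close to 1, matching the R² > 0.9 criterion quoted in the abstract.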
Procedia PDF Downloads 231
21 Evaluation of Forensic Pathology Practice Outside Germany – Experiences From 20 Years of Second Look Autopsies in Cooperation with the Institute of Legal Medicine Munich
Authors: Michael Josef Schwerer, Oliver Peschel
Abstract:
Background: The sense and purpose of forensic postmortem examinations are undoubtedly the same in institutes of legal medicine all over the world. The cause and manner of death must be determined, persons responsible for unnatural deaths must be brought to justice, and accidents demand changes in the respective scenarios to avoid future mishaps. The latter particularly concerns aircraft accidents, not only with regard to consequences under criminal or civil law but also in pursuance of the International Civil Aviation Organization's regulations, which demand that lessons from mishap investigations improve flight safety. Irrespective of the distinct circumstances of a given casualty or the respective questions in subsequent death investigations, the forensic autopsy is the basis for all further casework, the clue to otherwise hidden solutions, and the crucial limitation on final success when not all possible findings have been properly collected. This also implies that the targeted work of police forces and expert witnesses strongly depends on the quality of forensic pathology practice. Deadly events in foreign countries, which lead to investigations not only abroad but also in Germany, can be challenging in this context. Frequently, second-look autopsies after the repatriation of the deceased to Germany are requested by the legal authorities to ensure proper and profound documentation of all relevant findings. Aims and Methods: To validate forensic postmortem practice abroad, a retrospective study was carried out using the findings of the corresponding second-look autopsies at the Institute of Legal Medicine Munich over the last 20 years. New findings unreported in the previous autopsy were recorded and judged for their relevance to solving the respective case. Further, the condition of the corpse at the time of the second autopsy was rated in order to discuss artifacts mimicking evidence and the possibility of findings lost to, e.g., decomposition.
Recommendations for the future handling of death cases abroad and for efficient autopsy practice were pursued. Results and Discussion: Our re-evaluation confirmed a high quality of autopsy practice abroad in the vast majority of cases. However, in some casework, incomplete documentation of pathology findings was revealed, along with either insufficient or misconducted dissection of organs. Further, some of the bodies were missing parts of some organs, most probably as a result of sampling for histology studies during the first postmortem. For the aeromedical evaluation of a decedent's health status prior to an aviation mishap, lost or obscured findings, particularly in the heart, lungs, and brain, impeded expert testimony. Moreover, incomplete fixation of the body or body parts for repatriation was seen in several cases; this particularly involved previously dissected organs deposited back into the body cavities at the end of the first autopsy. Conclusions and Recommendations: Detailed preparation in the first forensic autopsy avoids the necessity of a second-look postmortem in the majority of cases. To limit decomposition changes during repatriation from abroad, special care must be taken to include pre-dissected organs in the chemical fixation process, particularly when they have been separated from the blood vessels and simply deposited back into the body cavities.
Keywords: autopsy practice, second-look autopsy, retrospective study, quality standards, decomposition changes, repatriation
Procedia PDF Downloads 51
20 Reduction and Smelting of Magnetic Fraction Obtained by Magnetic-Gravimetric-Separation (MGS) of Electric Arc Furnace Dust
Authors: Sara Scolari, Davide Mombelli, Gianluca Dall'Osto, Jasna Kastivnik, Gašper Tavčar, Carlo Mapelli
Abstract:
The EIT Raw Materials RIS-DustRec-II project aims to transform Electric Arc Furnace Dust (EAFD) into a valuable resource by overcoming the challenges associated with traditional recycling approaches. EAFD, a zinc-rich industrial by-product typically recycled by the Waelz process, contains complex oxides such as franklinite (ZnFe₂O₄), which hinder the efficient extraction of zinc and also divert other valuable elements (Fe, Ni, Cr, Cu, …) into the slag. The project aims to develop a multistage, multidisciplinary approach to separate EAFD into two streams: a magnetic and a non-magnetic one. In this paper, the production of self-reducing briquettes from the magnetic stream of EAFD with a reducing agent was investigated, with the aim of driving carbothermic reduction and recovering iron as a usable alloy. Research focused on optimizing the magnetic and subsequent gravimetric separation (MGS) processes, followed by high-temperature smelting to evaluate reduction efficiency and phase separation. Two selected raw EAFD samples were characterized by X-ray diffraction and scanning electron microscopy and subjected to magnetic-gravimetric separation to isolate zinc- and iron-rich fractions. The iron-enriched concentrates were then agglomerated into self-reducing briquettes by mixing them with either biochar (olive pomace pyrolyzed at 350 and 750 °C and wood chips pyrolyzed at 750 °C) or cupola furnace dust as reducing agents, combined with gelatinized corn starch as a binder. Cylindrical briquettes were produced and cured for 14 days to ensure structural integrity during the subsequent thermal treatments. Smelting tests were carried out at 1400 °C in an inert argon atmosphere to assess metallization efficiency and the separation between metal and slag phases. A carbon/oxides mass ratio of 0.262 (C/(ZnO+Fe₂O₃)) was used in these tests to maintain continuity with previous studies and to standardize the reduction conditions.
The magnetic and gravimetric separations effectively isolated zinc- and iron-enriched fractions, particularly for one of the two EAFD samples, where the concentration of Zn in the concentrate fraction was reduced by 8 wt.% while Fe reached 45 wt.%. The reduction tests conducted at 1400 °C showed that the chosen carbon/oxides ratio was sufficient for the smelting of the reducible oxides within the briquettes. However, an important limitation became apparent: the amount of carbon, exceeding the stoichiometric value, proved to be excessive for the effective coalescence of metal droplets, preventing clear metal-slag separation. To address this, further smelting tests were carried out in an air atmosphere rather than inert conditions to burn off excess carbon. This paper demonstrates the potential of controlled carbothermic reduction for EAFD recycling. By carefully optimizing the C/(ZnO+Fe₂O₃) ratio, the process can maximize metal recovery while achieving better separation of the metal and slag phases. This approach offers a promising alternative to traditional EAFD recycling methods, with further studies recommended to refine the parameters for industrial application.
Keywords: biochars, electrical arc furnace dust, metallization, smelting
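The fixed carbon/oxides mass ratio quoted above directly sets the reductant addition per briquette. A minimal sketch of that arithmetic (the concentrate composition in the example is hypothetical, for illustration only):

```python
# Sketch: carbon addition for self-reducing briquettes at the fixed
# C/(ZnO + Fe2O3) mass ratio of 0.262 used in the smelting tests.
# The concentrate composition below is hypothetical, not from the study.

def carbon_required(m_zno_g: float, m_fe2o3_g: float, ratio: float = 0.262) -> float:
    """Mass of fixed carbon (g) needed for a given mass of reducible oxides."""
    return ratio * (m_zno_g + m_fe2o3_g)

# Example: 100 g of iron-enriched concentrate assumed to contain
# 8 g ZnO and 64 g Fe2O3 (remainder gangue and binder).
m_c = carbon_required(8.0, 64.0)
print(f"fixed carbon to add: {m_c:.1f} g")  # 0.262 * 72 = 18.9 g
```

As the results note, a ratio above the stoichiometric requirement leaves residual carbon that hinders droplet coalescence, so this value is an upper bound to be tuned downward rather than a target.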
Procedia PDF Downloads 14
19 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools
Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri
Abstract:
The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data necessitates computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Benchmarking bioinformatics tools for single-cell experiments can be costly, so datasets used for benchmarking are typically sourced from publicly available experiments, which often lack a comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate how well bioinformatics tools depict the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer tumor cell lines that display resistant clones upon treatment of EGFR-mutated tumors and are characterized by driver genes, namely ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10X Genomics platform with CellPlex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the BioLegend TotalSeq™-B Human Universal Cocktail (CITE-seq). 
This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from the aforementioned experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as an output. The tool provides the cell line derivation for each cell and cell annotations for the pseudo-microenvironment based on CITE-seq data annotated by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in Matrigel. These tissues were analyzed using the 10X Genomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.
Keywords: single cell omics, benchmark, spatial transcriptomics, CITE-seq
Procedia PDF Downloads 119
18 Cell-free Bioconversion of n-Octane to n-Octanol via a Heterogeneous and Bio-Catalytic Approach
Authors: Shanna Swart, Caryn Fenner, Athanasios Kotsiopoulos, Susan Harrison
Abstract:
Linear alkanes are produced as by-products from the increasing use of gas-to-liquid fuel technologies for synthetic fuel production and offer great potential for value addition. Their current use as low-value fuels and solvents does not maximize this potential. Therefore, attention has been drawn towards direct activation of these aliphatic alkanes to more useful products such as alcohols, aldehydes, carboxylic acids and derivatives. Cytochrome P450 monooxygenases (P450s) can be used for activation of these aliphatic alkanes using whole-cell or cell-free systems. Some limitations of whole-cell systems include reduced mass transfer, limited stability and possible side reactions. Since P450 systems are little studied as cell-free systems, they form the focus of this study. Challenges of a cell-free system include co-factor regeneration, substrate availability and enzyme stability. Enzyme immobilization offers a positive outlook on this dilemma, as it may enhance stability of the enzyme. In the present study, two different P450s (CYP153A6 and CYP102A1) as well as the relevant accessory enzymes required for electron transfer (ferredoxin and ferredoxin reductase) and co-factor regeneration (glucose dehydrogenase) have been expressed in E. coli and purified by metal affinity chromatography. Glucose dehydrogenase (GDH) was used as a model enzyme to assess the potential of various enzyme immobilization strategies, including surface attachment on MagReSyn® microspheres with various functionalities and on electrospun nanofibers, self-assembly based methods forming cross-linked enzymes (CLEs), cross-linked enzyme aggregates (CLEAs) and spherezymes, as well as entrapment in a sol-gel. The nanofibers were synthesized by electrospinning, which required the building of an electrospinning machine. The nanofiber morphology has been analyzed by SEM and binding will be further verified by FT-IR. 
Covalent attachment based methods showed limitations: only ferredoxin reductase and GDH retained activity after immobilization, which was largely attributed to insufficient electron transfer and inactivation caused by the crosslinkers (60% and 90% relative activity loss for the free enzyme when using 0.5% glutaraldehyde and glutaraldehyde/ethylenediamine (1:1 v/v), respectively). So far, initial experiments with GDH have shown the most potential when immobilized via its His-tag onto the surface of MagReSyn® microspheres functionalized with Ni-NTA. It was found that crude GDH could be simultaneously purified and immobilized with sufficient activity retention. Immobilized pure and crude GDH could be recycled 9 and 10 times, respectively, with approximately 10% activity remaining. The immobilized GDH was also more stable than the free enzyme after storage for 14 days at 4 °C. This immobilization strategy will also be applied to the P450s and optimized with regards to enzyme loading and immobilization time, as well as characterized and compared with the free enzymes. It is anticipated that the proposed immobilization set-up will offer enhanced enzyme stability (as well as reusability and easy recovery), minimal mass transfer limitation, continuous co-factor regeneration and minimal enzyme leaching. All of these provide a positive outlook on this robust multi-enzyme system for efficient activation of linear alkanes as well as the potential for immobilization of multiple enzymes, including multimeric enzymes, for different bio-catalytic applications beyond alkane activation.
Keywords: alkane activation, cytochrome P450 monooxygenase, enzyme catalysis, enzyme immobilization
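The reported recyclability figure (~10% activity remaining after 9 recycles of pure GDH) implies a per-cycle activity retention, if one assumes, hypothetically, a simple geometric loss per reuse cycle. A minimal sketch of that back-calculation:

```python
# Sketch: implied per-cycle activity retention for immobilized GDH,
# assuming (hypothetically) a constant fractional activity loss per reuse.
# The abstract reports ~10% activity remaining after 9 recycles (pure GDH).

def per_cycle_retention(final_fraction: float, n_cycles: int) -> float:
    """Per-cycle retention r such that r**n_cycles == final_fraction."""
    return final_fraction ** (1.0 / n_cycles)

r = per_cycle_retention(0.10, 9)
print(f"retention per cycle: {r:.3f}")  # ~0.774, i.e. ~23% activity lost per reuse
```

Real inactivation kinetics need not be geometric (leaching and denaturation can dominate different cycles), so this is only a convenient summary statistic.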
Procedia PDF Downloads 227
17 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the Centre-Val de Loire region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. 
However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows quantifying and characterizing the typical wetland types of each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare), which generally represent spatially complex features. Indeed, the use of very high spatial resolution images (>3 m) is necessary to map both small and large areas. Moreover, recent advances in artificial intelligence (AI) and deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is therefore based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes) with a kappa index higher than 85%.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
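The k-NN step of the object-oriented approach reduces each image segment to a feature vector (e.g. mean spectral bands and texture measures) and assigns the majority label of its k nearest training segments. A minimal sketch with synthetic features and labels (not the study's data):

```python
# Sketch of k-NN classification of image segments for wetland mapping.
# Feature vectors and class labels below are synthetic, for illustration only.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); returns majority label of k nearest."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2D features, e.g. (mean NIR reflectance, texture contrast).
train = [([0.1, 0.8], "pond"), ([0.15, 0.75], "pond"), ([0.2, 0.7], "pond"),
         ([0.7, 0.2], "moor"), ([0.75, 0.25], "moor")]
print(knn_classify(train, [0.12, 0.78]))  # "pond"
```

In practice the per-class accuracy is summarized with a confusion matrix and the kappa index reported in the abstract.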
Procedia PDF Downloads 72
16 Review of Urbanization Pattern in Kabul City
Authors: Muhammad Hanif Amiri, Edris Sadeqy, Ahmad Freed Osman
Abstract:
The International Conference on Architectural Engineering and Skyscraper (ICAES 2016), held on January 18-19, 2016, aims to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration. Therefore, we are very keen to participate and share our issues in order to get valuable feedback from the conference participants. Urbanization is a controversial issue all around the world. Substandard and unplanned urbanization has many implications for the social, cultural and economic situation of the population. Unplanned and illegal construction has become a critical issue in Afghanistan, particularly in Kabul city. In addition, lack of municipal bylaws, poor municipal governance, lack of development policies and strategies, budget limitations, low professional capacity of the private sector involved in development, and poor coordination among stakeholders are other factors which have made the problem more complicated. The main purpose of this research paper is to review the urbanization pattern of Kabul city, find improvement solutions, and evaluate the increase in population density, which has caused vast illegal and unplanned development that is finally converting Kabul city into a slum area as a whole. The Kabul city Master Plan was reviewed in 1978 and revised for a planned population of 2 million. In 2001, the interim administration took office and the city received an influx of returnees from neighboring countries and other provinces of Afghanistan, mostly in search of employment opportunities, security and a better quality of life; Kabul therefore faced extraordinary population growth. According to the Central Statistics Organization of Afghanistan, the population of Kabul was estimated at approx. 5 million (2015). A new Master Plan was prepared in 2009, but the existing challenges have not yet been resolved. 
On the other hand, 70% of the Kabul population is living in unplanned (slum) areas and faces a shortage of drinking water, the absence of a sewerage and drainage network, the absence of a proper management system for solid waste collection, a lack of public transportation and traffic management, environmental degradation and a shortage of social infrastructure. Although there are many problems in Kabul city, the development of 22 townships is still in progress, which has attracted a great deal of population. The research is completed with a detailed analysis of four main issues, namely elimination of duplicated administrations, development of regions, rehabilitation and improvement of infrastructure, and prevention of new township establishment in the Kabul Central Core, in order to mitigate the problems and constraints that are foundational to finding the point of departure for an objective-based future development of Kabul city. The conclusion has been defined to reflect stage-wise development in light of the prepared policy and strategies, development of a procedure for the improvement of infrastructure, conducting a preliminary EIA, defining the scope of stakeholders' contribution, and preparation of a project list for initial development. In conclusion, this paper will help the transformation of Kabul city.
Keywords: development of regions, illegal construction, population density, urbanization pattern
Procedia PDF Downloads 320
15 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank
Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh
Abstract:
Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with Late Onset Alzheimer's Disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised candidate gene approaches, genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes and with poor controlling for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, bigger than in previous studies. Genetic interactions between loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analyses (case: n=30,344, control: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes. 
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. Interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P<0.01) and reaction time (standardised beta = 2.04, P<0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, smoke or be socioeconomically deprived, and have fewer self-reported health conditions, when compared to the general population. Lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI
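A polygenic risk score of the kind described is conventionally computed as the sum of GWAS effect sizes weighted by each participant's risk-allele dosage over the selected SNPs (21 in this study). A minimal sketch; the SNP identifiers, weights and dosages below are hypothetical, not taken from the Kunkle et al. summary statistics:

```python
# Sketch: a polygenic risk score as an effect-weighted sum of allele dosages.
# SNP names, beta weights and dosages are hypothetical, for illustration only.

def polygenic_risk_score(betas: dict, dosages: dict) -> float:
    """betas: SNP -> per-allele effect weight; dosages: SNP -> 0/1/2 risk alleles."""
    return sum(betas[snp] * dosages.get(snp, 0) for snp in betas)

betas = {"rs0001": 0.15, "rs0002": -0.08, "rs0003": 0.22}
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_risk_score(betas, dosages), 2))  # 0.22
```

In the planned analyses, this per-participant score then enters the regression models alongside the APOE e4 dose interaction term and the listed covariates.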
Procedia PDF Downloads 114
14 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip, delivers an expression for the singular stress field. By applying the problem specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. 
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretization, since they only rely on the ratio of mode-one to mode-two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
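The observation that propagation schemes depend only on the ratio of mode-one to mode-two gSIFs can be illustrated with the maximum tangential stress (MTS) kink-angle criterion, a standard crack-deflection criterion (used here for illustration; the abstract does not state which criterion the authors employ):

```python
# Sketch: crack kink angle from the MTS criterion, which depends only on
# the ratio KI/KII -- so a common scaling error in both gSIFs cancels out.
import math

def kink_angle(k1: float, k2: float) -> float:
    """MTS kink angle in radians; k1 >= 0 assumed."""
    if abs(k2) < 1e-12:
        return 0.0  # pure mode I: crack grows straight ahead
    r = k1 / k2
    # theta_c = 2*atan( (KI/KII - sign(KII)*sqrt((KI/KII)^2 + 8)) / 4 )
    return 2.0 * math.atan((r - math.copysign(math.sqrt(r * r + 8.0), k2)) / 4.0)

print(round(math.degrees(kink_angle(1.0, 0.0)), 1))  # 0.0 (pure mode I)
print(round(math.degrees(kink_angle(0.0, 1.0)), 1))  # -70.5 (pure mode II)
# Scaling both gSIFs by the same factor leaves the angle unchanged:
print(math.isclose(kink_angle(3.0, 1.0), kink_angle(6.0, 2.0)))  # True
```

This is precisely why coarse meshes can still drive propagation correctly even when the absolute gSIF values carry large errors, which the proposed post-processing then repairs.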
Procedia PDF Downloads 146
13 Encapsulated Bioflavonoids: Nanotechnology Driven Food Waste Utilization
Authors: Niharika Kaushal, Minni Singh
Abstract:
Citrus fruits fall into the category of commercially grown fruits that constitute an excellent repository of phytochemicals with health-promoting properties. Fruits belonging to the citrus family, when processed by industries, produce tons of agricultural by-products in the form of peels, pulp, and seeds, which normally have no further usage and are commonly discarded. In spite of this, such residues are of paramount importance due to their richness in valuable compounds; therefore, agro-waste is considered a valuable bioresource for various purposes in the food sector. A range of biological properties, including anti-oxidative, anti-cancerous, anti-inflammatory, anti-allergenic, and anti-aging activity, has been reported for these bioactive compounds. Taking advantage of these inexpensive residual sources requires special attention to extracting the bioactive compounds. Mandarin (Citrus nobilis × Citrus deliciosa) is a potential source of bioflavonoids with antioxidant properties, and it is increasingly regarded as a functional food. Despite these benefits, flavonoids suffer from the barrier of pre-systemic metabolism in gastric fluid, which impedes their effectiveness; colloidal delivery systems can completely overcome this barrier. This study involved the extraction and identification of key flavonoids from mandarin biomass. Using a green chemistry approach, supercritical fluid extraction at 330 bar and 40 °C, with 10% ethanol as co-solvent, was employed for extraction, and the identification of flavonoids was performed by mass spectrometry. To address the limitation noted above, the obtained extract was encapsulated in a poly(lactic-co-glycolic acid) (PLGA) matrix using a solvent evaporation method. Additionally, the antioxidant potential was evaluated by the 2,2-diphenylpicrylhydrazyl (DPPH) assay. A release pattern of flavonoids was observed over time using simulated gastrointestinal fluids. 
From the results, it was observed that the total flavonoids extracted from the mandarin biomass were estimated at 47.3 ± 1.06 mg/ml rutin equivalents. Notably, the polymethoxyflavones (PMFs) tangeretin and nobiletin were identified in the extract, followed by hesperetin and naringin. The designed flavonoid-PLGA nanoparticles exhibited a particle size of 200-250 nm. In addition, the bioengineered nanoparticles had a high entrapment efficiency of nearly 80.0% and maintained stability for more than a year. Flavonoid nanoparticles showed excellent antioxidant activity, with an IC50 of 0.55 μg/ml. Morphological studies revealed the smooth and spherical shape of the nanoparticles, as visualized by field emission scanning electron microscopy (FE-SEM). Simulated gastrointestinal studies of the free extract and the nanoencapsulated form revealed degradation of nearly half of the flavonoids under harsh acidic conditions in the case of the free extract. After encapsulation, flavonoids exhibited sustained release properties, suggesting that polymeric encapsulates are efficient carriers of flavonoids. Thus, such technology-driven and biomass-derived products form the basis for their use in the development of functional foods with improved therapeutic potential and antioxidant properties. As a result, citrus processing waste can be considered a new, high-value resource whose utilization should be promoted.
Keywords: citrus, agrowaste, flavonoids, nanoparticles
Procedia PDF Downloads 130
12 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to a conventional wound rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similar low failure rates of a typically 30% rated back-to-back power electronics converter in 2:1 speed ranges but with the following important reliability and cost advantages over DFIG: the maintenance-free operation afforded by its brushless structure, 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact, and more efficient two-stage gearbox instead of a vulnerable three-stage one), and superior grid integration properties including simpler protection for the low voltage ride through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been a predominant choice for BDFRG (and DFIG) wind turbines. Eliminating a shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research of sensorless methods and developments of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to accurately identify by off-line testing. A model reference adaptive system (MRAS) based sensor-less vector control scheme to be presented will overcome this shortcoming. 
The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio, upon which the underlying MRAS observer for rotor angular velocity and position relies. Such an observer configuration will be more practical to implement and is clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
Procedia PDF Downloads 62
11 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score is related to a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important, overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as it is likely there is an interplay among vital signs. The co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is the dimensionality issue, as it becomes difficult to use variable selection. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed. 
The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate the discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high dimensional regions of risk interrelating two or three vital signs in so-called higher dimensional ROCs.
Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence
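One common way such a dependence parameter could be estimated (a sketch only; the abstract does not specify the copula family or estimator) is to compute Kendall's tau between two vital signs, which is robust to the skewness noted above, and map it to a Gaussian-copula correlation via rho = sin(pi*tau/2):

```python
# Sketch: rank-based estimation of a dependence parameter between two vital
# signs, mapped to a Gaussian-copula correlation. Paired readings are synthetic.
import math
from itertools import combinations

def kendall_tau(xs, ys):
    """O(n^2) Kendall's tau-a over paired samples (no tie correction)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        prod = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    return (concordant - discordant) / (len(xs) * (len(xs) - 1) / 2)

def gaussian_copula_rho(tau):
    """Invert tau = (2/pi)*arcsin(rho) for the Gaussian copula."""
    return math.sin(math.pi * tau / 2.0)

systolic = [110, 118, 124, 131, 142, 155]   # synthetic paired readings
diastolic = [70, 74, 79, 82, 90, 97]
tau = kendall_tau(systolic, diastolic)
print(tau, round(gaussian_copula_rho(tau), 3))  # 1.0 1.0
```

Because the rank correlation is margin-free, the same estimate applies whether the vital signs are normally distributed or skewed, which is exactly the property the proposed approach exploits.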
Procedia PDF Downloads 153
10 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing the larger crystals required by X-ray crystallography methods is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Collecting data with a conventional scintillator-based, fiber-coupled camera therefore brings additional challenges: the noise inherent in the electron-to-photon conversion in the scintillator and in the transfer of light via the fibers to the sensor results in a poor signal-to-noise ratio and requires relatively high, often specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem.
The K3 is an electron counting detector optimized for low-dose applications (such as structural biology cryo-EM), while Stela is an electron counting detector optimized for diffraction applications with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally, this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow these experiments to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how Latitude D facilitates collecting such data in an integrated and fully automated user interface.
Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules
Procedia PDF Downloads 107
9 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that can offset the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model through meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a mixed quantitative-qualitative study. For the qualitative dimension, focus groups with an inductive approach were used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was piloted on patients admitted for surgery to a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data were analysed through content analysis; quantitative analysis was done through checklists and edited RPN tables. Results: After verifying the final model in eight steps, the patient admission process for surgery was mapped by focus discussion group (FDG) members in five main phases. Then, using the FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables.
The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the threshold for significant risk (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Comparing each RPN criterion of the misidentification events showed that the RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action; the most important causes were a lack of planning of the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the health care industry requires risk managers to have a multifaceted vision. Applying only a retrospective or only a prospective tool for risk management does not work; each organization must provide conditions for the potential application of both kinds of methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
Keywords: failure mode and effects analysis, risk management, root cause analysis, model
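The screening principle described above (flagging failure modes whose risk priority number exceeds 250) can be sketched with the common three-factor RPN product. The study's tables use multiple criteria per dimension, so the single scores and failure-mode names below are a hypothetical simplification, not the study's actual data:

```python
from dataclasses import dataclass

RPN_THRESHOLD = 250  # failure modes above this are treated as significant risks

@dataclass
class FailureMode:
    name: str
    severity: int        # 1 (negligible) .. 10 (catastrophic)
    probability: int     # 1 (rare) .. 10 (frequent)
    preventability: int  # 1 (easily prevented) .. 10 (hard to prevent)

    @property
    def rpn(self) -> int:
        # Common FMEA simplification: one score per dimension, multiplied
        return self.severity * self.probability * self.preventability

def significant_modes(modes):
    """Return failure modes above the RPN threshold, highest risk first."""
    flagged = [m for m in modes if m.rpn > RPN_THRESHOLD]
    return sorted(flagged, key=lambda m: m.rpn, reverse=True)

# Hypothetical failure modes (illustrative scores only):
modes = [
    FailureMode("patient misidentification", 9, 6, 7),    # RPN 378
    FailureMode("wrong-side surgery", 10, 3, 9),          # RPN 270
    FailureMode("delayed consent form", 4, 5, 3),         # RPN 60
]
for m in significant_modes(modes):
    print(m.name, m.rpn)
```

In the study's workflow, the modes flagged this way are the ones carried forward into root cause analysis and improvement-action planning.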
Procedia PDF Downloads 250
8 Ecotoxicological Test-Battery for Efficiency Assessment of TiO2 Assisted Photodegradation of Emerging Micropollutants
Authors: Ildiko Fekete-Kertesz, Jade Chaker, Sylvain Berthelot, Viktoria Feigl, Monika Molnar, Lidia Favier
Abstract:
There has been growing concern about emerging micropollutants in recent years because of the possible environmental and health risks posed by these substances, which are released into the environment as a consequence of anthropogenic activities. Among them, pharmaceuticals are currently not considered under water quality regulations; however, concern about their potential effects on the environment has grown in recent years. Since these compounds can be detected in natural water matrices, it can be concluded that the currently applied water treatment processes are not efficient enough for their effective elimination. To date, advanced oxidation processes (AOPs) are considered highly competitive water treatment technologies for the removal of those organic micropollutants not treatable by conventional techniques due to their high chemical stability and/or low biodegradability. AOPs such as (photo)chemical oxidation and heterogeneous photocatalysis have proven their potential in degrading harmful organic compounds in aqueous matrices. However, some of these technologies generate reaction by-products that can be even more toxic to aquatic organisms than the parent compounds; thus, target compound removal does not necessarily result in the removal of toxicity. Therefore, to evaluate process efficiency, determining the toxicity and ecotoxicity of the reaction intermediates is crucial to estimate the environmental risk of such techniques. In this context, the present study investigates the effectiveness of TiO2-assisted photodegradation for the removal of emerging water contaminants. Two drugs, losartan (used in high blood pressure medication) and levetiracetam (used to treat epilepsy), were considered in this work. The photocatalytic reactions were carried out with a commercial catalyst usually employed in photocatalysis.
Moreover, the toxicity of the by-products generated during the process was assessed with various ecotoxicological methods applying aquatic test organisms from different trophic levels. A series of experiments was performed to evaluate the toxicity of untreated and treated solutions applying the Aliivibrio fischeri bioluminescence inhibition test, the Tetrahymena pyriformis proliferation inhibition test, the Daphnia magna lethality and immobilization tests, and the Lemna minor growth inhibition test. The applied ecotoxicological methodology sensitively indicated the toxic effects of the treated and untreated water samples; hence, the applied test battery is suitable for the ecotoxicological characterization of TiO2-based photocatalytic water treatment technologies and for indicating the formation of toxic by-products from the parent chemical compounds. The obtained results clearly showed that TiO2-assisted photodegradation was more efficient in eliminating losartan than levetiracetam. It was also observed that the treated levetiracetam solutions had a more severe effect on the applied test organisms. A possible explanation would be the production of levetiracetam by-products that are more toxic than the parent compound. The increased toxicity and the risk of formation of toxic metabolites represent one possible limitation to the implementation of photocatalytic treatment using TiO2 for the removal of losartan and levetiracetam. Our results proved that the battery of ecotoxicity tests used in this work can be a promising investigation tool for the environmental risk assessment of photocatalytic processes.
Keywords: aquatic micropollutants, ecotoxicology, nano titanium dioxide, photocatalysis, water treatment
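For context, the endpoint of each of these bioassays is typically reported as percent inhibition of the measured response relative to an untreated control. A minimal sketch, with purely illustrative readings rather than the study's measurements:

```python
def inhibition_percent(control_mean: float, treated_mean: float) -> float:
    """Percent inhibition of a measured endpoint (e.g. bioluminescence,
    cell proliferation, frond growth) relative to the untreated control."""
    return (control_mean - treated_mean) * 100.0 / control_mean

# Hypothetical endpoint readings (illustrative numbers only):
control = 1000.0  # e.g. A. fischeri luminescence units, untreated sample
treated = 640.0   # same endpoint after exposure to the treated solution
print(inhibition_percent(control, treated))  # → 36.0
```

Comparing this value between the untreated drug solution and the photocatalytically treated one is what reveals whether toxic by-products have formed.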
Procedia PDF Downloads 191
7 Renewable Energy Micro-Grid Control Using Microcontroller in LabVIEW
Authors: Meena Agrawal, Chaitanya P. Agrawal
Abstract:
Power systems are transforming and becoming smarter with innovations in technologies, enabling them to address simultaneously sustainable energy needs, rising environmental concerns, economic benefits, and quality requirements. The advantages provided by the interconnection of renewable energy resources are becoming more viable and dependable with smart controlling technologies. The main limitation of most renewable resources, their diversity and intermittency, which causes problems in power quality, grid stability, reliability, and security, is being addressed by these efforts. The need for optimal energy management by intelligent micro-grids at the distribution end of the power system has been recognized as a way to accommodate sustainable renewable Distributed Energy Resources on a large scale across the power grid. All over the world, Smart Grids are now emerging as foremost infrastructure upgrade programs. The hardware setup includes the NI cRIO 9022 Compact Reconfigurable Input Output microcontroller board connected to the PC on a LAN router with three hardware modules. The Real-Time Embedded Controller is a reconfigurable controller device consisting of an embedded real-time processor for communication and processing, a reconfigurable chassis housing the user-programmable FPGA, eight hot-swappable I/O modules, and graphical LabVIEW system design software. It has been employed for signal analysis, control, acquisition, and logging of the renewable sources with LabVIEW Real-Time applications. The employed cRIO chassis controls the timing for the modules and handles communication with the PC over USB, Ethernet, or 802.11 Wi-Fi buses. It combines modular I/O, real-time processing, and programmable NI LabVIEW software. In the presented setup, five channels of the Analog Input Module NI 9205 have been used for input analog voltage signals from the renewable energy sources, and four channels of the NI 9227 have been used for input analog current signals of the renewable sources.
For switching actions based on the programming logic developed in software, a module with four channels of electrically isolated, single-pole single-throw electromechanical relays, each with an LED indicating the state of its channel, has been used for isolating the renewable sources on fault occurrence, as decided by the logic in the program. The ENET 9163 Ethernet carrier module for Ethernet-based data acquisition, connected to the LAN router for data acquisition from a remote source over Ethernet, also has the NI 9229 module installed. The LabVIEW platform has been employed for efficient data acquisition, monitoring, and control. The control logic used in the program to operate the fault-relay switching hardware is portrayed as a flowchart. A communication system has been successfully developed amongst the sources and loads connected on different computers using the Hypertext Transfer Protocol (HTTP) or the Ethernet local area network TCP/IP protocol. There are two main I/O interfacing clients controlling the operation of the switching control of the renewable energy sources over the internet or intranet. The paper presents experimental results of the described setup for intelligent control of the micro-grid for renewable energy sources, besides the control of the micro-grid with data acquisition and control hardware based on a microcontroller with a visual program developed in LabVIEW.
Keywords: data acquisition and control, LabVIEW, microcontroller cRIO, Smart Micro-Grid
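The fault-isolation behaviour described above (opening a source's relay when its measured voltage or current leaves safe limits) can be sketched as follows. The channel names and limit values are illustrative assumptions, not the actual LabVIEW program or the setup's real thresholds:

```python
# Illustrative fault-isolation logic for the relay module: each source channel
# is tripped (relay opened) when its voltage or current leaves safe limits.
VOLTAGE_LIMITS = (190.0, 250.0)  # volts, assumed safe operating band
CURRENT_LIMIT = 15.0             # amperes, assumed maximum load current

def relay_states(readings):
    """Map {source: (voltage, current)} to {source: 'CLOSED' | 'OPEN'}.

    'OPEN' means the electromechanical relay isolates that source,
    mirroring the fault-handling flowchart described in the paper.
    """
    states = {}
    for source, (volts, amps) in readings.items():
        fault = not (VOLTAGE_LIMITS[0] <= volts <= VOLTAGE_LIMITS[1]) or amps > CURRENT_LIMIT
        states[source] = "OPEN" if fault else "CLOSED"
    return states

# Hypothetical acquired samples from the NI 9205 / NI 9227 channels:
readings = {
    "solar": (228.0, 9.5),        # healthy -> stays connected
    "wind": (171.0, 7.2),         # under-voltage -> isolate
    "microhydro": (232.0, 18.3),  # over-current -> isolate
}
print(relay_states(readings))
```

In the actual system this decision runs in the LabVIEW program against the acquired analog channels, with the relay module performing the physical isolation.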
Procedia PDF Downloads 334
6 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength
Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma
Abstract:
The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the management of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles: skin-electrode contact issues due to perspiration can lead to inaccurate signals or even signal loss, and resistance and phase are significantly affected by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT), and their capture of muscle magnetic field signals is unaffected by perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs, such as cardiac and brain magnetism. Their optimal operation necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology.
Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law generate a counteracting magnetic field to eliminate the residual field. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivatives of the stream function, which can be represented by a combination of trigonometric functions. Optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, yielding a coefficient of determination R2 of 0.9349 with a p-value of effectively 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to become a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions
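The reported R2 comes from a standard one-dimensional least-squares fit of grip strength against the muscle-field PPV. A minimal sketch of that computation, using made-up PPV and grip-strength readings rather than the study's data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a one-dimensional linear fit y ~ a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)               # least-squares slope and intercept
    ss_res = np.sum((y - (a * x + b)) ** 2)  # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical paired readings (illustrative, not the study's measurements):
ppv = [12.0, 15.5, 19.0, 24.5, 30.0, 33.5]  # muscle-field peak-to-peak value, pT
grip = [8.1, 10.2, 12.6, 16.8, 20.5, 22.9]  # grip strength, kg
print(round(r_squared(ppv, grip), 4))
```

An R2 near 1, as reported (0.9349), means the PPV of the magnetic signal tracks grip strength closely enough to serve as a predictor.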
Procedia PDF Downloads 57
5 The Ecuador Healthy Food Environment Policy Index (Food-EPI)
Authors: Samuel Escandón, María J. Peñaherrera-Vélez, Signe Vargas-Rosvik, Carlos Jerves Córdova, Ximena Vélez-Calvo, Angélica Ochoa-Avilés
Abstract:
Overweight and obesity in childhood are considered risk factors for developing nutrition-related non-communicable diseases (NCDs), such as diabetes, cardiovascular diseases, and cancer. In Ecuador, 35.4% of 5- to 11-year-olds and 29.6% of 12- to 19-year-olds are overweight or obese. Globally, unhealthy food environments, characterized by high consumption of processed/ultra-processed food and rapid urbanization, are strongly related to the increase in nutrition-related NCDs. The evidence shows that in low- and middle-income countries (LMICs), fiscal policies and regulatory measures significantly reduce unhealthy food environments, achieving substantial advances in health. However, in some LMICs, little is known about the impact of governments' actions to implement healthy food-environment policies. This study aimed to generate evidence on the state of implementation of public policy focused on food environments for the prevention of overweight and obesity in children and adolescents in Ecuador, compared to global best practices, and to target key recommendations for reinforcing the current strategies. After adapting INFORMAS' Healthy Food Environment Policy Index (Food‐EPI) to the Ecuadorian context, the Policy and Infrastructure Support components were assessed. Individual online interviews were performed using fifty-one indicators to analyze the level of implementation of policies directly or indirectly related to preventing overweight and obesity in children and adolescents, compared to international best practices. Additionally, a participatory workshop was conducted to identify the critical indicators and generate recommendations to reinforce or improve political action around them. In total, 17 government and non-government experts were consulted.
Of the 51 assessed indicators, only the one corresponding to nutritional information and ingredient labelling registered an implementation level higher than 60% (67%) compared to the best international practices. Among the 17 indicators determined as priorities by the participants, those corresponding to the provision of local products in school meals and the limitation of unhealthy-product promotion in traditional and digital media had the lowest levels of implementation (34% and 11%, respectively) compared to global best practices. The participants identified more barriers (e.g., lack of continuity of effective policies across government administrations) than facilitators (e.g., growing interest from the Ministry of Environment because of the environmental impact of eating behaviour) for Ecuador to move closer to the best international practices. Finally, within the participants' recommendations, we highlight the need for policy-evaluation systems, information transparency on the impact of the policies, transformation of successful strategies into laws or regulations to make them mandatory, and regulation of the power and influence of the food industry (conflicts of interest). Actions focused on promoting a more active role of society in the stages of policy formation, and on achieving more articulated actions between the different government levels and institutions implementing the policy, are necessary to generate a noteworthy impact on preventing overweight and obesity in children and adolescents. Establishing systems for internal evaluation of existing strategies to strengthen successful actions, creating policies to fill existing gaps, and reforming policies that do not generate significant impact should be priorities for the Ecuadorian government to improve the country's food environments.
Keywords: children and adolescents, food-EPI, food policies, healthy food environment
Procedia PDF Downloads 65
4 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds
Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee
Abstract:
The application of optical technology in medicine and biology has a long track record. In this endeavor, optical coherence tomography (OCT) has attracted both engineers and biologists to work together in the field of photonics to establish a striking non-invasive imaging technology. Other in vivo imaging modalities, such as Raman imaging, confocal imaging, and two-photon microscopy, can image only up to 100-200 microns of depth due to limitations in numerical aperture or scattering; OCT, in contrast, can achieve high-resolution imaging up to a few millimeters into tissue structures, depending on their refractive index at different anatomical locations. This tomographic system depends on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, frequent collection of biopsies for follow-up of the repair process could be avoided by such an imaging technique. Real-time skin OCT (the 'optical biopsy') can illuminate cutaneous tissue deeply and quickly to acquire high-resolution cross-sectional images of its internal micro-structure. Swept-Source OCT (SS-OCT), a novel imaging technique, can generate a high-speed depth profile (~2 mm) of a wound at the sweeping rate of the laser, with micron-level resolution and an optimum coherence length of 5-6 mm. Multi-layered skin tissue exhibits different optical properties, along with variations in thickness, refractive index, and composition (i.e., keratin layer, water, fat, etc.), according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a lucid, sharp demarcation line with the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue are continuously altered as the wound bed matures.
More mature and less hydrated tissue components reflect more light and become visible as brighter areas, in comparison to immature regions, which contain more water or fat and appear as darker areas in the OCT image. Non-healing wounds exhibit prolonged inflammation and an inhibited nascent proliferative stage; the accumulation of necrotic tissue also prevents their repair. Due to its high resolution and its ability to reflect the compositional aspects of tissues in terms of their optical properties, this tomographic method may facilitate differentiating non-healing from acute wounds in addition to clinical observations. Non-invasive OCT offers better insight into the specific biological status of tissue in health and pathological conditions, and OCT images can be associated with the histo-pathological 'gold standard'. This correlated SS-OCT and microscopic evaluation of the wound edges can provide information regarding progressive healing and maturation of the epithelial components. In the context of searching for an analogy between two different imaging modalities, their relative performances in imaging of the healing bed were estimated to probe an alternative approach. The present study validated the utility of SS-OCT in revealing micro-anatomic structure in the healing bed with new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of the regenerated tissue could have great implications: it could establish the 'optical biopsy' as a potent non-invasive diagnostic tool for cutaneous pathology.
Keywords: histo-pathology, non invasive imaging, OCT, wound healing
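As a rough quantitative aside on the 'micron level resolution' mentioned above: for a source with a Gaussian spectrum, the theoretical free-space axial resolution of an OCT system follows from its centre wavelength and bandwidth. The parameter values below are assumptions for illustration, not the specifications of the system used here:

```python
import math

def axial_resolution_um(center_wavelength_nm: float, bandwidth_nm: float) -> float:
    """Theoretical free-space OCT axial resolution for a Gaussian source:
    delta_z = (2 ln 2 / pi) * lambda0^2 / delta_lambda."""
    delta_z_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
    return delta_z_nm / 1000.0  # convert nm to micrometres

# Assumed swept-source parameters (illustrative only):
print(round(axial_resolution_um(1310.0, 100.0), 2))  # micron-level resolution
```

In tissue, the effective axial resolution improves further by the refractive index, which is also why the apparent imaging depth depends on the refractive index at each anatomical location.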
Procedia PDF Downloads 279
3 Amifostine Analogue, DRDE-30, Attenuates Radiation-Induced Lung Injury in Mice
Authors: Aastha Arora, Vikas Bhuria, Saurabh Singh, Uma Pathak, Shweta Mathur, Puja P. Hazari, Rajat Sandhir, Ravi Soni, Anant N. Bhatt, Bilikere S. Dwarakanath
Abstract:
Radiotherapy is an effective curative and palliative option for patients with thoracic malignancies. However, lung injury, comprising pneumonitis and fibrosis, remains a significant clinical complication of thoracic radiation, making it a dose-limiting factor. Injury to the lung is also often reported as part of multi-organ failure in victims of accidental radiation exposure. The radiation-induced inflammatory response in the lung, characterized by leukocyte infiltration and vascular changes, is an important contributing factor to the injury. Therefore, countermeasure agents that attenuate the radiation-induced inflammatory response are considered an important approach to preventing chronic lung damage. Although Amifostine, the widely used, FDA-approved radioprotector, has been found to reduce radiation-induced pneumonitis during radiation therapy of non-small cell lung carcinoma, its application during mass and field exposure is limited due to associated toxicity and ineffectiveness with oral administration. The amifostine analogue DRDE-30 overcomes this limitation, as it is orally effective in reducing the mortality of whole-body irradiated mice. The current study was undertaken to investigate the potential of DRDE-30 to ameliorate radiation-induced lung damage. DRDE-30 was administered intra-peritoneally 30 minutes prior to 13.5 Gy thoracic (60Co gamma) irradiation in C57BL/6 mice. Bronchoalveolar lavage fluid (BALF) and lung tissues were harvested at 12 and 24 weeks post irradiation for studying inflammatory and fibrotic markers. Lactate dehydrogenase (LDH) leakage, leukocyte count, and protein content in BALF were used as parameters to evaluate lung vascular permeability. Inflammatory cell signaling (p38 phosphorylation) and anti-oxidant status (MnSOD and catalase levels) were assessed by Western blot, while X-ray CT scans, H & E staining, and trichrome staining were done to study the lung architecture and collagen deposition.
Irradiation of the lung increased the total protein content, LDH leakage, and total leukocyte count in the BALF, reflecting endothelial barrier dysfunction. These disruptive effects were significantly abolished by DRDE-30, which appears to be linked to DRDE-30-mediated abrogation of the activation of the redox-sensitive pro-inflammatory signaling cascade, the MAPK pathway. Concurrent administration of DRDE-30 with radiation inhibited radiation-induced oxidative stress by strengthening the anti-oxidant defense system and abrogated p38 mitogen-activated protein kinase activation, which was associated with reduced vascular leak and macrophage recruitment to the lungs. Histopathological examination (by H & E staining) of the lung showed radiation-induced inflammation, characterized by cellular infiltration, interstitial oedema, alveolar wall thickening, perivascular fibrosis, and obstruction of alveolar spaces, all of which were reduced by pre-administration of DRDE-30. Structural analysis with X-ray CT indicated lung architecture (linked to the degree of opacity) comparable to un-irradiated mice, which correlated well with the lung morphology and reduced collagen deposition. The reduction in radiation-induced inflammation and fibrosis brought about by DRDE-30 resulted in a profound increase in animal survival (72% with the combination vs. 24% with radiation alone) observed at the end of 24 weeks following irradiation. These findings establish the potential of the Amifostine analogue DRDE-30 to reduce radiation-induced pulmonary injury by attenuating the inflammatory and fibrotic responses.
Keywords: amifostine, fibrosis, inflammation, radiation lung injury
Procedia PDF Downloads 510
2 Clinically-Based Improvement Project Focused on Reducing Risks Associated with Diabetes Insipidus, Syndrome of Inappropriate ADH, and Cerebral Salt Wasting in Paediatric Post-Neurosurgical and Traumatic Brain Injury Patients
Authors: Shreya Saxena, Felix Miller-Molloy, Phillipa Bowen, Greg Fellows, Elizabeth Bowen
Abstract:
Background: Complex fluid balance abnormalities are well established post-neurosurgery and traumatic brain injury (TBI). The triple-phase response requires fluid management strategies reactive to urine output and sodium homeostasis as patients shift between Diabetes Insipidus (DI) and Syndrome of Inappropriate ADH (SIADH). A relatively high prevalence of the above complications was observed at a tertiary paediatric center within a cohort of paediatric post-neurosurgical and TBI patients. An audit of clinical practice against set institutional guidelines was undertaken and analyzed to understand why this was occurring. Based on those results, new guidelines were developed with structured educational packages for the specialist teams involved. This was then re-audited, and the findings were compared. Methods: Two independent audits were conducted across two time periods, pre- and post-guideline change. Primary data were collected retrospectively, including both qualitative and quantitative data sets from the CQUIN neurosurgical database and electronic medical records. All paediatric patients post posterior fossa (PFT) or supratentorial surgery, or with a TBI, were included. A literature review of evidence-based practice, the initial audit data, and stakeholder feedback were used to develop new clinical guidelines and nursing standard operating procedures. Compliance against these newly developed guidelines was re-assessed, and a thematic, trend-based analysis of the two sets of results was conducted. Results: Audit 1: January 2017 – June 2018, n=80; Audit 2: January 2020 – June 2021, n=30 (reduced operative capacity due to the COVID-19 pandemic). Overall, improvements in the monitoring of both fluid balance and electrolyte trends were demonstrated: 51% vs. 77% and 78% vs. 94%, respectively. The number of clear fluid management plans documented postoperatively also increased (odds ratio of 4), leading to earlier recognition and management of evolving fluid-balance abnormalities.
The local paediatric endocrine team was involved in the care of all complex cases and was notified sooner for patients considered to be developing DI or SIADH (14% to 35%). However, the number of patients with significant Na fluctuations (>12 mmol/L in 24 hours) remained similar (5 vs. 6 patients); these were found to be due to complex pituitary-hypothalamic pathology, and the recommended adaptive fluid management strategy was still not always used. Qualitative data regarding usability and understanding of fluid-balance abnormalities and the revised guidelines were obtained from health professionals via surveys and discussions within the specialist teams providing care. The feedback highlighted that the new guidelines provided a more consistent approach to the post-operative care of these patients and served as a better platform for communication among the different specialist teams involved. A potential limitation of our study is the small sample size available for formal analyses; however, this reflects the population we were investigating, which we cannot control. Conclusion: The revised clinical guidelines, based on audited data, an evidence-based literature review, and stakeholder consultations, have demonstrated an improvement in understanding of the possible neuro-endocrine complications, as well as increased compliance with post-operative monitoring of fluid balance and electrolytes in this cohort of patients. Emphasis has been placed on prevention rather than treatment of DI and SIADH. Consequently, this has positively impacted patient safety for the center and highlighted the importance of educational awareness and multi-disciplinary team working.
Keywords: post-operative, fluid-balance management, neuro-endocrine complications, paediatric
Procedia PDF Downloads 9
31 Employee Engagement
Authors: Jai Bakliya, Palak Dhamecha
Abstract:
Today, customer satisfaction is given utmost priority in every industry, and this applies even more in the hospitality industry, where employees come into direct contact with customers while providing services. Employee engagement is a concept adopted by Human Resource departments that impacts customer satisfaction. To satisfy customers, it is necessary that the employees of the organisation are satisfied and engaged enough in their work to meet the company's expectations and contribute to achieving the company's goals and objectives. After all, employees are the human capital of the organisation. Employee engagement has become a top business priority for every organisation. In this fast-moving economy, business leaders know that a capable, high-performing workforce is important for growth and survival. They recognize that a highly engaged workforce can increase innovation, productivity, and performance, while reducing costs related to retention and hiring in highly competitive talent markets. But while most executives see a clear need to improve employee engagement, many have yet to develop tangible ways to measure and tackle this goal. Employee engagement is an approach that establishes an emotional connection between an employee and the organisation, ensuring the employee's commitment to their work, which in turn affects the productivity and overall performance of the organisation. The study was conducted in the hospitality industry, with a popular branded hotel chosen as the sample unit. Both qualitative and quantitative data were collected from respondents. It was found that the employee engagement level of the organisation (hotel) is quite low.
This means that employees are not emotionally connected with the organisation, which may, in turn, affect their performance. It is important to note that in the hospitality industry, an individual employee's performance, specifically in terms of emotional engagement, is critical; a low engagement level may therefore contribute to low organisational performance. One objective of this study was to identify the employee engagement level; another was to explore the factors impeding employee engagement and the ways engagement can be facilitated. In the hospitality industry, where people tend to work for as long as 16 to 18 hours a day, concepts like employee engagement are essential: employees tire of their routine jobs, and where job rotation is not possible, employee engagement acts as a solution. The study was conducted at the Trident Hotel, Udaipur, on a sample of 30 in-house employees from six departments: Accounts and General, Front Office, Food & Beverage Service, Housekeeping, Food & Beverage Production, and Engineering. The research instrument was a questionnaire, and data were collected from a primary source. Trident Udaipur is one of the busiest hotels in Udaipur, with a guest occupancy rate of nearly 80%. Due to this high occupancy rate, the hotel's staff remain busy and occupied in their work at all times, working for their remuneration only. As a result, they have no encouragement in their work, nor are they interested in going the extra mile for the organisation. The study results show that working-environment factors, including recognition and appreciation, employees' opinions, counselling, feedback from superiors, treatment by managers, and respect from the organisation, are capable of increasing the employee engagement level in the hotel.
The above results encouraged us to explore the factors contributing to low employee engagement. It was found that factors such as recognition and appreciation, feedback from supervisors, employees' opinions, counselling, and treatment by managers contributed negatively to the employee engagement level; a probable reason for this is the number of employees who gave negative feedback on the factors stated above. It appears that the structure of the organisation itself is responsible for the low employee engagement. The scope of this study is limited to the Trident Hotel, Udaipur: the findings are based only on the responses of respondents at Trident, Udaipur, and the recommendations therefore apply to Trident, Udaipur, rather than to all similar organisations across the country. The data collected were further analysed and interpreted, and conclusions were drawn. On the basis of the findings, suggestions for improvement were provided to the hotel.
Keywords: human resource, employee engagement, research, study
Procedia PDF Downloads 308