Search results for: iterative calculation
197 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the 'Dose Area Product (DAP)' and the fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. In case a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed: 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during real 'Chronic Total Occlusion (CTO)' procedures for a total of 80 cases. Gafchromic films were placed at the back of the patients. We performed comparisons of the dose values, as well as the distribution and the shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison between the dose values shows a difference of less than 15%. Moreover, our model shows a very good geometric accuracy: all fields have the same shape, size and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.
Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping
Procedia PDF Downloads 140
196 Hydrogeochemical Assessment, Evaluation and Characterization of Groundwater Quality in Ore, South-Western, Nigeria
Authors: Olumuyiwa Olusola Falowo
Abstract:
One of the objectives of the Millennium Development Goals is to have sustainable access to safe drinking water and basic sanitation. In line with this objective, an assessment of groundwater quality was carried out in the Odigbo Local Government Area of Ondo State between November and February 2019 to assess the suitability of the water for drinking, domestic and irrigation uses. Samples were collected from 30 randomly selected groundwater sources (16 shallow wells and 14 boreholes) and analyzed using the American Public Health Association method for the examination of water and wastewater. Water quality index calculation and diagrams such as the Piper diagram, Gibbs diagram and Wilcox diagram have been used to assess the groundwater, in conjunction with irrigation indices such as percent sodium, sodium adsorption ratio, permeability index, magnesium ratio, Kelly ratio, and electrical conductivity. In addition, statistical principal component analysis was used to determine the homogeneity and the source(s) influencing the chemistry of the groundwater. The results show that all the parameters are within the permissible limits of the World Health Organization. The physico-chemical analysis of the groundwater samples indicates that the dominant major cations are, in decreasing order, Na+, Ca2+, Mg2+, K+ and the dominant anions are HCO3-, Cl-, SO42-, NO3-. The water quality index values vary across the area; good water (WQI of 50-75) accounts for 70% of the study area. The dominant groundwater facies revealed in this study are the non-carbonate alkali (primary salinity) type exceeding 50% (zone 7) and the transition type in which no single cation-anion pair exceeds 50% (zone 9), while evaporation, rock-water interaction, precipitation, and silicate weathering are the dominant processes in the hydrogeochemical evolution of the groundwater. The study indicates that the waters fall within the permissible limits of the irrigation indices adopted and plot in the excellent category on the Wilcox diagram. In conclusion, the water in the study area is good and suitable for drinking, domestic and irrigation purposes, with a low equivalent salinity concentration and moderate electrical conductivity.
Keywords: equivalent salinity concentration, groundwater quality, hydrochemical facies, principal component analysis, water-rock interaction
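The irrigation indices referred to above follow standard definitions; the sketch below (Python, with hypothetical ionic concentrations in meq/L, since the raw sample data are not reproduced in the abstract) illustrates how such indices are typically computed.

```python
import math

def irrigation_indices(na, ca, mg, k):
    """Common irrigation-suitability indices.

    All inputs are ionic concentrations in meq/L. The sample values used
    below are illustrative; the study's raw data are not reproduced here.
    """
    sar = na / math.sqrt((ca + mg) / 2.0)           # sodium adsorption ratio
    pct_na = 100.0 * (na + k) / (ca + mg + na + k)  # percent sodium
    kelly = na / (ca + mg)                          # Kelly ratio
    mg_ratio = 100.0 * mg / (ca + mg)               # magnesium ratio
    return {"SAR": sar, "%Na": pct_na, "Kelly": kelly, "Mg ratio": mg_ratio}

# Hypothetical sample (meq/L), not taken from the study data
print(irrigation_indices(na=2.1, ca=1.5, mg=0.9, k=0.2))
```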
Procedia PDF Downloads 148
195 Estimation of Noise Barriers for Arterial Roads of Delhi
Authors: Sourabh Jain, Parul Madan
Abstract:
Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, rising numbers of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) exceeds the standard permissible value at all locations. Noise barriers or enclosures are definitely useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration model (FHWA) and the UK's Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise, redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barrier using both the FHWA and CORTN models. During data collection it was found that the receivers are located far away from the road at the Rithala and Moolchand sites, and hence the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. It was observed that at all sites the noise levels at the receivers exceeded the standard limit of 55 dB. The calculations showed that the existing walls are reducing noise levels. The average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road. A large traffic volume was observed at Friends Colony since it is on a national highway, and at this site the attenuation of noise due to the propagation effect was very small. As the FHWA and CORTN models were developed in an Excel programme, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
Keywords: FHWA, CORTN, noise sources, noise barriers
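The average percentage error quoted for the FHWA and CORTN predictions can be computed as follows; the sketch uses hypothetical Leq values, not the Delhi measurements.

```python
def average_percentage_error(predicted, measured):
    """Average percentage error between predicted and measured noise levels (dB).

    A negative result means the model under-predicts, as reported for both
    FHWA and CORTN in this study.
    """
    errors = [100.0 * (p - m) / m for p, m in zip(predicted, measured)]
    return sum(errors) / len(errors)

# Hypothetical Leq values (dB), for illustration only
fhwa_pred = [62.1, 66.4, 70.2]
site_meas = [67.0, 71.5, 75.8]
print(round(average_percentage_error(fhwa_pred, site_meas), 2))
```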
Procedia PDF Downloads 133
194 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would enable lower cooling needs, which is of great interest when the demand is initially high (hot climate). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would enable humidity to be removed from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This task aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort needs and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study on the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W/m2 and a desiccant rate of 45 g H2O/h. Secondly, a parametric study of comfort and system performance was carried out on a more realistic system (that includes a chilled ceiling) under different operating conditions. It enables an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing
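As a rough orientation on the quoted design figures, the sketch below converts the 45 g H2O/h dehumidification rate into a latent load and the 42 W/m2 requirement into a sensible load for the 5 m x 3 m floor area; the latent heat value is an assumed textbook constant, and this back-of-the-envelope check is not the TRNSYS model used in the study.

```python
# Rough sizing check for the quoted design figures (assumptions: h_fg ~ 2450 kJ/kg,
# floor area 5 m x 3 m = 15 m2; illustrative only, not the TRNSYS simulation).
H_FG = 2450e3            # latent heat of vaporisation of water, J/kg (assumed)
floor_area = 5.0 * 3.0   # m2

sensible_power = 42.0 * floor_area            # W, from the 42 W/m2 cooling requirement
dehumid_rate = 45.0 / 1000.0 / 3600.0         # kg/s, from 45 g H2O/h
latent_power = dehumid_rate * H_FG            # W removed as latent heat

print(f"Sensible load: {sensible_power:.0f} W, latent load: {latent_power:.1f} W")
```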
Procedia PDF Downloads 176
193 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre of Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches
Authors: Maria Ledinskaya
Abstract:
This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. 
Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: project management, conservation, waterfall, agile, lean, hybrid
Procedia PDF Downloads 99
192 Impact of Electric Vehicles on Energy Consumption and Environment
Authors: Amela Ajanovic, Reinhard Haas
Abstract:
Electric vehicles (EVs) are considered an important means to cope with current environmental problems in transport. However, their high capital costs and limited driving ranges constitute major barriers to a broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our method of approach is based on the calculation of the total cost of ownership of EVs in comparison to conventional cars, and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are the km driven per year, the depreciation time of the car and the interest rate. The analysis of future prospects is based on technological learning regarding the investment costs of batteries. The major results are as follows. The main disadvantages of battery electric vehicles (BEVs) are the high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions over the whole energy supply chain than conventional vehicles, but unlike BEVs they are not zero-emission vehicles at the point of use. The number of km driven has a higher impact on total mobility costs than the learning rate. Hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs. They have only slightly higher costs and similar operating ranges as conventional vehicles, but since they are dependent on fossil fuels, they can only be seen as an energy efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles do not gain high popularity, and together with PHEVs and REXs they contribute to faster technological learning and a reduction in battery costs. Regarding the promotion of EVs, the best results could be reached with a combination of monetary and non-monetary incentives, as in Norway for example. The major conclusion is that, to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. This should ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely higher than those of conventional cars.
Keywords: costs, mobility, policy, sustainability
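A total-cost-of-ownership comparison of this kind typically annualizes the capital cost with a capital recovery factor; the sketch below illustrates the idea with hypothetical cost figures (not the values used in the paper) and shows why a higher annual mileage favours EVs.

```python
def tco_per_km(capital_cost, interest_rate, depreciation_years,
               annual_fixed_costs, energy_cost_per_km, km_per_year):
    """Total cost of ownership per km, using a capital recovery factor.

    Parameter names and the example numbers are illustrative assumptions,
    not values from the paper.
    """
    i, n = interest_rate, depreciation_years
    crf = i * (1 + i) ** n / ((1 + i) ** n - 1)   # annualises the capital cost
    annual_capital = capital_cost * crf
    total_annual = annual_capital + annual_fixed_costs + energy_cost_per_km * km_per_year
    return total_annual / km_per_year

# A BEV driven more km per year spreads its high capital cost over more km
for km in (10_000, 20_000, 40_000):
    print(km, round(tco_per_km(35_000, 0.05, 10, 600, 0.05, km), 3))
```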
Procedia PDF Downloads 226
191 Realistic Modeling of the Preclinical Small Animal Using Commercial Software
Authors: Su Chul Han, Seungwoo Park
Abstract:
With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is increasing. Furthermore, small animal dosimetry is an essential part of the evaluation of the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose using commercial software. The small animal phantom was modeled from the 4D digital mouse whole-body (MOBY) phantom. To manipulate the MOBY phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it into DICOM CT image files using Matlab; the two-dimensional CT images were then converted into a three-dimensional image that can be segmented and cropped in the sagittal, coronal and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line value, thresholding was carried out to make a mask connecting all the regions within the same threshold range. Using the thresholding method, we segmented the images into three parts (bone, body tissue, lung); to separate neighboring pixels between lung and body tissue, we used the region growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation (smoothing factor 0.4, 5 iterations). The edge mode was selected to perform triangle reduction, with a tolerance of 0.1 mm, an edge angle of 15 degrees and 5 iterations. The processed 3D object file was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-Matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We acquired a 3D object of a realistic small animal phantom. The width of the small animal phantom was 2.631 cm, the thickness was 2.361 cm, and the length was 10.817 cm. The Mimics software supported efficient 3D object generation and convenient conversion to STL files. The development of a small preclinical animal phantom will increase the reliability of the verification of the absorbed dose in small animals for preclinical studies.
Keywords: mimics, preclinical small animal, segmentation, 3D printer
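The threshold-plus-region-growing segmentation step described above can be illustrated with a generic sketch; this uses NumPy/SciPy on a synthetic volume and is not the Mimics implementation, and the Hounsfield thresholds are assumed values.

```python
import numpy as np
from scipy import ndimage

def segment_ct(volume_hu, lower, upper):
    """Threshold a CT volume (Hounsfield units) and keep the largest connected
    region, mimicking the threshold + region-growing steps described above.
    Illustrative NumPy/SciPy sketch only, not the Mimics workflow.
    """
    mask = (volume_hu >= lower) & (volume_hu <= upper)        # thresholding
    labels, n = ndimage.label(mask)                           # connected regions
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return labels == largest                                  # keep largest region

# Hypothetical 3D volume; real bone, soft tissue and lung occupy distinct HU ranges
vol = np.random.normal(0, 30, size=(50, 50, 50))
bone_mask = segment_ct(vol, lower=70, upper=3000)
print(bone_mask.sum(), "voxels classified as bone (toy example)")
```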
Procedia PDF Downloads 366
190 The Relationships between Antimüllerian Hormone, Androgens and Ovarian Reserve in Non-Obese East Indian Women with and without Polycystic Ovary Syndrome
Authors: Dipanshu Sur, Ratnabali Chakravorty, Rimi Pal, Siddhartha Chatterjee, Joyshree Chaterjee, Amal Mallik
Abstract:
Background: Polycystic ovary syndrome (PCOS) is a common endocrine disease in women of reproductive age with a complex hormonal disturbance that affects the menstrual cycle and leads to metabolic consequences in later life. Hyperandrogenaemia is a noticeable feature of PCOS and influences the process of folliculogenesis in women. The level of Antimüllerian Hormone (AMH) reflects the number of pre-antral follicles and is thus a marker of the oocyte pool, the germinal reserve of the ovary for reproduction. Besides its utilization in IVF (in-vitro fertilization), determination of AMH may serve as an additional marker in the diagnostics of PCOS, where increased AMH levels reflect the severity of the disease. A positive correlation of serum AMH with the number of antral follicles has also been found in patients with PCOS. Objective: The objective of this study was to investigate the relationship between AMH and androgens and whether AMH contributes to altered folliculogenesis in non-obese women with PCOS. Methods: We designed a prospective study which included a total of 65 individuals undergoing IVF. It enrolled 26 cases of PCOS based on the 2003 Rotterdam criteria and 39 normally ovulating, non-PCOS, healthy, age-matched controls. AMH levels and ovarian morphology were assessed. The relationships between AMH and androgenaemia in patients with and without PCOS were studied. Results: The mean age of the PCOS patients was slightly higher than that of the controls (32±4 and 28±3 years, respectively). AMH generally increased with antral follicle count (AFC) [P=0.001], testosterone, and luteinising hormone, and decreased with age and serum sex hormone binding globulin (SHBG). No significant relationship was found between circulating AMH levels and BMI in either the PCOS or the non-PCOS patients. The calculation of AMH production per antral follicle (AMH/AF) showed a significant difference in median AMH/AF between PCOS and non-PCOS patients (P=0.001). Both the PCOS and non-PCOS groups showed a very similar increase in AMH with increases in AFC, but the PCOS patients had consistently higher AMH across all AFC levels. Conclusions: These observations indicate that there is a connection between AMH and androgen levels in East Indian women with and without PCOS. Excessive granulosa cell activity may be implicated in the abnormal follicular dynamics of the syndrome. AMH levels are higher in women with PCOS and, on the other hand, very low in women with ovarian failure.
Keywords: anti-Mullerian hormone, polycystic ovary syndrome, antral follicle count, androgens
Procedia PDF Downloads 212
189 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies
Authors: Melina Kopke, Jelle Van Dijk
Abstract:
Persons with cognitive disabilities, such as Autism Spectrum Disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people to perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense. Most technologies serve either to prescribe or to monitor users' actions, which in some sense objectifies them, rather than strengthening their agency. This paper proposes that theories of embodied interaction could help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person's empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice in the design of new, interactive assistive devices. Taking a Research-through-Design approach, we conducted a case study of designing to support independently living people with ASD in structuring daily activities. In three iterations we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that handling daily activities is a multidimensional issue. Not having the ability to self-manage one's daily life has immense consequences for one's self-image, and also has major effects on the relationship with professional caregivers. Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as a reminder for tasks, but also to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling 'reflection-in-action'; making information 'publicly available' in order to enable co-reflection and social coupling; enabling the implementation of shared reflections into an 'endurable external feedback loop' embedded in the person's familiar 'lifeworld'; and nudging situated actions with self-created action-affordances. In essence, the framework aims for the self-development of a suitable routine, or 'situated practice', by building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, Intellectual Disabilities, Dementia and Acquired Brain Injury, the framework will be applied, evaluated and further refined.
Keywords: assistive technology, design, embodiment, empowerment
Procedia PDF Downloads 278
188 Kinematics and Dynamics Analysis of Crank-Piston System of a High-Power, Nine-Cylinder Aircraft Engine
Authors: Michal Biały, Konrad Pietrykowski, Rafal Sochaczewski
Abstract:
This paper presents the kinematics and dynamics analysis of the crank-piston system of an aircraft engine. The object of the study was the high-power aircraft engine ASz 62-IR, produced by the Polish company WSK "PZL-KALISZ" S.A. All analyses were performed numerically using a CAD and CAE environment. A three-dimensional model of the crank-piston system was developed based on the real engine located in the Laboratory of the Centre of Innovation and Advanced Technologies of Lublin University of Technology. During the development of the model, the technique of reverse engineering (3D scanning) was used. The ASz 62-IR engine is characterized by a radial type of crank-piston system, in which the cylinders are arranged radially around a circle. This crank-piston system consists of a main connecting rod and eight additional connecting rods. In addition, the three-dimensional model includes the piston pins, pistons and piston rings. As a result of the specific engine design, the characteristics of the individual piston movements differ slightly from each other, but the model assumes that they are the same during the analysis. The three-dimensional model of the engine was implemented in the MSC Adams software. The MSC Adams environment allows for multibody simulation of the dynamic phenomena. This determines the state parameters of the moving elements, among which the load or force distribution on each kinematic node can be distinguished. Materials and characteristic material parameters were adopted on the basis of materials commonly used for engine parts. The mass values of the individual elements were adopted on the basis of the real engine parts. The piston gas forces were represented by the pressure variations recorded during engine tests on the engine test bench. The research examined the changes of the forces acting in the individual kinematic pairs of the crank-piston system. The model allows the loads on the crankshaft main bearings to be determined, which gives the possibility of analysing the main support forces. The model allows for testing and simulation of the kinematics and dynamics of a radial aircraft engine. This is the first stage of the work, which aims at the numerical simulation of the vibration of a multi-cylinder aircraft engine. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: aircraft engine, CAD, CAE, dynamics, kinematics, MSC Adams, numerical simulation
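For orientation, the piston kinematics of a single slider-crank (one cylinder of the radial system) follow textbook relations; the sketch below uses an assumed crank radius, rod length and engine speed rather than the ASz 62-IR geometry.

```python
import numpy as np

# Kinematics of a single slider-crank, using textbook relations; r, l and rpm
# below are illustrative assumptions, not the ASz 62-IR geometry.
r, l = 0.087, 0.30          # crank radius and connecting-rod length, m (assumed)
rpm = 2200.0
omega = 2 * np.pi * rpm / 60.0

theta = np.linspace(0, 4 * np.pi, 2000)          # two crank revolutions
# piston displacement from top dead centre
x = r * (1 - np.cos(theta)) + l - np.sqrt(l**2 - (r * np.sin(theta))**2)

t = theta / omega
v = np.gradient(x, t)                            # piston velocity, m/s
a = np.gradient(v, t)                            # piston acceleration, m/s^2

print(f"max piston speed {v.max():.1f} m/s, max acceleration {a.max():.0f} m/s^2")
```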
Procedia PDF Downloads 389
187 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas
Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo
Abstract:
The aim of the paper is to estimate and forecast road traffic injuries in the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including the injury cross-effects of mode changes. The paper discusses the possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998-2012 (N=4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, the number of injuries and injury rates are calculated by type of road user (motorized versus non-motorized categories), sex, age and type of road. A projected population increase (25%) in the six urban areas by 2025 will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from the use of private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport (pedestrians on their way to/from public transport nodes) implies a higher exposure for pedestrians (and bikers) converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the growing urban population; in addition, the diminishing returns of the majority of road safety countermeasures also have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing as well as decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and further environmental impacts. In this regard, safety and environmental issues will as a rule concur. However, pursuing environmental goals (e.g. improved air quality, reduced CO2 emissions) by encouraging more biking may generate more biking injuries. The study was given financial grants from the Norwegian Research Council's Transport Safety Program.
Keywords: road injuries, forecasting, reduced private car use, urban, Norway
Procedia PDF Downloads 237
186 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C, Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT), nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160), as commonly used at MD Anderson Cancer Center Houston, for intensity modulated radiation therapy (IMRT), and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Primary Target Volume (PTV) coverage, mean and maximal doses, DVHs and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point doses and planar doses were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with gamma index criteria of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) compared to all other studied techniques. In comparison, the maximal doses for 3DCRT were found to be higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses that were 26%, 5% and 26% higher for the esophagus, normal brain and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between the measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and the QA results were greater than 98%. The calculations of the maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than for IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
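The gamma analysis mentioned above combines a dose-difference and a distance-to-agreement criterion; a simplified 1-D sketch of the concept (not the ArcCHECK vendor implementation) with hypothetical dose profiles is shown below.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma analysis (3%/3 mm by default).

    dose_ref and dose_eval are dose profiles sampled at the same positions (mm).
    This is an illustrative sketch of the gamma concept only.
    """
    d_norm = dd * dose_ref.max()                 # global dose criterion
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
        dist = (positions - x_r) / dta
        diff = (dose_eval - d_r) / d_norm
        gamma[i] = np.sqrt(dist**2 + diff**2).min()
    return gamma

x = np.linspace(-50, 50, 201)                    # mm
ref = np.exp(-(x / 20.0) ** 2) * 100.0           # hypothetical profile, cGy
ev = np.exp(-((x - 1.0) / 20.0) ** 2) * 102.0    # slightly shifted/scaled copy
g = gamma_index_1d(ref, ev, x)
print(f"gamma pass rate: {100.0 * (g <= 1).mean():.1f} %")
```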
Procedia PDF Downloads 507
185 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions
Authors: Francisco J. García-de-Quirós, Gianmarco Radice
Abstract:
When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, it is necessary to deploy the infrastructure required for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of the overall support and, more specifically, on the positioning and navigation methods implemented in the RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main factors limiting precision and resolution, such as phase and frequency noise in the RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectral regions is considered both in terms of absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, coexisting with a communication network used by the individual agents to communicate both within the exploring community and with the mission support agents. The proposed approach implements a substantial improvement in planetary navigation since it provides self-localization capabilities for robotic agents characterized by very low mass, volume and power budgets, thus enabling precise navigation capabilities for agents of reduced dimensions. Furthermore, a common and shared localization radiofrequency infrastructure enables new interaction mechanisms such as the spatial arrangement of agents over the area of interest for distributed sensing.
Keywords: cooperative robotics, localization, robot navigation, surface exploration
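Position calculation from ranges to the reference agents can be illustrated with a standard least-squares trilateration sketch; the anchor layout and noise level below are hypothetical, and the method is a generic textbook approach rather than necessarily the one adopted in the paper.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position fix from ranges to known reference agents.

    Linearises the range equations by subtracting the first anchor's equation.
    Anchor layout and noise level below are hypothetical, for illustration only.
    """
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(ranges[0]**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # m
true_pos = np.array([32.0, 71.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)
print(trilaterate(anchors, ranges))
```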
Procedia PDF Downloads 294
184 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained. Therefore, the Random Set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the Random Set method to the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined by considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
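The combination of interval bounds and probability assignments described above can be sketched as follows; the toy displacement "model", intervals and threshold are purely illustrative stand-ins for the finite element analyses.

```python
from itertools import product

# Minimal random-set sketch: each input variable has two focal intervals with
# assigned probability masses; running the deterministic model on the corners
# of every combination of intervals bounds the response per focal element.
def model(c, phi, E):
    return 50.0 / (0.4 * c + 1.2 * phi + 0.002 * E)   # hypothetical wall displacement, cm

variables = {
    "c":   [((15.0, 25.0), 0.6), ((20.0, 35.0), 0.4)],   # cohesion, kPa
    "phi": [((28.0, 34.0), 0.5), ((30.0, 38.0), 0.5)],   # friction angle, deg
    "E":   [((15e3, 30e3), 0.7), ((25e3, 45e3), 0.3)],   # stiffness, kPa
}

threshold = 0.45   # cm, limiting displacement (assumed)
belief = plausibility = 0.0
for combo in product(*variables.values()):
    mass = 1.0
    lows, highs = [], []
    for (interval, m) in combo:
        mass *= m
        lows.append(interval[0]); highs.append(interval[1])
    # evaluate the model at all corners of the focal box to bound the response
    responses = [model(*corner) for corner in product(*zip(lows, highs))]
    lo, hi = min(responses), max(responses)
    if lo > threshold:   # whole focal element exceeds the limit -> belief
        belief += mass
    if hi > threshold:   # focal element can exceed the limit -> plausibility
        plausibility += mass

print(f"Belief of failure: {belief:.3f}, plausibility of failure: {plausibility:.3f}")
```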
Procedia PDF Downloads 268
183 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines
Authors: V. Radulescu, S. Dumitru
Abstract:
Introduction: Determination of the temperature field inside a fluid in motion raises many practical issues, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Oradea thermoelectric power plant (still closed today). Basic Methods: Solving the turbulent thermal pollution problem theoretically is particularly difficult. By using semi-empirical theories, or by simplifying the assumptions made on the basis of experimental measurements, the elaboration of a mathematical model for further numerical simulations can be assured. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors, based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that used for the velocity is applied, by an analogous averaging. Under these assumptions, the numerical modeling was performed with a temperature gradient for the turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as found at the Oradea thermoelectric power plant. Conclusions: A superposition of the molecular and turbulent viscosities was made, followed by the addition of the molecular and turbulent transfer coefficients, as necessary to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when the flow with heat transfer is compared to that without a temperature gradient. The obtained results are within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton number and the Prandtl number is obtained for a specific flow (with the associated Reynolds number).
Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow
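For reference, standard textbook correlations relating the Stanton and Prandtl numbers for turbulent pipe flow (Dittus-Boelter and the Colburn analogy) are sketched below; these serve only as a point of comparison and are not the corrected correlation derived in the paper.

```python
# Textbook Stanton-Prandtl relations for turbulent pipe flow, for orientation only.
def stanton_dittus_boelter(re, pr):
    nu = 0.023 * re**0.8 * pr**0.4      # Nusselt number, heating case
    return nu / (re * pr)               # St = Nu / (Re * Pr)

def stanton_colburn(re, pr):
    cf = 0.046 * re**-0.2               # skin friction coefficient estimate
    return 0.5 * cf * pr**(-2.0 / 3.0)  # Colburn analogy: St * Pr^(2/3) = Cf / 2

for re in (1e4, 1e5, 1e6):
    print(f"Re={re:.0e}:  St_DB={stanton_dittus_boelter(re, 5.0):.2e}  "
          f"St_Colburn={stanton_colburn(re, 5.0):.2e}")
```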
Procedia PDF Downloads 164
182 Economics of Precision Mechanization in Wine and Table Grape Production
Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka
Abstract:
The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in each of five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information that is typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements have been developed for each representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed the calculation of a maximum price for new technology, whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. The investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes
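The maximum technology price derived from the seven-year NPV equivalence can be sketched as follows; all dollar figures and the discount rate are hypothetical, since the panel data are not reproduced in the abstract.

```python
def npv(cashflows, rate):
    """Net present value of a series of end-of-year cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Maximum price for new technology = NPV of the labor cost it would replace
# over seven years, less the NPV of operating it. All figures are hypothetical.
annual_labor_cost = 850.0          # $/acre for one production task (assumed)
annual_operating_cost = 120.0      # $/acre to run the technology (assumed)
discount_rate = 0.06
years = 7

labor_npv = npv([annual_labor_cost] * years, discount_rate)
operating_npv = npv([annual_operating_cost] * years, discount_rate)
max_technology_price = labor_npv - operating_npv
print(f"Maximum purchase price: ${max_technology_price:,.0f} per acre")
```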
Procedia PDF Downloads 260
181 Impact of Marangoni Stress and Mobile Surface Charge on Electrokinetics of Ionic Liquids Over Hydrophobic Surfaces
Authors: Somnath Bhattacharyya
Abstract:
The mobile adsorbed surface charge on hydrophobic surfaces can modify the velocity slip condition as well as create a Marangoni stress at the interface. The functionalized hydrophobic walls of micro/nanopores, e.g., graphene nanochannels, may possess physisorbed ions. The lateral mobility of the physisorbed ions creates a friction force as well as an electric force, leading to a modification of the velocity slip condition at the hydrophobic surface. In addition, the non-uniform distribution of these surface ions creates a surface tension gradient, leading to a Marangoni stress. The impact of the mobile surface charge on the streaming potential and the electrochemical energy conversion efficiency in a pressure-driven flow of ionized liquid through the nanopore is addressed. Enhanced electro-osmotic flow through the hydrophobic nanochannel is also analyzed. The mean-field electrokinetic model is modified to take into account the short-range non-electrostatic steric interactions and the long-range Coulomb correlations. The steric interaction is modeled by considering the ions as charged hard spheres of finite radius suspended in the electrolyte medium. The electrochemical potential is modified by including the volume exclusion effect, which is modeled based on the BMCSL equation of state. The electrostatic correlation is accounted for in the ionic self-energy. The extremal of the self-energy leads to a fourth-order Poisson equation for the electric field. The ion transport is governed by the modified Nernst-Planck equation, which includes the ion steric interactions, the Born force arising from the spatial variation of the dielectric permittivity, and the dielectrophoretic force on the hydrated ions. This ion transport equation is coupled with the Navier-Stokes equation describing the flow of the ionized fluid and the fourth-order Poisson equation for the electric field. We numerically solve the coupled set of nonlinear governing equations along with the prescribed boundary conditions by adopting a control volume approach over a staggered grid arrangement. In the staggered grid arrangement, the velocity components are stored at the midpoints of the cell faces to which they are normal, whereas the remaining scalar variables are stored at the center of each cell. The convection and electromigration terms are discretized at each interface of the control volumes using the total variation diminishing (TVD) approach to capture the strong convection resulting from the highly enhanced fluid flow due to the modified model. In order to link pressure to the continuity equation, we adopt a pressure-correction-based iterative SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm, in which the discretized continuity equation is converted to a Poisson equation involving pressure correction terms. Our results show that the physisorbed ions on a hydrophobic surface create an enhanced slip velocity under streaming potential conditions, which enhances the convection current. However, the electroosmotic flow is attenuated due to the mobile surface ions.
Keywords: microfluidics, electroosmosis, streaming potential, electrostatic correlation, finite sized ions
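As a point of reference for the electrostatic part of the problem, a minimal one-dimensional mean-field Poisson-Boltzmann solver between two charged plates is sketched below; it deliberately omits the steric, Born and electrostatic-correlation terms of the modified fourth-order model described above, and the wall potential is an assumed value.

```python
import numpy as np

# Minimal 1-D mean-field Poisson-Boltzmann solver between two charged plates,
# in dimensionless form (potential scaled by kT/e, distance by the Debye length),
# solved with Newton iteration on a finite-difference grid. Textbook sketch only.
n = 201                          # grid nodes including the two boundaries
L = 6.0                          # plate separation in Debye lengths
h = L / (n - 1)
psi_wall = 2.0                   # dimensionless wall potential (assumed value)

psi = np.zeros(n)
psi[0] = psi[-1] = psi_wall
D2 = (np.diag(np.full(n - 3, 1.0), -1) - 2.0 * np.eye(n - 2)
      + np.diag(np.full(n - 3, 1.0), 1)) / h**2   # second-difference operator

for _ in range(30):              # Newton iterations on psi'' = sinh(psi)
    interior = psi[1:-1]
    bc = np.zeros(n - 2)
    bc[0] = psi[0] / h**2        # boundary contributions to the Laplacian
    bc[-1] = psi[-1] / h**2
    residual = D2 @ interior + bc - np.sinh(interior)
    jacobian = D2 - np.diag(np.cosh(interior))
    delta = np.linalg.solve(jacobian, -residual)
    psi[1:-1] += delta
    if np.max(np.abs(delta)) < 1e-12:
        break

print(f"midplane potential: {psi[n // 2]:.4f} kT/e")
```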
Procedia PDF Downloads 72
180 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for successful photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx based quantum dots to implement their use in intermediate band solar cells (IBSC). These types of materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. This layer of quantum dots optimizes the position and bandwidth of the IB that lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with geometries including spherical, cubic, and conical has been embedded in the CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB. If there is more than one miniband, then there is a possibility of having more than one IB. The optimization of the QD size results in more IBs in the forbidden region. A one-band time-independent Schrödinger equation using the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation for the calculation of the eigenenergies, and the quasi-bound states are solved using an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through the neighboring barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation by considering the quantum dot and wetting layer assembly. Results have shown that varying the quantum dot size affects the energy pinning of the QD. Changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground state energy at a small radius. Increasing the wetting layer thickness exhibits energy signatures similar to the bulk material for each QD size.
Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
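The transfer matrix method mentioned above can be illustrated for a single one-dimensional barrier with BenDaniel-Duke matching; the barrier height, effective masses and thickness in the sketch are assumed values, not the fitted perovskite parameters.

```python
import numpy as np

HBAR = 1.054571817e-34
M0 = 9.1093837015e-31
EV = 1.602176634e-19

def transmission(E_eV, layers):
    """Transfer-matrix transmission through a piecewise-constant 1-D potential.

    `layers` is a list of (V_eV, m_eff, thickness_m); the first and last entries
    are the semi-infinite contact regions (their thickness is ignored). Matching
    of psi and psi'/m (BenDaniel-Duke) is imposed at each interface.
    All numerical values below are illustrative assumptions.
    """
    E = E_eV * EV
    k = [np.sqrt(2 * m * M0 * (E - V * EV) + 0j) / HBAR for V, m, _ in layers]

    def L(kj, mj, x):   # plane-wave values and (1/m) derivatives at position x
        e_p, e_m = np.exp(1j * kj * x), np.exp(-1j * kj * x)
        return np.array([[e_p, e_m],
                         [1j * kj / mj * e_p, -1j * kj / mj * e_m]])

    M = np.eye(2, dtype=complex)
    x = 0.0
    for j in range(len(layers) - 1):
        if j > 0:
            x += layers[j][2]
        M = M @ np.linalg.inv(L(k[j], layers[j][1], x)) @ L(k[j + 1], layers[j + 1][1], x)
    t = 1.0 / M[0, 0]
    m_in, m_out = layers[0][1], layers[-1][1]
    return float(np.real(k[-1] / m_out) / np.real(k[0] / m_in) * abs(t) ** 2)

# single 0.4 eV barrier, 3 nm wide, between contacts (all values assumed)
stack = [(0.0, 0.2, 0.0), (0.4, 0.25, 3e-9), (0.0, 0.2, 0.0)]
for E in (0.1, 0.3, 0.5):
    print(f"E = {E:.2f} eV  ->  T = {transmission(E, stack):.4f}")
```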
Procedia PDF Downloads 165
179 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch
Authors: Eliska Smidova, Petr Kabele
Abstract:
This paper describes a non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along the fibers. For this purpose, we use a 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has recently been developed and implemented into the ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both the linear and non-linear range, (ii) elastic behavior until an anisotropic failure criterion is fulfilled, (iii) inelastic behavior after the failure criterion is satisfied, (iv) different post-failure responses for cracks along and across the grain, and (v) unloading/reloading behavior. The post-cracking response is treated by a fixed smeared crack model in which the Reinhardt-Hordijk function is used. The model requires in total 14 input parameters that can be obtained from standard tests, off-axis test results and iterative numerical simulation of compact tension (CT) or compact tension-shear (CTS) tests. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, 3 main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists of 3 numerical simulations of experiments involving Radiata Pine LVL and Yellow Poplar LVL. The first analysis deals with the calibration and validation of the proposed model through off-axis tensile tests (at load-grain angles of 0°, 10°, 45°, and 90°) and CTS tests (at load-grain angles of 30°, 60°, and 90°), both of which were conducted for Radiata Pine LVL. The second finite element simulation reproduces the load-CMOD curve of a compact tension (CT) test of Yellow Poplar, with the aim of obtaining the cohesive law parameters to be used as input in the third finite element analysis, which is a four-point bending test of a small-size arch of 780 mm span made of Yellow Poplar LVL. The arch is designed with a through crack between the two middle layers in the crown. Curved laminated beams are exposed to high radial tensile stress compared to the timber strength in radial tension in the crown area; note that in this case the latter parameter stands for the tensile strength in the direction perpendicular to the grain. Standard tests deliver most of the relevant input data, whereas the traction-separation law for a crack along the grain can be obtained partly by inverse analysis of the compact tension (CT) or compact tension-shear (CTS) test. The initial crack was modeled as a narrow gap separating two layers in the middle of the arch crown. The calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, the crack pattern given by the numerical simulation coincides with the most important observed crack paths.
Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model
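The Reinhardt-Hordijk softening function used in the fixed smeared crack model has a standard closed form; the sketch below evaluates it with the commonly quoted constants c1 = 3 and c2 = 6.93 and with assumed placeholder values for the tensile strength and critical crack opening, not the calibrated LVL parameters.

```python
import numpy as np

# Reinhardt-Hordijk exponential softening curve: crack-normal stress versus
# crack opening w. c1 = 3 and c2 = 6.93 are the commonly quoted constants;
# f_t and w_c below are assumed placeholders, not the values fitted to LVL.
def hordijk(w, f_t, w_c, c1=3.0, c2=6.93):
    r = np.clip(w / w_c, 0.0, 1.0)
    sigma = f_t * ((1.0 + (c1 * r) ** 3) * np.exp(-c2 * r)
                   - r * (1.0 + c1 ** 3) * np.exp(-c2))
    return np.maximum(sigma, 0.0)

f_t, w_c = 4.5e6, 0.2e-3      # tensile strength (Pa) and critical opening (m), assumed
w = np.linspace(0.0, w_c, 6)
for wi, si in zip(w, hordijk(w, f_t, w_c)):
    print(f"w = {wi * 1e3:.3f} mm  ->  sigma = {si / 1e6:.2f} MPa")
```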
Procedia PDF Downloads 290
178 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India
Authors: Soumita Banerjee
Abstract:
The State of West Bengal is watered by innumerable rivers, which differ in nature between the northern and the southern parts of the state. The southern part of West Bengal is mainly drained by the river Bhagirathi-Hooghly, and its major distributaries and tributaries have divided this major river basin into many sub-basins such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar and Kangsabati sub-basins, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah and South 24-Parganas. The southern part of West Bengal has a huge number of flood-prone blocks; the factors responsible for flooding are the shape and size of the catchment area, its steep gradient from plateau to flat terrain, river bank erosion and siltation, tidal conditions especially in the lower Ganga Basin, and the very poor maintenance of embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role both in generating floods (through the release of water) and in controlling them. This year the whole of Gangetic West Bengal has been flooded due to high-intensity and long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods also take place at times with the release of water from the dams of neighbouring states like Jharkhand. Other than embankments, there are no structural measures for combating floods in West Bengal. This paper tries to analyse the reasons behind this year's flood situation, especially with the help of climatic data collected from the Indian Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis. Based on the threshold value derived from the calculation of the past available flood data, it is possible to predict flood events which may occur in the near future, and with the help of social media the warning can be spread within a very short span of time to make the public aware. On a larger or governmental scale, raising the settlements situated on either bank of the river can yield a better result than building up embankments.
Keywords: dam failure, embankments, flood, rainfall
Procedia PDF Downloads 225
177 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study
Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu
Abstract:
Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing calculated results that reflect the individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as the personalization objectives of the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of the carotid artery flow and ankle pressure were set as objective waveforms. For the collected data and waveforms, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, and the objective function during optimization was the root mean square error between the collected waveforms and data and the simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. The results show a slight error between the collected and simulated data. The average relative root mean square errors of all optimization objectives of the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight error demonstrates the good performance of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of the LPM for the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which are applicable for the numerical simulation of patient-specific hemodynamics.
Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm
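The simulated annealing loop described above can be sketched generically; the stand-in "simulate" function and target values below are hypothetical, whereas in the study each evaluation would be a full closed-loop LPM run.

```python
import math
import random

# Minimal simulated-annealing sketch for personalising sensitive LPM parameters
# by minimising the RMSE between measured and simulated targets. The toy
# "simulate" function and targets below are illustrative stand-ins.
random.seed(0)
measured = [120.0, 80.0, 5.0, 72.0]   # SBP, DBP, cardiac output, heart rate (illustrative)

def simulate(params):
    r, c = params                      # hypothetical resistance/compliance parameters
    return [100.0 + 15.0 * r, 60.0 + 12.0 * r - 2.0 * c, 4.0 + 0.6 * c, 70.0 + r + c]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

params = [1.0, 1.0]
current_err = rmse(simulate(params), measured)
best, best_err = params[:], current_err
temp = 10.0
for step in range(500):               # 500 iterations, echoing the abstract
    candidate = [p + random.gauss(0.0, 0.1) for p in params]
    err = rmse(simulate(candidate), measured)
    # accept better candidates always, worse ones with a temperature-dependent probability
    if err < current_err or random.random() < math.exp(-(err - current_err) / temp):
        params, current_err = candidate, err
        if err < best_err:
            best, best_err = candidate[:], err
    temp *= 0.99                      # geometric cooling schedule

print(f"best parameters: {[round(p, 3) for p in best]}, RMSE = {best_err:.3f}")
```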
Procedia PDF Downloads 137176 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program
Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner
Abstract:
Introductory Statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, distribution, and application of infection testing. However, despite the impressive mobilization of resources, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of Basic Methodologies: 18 patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the Study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that, because she never tested positive, she was shielded from anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect it had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.Keywords: accessibility, COVID-19, recovery, testing
Procedia PDF Downloads 193175 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand
Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat
Abstract:
The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated methods and high costs. Currently, there are several online processing service providers which offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS-Precise Point Positioning, and CenterPoint RTX post-processing by Trimble, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the lowest distance root mean square (DRMS) value of 0.0029, which is accurate enough for monitoring the movement of tectonic plates. In the second part, we used AUSPOS to process the data of the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 (Mw) earthquake occurred north of Sumatra that strongly affected all nearby countries, including Thailand. Earthquake effects have introduced errors into the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large, so continued surveys are needed to improve the GPS coordinate system every year. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network consisting of 19 pins in Thailand during October 2013 to November 2017. Finally, the results are displayed on a simulation map by using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand; this pin moved 11.04 cm in the south-western direction. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., in the direction noticed before the earthquake. The magnitude of the movement is in the range of 4 - 7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting
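The displacement map in the study was produced with ArcMap's IDW tool; the following Python sketch shows the underlying inverse-distance-weighting calculation for interpolating pin movements onto arbitrary points. The pin coordinates, movement values, and the power parameter are illustrative assumptions, not the RTSD data.

```python
import numpy as np

def idw_interpolate(stations_xy, values, grid_xy, power=2.0):
    """Inverse Distance Weighting of pin displacements onto query points.

    stations_xy : (N, 2) array of pin coordinates (e.g., easting/northing).
    values      : (N,) displacements measured at the pins (e.g., cm).
    grid_xy     : (M, 2) array of points where values are to be estimated.
    """
    stations_xy = np.asarray(stations_xy, float)
    grid_xy = np.asarray(grid_xy, float)
    values = np.asarray(values, float)
    # Pairwise distances between grid points and stations (M x N).
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)           # avoid division by zero at station locations
    w = 1.0 / d ** power              # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Hypothetical pin coordinates (km) and horizontal movements (cm), not the real network.
pins = [[0.0, 0.0], [50.0, 10.0], [20.0, 60.0]]
moves = [11.04, 5.2, 6.7]
grid = [[10.0, 10.0], [40.0, 30.0]]
print(idw_interpolate(pins, moves, grid))
```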
Procedia PDF Downloads 189174 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts
Authors: Yawen Dai, Ping Cheng, Jian Ru Gong
Abstract:
Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies for solving the energy crisis and environmental pollution in modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far lower than the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half-reaction on photoanodes is a significant limiting factor of the PEC device efficiency, and electrocatalysts (ECs) are always deposited on SCs to accelerate hole injection for the OER. However, an active EC cannot guarantee enhanced PEC performance, since the newly emerged SC-EC interface complicates the interfacial charge behavior. Herein, α-Fe2O3 photoanodes coated with Co3O4 and CoO ECs are taken as the model system to glean fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This acts like a double-edged sword: it benefits the multi-hole OER but also aggravates SC-EC interfacial charge recombination due to Coulomb attraction, thus leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co3O4 has a low hole storage capacity, which brings limited interfacial charge recombination, and thus the increased surface holes can be efficiently utilized for the OER to generate an enhanced photocurrent. In contrast, CoO has an excessively large hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe2O3 is improved by Co3O4 but decreased by CoO despite the similar electrocatalytic activity of the two ECs. First-principles calculations were conducted to further reveal how the charge storage capacity depends on the EC's intrinsic properties, demonstrating that the larger hole storage capacity of CoO compared to Co3O4 is determined by their Co valence states and original Fermi levels. This study raises a new strategy to manipulate interfacial charge behavior and the resultant PEC performance through the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting
Procedia PDF Downloads 141173 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in several noise assessment procedures in which noise assessments by different laboratories were performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We find that, although good input georeferenced data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods and results of predictive noise models for two planned industrial projects, both of which were assessed independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, validation of the acoustic models was performed by noise measurements of the surrounding existing noise sources, but with varying durations. The acoustic characteristics of existing buildings were also not described identically. The planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty range of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing sources greatly increased the comparability of the noise modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable; differences were below 5 dBA, which was within the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly; 2) Noise measurements are important tools to minimize noise assessment errors for planned projects and, in cases of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) National guidelines should be established on the appropriate data, methods, noise source digitalization, validation of acoustic models, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
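For readers unfamiliar with the SIST ISO 9613-2:1997 calculation mentioned in lesson 1, the sketch below illustrates its general attenuation structure for a single point source, with only the geometric divergence and atmospheric absorption terms written out. Ground, barrier, directivity, and meteorological corrections are omitted, so this is a simplified illustration rather than the full standard, and the numeric inputs are invented.

```python
import math

def receiver_level(lw_db, distance_m, alpha_db_per_km=0.0, a_ground_db=0.0, a_barrier_db=0.0):
    """Approximate octave-band sound pressure level at a receiver for one point
    source, following the general attenuation structure of ISO 9613-2:
    Lp = Lw - (A_div + A_atm + A_gr + A_bar), with
    A_div = 20*log10(d/d0) + 11 dB (geometric divergence, d0 = 1 m) and
    A_atm = alpha * d / 1000 (atmospheric absorption).
    Ground and barrier attenuations are passed in as precomputed values here.
    """
    a_div = 20.0 * math.log10(distance_m / 1.0) + 11.0
    a_atm = alpha_db_per_km * distance_m / 1000.0
    return lw_db - (a_div + a_atm + a_ground_db + a_barrier_db)

# Illustrative numbers only: a 100 dB sound power source heard at 200 m.
print(round(receiver_level(100.0, 200.0, alpha_db_per_km=2.0), 1))
```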
Procedia PDF Downloads 234172 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review
Authors: Anicet Dansou
Abstract:
Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative solution to the traditional Fiber Reinforced Polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also makes it possible to better meet the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for the simulation of TRCs. They aim to incorporate information pertaining to microscale constituent behavior, mesoscale behavior, and macroscale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be introduced implicitly in the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is introduced explicitly in the macro-scale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches. Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances, and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering, is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete
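As a minimal illustration of the RVE homogenization idea behind the non-classical approaches described above, the sketch below computes the Voigt and Reuss bounds on the effective stiffness of a two-phase composite. Real TRC homogenization must additionally account for yarn geometry, bond behavior, and matrix cracking; the moduli and volume fraction used here are assumed example values.

```python
def voigt_reuss_bounds(e_fiber, e_matrix, vf):
    """Upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus
    of a two-phase composite homogenized over a representative volume element.

    e_fiber, e_matrix : Young's moduli of the textile yarn and cementitious matrix.
    vf                : fiber (textile) volume fraction within the RVE.
    """
    e_voigt = vf * e_fiber + (1.0 - vf) * e_matrix            # iso-strain bound
    e_reuss = 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)    # iso-stress bound
    return e_voigt, e_reuss

# Illustrative values (GPa) for an AR-glass textile in a fine-grained mortar.
print(voigt_reuss_bounds(e_fiber=72.0, e_matrix=30.0, vf=0.03))
```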
Procedia PDF Downloads 108171 Life Time Improvement of Clamp Structural by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, the process of eliminating unnecessary parts and qualifying part quality before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts during the testing process. Testing by trial and error consumes a long time before improvements are reached, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp life expectancy over all possible design combinations, which amount to 27 designs, excluding repeated designs. The number of design combinations was calculated following the full factorial rules of the six sigma methodology. The six sigma methodology is a well-structured method for improving the quality level by detecting and reducing the variability of the process; therefore, defects decrease while process capability increases. This research focuses on the methodology of stress and fatigue reduction while the compressive force still remains in the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement, from 0.01 to 0.1 mm, is recorded. The ANSYS setup was verified by a mesh convergence study, and the percentage error was compared with the experimental result; the error must not exceed the acceptable range. The improvement therefore focuses on the angle, radius, and length parameters that reduce stress while remaining within the acceptable force range. Fatigue analysis is then carried out as the next step in order to guarantee that the lifetime is extended, again using the ANSYS simulation program. The simulation is also validated against the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the six sigma method, finite element analysis, fatigue analysis, and linear regression analysis, which lead to accurate calculation, this project will be able to save up to 60 million dollars annually.Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability
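The 27-design search space mentioned above is consistent with a full factorial combination of three geometric factors at three levels each; the sketch below enumerates such a design table and applies a simple screening rule on stress and compressive force. The level values, stress limit, and force band are hypothetical, since the abstract does not report them.

```python
from itertools import product

# Hypothetical candidate levels for the three geometric parameters of the clamp;
# the actual values used in the study are not reported in the abstract.
angles_deg = [30.0, 45.0, 60.0]
radii_mm = [0.5, 1.0, 1.5]
lengths_mm = [10.0, 12.0, 14.0]

# Full factorial combination of 3 levels per factor gives 3**3 = 27 designs.
designs = list(product(angles_deg, radii_mm, lengths_mm))
print(len(designs))  # 27

def acceptable(max_stress_mpa, force_n, stress_limit=250.0, force_range=(5.0, 15.0)):
    """Screening rule: keep designs whose simulated peak stress stays below the
    limit while the compressive force remains inside the acceptable band."""
    return max_stress_mpa < stress_limit and force_range[0] <= force_n <= force_range[1]
```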
Procedia PDF Downloads 235170 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School
Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom
Abstract:
The requisite for an organization implementing a quality management system to sustain its compliance with the requirements and its commitment to continuous improvement is even higher. It is expected that the offices and units have high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under project management, for which it has a quality management certification. To further realize its mandate as a think-tank and capacity builder of the government, DAP expanded its operations and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs on public management and productivity & quality aligned to the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which directly increases the number of students. There is an apparent necessity to align the project management system into a more educational system; otherwise, it will no longer be responsive to the developments that are taking place. The GSPDM strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to our country. If innovation does not take root in the grounds of GSPDM, then how will it serve the purpose of “walking the talk”? This research was conducted to assess the diverse flows of the existing internal operations and processes of the DAP’s project management and GSPDM’s school management, which served as the basis to develop a system that harmonizes the two into one: the Learning Management System. The study documented the existing processes of GSPDM, following the project management phases of conceptualization & development, negotiation & contracting, mobilization, implementation, and closure, into different flow charts of the key activities. The primary sources of information, as respondents, were the different groups involved in the delivery of graduate programs: the executive, the learning management team, and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all programs within the GSPDM. The unique processes cover the three important areas of school management: students, curriculum, and faculty. The required processes of these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research shall identify critical reports and generate manageable documents and records to ensure accuracy, consistency, and reliable information. The researchers conducted an in-depth review of the DAP-GSPDM’s mandate, analyzed the various documents, and conducted a series of focused group discussions. A comprehensive review of the prior flow chart system and of various models of school management systems was made. Subsequently, the final output of the research is a work instructions manual that will be presented to the Academy’s Quality Management Council and will eventually form an additional scope for ISO certification.
The manual shall include documented forms, iterative flow charts, and a program Gantt chart that will support the parallel development of automated systems.Keywords: productivity, quality, learning management system, agency-based graduate school
Procedia PDF Downloads 319169 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study
Authors: Anna Ching-Yu Wong
Abstract:
Introduction: Textbooks account for a sizable portion of the overall cost borne by higher education students. It is broadly agreed that open education resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose of collecting the data is to understand how much is potentially saved by using OER materials and to have a record for further studies. Literature Review: In past years, researchers have identified how the rising cost of textbooks disproportionately harms students in higher education institutions and what the average cost of a textbook is. For example, Nyamweya (2018) found, using a simple formula, that on average students save $116.94 per course when OER are adopted in place of traditional commercial textbooks. Student PIRGs (2015) used reports of per-course savings when transforming a course from a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average cost of a textbook to be $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify non-OER courses from the UcanWeb Class Schedule. Step two: View the textbook lists for the classes (campus bookstore prices). Step three: Calculate the average textbook price by averaging the new book and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies depending on the size of the college as well as the number of enrolled students.Keywords: textbook savings, open textbooks, textbook costs assessment, open access
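The four methodology steps above reduce to a simple per-course calculation; the sketch below implements steps three and four for a single course. The prices and enrolment used are hypothetical examples, not figures from the SUNY Canton data.

```python
def course_savings(new_price, used_price, enrolled_students):
    """Average of new and used bookstore prices times the number of students
    in the course (steps three and four of the methodology)."""
    avg_price = (new_price + used_price) / 2.0
    return avg_price, avg_price * enrolled_students

# Hypothetical course: a $150 new / $110 used textbook in a 25-student section.
avg, total = course_savings(150.00, 110.00, 25)
print(f"average price ${avg:.2f}, potential savings ${total:,.2f}")

# Summing the per-course figure over all 233 non-OER courses would give the
# semester-wide estimate reported in the findings.
```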
Procedia PDF Downloads 75168 Hybrid Materials on the Basis of Magnetite and Magnetite-Gold Nanoparticles for Biomedical Application
Authors: Mariia V. Efremova, Iana O. Tcareva, Anastasia D. Blokhina, Ivan S. Grebennikov, Anastasia S. Garanina, Maxim A. Abakumov, Yury I. Golovin, Alexander G. Savchenko, Alexander G. Majouga, Natalya L. Klyachko
Abstract:
During the last decades, magnetite nanoparticles (NPs) have attracted deep interest from scientists due to their potential applications in therapy and diagnostics. However, magnetite nanoparticles are toxic and unstable under physiological conditions. To solve these problems, we decided to create two types of hybrid systems based on magnetite and gold, which is inert and biocompatible: gold as a shell material (first type) and gold as separate NPs interfacially bonded to magnetite NPs (second type). The synthesis of the first type of hybrid nanoparticles was carried out as follows: magnetite nanoparticles with an average diameter of 9±2 nm were obtained by co-precipitation of iron (II, III) chlorides; they were then covered with a gold shell by iterative reduction of hydrogen tetrachloroaurate with hydroxylamine hydrochloride. According to the TEM, ICP-MS and EDX data, the final nanoparticles had an average diameter of 31±4 nm and contained iron even after hydrochloric acid treatment. However, the iron signals (K-line, 7.1 keV) were not localized, so we cannot speak of a single magnetic core. The described nanoparticles, covered with mercapto-PEG acid, were non-toxic to human prostate cancer PC-3/LNCaP cell lines (more than 90% of cells survived compared to control) and had high R2-relaxivity rates (>190 mM-1s-1) that exceed the transverse relaxation rates of commercial MRI contrast agents. These nanoparticles were also used for chymotrypsin enzyme immobilization. The effect of an alternating magnetic field on the catalytic properties of chymotrypsin immobilized on magnetite nanoparticles, notably a slowdown of the catalyzed reaction at the level of 35-40%, was found. The synthesis of the second type of hybrid nanoparticles also involved two steps. Firstly, spherical gold nanoparticles with an average diameter of 9±2 nm were synthesized by the reduction of hydrogen tetrachloroaurate with oleylamine; secondly, they were used as seeds during magnetite synthesis by thermal decomposition of iron pentacarbonyl in octadecene. As a result, so-called dumbbell-like structures were obtained in which magnetite (cubes with a 25±6 nm diagonal) and gold nanoparticles were connected together pairwise. By the HRTEM method (for the first time for this type of structure), epitaxial growth of magnetite nanoparticles on the gold surface with co-orientation of the (111) planes was discovered. These nanoparticles were transferred into water by means of the block copolymer Pluronic F127, then loaded with the anti-cancer drug doxorubicin and also a PSMA vector specific for the LNCaP cell line. The obtained nanoparticles were found to have moderate toxicity toward human prostate cancer cells and entered the intracellular space after 45 minutes of incubation (according to fluorescence microscopy data). These materials are also promising from an MRI point of view (R2-relaxivity rates >70 mM-1s-1). Thereby, in this work, magnetite-gold hybrid nanoparticles, which have strong potential for biomedical application, particularly in targeted drug delivery and magnetic resonance imaging, were synthesized and characterized. That paves the way to the development of a special type of medicine: theranostics. The authors acknowledge financial support from the Ministry of Education and Science of the Russian Federation (14.607.21.0132, RFMEFI60715X0132).
This work was also supported by Ministry of Education and Science of the Russian Federation grant К1-2014-022, Russian Science Foundation grant 14-13-00731, and MSU development program 5.13.Keywords: drug delivery, magnetite-gold, MRI contrast agents, nanoparticles, toxicity
Procedia PDF Downloads 382