Search results for: DFT calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1241

191 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six-sigma is a metric that quantifies the performance of processes as a rate of Defects-Per-Million Opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance, with evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen in this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Allowable total error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle reports of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC) data. Sigma was calculated with the formula: Sigma = (Total Error - Bias) / CV. Analytical performance was rated on the sigma scale: sigma > 6 is world class, 5-6 is excellent, 4-5 is good, 3-4 is satisfactory and sigma < 3 is poor performance. Results: Based on the calculation, we found that 76% of the analytes are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% are excellent (calcium, protein and urea), and 10% (chloride and sodium) require more frequent IQC per day. Conclusion: Based on this study, we found that IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
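For illustration, the sigma calculation described above reduces to a few lines; the sketch below is not the authors' code, and the TEa, bias and CV values in the example are illustrative only.

```python
# Minimal sketch of the sigma-metric calculation described above.
# TEa (allowable total error), bias and CV values are illustrative only.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |Bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def rate(sigma):
    if sigma >= 6: return "world class"
    if sigma >= 5: return "excellent"
    if sigma >= 4: return "good"
    if sigma >= 3: return "satisfactory"
    return "poor"

# Example: glucose with CLIA TEa = 10%, bias 1.2% (EQA), CV 1.4% (IQC)
s = sigma_metric(10.0, 1.2, 1.4)
print(f"sigma = {s:.1f} -> {rate(s)}")   # sigma = 6.3 -> world class
```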

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 149
190 The Impact of Passive Design Factors on House Energy Efficiency for New Cities in Egypt

Authors: Mahmoud Mourad, Ahmad Hamza H. Ali, S.Ookawara, Ali Kamel Abdel-Rahman, Nady M. Abdelkariem

Abstract:

The energy consumption of a house can be affected simultaneously by many building design factors related to its main architectural features, building elements and materials. This study focuses on the impact of passive design factors on the annual energy consumption of a suggested prototype house for single-family detached houses of 240 m2 on two floors, each floor of 120 m2, in new Egyptian cities located in Alexandria, Cairo, Siwa, Assiut and Aswan, which represent five different climatic zones (northern coast, northern Upper Egypt, desert region, southern Upper Egypt and South Egypt), respectively. This study presents the effect of the passive design factors affecting building energy consumption: building orientation; building materials (walls, roof and slabs); building type (residential, educational, commercial); building occupancy (type and number of occupants, age); building landscape and site selection; building envelope and fenestration (glazing material, shading); and building plan form. This information can be used to estimate the approximate saving in energy consumption that would result from a change in the design datum for future housing development, and to identify the major design problems for energy efficiency. To achieve the above objective, this paper presents a study of the factors affecting building energy consumption in hot arid areas in new Egyptian cities in five different climatic zones, followed by a definition of the energy needs for different uses in this suggested prototype house. Consequently, a detailed analysis of the available renewable energy technologies used in the suggested home and a calculation of the yearly distribution of the energy required for this home are presented. The results obtained from the annual building energy analyses show that passive architectural design factors save about 35% of the annual energy consumption. They also show that passive cooling techniques save about 45%, and renewable energy systems about 40%, of the annual energy needs of this proposed home, depending on the city's climatic zone.

Keywords: architecture passive design factors, energy efficient homes, new Egyptian cities, renewable energy technologies

Procedia PDF Downloads 370
189 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, geosynchronous orbit (GEO) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR through a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric fidelity rather than high radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the primitives (triangles) visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the usual 'stop and go' model is not valid for GEO SAR, the range model has to be reconsidered. (4) Generating the GEO SAR image with an improved range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 137
188 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor

Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud

Abstract:

Milli-structured heat exchanger/reactors have recently been widely used, especially in the chemical industry, due to their enhanced heat and mass transfer performance compared to conventional apparatuses. In our work, the 'DeanHex' heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square cross-sectioned channel has a hydraulic diameter of 2 mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. The 1D treatment of the milli-channel reduces the calculation time significantly compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between simulation at the small scale and simulation at the reactor scale at a reasonable CPU cost. The heat transfer between the 1D milli-channel and its 3D surrounding is modeled, and the feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originating from two previous works. Temperature profiles along the channel axis obtained by simulation fit the experimental profiles in both cases. The next step is to integrate the liquid-liquid mass transfer model and to validate it against our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the mixture-model approach, and the mass transfer behavior is represented by an overall volumetric mass transfer coefficient (kLa) correlation obtained from our experimental results in the millimetric meandering channel. The present work is a first step towards the scale-up of the 'DeanHex' in anticipation of future industrialization of such equipment. A generalized scaled-up model of the reactor comprising all the transfer processes will therefore be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.
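As an aside, the role of an overall volumetric coefficient kLa in a 1D channel model can be illustrated with a minimal plug-flow sketch; all parameter values below are assumptions for illustration, not the paper's correlation or geometry.

```python
import numpy as np

# Sketch of a 1D plug-flow channel with an interphase mass-transfer term:
# u * dC/dz = -kLa * (C - C_eq). All values are illustrative.
L    = 2.0     # channel length (m)
u    = 0.1     # mean velocity (m/s)
kLa  = 0.05    # overall volumetric mass-transfer coefficient (1/s)
C_eq = 0.0     # equilibrium solute concentration in the donor phase (mol/m3)
C0   = 1.0     # inlet concentration (mol/m3)

z = np.linspace(0.0, L, 201)
C = C_eq + (C0 - C_eq) * np.exp(-kLa * z / u)   # analytical solution
print(f"outlet concentration: {C[-1]:.3f} mol/m3")
```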

Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification

Procedia PDF Downloads 93
187 Transient Level in the Surge Chamber at the Robert-Bourassa Generating Station

Authors: Maryam Kamali Nezhad

Abstract:

The Robert-Bourassa development (LG-2), the first to be built on the Grande Rivière, comprises two powerhouses, the East and the West, each with eight turbine-generator units. Each powerhouse has two tailrace tunnels with an average length of about 1178 m. The LG-2A powerhouse houses six turbine-generator units, whose water is discharged through two tailrace tunnels about 1330 m long. The objectives of this work at RB (LG-2) are: 1) to establish a new maximum transient level in the surge chamber; 2) to define the new maximum equipment flow rate for the future turbine-generator units; 3) to ensure safe access to various intervention locations in the surge chamber. The transient levels under normal operating conditions at the RB plant were determined in 2001 by the Hydraulics Unit of HQE using the "Chamber" software, a one-dimensional mass-oscillation calculation program. It is used to determine the variation of the water level in the surge chamber located downstream of a power plant during load rejection of the plant's units, and it can also be applied to a surge chamber upstream of a power plant. The RB (LG-2) study is based on the theoretical nominal geometry of the chamber and the tailrace tunnels, and on the flow-level relationship at the outlet of the tunnels established during design. The software is used in such a way that the results carry an acceptable margin of safety, especially with respect to the maximum transient level (e.g., resumption of flow at an inopportune time), to account for the turbulent and three-dimensional aspects of the actual flow in the chamber. Note that the transient levels depend on the water levels in the river and in the surge chambers at steady state; these data are maintained in the HQP CRP database and updated from time to time. The maximum transient levels in the RB-East and RB-West surge chambers were revised based on the latest update (set 4) of the in-river rating curves and the steady-state surge chamber water levels. The results of the revision were also used to update the technical advice on the operating conditions for access to the aforementioned surge chambers, taking the revised calculated water levels into account.
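For readers unfamiliar with mass-oscillation calculations of the "Chamber" type, the sketch below integrates the classic rigid-water-column surge-tank equations after a full load rejection; all parameter values are illustrative, not those of the RB plant.

```python
# Euler integration of the rigid-water-column surge-tank equations
# after full load rejection (turbine flow -> 0). Parameters illustrative.
g, L = 9.81, 1300.0        # gravity (m/s2), tailrace tunnel length (m)
At, As = 80.0, 500.0       # tunnel and surge-chamber cross sections (m2)
c = 0.002                  # head-loss coefficient (s2/m)
v, z = 3.0, 0.0            # initial tunnel velocity (m/s), initial level (m)
dt, t_end = 0.1, 600.0

z_max = z
for _ in range(int(t_end / dt)):
    dv = -(g / L) * (z + c * v * abs(v)) * dt   # momentum balance in the tunnel
    dz = (At * v / As) * dt                     # continuity at the chamber
    v += dv
    z += dz
    z_max = max(z_max, z)
print(f"maximum transient level above steady state: {z_max:.1f} m")
```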

Keywords: generating station, surge chamber, maximum transient level, hydroelectric power station, turbine-generator, reservoir

Procedia PDF Downloads 57
186 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for the stage of the disease, prognosis and surgery planning. In this study, we hypothesized that automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive differential diagnosis of melanocytic skin tumours and evaluation of penetration depth. The system is composed of region-of-interest segmentation in the spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation; the segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated from the ultrasound data (11 acoustical, 4 shape and 15 textural parameters), together with 55 quantitative features of the dermatoscopic and spectrophotometric images (using the total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each melanocytic skin tumour were determined during routine histological examination after excision and used as a reference. The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
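The optical-image segmentation step relies on Otsu thresholding; a minimal NumPy sketch of Otsu's method (illustrative only, not the authors' implementation) follows.

```python
import numpy as np

# Otsu's method: pick the threshold that maximizes between-class variance.
def otsu_threshold(image, nbins=256):
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 probability
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(between)]

img = np.random.rand(64, 64)   # stand-in for a spectrophotometric image
t = otsu_threshold(img)
mask = img > t                 # lesion / background mask
```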

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 247
185 Geomorphology of Leyte, Philippines: Seismic Response and Remote Sensing Analysis and Its Implication to Landslide Hazard Assessment

Authors: Arturo S. Daag, Ira Karrel D. L. San Jose, Mike Gabriel G. Pedrosa, Ken Adrian C. Villarias, Rayfred P. Ingeniero, Cyrah Gale H. Rocamora, Margarita P. Dizon, Roland Joseph B. De Leon, Teresito C. Bacolcol

Abstract:

The province of Leyte consists of various geomorphological landforms: a) landforms of tectonic origin, which transect a large part of the volcanic centers in the upper Ormoc area; b) landforms of volcanic origin: several inactive volcanic centers located in upper Ormoc are transected by the Philippine Fault; c) volcano-denudational and denudational slopes, which dominate the areas where most of the earthquake-induced landslides occurred; and d) colluvium and alluvial deposits, which dominate the foot slopes of Ormoc and the Jaro-Pastrana plain. Earthquake ground acceleration and the geotechnical properties of the various landforms are crucial for landslide studies. To generate the sliding-block critical acceleration model for landslides, various data were considered: geotechnical data (i.e., soil and rock strength parameters), slope, topographic wetness index (TWI), a landslide inventory, a soil map and geologic maps for the calculation of the factor of safety. Horizontal-to-vertical spectral ratio (HVSR) surveys, refraction microtremor (ReMi) and three-component microtremor (3CMT) measurements were conducted to measure site period and surface wave velocity, as well as to create a soil thickness model. A critical acceleration model of each geomorphological unit was built using remote sensing, field geotechnical, geophysical, and geospatial data collected from the areas affected by the 06 July 2017 M6.5 Leyte earthquake. Spatial analysis of the landslides induced by that earthquake was then performed to assess the relationship between the calculated critical acceleration and the peak ground acceleration. The observed trends proved helpful in establishing the role of critical acceleration as a determining factor in the distribution of co-seismic landslides.
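Critical-acceleration models of this kind typically build on the Newmark sliding-block relation; the sketch below assumes that formulation (the paper's exact model may differ).

```python
import math

# Newmark sliding-block critical acceleration: ac = (FS - 1) * g * sin(alpha).
# Where the peak ground acceleration exceeds ac, co-seismic sliding is expected.
def critical_acceleration(fs, slope_deg, g=9.81):
    return (fs - 1.0) * g * math.sin(math.radians(slope_deg))

# Example: factor of safety 1.4 on a 30-degree denudational slope
ac = critical_acceleration(1.4, 30.0)
print(f"ac = {ac:.2f} m/s2")   # ~1.96 m/s2
```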

Keywords: earthquake-induced landslide, remote sensing, geomorphology, seismic response

Procedia PDF Downloads 65
184 Plasma Technology for Hazardous Biomedical Waste Treatment

Authors: V. E. Messerle, A. L. Mosse, O. A. Lavrichshev, A. N. Nikonchuk, A. B. Ustimenko

Abstract:

One of the most serious environmental problems today is pollution by biomedical waste (BMW), which in most cases has undesirable properties such as toxicity, carcinogenicity, mutagenicity and flammability. Sanitary and hygienic surveys of typical solid BMW carried out in Belarus, Kazakhstan, Russia and other countries show that its risk to the environment is significantly higher than that of most chemical wastes. Utilization of toxic BMW requires the most universal methods available to ensure disinfection and disposal of any of its components; plasma processing of BMW is such a technology. To implement it, a thermodynamic analysis of the plasma processing of BMW was carried out and a plasma box furnace was developed. The studies were conducted using bone as an example. The Terra software package was used for the thermodynamic calculations, which were carried out in the temperature range 300-3000 K at a pressure of 0.1 MPa. It is shown that the final products contain no toxic substances. The organic mass of BMW mainly yields synthesis gas containing 77.4-84.6% combustible components, while the mineral part consists mainly of calcium oxide and contains no carbon. The degree of carbon gasification reaches 100% by 1250 K. The specific power consumption for BMW processing increases with temperature throughout this range and reaches 1 kWh/kg. To realize plasma processing of BMW, an experimental installation with a 30 kW DC plasma torch was developed, and the experiments allowed the thermodynamic calculations to be verified. Wastes are packed in boxes weighing 5-7 kg, which are placed in the box furnace. Under the influence of the air plasma flame, the average temperature in the box reaches 1800 °C; the organic part of the waste is gasified and the inorganic part is melted. The resulting synthesis gas is continuously withdrawn from the unit through the cooling and cleaning system, while the molten mineral part of the waste is removed from the furnace after shutdown. The experimental studies allowed the operating modes of the plasma box furnace to be determined; the exhaust gases were analyzed, and samples of the condensed products were collected and their chemical composition determined. Gas at the outlet of the plasma box furnace has the following composition (vol.%): CO - 63.4, H2 - 6.2, N2 - 29.6, S - 0.8. The total concentration of synthesis gas (CO + H2) is 69.6%, which agrees well with the thermodynamic calculation. The experiments confirmed the absence of toxic substances in the final products.

Keywords: biomedical waste, box furnace, plasma torch, processing, synthesis gas

Procedia PDF Downloads 495
183 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress

Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin

Abstract:

Stress has deleterious effects at the physical, psychological and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they neither integrate the different strategies in a coherent way nor take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model makes it possible to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called general strategies (Wellbeing and Avoidance). This study presents the development and validation of an instrument measuring coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using Mplus supports the existence of a model with six factors; the results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for stress and thus prevent mental health issues.
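Cronbach's alpha, used above for internal consistency, is straightforward to compute; here is a minimal sketch (illustrative only; the authors used dedicated statistical software).

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

scores = np.random.randint(1, 6, size=(300, 18))  # stand-in for the 300 respondents
print(f"alpha = {cronbach_alpha(scores):.2f}")
```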

Keywords: acceptance, coping strategies, stress, validation process

Procedia PDF Downloads 314
182 A Case Study of Determining the Times of Overhauls and the Number of Spare Parts for Repairable Items in Rolling Stocks with Simulation

Authors: Ji Young Lee, Jong Woon Kim

Abstract:

It is essential to secure high availability of railway vehicles to realize high-quality and efficient railway service. Once availability decreases, the planned railway service cannot be provided unless additional cars are reserved or purchased, or the service frequency is reduced. Such a situation would be a big loss in terms of the quality and cost of railway service, so operators make various efforts to keep the availability of railway vehicles high. To secure high availability, the idle time of the vehicles needs to be reduced, and the following methods are applied. First, through modular design, the exchange time for line-replaceable units is reduced, so that railway vehicles can be put back into service quickly. Second, to shorten periodic preventive maintenance, short-period preventive maintenance is kept test-oriented to minimize maintenance time, and reliability is secured through overhauls of each main component. With such design changes, modularized components are exchanged first at the time of a vehicle failure or overhaul so that the vehicle can be returned to service quickly, and the exchanged components are then repaired or overhauled. Spare components are therefore required for future failures and overhauls, and since the components are modularized and expensive, it is considerably important to stock reasonable quantities of spares. In particular, when a number of railway vehicles are put into service simultaneously, their overhauls fall due at almost the same time. For some vehicles, components then need to be exchanged and overhauled before the appointed overhaul period so that these components can serve as spare parts for the next vehicle's component overhaul. For this reason, component overhaul times and spare parts quantities should be decided together. This study deals with the overhaul times for repairable components of railway vehicles and the calculation of spare parts quantities in consideration of future failures and overhauls. However, as railway vehicles are operated according to the service schedule and maintenance work must fit around it, the problem is quite difficult to resolve mathematically. A simulation software system is therefore used to analyze the overhaul times for repairable components of railway vehicles and the spare parts required for the railway systems.
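To make the coupling between overhaul timing and spare quantities concrete, here is a toy Monte Carlo sketch; all parameters are invented for illustration, and the study itself used a dedicated simulation software system.

```python
import random

# Toy model: n_vehicles enter service together, a component is overhauled
# every `interval` days, the overhaul takes `turnaround` days, and a spare
# fills in meanwhile. We estimate the peak number of spares in use.
def peak_spares(n_vehicles=20, interval=730, turnaround=60,
                horizon=3650, jitter=30, runs=200):
    worst = 0
    for _ in range(runs):
        in_shop = []  # (start, end) of each overhaul
        for _ in range(n_vehicles):
            t = interval + random.uniform(-jitter, jitter)
            while t < horizon:
                in_shop.append((t, t + turnaround))
                t += interval
        peak = max(sum(s <= day < e for s, e in in_shop)
                   for day in range(0, horizon, 10))
        worst = max(worst, peak)
    return worst

print(f"spares to stock: {peak_spares()}")
```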

Keywords: overhaul time, rolling stocks, simulation, spare parts

Procedia PDF Downloads 308
181 Application of Reliability Methods for the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; since there is no sharp transition from the stable state to the failed state, the stability failure process is considered a random event. Controlling the risk of failure is of capital importance: it rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, in particular when calculating the stability of large structures exposed to a major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering; in our case, level II methods are applied via a limit-state study. The probability of failure is thus estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) types. By way of comparison, a level III method was also used, which carries out a full analysis of the problem and involves integrating the joint probability density function of the random variables over the safety domain, using Monte Carlo simulations. Taking into account the changes in stress under the normal, exceptional and extreme load combinations acting on the dam, the calculation results provided acceptable failure probability values which largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of combinations of unusual and extreme loads. Shear forces then induce sliding, which threatens the reliability of the structure through intolerable values of the probability of failure, especially in the case of increased uplift following a hypothetical failure of the drainage system.
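As an illustration of the level III approach, the sketch below runs a Monte Carlo simulation on a sliding limit state for a gravity dam; the limit-state function and all distributions are illustrative stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sliding limit state: g = (c*A + (N - U)*tan(phi)) / T - 1, failure when g < 0.
n = 1_000_000
c   = rng.normal(0.3e6, 0.05e6, n)                   # cohesion (Pa)
phi = rng.normal(np.radians(45), np.radians(3), n)   # friction angle (rad)
U   = rng.normal(60e6, 8e6, n)                       # uplift force (N per m of dam)
N, T, A = 120e6, 60e6, 60.0                          # weight, shear force (N/m), base area (m2/m)

g = (c * A + (N - U) * np.tan(phi)) / T - 1.0
pf = np.mean(g < 0.0)
print(f"estimated probability of sliding failure: {pf:.2e}")
```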

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 297
180 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective

Authors: Noury Bakrim

Abstract:

Formalization is indeed foundational to mathematical linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle, or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of the minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion relying on combinatorial mapping of rule hierarchies on the external Entax of the Event Sequence. Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on the graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie and edges from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H, such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections equal to the restriction maps RM₁ and RM₂. The model draws on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax), shifting from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).

Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence

Procedia PDF Downloads 98
179 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis

Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack

Abstract:

Spent nuclear fuel generates a mixed field of ionizing radiation in the surrounding water. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons; the fuel cladding effectively attenuates beta and alpha particle radiation. A small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding expose small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture, so the risk of hydrogen production, along with other radiolysis gases, should be analyzed for typical spent fuel for safety reasons. This work aims to perform a realistic study of hydrogen production by radiolysis assuming the most penalizing initial conditions. It consists in calculating the radionuclide inventory of a pellet, taking burnup and decay into account. Westinghouse 17X17 PWR fuel was chosen, and data were analyzed for different sets of enrichment, burnup, irradiation cycles and storage conditions. The inventory is the entry point for the simulation of hydrogen production with the radiolysis kinetic models of MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm of the fuel surface towards the solution (water) for alpha radiation, while the decrease is slower for beta and slower still for gamma radiation. Calculations are carried out to obtain spectra as a function of time, and the radiation dose rate profiles are taken as input data for the iterative calculations. The hydrogen yield was found to be around 0.02 mol/L. Calculations were also performed for a realistic scenario considering a capsule containing the spent fuel rod, and the hydrogen yield is discussed on this basis. Experiments are in progress to validate the hydrogen production rate using a cyclotron at > 5 MeV (ARRONAX, Nantes).
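For orientation, the order of magnitude of radiolytic hydrogen production can be estimated from a G-value; the sketch below uses commonly quoted primary yields and is only a back-of-the-envelope check, whereas the paper relies on full radiolysis kinetics in MAKSIMA-CHEMIST.

```python
# H2 production from an absorbed dose and a G-value (illustrative numbers).
EV_PER_J = 6.241509e18
AVOGADRO = 6.02214076e23

def h2_moles(dose_gy, mass_kg, g_value):
    """g_value: molecules of H2 per 100 eV absorbed (alpha ~1.3, gamma ~0.45)."""
    energy_ev = dose_gy * mass_kg * EV_PER_J
    return g_value * energy_ev / 100.0 / AVOGADRO

# Example: 1 kGy absorbed in 1 L (1 kg) of water by alpha radiation
print(f"{h2_moles(1e3, 1.0, 1.3):.2e} mol H2")   # ~1.3e-04 mol
```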

Keywords: radiolysis, spent fuel, hydrogen, cyclotron

Procedia PDF Downloads 489
178 The Effects of Cultural Distance and Institutions on Foreign Direct Investment Choices: Evidence from Turkey and China

Authors: Nihal Kartaltepe Behram, Göksel Ataman, Dila Okçu

Abstract:

With the development of foreign direct investment, the social, cultural, political and economic interactions between countries and institutions have become visible, and they have become determining factors for strategic structuring and market goals. In this context, the purpose of this study is to investigate the effects of cultural distance and institutions on foreign direct investment choices in terms of location and investment model. For international establishments, the concept of culture, as well as that of cultural distance, is taken specifically into consideration, especially in the selection of market entry methods. In the empirical studies conducted so far, a direct relationship between cultural distance and foreign direct investment has been established, and institutions and other influential variables have been examined when defining investment types. Considering the detailed calculation strategies and the empirical research, the most common investment models chosen in view of cultural distance are full-ownership enterprises and joint ventures. Also, when all of the factors affecting investment are taken into consideration, the effect of institutions such as government intervention, intellectual property rights, corruption and contract enforcement proves very important; furthermore, agglomeration has a more intense and stronger effect on investment than the other factors. China has been selected as the target country due to its weight in the world economy and its contributions to the developing countries with which it has commercial relationships. Qualitative research methods are used to measure the effects of the determinative variables in the hypotheses of the study on foreign direct investors and to evaluate the findings; in-depth interviews serve as the data collection method, and the data are examined through descriptive analysis. All interviews and analyses show that foreign direct investments react strongly to institutions and cultural distance. Moreover, agglomeration is the strongest determining factor for foreign direct investors in the Chinese market; that it outweighs the other factors examined is the most important finding of the study. We expect this study to become a useful guideline for developed and developing countries and for the strategic plans of local and national institutions.

Keywords: China, cultural distance, Foreign Direct Investments, institutions

Procedia PDF Downloads 390
177 Air Quality Health Index in Windsor, Canada, and the Impact of Regional Scale Transport

Authors: Xiaohong Xu, Tianchu Zhang, Yangfan Chen, Rongtai Tan

Abstract:

In Canada, the Air Quality Health Index (AQHI) is a scale designed to help residents understand the impact of air quality on human health. In Ontario, Canada, the AQHI was implemented in June 2015. This study investigated the temporal variability of the daily AQHI and the impact of regional transport on the AQHI in Windsor, Ontario, Canada from 2016 to 2019. During 2016-2019, 1428 daily AQHI values were recorded at the Windsor Downtown station. Among those, the AQHI was at the low health risk level (AQHI = 1, 2 or 3) on 82% of days and at the high risk level (AQHI = 7) on only a few days; the rest were at the moderate health risk level (AQHI = 4, 5, 6), indicating that air quality in Windsor was fairly good with relatively low health risk. The annual mean AQHI decreased from 2.95 in 2016 to 2.81 in 2019, demonstrating the improvement in air quality. On half of the days, the AQHI was 3 regardless of season. The AQHI was higher in the warm season (3.1) than in the cold season (2.6) due to more frequent moderate-risk days (27%, AQHI = 4) in the warm season and more frequent low-risk days (42%, AQHI = 2) in the cold season. Among the three pollutants considered in the AQHI calculation, O3 was the most frequently reported dominant contributor to the daily AQHI (88% of days), followed by NO2 (12%), especially in the cold season, with a small contribution from PM2.5 (<1%). In the past two decades, NO2 concentrations have decreased significantly and O3 concentrations have increased, so the daily AQHI has become less reliant on NO2 (from being the primary contributor on 51% of days during 2003-2010 to 12% during 2016-2019) and more reliant on O3 concentrations (from 49% to 88%). Trajectory analysis found that AQHI ≤ 3 days were closely associated with air masses from the north and northwest, whereas AQHI > 3 days were closely associated with air masses from the west and southwest. This is because northerly flows brought in cleaner air masses owing to fewer industrial facilities, while polluted air masses were transported from the south of Windsor, where several industrial states of the US are located. Overall, O3 concentrations dictate the daily AQHI values, the seasonal variability of the AQHI, and the impact of regional transport on the AQHI in Windsor. This makes further reductions of the AQHI challenging, because O3 concentrations are likely to continue increasing due to the weakened consumption of O3 by NO as NO emissions decrease, and due to more hot days caused by climate change. The predominant and increasing contribution of O3 to the AQHI calls for more effective control measures to mitigate O3 pollution and its impact on human health and the environment.
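For reference, the AQHI is computed from 3-hour average concentrations with the published Environment Canada formulation; a minimal sketch (the example inputs are invented, typical low-risk values):

```python
import math

# Canadian AQHI from 3-hour average concentrations
# (O3 and NO2 in ppb, PM2.5 in ug/m3).
def aqhi(o3_ppb, no2_ppb, pm25):
    risk = ((math.exp(0.000537 * o3_ppb) - 1)
            + (math.exp(0.000871 * no2_ppb) - 1)
            + (math.exp(0.000487 * pm25) - 1))
    return max(1, round(1000.0 / 10.4 * risk))

print(aqhi(o3_ppb=35, no2_ppb=12, pm25=8))   # -> 3, a low-risk day
```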

Keywords: air quality, Air Quality Health Index (AQHI), HYSPLIT, regional transport, Windsor

Procedia PDF Downloads 40
176 Evaluation of Learning Outcomes, Satisfaction and Self-Assessment of Students as a Change Factor in the Polish Higher Education System

Authors: Teresa Kupczyk, Selçuk Mustafa Özcan, Joanna Kubicka

Abstract:

The paper presents the results of a specialist literature analysis concerning learning outcomes and student satisfaction as a factor in the necessary change in the Polish higher education system. The objective of the empirical research was to determine students' assessment of learning outcomes and the satisfaction of their expectations, as well as their satisfaction with lectures and practical classes held in the traditional form, via e-learning and via video-conference. The assessment concerned the effectiveness of time spent in class, the usefulness of the delivered knowledge, the instructors' preparation and teaching skills, the application of tools, the curriculum and its adaptation to students' needs and the labour market, as well as studying conditions. Self-assessment of learning outcomes was confronted with assessment by the lecturers. An indirect objective of the research was also to identify how students assessed their activity and commitment in the acquisition of knowledge and their discipline in achieving educational goals. It was also analysed how the studies affected the students' willingness to improve their skills and their assessment of their prospects in the labour market. To capture the changes underway, the research was held at the beginning, during, and after completion of the studies. The study group included 86 students from two editions of full-time studies majoring in Management and specialising in "Mega-event organisation". The studies were held within the EU-funded project entitled "Responding to challenges of new markets - innovative managerial education". The results obtained were analysed statistically: means and standard deviations were calculated, and Student's t-test for independent samples was performed with the IBM SPSS Statistics 21.0 software package to describe differences between the studied variables during the course of the studies, as well as with respect to the respondents' gender. Correlations between variables were identified by calculating Pearson and Spearman correlation coefficients. The research results suggest the necessity of introducing changes in the teaching system applied at Polish higher education institutions, considering not only the obtained outcomes but also the impact on students' willingness to improve their qualifications continuously, improved self-assessment among students and their opportunities in the labour market.
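The statistical tests named above are readily reproduced outside SPSS; here is a minimal sketch with SciPy, using stand-in data rather than the study's.

```python
import numpy as np
from scipy import stats

# Sketch of the statistical tests named above (the authors used SPSS 21.0).
rng = np.random.default_rng(1)
before = rng.normal(3.4, 0.8, 43)   # stand-in satisfaction scores, start of studies
after  = rng.normal(3.9, 0.7, 43)   # stand-in scores, after completion

t, p = stats.ttest_ind(before, after)          # Student's t-test, independent samples
r, p1 = stats.pearsonr(before, after)          # linear correlation
rho, p2 = stats.spearmanr(before, after)       # rank correlation
print(f"t = {t:.2f} (p = {p:.3f}), r = {r:.2f}, rho = {rho:.2f}")
```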

Keywords: higher education, learning outcomes, students, change

Procedia PDF Downloads 212
175 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model

Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the dose-area product (DAP) and the fluoroscopy time are the only information available to the operator, and these two parameters are very poor indicators of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. If a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed. 1. First, clinical tests were performed on patient phantoms: Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during real chronic total occlusion (CTO) procedures, for a total of 80 cases, with Gafchromic films placed at the patients' backs. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison of the dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached, so that deterministic effects can be avoided.
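One geometric ingredient of this kind of dose reconstruction is scaling the reference-point air kerma to the skin plane; the sketch below illustrates that step only (inverse-square law plus an assumed backscatter factor of about 1.3) and is not the authors' algorithm.

```python
# Inverse-square scaling of reference-point air kerma to the skin plane.
# The backscatter factor is an assumed typical value, not a measured one.
def skin_dose_mgy(k_ref_mgy, d_ref_cm, d_skin_cm, backscatter=1.3):
    """k_ref_mgy: air kerma at the interventional reference point;
    d_ref_cm / d_skin_cm: focal distances to that point and to the skin."""
    return k_ref_mgy * (d_ref_cm / d_skin_cm) ** 2 * backscatter

# Example: 500 mGy at the reference point (61 cm), skin at 55 cm from the focus
print(f"{skin_dose_mgy(500.0, 61.0, 55.0):.0f} mGy")   # ~800 mGy on this field
```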

Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping

Procedia PDF Downloads 115
174 Hydrogeochemical Assessment, Evaluation and Characterization of Groundwater Quality in Ore, South-Western, Nigeria

Authors: Olumuyiwa Olusola Falowo

Abstract:

One of the objectives of the Millennium Development Goals is sustainable access to safe drinking water and basic sanitation. In line with this objective, an assessment of groundwater quality was carried out in the Odigbo Local Government Area of Ondo State from November to February 2019 to assess the suitability of the water for drinking, domestic and irrigation uses. Samples from 30 randomly selected groundwater sources (16 shallow wells and 14 boreholes) were collected and analyzed using the American Public Health Association methods for the examination of water and wastewater. Water quality index calculation and diagrams such as the Piper, Gibbs and Wilcox diagrams were used to assess the groundwater, in conjunction with irrigation indices such as percent sodium, sodium adsorption ratio, permeability index, magnesium ratio, Kelly ratio, and electrical conductivity. In addition, principal component analysis was used to determine the homogeneity and the source(s) influencing the chemistry of the groundwater. The results show that all the parameters are within the permissible limits of the World Health Organization. The physico-chemical analysis of the groundwater samples indicates that the dominant major cations are, in decreasing order, Na+, Ca2+, Mg2+ and K+, and the dominant anions are HCO3-, Cl-, SO42- and NO3-. The water quality index values indicate Good water (WQI of 50-75) in 70% of the study area. The dominant groundwater facies revealed in this study are the non-carbonate alkali facies (primary salinity) exceeding 50% (zone 7) and the transition zone in which no cation-anion pair exceeds 50% (zone 9), while evaporation, rock-water interaction, precipitation and silicate weathering are the dominant processes in the hydrogeochemical evolution of the groundwater. The study indicates that the waters are within the permissible limits of the irrigation indices adopted and plot in the excellent category on the Wilcox diagram. In conclusion, the water in the study area is good and suitable for drinking, domestic and irrigation purposes, with a low equivalent salinity concentration and moderate electrical conductivity.
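The paper does not state which WQI variant was used; the weighted arithmetic formulation is a common choice and reads, in sketch form (illustrative values and WHO-style limits):

```python
# Weighted arithmetic water quality index (one common WQI formulation).
def wqi(samples):
    """samples: list of (measured, standard) pairs, one per parameter."""
    num = den = 0.0
    for measured, standard in samples:
        q = 100.0 * measured / standard   # quality rating (ideal value taken as 0)
        w = 1.0 / standard                # unit weight, inverse of the standard
        num += w * q
        den += w
    return num / den

# Example with illustrative concentrations and limits in mg/L
data = [(45.0, 200.0),    # Na+
        (60.0, 75.0),     # Ca2+
        (110.0, 250.0),   # Cl-
        (30.0, 50.0)]     # NO3-
print(f"WQI = {wqi(data):.1f}")   # 50-75 -> 'Good water' in the scheme above
```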

Keywords: equivalent salinity concentration, groundwater quality, hydrochemical facies, principal component analysis, water-rock interaction

Procedia PDF Downloads 114
173 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, and the rising number of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) exceeds the standard permissible value at all locations. Noise barriers or enclosures are definitely useful in obtaining an effective reduction of traffic noise disturbance in urbanized areas. The US Federal Highway Administration model (FHWA) and the UK's Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise, with a view to redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barriers using both the FHWA and CORTN models. During data collection it was found that the receivers are located far away from the road at the Rithala and Moolchand sites; hence the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through propagation effects. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. At all sites, noise levels at the receivers exceeded the standard limit of 55 dB. The calculations showed that the existing walls reduce noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road; heavy traffic was observed there since it is a national highway, and the attenuation of noise through propagation effects at this site was very small. As the FHWA and CORTN models were implemented as spreadsheets, laborious manual noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
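Barrier attenuation in FHWA-type calculations is driven by the Fresnel number of the diffraction path; the sketch below uses the Kurze-Anderson point-source approximation as an illustrative stand-in for the spreadsheets' internals.

```python
import math

# Kurze-Anderson point-source approximation for barrier insertion loss.
def barrier_attenuation_db(src, brk, rcv, freq_hz=500.0, c=343.0):
    """src, brk, rcv: (x, y) of source, barrier top and receiver, in metres."""
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    delta = dist(src, brk) + dist(brk, rcv) - dist(src, rcv)  # path difference
    N = 2.0 * delta / (c / freq_hz)                           # Fresnel number
    if N <= 0:
        return 0.0
    x = math.sqrt(2.0 * math.pi * N)
    return 5.0 + 20.0 * math.log10(x / math.tanh(x))

# 4 m barrier 5 m from the road, receiver 20 m beyond it, both near ground level
print(f"{barrier_attenuation_db((0, 0.5), (5, 4.0), (25, 1.5)):.1f} dB")  # ~18.6 dB
```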

Keywords: FHWA, CORTN, noise sources, noise barriers

Procedia PDF Downloads 104
172 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis

Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone

Abstract:

The use of a radiant cooling solution would make it possible to lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew-point temperature. A radiant cooling system combined with a dehumidification system would make it possible to remove humidity from the space, thereby lowering the dew-point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excess heat and moisture. This work aims at estimating the specification requirements of such a system in terms of the cooling power and dehumidification rate required to meet comfort targets and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. The study was carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C, with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat-recovery heat exchanger and another heat exchanger connected to a heat sink. The main results show that the system should be designed to meet a cooling power of 42 W.m-2 and a desiccant rate of 45 gH2O.h-1. In a second step, a parametric study of comfort and system performance was carried out on a more realistic system (including a chilled ceiling) under different operating conditions, which provides an estimate of the acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
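The condensation-risk constraint mentioned above reduces to comparing the panel surface temperature with the dew point; here is a minimal sketch using the Magnus approximation (standard coefficients; the study itself relies on TRNSYS for the full dynamic balances, and the panel temperature below is illustrative).

```python
import math

# Dew point via the Magnus approximation (a = 17.62, b = 243.12 degC).
def dew_point_c(t_air_c, rh_pct, a=17.62, b=243.12):
    gamma = a * t_air_c / (b + t_air_c) + math.log(rh_pct / 100.0)
    return b * gamma / (a - gamma)

t_dp = dew_point_c(25.0, 60.0)   # warmest / most humid allowed set point
panel_surface_c = 17.0           # illustrative chilled-ceiling temperature
print(f"dew point = {t_dp:.1f} C -> condensation risk: {panel_surface_c <= t_dp}")
```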

Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing

Procedia PDF Downloads 148
171 Impact of Electric Vehicles on Energy Consumption and Environment

Authors: Amela Ajanovic, Reinhard Haas

Abstract:

Electric vehicles (EVs) are considered an important means of coping with current environmental problems in transport. However, their high capital costs and limited driving ranges constitute major barriers to broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our approach is based on the calculation of the total cost of ownership of EVs in comparison to conventional cars, and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are the kilometres driven per year, the depreciation time of the car and the interest rate. The analysis of future prospects is based on technological learning with respect to the investment costs of batteries. The major results are as follows. The main disadvantages of battery electric vehicles (BEVs) are the high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions over the whole energy supply chain than conventional vehicles, but unlike BEVs they are not zero-emission vehicles at the point of use. The number of kilometres driven has a higher impact on total mobility costs than the learning rate; hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs: they have only slightly higher costs and similar operating ranges to conventional vehicles, but since they are dependent on fossil fuels, they can only be seen as an energy efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles have not gained high popularity, and together with PHEVs and REXs they contribute to faster technological learning and reductions in battery costs. Regarding the promotion of EVs, the best results can be achieved with a combination of monetary and non-monetary incentives, as in Norway, for example. The major conclusion is that to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. This should ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely to be higher than those of conventional cars.
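The total-cost-of-ownership comparison hinges on annualizing the capital cost over the depreciation time at the chosen interest rate; a minimal sketch with invented figures (not the paper's data) follows.

```python
# Per-km total cost of ownership with a capital recovery factor (CRF).
def cost_per_km(capital, energy_per_km, om_per_year, km_per_year,
                interest=0.05, years=10):
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return (capital * crf + om_per_year) / km_per_year + energy_per_km

bev  = cost_per_km(capital=35000, energy_per_km=0.045, om_per_year=400, km_per_year=15000)
conv = cost_per_km(capital=25000, energy_per_km=0.105, om_per_year=700, km_per_year=15000)
print(f"BEV: {bev:.3f} EUR/km, conventional: {conv:.3f} EUR/km")
# Doubling km_per_year (taxi, car-sharing) dilutes the BEV's capital-cost handicap.
```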

Keywords: costs, mobility, policy, sustainability

Procedia PDF Downloads 200
170 Realistic Modeling of the Preclinical Small Animal Using Commercial Software

Authors: Su Chul Han, Seungwoo Park

Abstract:

With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models in cancer research is increasing. Furthermore, small-animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small-animal phantom that makes it possible to verify the delivered dose, using commercial software. The small-animal phantom was modeled from the 4D digital mouse whole-body (Moby) phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files with Matlab, and the two-dimensional CT images were converted to a three-dimensional image that can be segmented and cropped in the sagittal, coronal and axial views. The CT images of the small animal were processed as follows. Based on the profile line values, thresholding was carried out to make a mask connecting all regions within the same threshold range. Using this thresholding method, we segmented the images into three parts (bone, body tissue, lung); to separate neighboring pixels between the lung and body tissue, we used the region-growing function of the Mimics software. We then acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations, and triangle reduction was performed in edge mode with a tolerance of 0.1 mm, an edge angle of 15 degrees and 5 iterations. The processed 3D object was converted to an STL file for output on a 3D printer. We modified the 3D small-animal file using 3-matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips, and thus acquired a 3D object of a realistic small-animal phantom. The width of the small-animal phantom is 2.631 cm, its thickness 2.361 cm, and its length 10.817 cm. The Mimics software offered efficient 3D object generation and convenient conversion to STL files. The developed preclinical small-animal phantom should increase the reliability of absorbed-dose verification in small animals for preclinical studies.
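The lung/body separation step uses region growing; here is a minimal intensity-based sketch of the technique (illustrative only; the study used the built-in Mimics function).

```python
import numpy as np
from collections import deque

# Intensity-based region growing: flood-fill from a seed voxel, accepting
# 6-connected neighbours whose value is within `tol` of the seed value.
def region_grow(volume, seed, tol):
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        p = queue.popleft()
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = tuple(np.add(p, d))
            if all(0 <= q[i] < volume.shape[i] for i in range(3)) \
               and not mask[q] and abs(volume[q] - seed_val) <= tol:
                mask[q] = True
                queue.append(q)
    return mask

ct = np.random.randint(-1000, 1000, (32, 32, 32)).astype(float)  # stand-in CT (HU)
lung_mask = region_grow(ct, seed=(16, 16, 16), tol=150.0)
```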

Keywords: mimics, preclinical small animal, segmentation, 3D printer

Procedia PDF Downloads 342
169 The Relationships between Antimüllerian Hormone, Androgens and Ovarian Reserve in Non-Obese East Indian Women with and without Polycystic Ovary Syndrome

Authors: Dipanshu Sur, Ratnabali Chakravorty, Rimi Pal, Siddhartha Chatterjee, Joyshree Chaterjee, Amal Mallik

Abstract:

Background: Polycystic ovary syndrome (PCOS) is a common endocrine disease in women of reproductive age, with a complex hormonal disturbance that affects the menstrual cycle and leads to metabolic consequences in later life. Hyperandrogenaemia is a noticeable feature of PCOS and influences the process of folliculogenesis in women. Levels of Antimüllerian hormone (AMH) reflect the number of pre-antral follicles and are thus a marker of the oocyte pool, the germinal reserve of the ovary for reproduction. Besides its use in IVF (in-vitro fertilization), determination of AMH may serve as an additional marker in the diagnosis of PCOS, where increased AMH levels reflect the severity of the disease. A positive correlation of serum AMH with the number of antral follicles has also been found in patients with PCOS. Objective: The objective of this study was to investigate the relationship between AMH and androgens, and whether AMH contributes to altered folliculogenesis in non-obese women with PCOS. Methods: We designed a prospective study that included a total of 65 IVF patients: 26 cases of PCOS diagnosed according to the 2003 Rotterdam criteria and 39 ovulatory, non-PCOS, healthy, age-matched controls. AMH levels and ovarian morphology were assessed, and the relationships between AMH and androgenaemia in patients with and without PCOS were studied. Results: The mean age of the PCOS patients was slightly higher than that of the controls (32±4 and 28±3 years, respectively). AMH generally increased with antral follicle count (AFC) [P=0.001], testosterone and luteinising hormone, and decreased with age and serum sex hormone binding globulin (SHBG). No significant relationship was found between circulating AMH levels and BMI in either the PCOS or the non-PCOS patients. Calculation of AMH production per antral follicle (AMH/AF) showed a significant difference in median AMH/AF between the PCOS and non-PCOS groups (P=0.001). Both groups showed a very similar increase in AMH with increasing AFC, but the PCOS patients had consistently higher AMH across all AFC levels. Conclusions: These observations indicate a connection between AMH and androgen levels in PCOS and non-PCOS East Indian women. Excessive granulosa cell activity may be implicated in the abnormal follicular dynamics of the syndrome. AMH levels are higher in women with PCOS and, conversely, very low in women with ovarian failure.
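
The per-follicle normalisation and group comparison reported here amount to a ratio followed by a non-parametric test of medians. The sketch below, using scipy.stats on synthetic placeholder data, shows the shape of that calculation; the distributions and sample values are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder data (AMH and antral follicle counts), not study values
amh_pcos,    afc_pcos    = rng.normal(45, 10, 26), rng.integers(15, 30, 26)
amh_control, afc_control = rng.normal(20, 5, 39),  rng.integers(8, 15, 39)

# AMH production per antral follicle (AMH/AF)
ratio_pcos = amh_pcos / afc_pcos
ratio_ctrl = amh_control / afc_control

# Non-parametric comparison of the two groups
u, p = stats.mannwhitneyu(ratio_pcos, ratio_ctrl)
print(f"median AMH/AF: PCOS {np.median(ratio_pcos):.2f}, "
      f"control {np.median(ratio_ctrl):.2f}, p = {p:.4f}")
```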

Keywords: anti-Mullerian hormone, polycystic ovary syndrome, antral follicle count, androgens

Procedia PDF Downloads 182
168 Kinematics and Dynamics Analysis of Crank-Piston System of a High-Power, Nine-Cylinder Aircraft Engine

Authors: Michal Biały, Konrad Pietrykowski, Rafal Sochaczewski

Abstract:

This paper presents a kinematics and dynamics analysis of the crank-piston system of an aircraft engine. The object of the study was the high-power aircraft engine ASz 62-IR, produced by the Polish company WSK "PZL-KALISZ" S.A. All analyses were performed numerically using a CAD and CAE environment. A three-dimensional model of the crank-piston system was developed based on a real engine located in the Laboratory of the Centre of Innovation and Advanced Technologies of Lublin University of Technology. During the development of the model, the reverse-engineering technique of 3D scanning was used. The ASz 62-IR engine is characterized by a radial crank-piston system, in which the cylinders are arranged radially around a circle. The crank-piston system consists of a main connecting rod and eight additional connecting rods; the three-dimensional model also includes the piston pins, pistons and piston rings. As a result of the specific engine design, the characteristics of the individual piston movements differ slightly from each other, but the model assumes they are the same during the analysis. The three-dimensional model of the engine was implemented in the MSC Adams software. The MSC Adams environment allows multibody simulation of dynamic phenomena, determining the state parameters of the moving elements, among which the load or force distribution at each kinematic node can be distinguished. Material types and characteristic material parameters were adopted on the basis of materials commonly used for engine parts, and the mass values of individual elements were adopted from real engine parts. The piston gas forces were derived from pressure variations recorded during engine tests on the engine test bench. The research investigated the changes in the forces acting in the individual kinematic pairs of the crank-piston system. The model also allows determination of the load on the crankshaft main bearings, which makes analysis of the main support forces possible. The model allows for testing and simulation of the kinematics and dynamics of a radial aircraft engine. This is the first stage of work aimed at the numerical simulation of vibration in a multi-cylinder aircraft engine. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
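
Before a full multibody model, the basic kinematics of a single slider-crank mechanism can be written in closed form. The sketch below evaluates piston position, velocity and acceleration from the crank angle; it is a textbook single-cylinder approximation, not the MSC Adams radial-engine model, and the crank radius, rod length and engine speed are illustrative assumptions.

```python
import numpy as np

def piston_kinematics(theta, r, l, omega):
    """Closed-form slider-crank kinematics for crank radius r, rod length l,
    crank angle theta [rad] and constant angular speed omega [rad/s]."""
    lam = r / l                                   # crank-to-rod ratio
    x = r * np.cos(theta) + l * np.sqrt(1 - (lam * np.sin(theta)) ** 2)
    # Velocity and acceleration via the standard second-harmonic approximation
    v = -r * omega * (np.sin(theta) + lam / 2 * np.sin(2 * theta))
    a = -r * omega**2 * (np.cos(theta) + lam * np.cos(2 * theta))
    return x, v, a

theta = np.linspace(0, 2 * np.pi, 361)
x, v, a = piston_kinematics(theta, r=0.08, l=0.30, omega=2200 / 60 * 2 * np.pi)
print(f"peak piston acceleration: {np.max(np.abs(a)):.0f} m/s^2")
```

Multiplying such accelerations by the piston and rod masses gives the inertia forces that, together with the gas forces, load each kinematic pair in a multibody simulation.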

Keywords: aircraft engine, CAD, CAE, dynamics, kinematics, MSC Adams, numerical simulation

Procedia PDF Downloads 351
167 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas

Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo

Abstract:

The aim of the paper is to estimate and forecast road traffic injuries over the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including the injury cross-effects of mode changes. The paper discusses possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998-2012 (N=4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, calculations of the number of injuries and injury rates by type of road user (motorized versus non-motorized categories), by sex, age and type of road are made, as sketched below. A forecast population increase of 25% by 2025 in the six urban areas will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport (pedestrians on their way to and from public transport nodes) implies higher exposure for pedestrians and cyclists converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the increasing urban population; in addition, the diminishing returns of most road safety countermeasures have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing and decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and further environmental impacts. In this regard, safety and environmental issues will as a rule concur; however, pursuing environmental goals (e.g. improved air quality, reduced CO₂ emissions) by encouraging more cycling may generate more cycling injuries. The study received financial grants from the Norwegian Research Council's Transport Safety Program.
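
The forecasting logic described, injuries as mode-specific rates times exposure under a modal shift, can be captured in a few lines. The sketch below projects serious injuries for a shrinking car share and a growing public transport and walking/cycling share; the population, rates and shares are illustrative assumptions, not the study's Norwegian data.

```python
# Injuries = sum over modes of (injury rate per million person-km) * exposure.
# All numbers below are illustrative assumptions.
population_growth = 1.25          # forecast increase by 2025
km_per_person = {"car": 4000, "public": 1500, "walk_bike": 500}
rate_per_mkm  = {"car": 0.05, "public": 0.01, "walk_bike": 0.20}

def projected_injuries(pop, shift_from_car=0.0):
    """Shift a fraction of car km to public transport and walking/cycling (50/50)."""
    km = dict(km_per_person)
    moved = km["car"] * shift_from_car
    km["car"] -= moved
    km["public"] += moved / 2
    km["walk_bike"] += moved / 2
    return sum(pop * km[m] / 1e6 * rate_per_mkm[m] for m in km)

base = projected_injuries(500_000 * population_growth)
shifted = projected_injuries(500_000 * population_growth, shift_from_car=0.2)
print(f"baseline: {base:.0f}, with 20% modal shift: {shifted:.0f} serious injuries")
```

With these assumed rates the shifted scenario produces more injuries than the baseline, illustrating the cross-effect the paper highlights: converting car kilometres into walking and cycling raises exposure in the highest-risk non-motorized categories.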

Keywords: road injuries, forecasting, reduced private car use, urban, Norway

Procedia PDF Downloads 211
166 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer

Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geoffrey S. Ibbott

Abstract:

The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184 degrees, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160 degrees), as commonly used at the MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with gamma index criteria of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher than VMAT for the esophagus, normal brain and thyroid, respectively. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and QA results were greater than 98%. The calculations of maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT or 3DCRT.
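
The OAR metrics compared above (e.g. the esophagus volume receiving more than 2 Gy, often written V2Gy) come straight from the dose-volume histogram. A minimal sketch, assuming the voxel doses for a structure are available as an array and that all voxels have equal volume:

```python
import numpy as np

def v_dose(voxel_doses_gy, threshold_gy):
    """Percent of the structure volume receiving more than threshold_gy
    (equal voxel volumes assumed)."""
    return 100.0 * np.mean(voxel_doses_gy > threshold_gy)

# Placeholder esophagus dose array (Gy); values are illustrative assumptions
esophagus = np.random.default_rng(1).uniform(0.5, 3.0, size=5000)
print(f"V2Gy = {v_dose(esophagus, 2.0):.1f} %, V3.8Gy = {v_dose(esophagus, 3.8):.1f} %")
```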

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD

Procedia PDF Downloads 482
165 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, it is necessary to deploy the infrastructure required for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic communities co-exist is presented. This paradigm hinges on a community of reference agents that provides support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of overall support and, more specifically, on the positioning and navigation methods implemented in the RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main factors limiting precision and resolution, such as phase and frequency noise in the RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectral regions is considered both in terms of absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, coexisting with the network used by the individual agents to communicate both within the exploring community and with the mission support agents. The proposed approach implements a substantial improvement in planetary navigation, since it provides self-localization capabilities to robotic agents with very low mass, volume and power budgets, thus enabling precise navigation for agents of reduced dimensions. Furthermore, a common, shared localization radiofrequency infrastructure enables new interaction mechanisms, such as the spatial arrangement of agents over an area of interest for distributed sensing.
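
Once ranges to a few reference agents are available, a position estimate follows from multilateration. A minimal sketch, assuming known 2D reference-node positions and noisy range measurements (the geometry and noise level are illustrative assumptions, not parameters from the paper):

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Gauss-Newton multilateration: solve for the 2D position whose distances
    to the anchor nodes best match the measured ranges."""
    pos = anchors.mean(axis=0)                 # initial guess: centroid
    for _ in range(iters):
        diffs = pos - anchors                  # (n, 2)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        J = diffs / dists[:, None]             # Jacobian of range w.r.t. position
        residual = ranges - dists
        pos += np.linalg.lstsq(J, residual, rcond=None)[0]
    return pos

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # reference agents (m)
true = np.array([40.0, 25.0])
ranges = np.linalg.norm(true - anchors, axis=1) + np.random.default_rng(2).normal(0, 0.1, 3)
print(trilaterate(anchors, ranges))  # close to [40, 25]
```

Carrier phase and frequency noise of the kind analysed in the paper enter this picture as the error term on the measured ranges, which is why the spectral characterisation of the reference carriers bounds the achievable position accuracy.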

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 261
164 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, smooth bounds on the extremes of the system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; the random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables, and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables, and the probability share of each finite element calculation is determined from the probabilities assigned to the input variable ranges present in each combination. The horizontal displacement of the top point of the excavation is considered the main system response. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in-situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is suitable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
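
The belief/plausibility construction can be sketched for a toy response function. Below, each input variable carries two intervals (focal elements) with basic probability assignments; every combination yields a response interval evaluated at its corner values (standing in for the finite element runs), and the bounds on P(response ≤ threshold) follow from which intervals lie entirely below the threshold versus merely intersecting it. All numbers, including the toy displacement function, are illustrative assumptions.

```python
from itertools import product

# Each variable: list of (interval, basic probability assignment)
friction = [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)]   # degrees
modulus  = [((40.0, 60.0), 0.5), ((50.0, 80.0), 0.5)]   # MPa

def displacement(phi, E):
    """Toy response standing in for a finite element run (mm)."""
    return 1200.0 / E + 0.5 * (40.0 - phi)

threshold = 31.0  # mm, illustrative limit on horizontal displacement
belief = plausibility = 0.0
for (phi_iv, m_phi), (E_iv, m_E) in product(friction, modulus):
    # Response interval from all corner combinations of the focal elements
    corners = [displacement(p, e) for p in phi_iv for e in E_iv]
    lo, hi = min(corners), max(corners)
    m = m_phi * m_E                      # joint probability assignment
    if hi <= threshold:                  # interval entirely below threshold
        belief += m
    if lo <= threshold:                  # interval intersects the event
        plausibility += m

print(f"P(displacement <= {threshold} mm) in [{belief:.2f}, {plausibility:.2f}]")
```

The gap between belief and plausibility is the direct expression of the imprecision in the input information; it narrows as the expected ranges of the input variables are refined.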

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 243
163 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical applications, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Thermoelectric Power-plant Oradea (still closed today). Basic Methods: Solving the theoretical turbulent thermal pollution problem is particularly difficult. By using semi-empirical theories, or by simplifying the assumptions made on the basis of experimental measurements, a mathematical model can be elaborated for further numerical simulations. The three zones of the flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core; for each zone the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined, with correction factors based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that for the velocity is used, by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) of 4 different diameters, between 200-500 mm, as found at the Thermoelectric Power-plant Oradea. Conclusions: A superposition of the molecular and turbulent viscosities was made, followed by the addition of the molecular and turbulent transfer coefficients, as needed to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared to flow without a temperature gradient. The results obtained agree within a 5% margin of error between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton and Prandtl numbers is obtained for a specific flow (with its associated Reynolds number).
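
A Stanton-Prandtl-Reynolds correlation of the kind described can be evaluated numerically. The sketch below uses the classical Colburn analogy, St = 0.023 Re^(-0.2) Pr^(-2/3), as a stand-in, since the paper's own correction factors are not reproduced here; the fluid properties are illustrative assumptions, and the pipe diameters match the 200-500 mm range mentioned.

```python
def stanton_colburn(re, pr):
    """Colburn analogy for turbulent pipe flow; a stand-in for the paper's
    correlation, whose correction factors are not reproduced here."""
    return 0.023 * re ** -0.2 * pr ** (-2.0 / 3.0)

# Water-like properties (illustrative assumptions)
nu, pr, u = 1.0e-6, 7.0, 1.5      # kinematic viscosity [m^2/s], Prandtl number, velocity [m/s]
for d in (0.2, 0.3, 0.4, 0.5):    # pipe diameters [m], as in the study
    re = u * d / nu
    print(f"D = {d:.1f} m  Re = {re:.2e}  St = {stanton_colburn(re, pr):.2e}")
```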

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 133
162 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify the maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology, as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in each of five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information typical of vineyards in their area; labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements have been developed for each representative vineyard and evaluated using a stochastic simulation model. Labor costs for selected vineyard production tasks were evaluated with respect to the potential of new precision mechanization technology under development. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed the calculation of a maximum price for new technology at which the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. The investigators have developed a preliminary list of production tasks with potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
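
The break-even price calculation, the technology price at which the NPV of purchasing, owning, and operating the machine equals the NPV of the labor it replaces, can be sketched directly. All numerical inputs below are illustrative assumptions, not panel data from the study.

```python
def npv(cashflows, rate):
    """Net present value of year-end cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

rate = 0.06            # discount rate (assumption)
years = 7              # horizon used for the task-level NPV, per the abstract
labor_cost = 18000     # annual labor cost of the task, USD (assumption)
operating_cost = 2500  # annual cost of running the machine, USD (assumption)

labor_npv = npv([labor_cost] * years, rate)
operating_npv = npv([operating_cost] * years, rate)

# Maximum purchase price: the upfront payment that makes the two options equal
max_price = labor_npv - operating_npv
print(f"Maximum technology price a grower could pay: ${max_price:,.0f}")
```

Any technology priced below this break-even figure leaves the grower better off than continuing to hire the labor, which is the decision rule the study's task-level comparisons support.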

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 233