Search results for: quantum calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1784

224 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer

Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160), as commonly used at the MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the collapsed cone (CC) convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher than VMAT for the esophagus, normal brain and thyroid, respectively. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS.
The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% of points passed), and QA results were greater than 98%. The calculations for maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
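As an illustrative aside, the relative standard error quoted for the repeated TLD deliveries is conventionally the standard error of the mean divided by the mean. A minimal sketch, with hypothetical dose readings (the paper's raw TLD values are not given):

```python
import statistics

def relative_standard_error(readings):
    """Relative standard error (%) of repeated dose readings:
    the standard error of the mean divided by the mean, times 100."""
    mean = statistics.mean(readings)
    sem = statistics.stdev(readings) / len(readings) ** 0.5
    return 100.0 * sem / mean

# Hypothetical TLD doses (Gy) from three deliveries at one capsule location
doses = [6.55, 6.61, 6.58]
print(round(relative_standard_error(doses), 2))
```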

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD

Procedia PDF Downloads 486
223 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working in a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, however, it is necessary to deploy the infrastructure required for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the agent community in charge of the overall support and, more specifically, on the positioning and navigation methods implemented in RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main limiting factors for precision and resolution, such as phase and frequency noise in RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectral regions is considered both in terms of absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, in coexistence with a communications network used by the individual agents to communicate both within the exploring community and with the mission support agents.
The proposed approach represents a substantial improvement in planetary navigation since it provides self-localization capabilities for robotic agents characterized by very low mass, volume and power budgets, thus enabling precise navigation for agents of reduced dimensions. Furthermore, a common, shared localization radiofrequency infrastructure enables new interaction mechanisms such as the spatial arrangement of agents over the area of interest for distributed sensing.
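To make the range-calculation idea concrete: one classical RF method infers distance from the round-trip carrier phase delay, which is unambiguous only within half a wavelength. The sketch below is a generic illustration of this principle, not the paper's specific method; the 400 MHz UHF carrier and the measured phase are hypothetical values.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_phase(phase_rad, carrier_hz):
    """Two-way (round-trip) range from a measured carrier phase delay:
    phase = 2*pi*(2d)/lambda, so d = phase*lambda/(4*pi).
    Valid only within the half-wavelength ambiguity interval."""
    wavelength = C / carrier_hz
    return (phase_rad / (2 * math.pi)) * wavelength / 2

# Hypothetical: UHF carrier at 400 MHz, measured phase delay of pi/2 rad
print(range_from_phase(math.pi / 2, 400e6))
```

In practice, phase and frequency noise on the reference carrier (as analyzed in the paper) set the floor on how finely this phase, and hence the range, can be resolved.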

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 270
222 The Effect of Research Unit Clique-Diversity and Power Structure on Performance and Originality

Authors: Yue Yang, Qiang Wu, Xingyu Gao

Abstract:

"Organized research units" have always been an important part of academia. According to the type of organization, there are public research units, university research units, and corporate research units. Existing research has explored the research unit in some depth from several perspectives. However, there is a research gap on the closer interaction between the three from a network perspective and the impact of this interaction on their performance as well as originality. Cliques are a special kind of structure under the concept of cohesive subgroups in the field of social networks, representing particularly tightly knit teams in a network. This study develops the concepts of the diversity of clique types and the diversity of clique geography based on cliques, starting from the diversity of collaborative activities characterized by them. Taking research units as subjects and assigning values to their power in cliques based on occupational age, we explore the impact of clique diversity and clique power on their performance as well as originality and the moderating role of clique relationship strength and structural holes in them. By collecting 9094 articles published in the field of quantum communication at WoSCC over the 15 years 2007-2021, we processed them to construct annual collaborative networks between a total of 533 research units and measured the network characteristic variables using Ucinet. It was found that the type and geographic diversity of cliques promoted the performance and originality of the research units, and the strength of clique relationships positively moderated the positive effect of the diversity of clique types on performance and negatively affected the promotional relationship between the geographic diversity of cliques and performance. It also negatively affected the positive effects of clique-type diversity and clique-geography diversity on originality. 
Structural holes positively moderated the facilitating effect of both types of factional diversity on performance and originality. Clique power promoted the performance of the research unit, but unfavorably affected its performance on novelty. Faction relationship strength facilitated the relationship between faction rights and performance and showed negative insignificance for clique power and originality. Structural holes positively moderated the effect of clique power on performance and originality.
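A diversity measure of the kind used for clique types or geographies is often operationalized as Blau's heterogeneity index over category shares. The abstract does not state which index was used, so the sketch below is one plausible, standard choice with hypothetical category data:

```python
from collections import Counter

def blau_index(categories):
    """Blau's heterogeneity index: 1 - sum(p_i^2) over category
    proportions p_i. 0 means fully homogeneous; values approaching 1
    mean highly diverse membership."""
    counts = Counter(categories)
    n = len(categories)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical clique partners of one research unit, by organization type
types = ["university", "university", "public", "corporate"]
print(blau_index(types))
```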

Keywords: research unit, social networks, clique structure, clique power, diversity

Procedia PDF Downloads 32
221 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, smooth extremes of the system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods. The random set approach has therefore been proposed for reliability analysis of geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations.
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to the in situ measurements, and good agreement was observed. The comparison also showed that the random set finite element method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
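The belief/plausibility construction can be illustrated in miniature. Given focal intervals for the system response with their basic probability assignments, the belief that the response stays below a threshold sums the masses of intervals lying entirely below it, while plausibility sums the masses of intervals that at least reach it. The interval values below are hypothetical, not the paper's data:

```python
def belief_plausibility(focal_sets, threshold):
    """Belief and plausibility that the response is <= threshold,
    given focal intervals (lo, hi) with basic probability assignments.
    Belief counts intervals entirely below the threshold; plausibility
    counts intervals whose lower bound does not exceed it."""
    bel = sum(m for (lo, hi), m in focal_sets if hi <= threshold)
    pl = sum(m for (lo, hi), m in focal_sets if lo <= threshold)
    return bel, pl

# Hypothetical horizontal-displacement intervals (mm) with assignments
focal = [((8.0, 14.0), 0.4), ((10.0, 22.0), 0.4), ((18.0, 30.0), 0.2)]
bel, pl = belief_plausibility(focal, threshold=20.0)
print(bel, pl)
```

The pair (belief, plausibility) brackets the unknown true probability that the displacement stays below the threshold, which is exactly how the failure check in the last sentence above is framed.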

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 247
220 2106 kA/cm² Peak Tunneling Current Density in GaN-Based Resonant Tunneling Diode with an Intrinsic Oscillation Frequency of ~260 GHz at Room Temperature

Authors: Fang Liu, JunShuai Xue, JiaJia Yao, GuanLin Wu, ZuMao Li, XueYan Yang, HePeng Zhang, ZhiPeng Sun

Abstract:

Terahertz sources have been in great demand over the last two decades for many photonic and electronic applications, and the III-nitride resonant tunneling diode (RTD) is one of the promising candidates for portable and compact THz sources. A room-temperature microwave oscillator based on a GaN/AlN resonant tunneling diode is reported in this work. The devices, grown by plasma-assisted molecular-beam epitaxy on free-standing c-plane GaN substrates, exhibit highly repeatable and robust negative differential resistance (NDR) characteristics at room temperature. To improve the interface quality in the active region of the RTD, indium-surfactant-assisted growth is adopted to enhance the surface mobility of metal atoms on the growing film front. Thanks to the lowered valley current associated with the suppression of threading dislocation scattering on the low-dislocation GaN substrate, a record-high peak current density of 2.1 MA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 1.2 is obtained, which is the best result reported for nitride-based RTDs to date when the peak current density and PVCR values are considered simultaneously. When biased within the NDR region, microwave oscillations are measured with a fundamental frequency of 0.31 GHz, yielding an output power of 5.37 µW. Impedance mismatch accounts for the limited output power and oscillation frequency described above. The measured intrinsic capacitance is only 30 fF. Using a small-signal equivalent circuit model, the maximum intrinsic frequency of oscillation for these diodes is estimated to be ~260 GHz. This work demonstrates a microwave oscillator based on the resonant tunneling effect, which can meet the demands of terahertz spectral devices and, more importantly, provides guidance for the fabrication of complex nitride terahertz and quantum effect devices.
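The small-signal estimate mentioned above is commonly done via the resistive cutoff frequency of the RTD equivalent circuit. The sketch below uses the textbook formula for that circuit; the resistance values are hypothetical placeholders (the abstract reports only the 30 fF capacitance), so the printed frequency is illustrative, not the paper's ~260 GHz figure.

```python
import math

def resistive_cutoff_hz(r_nd, r_s, c_f):
    """Resistive cutoff (maximum oscillation) frequency of an RTD from
    its small-signal equivalent circuit: magnitude of the negative
    differential resistance |R_nd| (ohm), series resistance R_s (ohm),
    and device capacitance C (F):
        f_max = sqrt(|R_nd|/R_s - 1) / (2*pi*|R_nd|*C)."""
    return math.sqrt(r_nd / r_s - 1.0) / (2 * math.pi * r_nd * c_f)

# Hypothetical values: |R_nd| = 50 ohm, R_s = 5 ohm, C = 30 fF
f = resistive_cutoff_hz(50.0, 5.0, 30e-15)
print(f / 1e9, "GHz")
```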

Keywords: GaN resonant tunneling diode, peak current density, microwave oscillation, intrinsic capacitance

Procedia PDF Downloads 109
219 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical applications, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Oradea thermoelectric power plant (still closed today). Basic Methods: Solving the theoretical turbulent thermal pollution problem is particularly difficult. By using semi-empirical theories, or by simplifying the assumptions made on the basis of experimental measurements, a mathematical model can be elaborated for further numerical simulations. Three zones of the flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined, with correction factors based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. The average temperature is calculated using a solution similar to that for the velocity, by analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) of four different diameters between 200 and 500 mm, as existed at the Oradea thermoelectric power plant.
Conclusions: A superposition was made between the molecular viscosity and the turbulent one, followed by the addition of the molecular and turbulent transfer coefficients, as needed to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The obtained results lie within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation is obtained between the Stanton number and the Prandtl number for a specific flow (with its associated Reynolds number).
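For context on the Stanton-Prandtl relationship discussed above, a classical semi-empirical baseline of the kind the paper compares against is the Dittus-Boelter correlation for turbulent pipe flow, from which the Stanton number follows as St = Nu/(Re·Pr). This is a textbook correlation shown for illustration, not the paper's corrected model, and the Re/Pr values are hypothetical:

```python
def stanton_dittus_boelter(re, pr):
    """Stanton number from the classical Dittus-Boelter correlation for
    heating in turbulent pipe flow: Nu = 0.023 * Re^0.8 * Pr^0.4,
    with St = Nu / (Re * Pr)."""
    nu = 0.023 * re ** 0.8 * pr ** 0.4
    return nu / (re * pr)

# Hypothetical turbulent water flow in a pipe: Re = 1e5, Pr = 7
print(stanton_dittus_boelter(1e5, 7.0))
```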

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 141
218 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify the maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in each of five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information typical of vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements were developed for each representative vineyard and evaluated using a stochastic simulation model. Labor costs for selected vineyard production tasks were evaluated for the potential of the new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed the calculation of a maximum price for new technology at which the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology.
Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
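The break-even logic above reduces to a standard NPV calculation: discounting the annual labor cost a technology would displace gives the maximum price a grower could pay for it. A minimal sketch with hypothetical numbers (the representative budgets' actual labor costs and discount rate are not given here), ignoring the technology's own operating costs for simplicity:

```python
def npv(cashflows, rate):
    """Net present value of end-of-year cashflows at a given
    annual discount rate: sum of cf_t / (1 + rate)^t."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cashflows, start=1))

# Hypothetical: $20,000/yr of labor cost avoided over 7 years, 6% discount
labor_savings = [20_000.0] * 7
max_tech_price = npv(labor_savings, 0.06)
print(round(max_tech_price, 2))
```

Under these assumptions, paying any more than this NPV for the machine would make the grower worse off than continuing to hire the labor.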

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 240
217 Oligoalkylamine Modified Poly(Amidoamine) Generation 4.5 Dendrimer for the Delivery of Small Interfering RNA

Authors: Endris Yibru Hanurry, Wei-Hsin Hsu, Hsieh-Chih Tsai

Abstract:

In recent years, the discovery of small interfering RNAs (siRNAs) has attracted great attention for the treatment of cancer and other diseases. However, the therapeutic efficacy of siRNAs faces many drawbacks, including a short half-life in blood circulation, poor membrane penetration, weak endosomal escape, and inadequate release into the cytosol. To overcome these drawbacks, we designed a non-viral vector by conjugating polyamidoamine generation 4.5 dendrimer (PDG4.5) with diethylenetriamine (DETA) and tetraethylenepentamine (TEPA), followed by binding with siRNA to form polyplexes through electrostatic interaction. The results of ¹H nuclear magnetic resonance (NMR), ¹³C NMR, correlation spectroscopy, heteronuclear single-quantum correlation spectroscopy, and Fourier transform infrared spectroscopy confirmed the successful conjugation of DETA and TEPA with PDG4.5. The size, surface charge, morphology, binding ability, stability, release assay, toxicity, and cellular internalization were then analyzed to explore the physicochemical and biological properties of the PDG4.5-DETA and PDG4.5-TEPA polyplexes at specific N/P ratios. The polyplexes (N/P = 8) exhibited spherical nano-sized particles (125 and 85 nm) with optimal surface charge (13 and 26 mV), showed strong siRNA binding ability, protected the siRNA against enzyme digestion, and displayed acceptable biocompatibility toward HeLa cells. Qualitatively, fluorescence microscopy images revealed the delocalization (Manders' coefficient 0.63 and 0.53 for PDG4.5-DETA and PDG4.5-TEPA, respectively) of the polyplexes and the translocation of the siRNA throughout the cytosol, demonstrating decent cellular internalization and intracellular biodistribution of the polyplexes in HeLa cells. Quantitatively, flow cytometry indicated that a significant (P < 0.05) amount of siRNA was internalized by cells treated with the PDG4.5-DETA (68.5%) and PDG4.5-TEPA (73%) polyplexes.
Overall, PDG4.5-DETA and PDG4.5-TEPA were ideal nanocarriers of siRNA in vitro and might serve as promising candidates for in vivo studies and future pharmaceutical applications.

Keywords: non-viral carrier, oligoalkylamine, poly(amidoamine) dendrimer, polyplexes, siRNA

Procedia PDF Downloads 110
216 Carbon Nanotube Field Effect Transistor - a Review

Authors: P. Geetha, R. S. D. Wahida Banu

Abstract:

The crowning advances in silicon-based electronic technology have dominated the computation world for the past decades. The captivating performance of Si devices lies in the sustained scaling down of their physical dimensions, thereby increasing device density and improving performance. However, fundamental physical, technological, economic, and manufacturing limitations restrict further miniaturization of Si-based devices. The pitfalls of scaling down the devices include process variation, short channel effects, high leakage currents, and reliability concerns. To fix these problems, it is necessary either to adopt a new concept that overcomes the current obstacles or to support the existing concept with different materials. The new concepts are spintronics, quantum computation, or two-terminal molecular devices. Alternatively, the presently used, well-known three-terminal devices can be modified with different materials suited to addressing the scaling-down difficulties. The first approach lies in the far future since it requires considerable effort; the second is the more promising path forward. Modelling paves the way to know not only the current-voltage characteristics but also the performance of new devices. So, it is desirable to model a new device with suitable gate control and project its abilities towards handling high current, high power, high frequency, short delay, and high velocity with excellent electronic and optical properties. Carbon nanotube has become a thriving material to replace silicon in nanodevices. A well-planned, optimized utilization of this carbon material leads to many more advantages. The unique nature of this organic material allows for developments in almost all fields of application, from the automobile industry to medical science, and especially in the electronics field, on which the automation industry depends. Much research work is being done in this area.
This paper reviews the carbon nanotube field effect transistor with various gate configurations, numbers of channel elements, CNT wall configurations, and different modelling techniques.

Keywords: array of channels, carbon nanotube field effect transistor, double gate transistor, gate wrap around transistor, modelling, multi-walled CNT, single-walled CNT

Procedia PDF Downloads 296
215 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India

Authors: Soumita Banerjee

Abstract:

The state of West Bengal is watered by innumerable rivers, which differ in nature between the northern and southern parts of the state. The southern part of West Bengal is mainly drained by the river Bhagirathi-Hooghly, whose major distributaries and tributaries have divided this major river basin into many sub-basins, such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar and Kangsabati sub-basins, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah and South 24-Parganas. West Bengal has a huge number of flood-prone blocks in its southern part. The factors responsible for flooding are the shape and size of the catchment area, its steep gradient from plateau to flat terrain, river bank erosion and siltation, tidal conditions (especially in the lower Ganga basin), and the very poor maintenance of embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role both in generating (through the release of water) and in controlling flood situations. This year the whole of Gangetic West Bengal was flooded due to high-intensity, long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods at times also occur with the release of water from the dams of neighbouring states such as Jharkhand. Other than embankments, there are no structural measures for combating floods in West Bengal. This paper tries to analyse the reasons behind this year's flood situation, especially with the help of climatic data collected from the India Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis.
Based on a threshold value derived from the calculation of past available flood data, it is possible to predict flood events that may occur in the near future, and with the help of social media the warning can be spread within a very short span of time to alert the public. On a larger, governmental scale, raising the settlements situated on either bank of the river can yield better results than building embankments.
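The threshold-based prediction described above amounts to flagging days whose rainfall exceeds a value derived from past flood events. A minimal sketch; the daily rainfall series and the 100 mm threshold are hypothetical, not values from the paper's data:

```python
def flood_alerts(daily_rainfall_mm, threshold_mm):
    """Return the 1-based day indices whose rainfall exceeds a
    threshold derived from past flood records."""
    return [day for day, mm in enumerate(daily_rainfall_mm, start=1)
            if mm > threshold_mm]

# Hypothetical daily monsoon rainfall (mm) checked against 100 mm
rain = [12.0, 45.0, 130.5, 88.0, 152.0, 60.0]
print(flood_alerts(rain, 100.0))
```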

Keywords: dam failure, embankments, flood, rainfall

Procedia PDF Downloads 203
214 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing calculated results that reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. Patient-specific physiological data from 5 volunteers were collected non-invasively as the personalization objectives of the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted against the collected data and waveforms to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. The results show a slight error between the collected and simulated data.
The average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight error demonstrates the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM of the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which is applicable to the numerical simulation of patient-specific hemodynamics.
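The optimization loop described above (an RMSE objective minimized by simulated annealing) can be sketched generically. This is a minimal one-parameter toy, not the paper's closed-loop circulatory model: the "measured" waveform, the linear forward model, and all annealing settings (step size, temperature schedule, 500 iterations) are illustrative assumptions.

```python
import math
import random

def rmse(simulated, measured):
    """Root mean square error between simulated and measured samples."""
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured))
                     / len(measured))

def anneal(objective, x0, step=0.1, t0=1.0, cooling=0.95, iters=500):
    """Minimal simulated-annealing loop over one scalar parameter:
    accept improvements always, worsenings with probability
    exp(-delta/T), and cool T geometrically."""
    random.seed(0)  # deterministic for reproducibility
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        f = objective(cand)
        if f < fx or random.random() < math.exp((fx - f) / t):
            x, fx = cand, f
            if f < best_f:
                best_x, best_f = cand, f
        t *= cooling
    return best_x, best_f

# Toy problem: recover a scalar model parameter whose true value is 1.3
measured = [1.3 * v for v in (1.0, 2.0, 3.0)]
objective = lambda r: rmse([r * v for v in (1.0, 2.0, 3.0)], measured)
x, err = anneal(objective, x0=0.5)
print(round(x, 2))
```

In the paper's setting the scalar parameter would be replaced by the sensitive LPM parameters and the toy forward model by the closed-loop circuit simulation.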

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 119
213 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state of charge (SOC), state of power (SOP), state of health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never be a prognostic model that predicts battery state of health and avoids a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
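For reference, the RC equivalent circuit mentioned at the start of this abstract, the baseline the physics-based models improve on, is cheap enough to step in a BMS loop. Below is a minimal 1-RC discretization under constant discharge current; all parameter values (OCV, R0, R1, C1) are hypothetical illustration values, and a real BMS would also track SOC and an OCV-SOC curve.

```python
import math

def rc_terminal_voltage(ocv, current_a, r0, r1, c1, dt, steps):
    """Terminal voltage of a 1-RC equivalent-circuit cell under a
    constant discharge current (positive = discharge). The RC-branch
    voltage obeys dv1/dt = -v1/(r1*c1) + i/c1; for constant current
    the exact discrete update is v1 <- a*v1 + r1*(1-a)*i with
    a = exp(-dt/(r1*c1)). Terminal voltage = OCV - i*r0 - v1."""
    v1 = 0.0
    alpha = math.exp(-dt / (r1 * c1))
    for _ in range(steps):
        v1 = alpha * v1 + r1 * (1 - alpha) * current_a
    return ocv - current_a * r0 - v1

# Hypothetical cell: 3.7 V OCV, 10 A discharge, R0 = 5 mOhm,
# R1 = 10 mOhm, C1 = 2000 F, stepped for 60 s at 1 s resolution
v = rc_terminal_voltage(3.7, 10.0, 0.005, 0.010, 2000.0, 1.0, 60)
print(round(v, 4))
```

The single particle and multi-particle models replace this lumped circuit with electrode-scale diffusion and charge-transfer physics, which is what enables the Li⁺ plating-potential prediction the circuit model cannot provide.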

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 88
212 Physicochemical Investigation of Caffeic Acid and Caffeinates with Chosen Metals (Na, Mg, Al, Fe, Ru, Os)

Authors: Włodzimierz Lewandowski, Renata Świsłocka, Aleksandra Golonko, Grzegorz Świderski, Monika Kalinowska

Abstract:

Caffeic acid (3,4-dihydroxycinnamic acid) occurs in a free form or as ester conjugates in many fruits, vegetables and seasonings, including plants used for medical purposes. Caffeic acid is present in propolis – a substance with exceptional healing properties used in natural medicine since ancient times. The antioxidant, antibacterial, anti-inflammatory and anticarcinogenic properties of caffeic acid are widely described in the literature. The biological activity of chemical compounds can be modified by the synthesis of their derivatives or metal complexes, and the structure of the compounds determines their biological properties. This work continues a broader topic concerning the correlation between the electronic charge distribution and the biological (anticancer and antioxidant) activity of chosen phenolic acids and their metal complexes. In the framework of this study, new metal complexes of sodium, magnesium, aluminium, iron(III), ruthenium(III) and osmium(III) with caffeic acid were synthesized. The spectroscopic properties of these compounds were studied by means of FT-IR, FT-Raman, UV-Vis, and ¹H and ¹³C NMR. Quantum-chemical calculations (at the B3LYP/LANL2DZ level) of caffeic acid and selected complexes were performed. Moreover, the antioxidant properties of the synthesized complexes were studied in relation to selected stable radicals (reduction of DPPH and reduction of ABTS). On the basis of the differences in the number, intensity and location of the bands in the IR, Raman, UV-Vis and NMR spectra of caffeic acid and its metal complexes, the effect of the metal cations on the electronic system of the ligand was discussed. The geometry, theoretical spectra and electronic charge distribution were calculated using the Gaussian 09 program.
The geometric aromaticity indices (Aj – the normalized function of the variance in bond lengths; BAC – the bond alternation coefficient; HOMA – the harmonic oscillator model of aromaticity; and I₆ – Bird's index) were calculated, and the changes in the aromaticity of caffeic acid and its complexes were discussed. This work was financially supported by the National Science Centre, Poland, under research project number 2014/13/B/NZ7/02-352.
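Of the aromaticity indices named above, HOMA has the simplest closed form, HOMA = 1 − (α/n) Σ(Ropt − Ri)². A minimal sketch follows; the CC-bond parameters used as defaults (α = 257.7, Ropt = 1.388 Å) are standard literature values, assumed here rather than taken from the abstract:

```python
def homa(bond_lengths, r_opt=1.388, alpha=257.7):
    """Harmonic Oscillator Model of Aromaticity:
    HOMA = 1 - (alpha/n) * sum((r_opt - r_i)^2), bond lengths in angstroms.
    HOMA approaches 1 for a fully aromatic ring and drops toward (or below)
    0 as bond-length alternation grows."""
    n = len(bond_lengths)
    return 1.0 - (alpha / n) * sum((r_opt - r) ** 2 for r in bond_lengths)

# Benzene-like ring: six equal CC bonds of 1.397 angstrom -> HOMA near 1
homa_benzene = homa([1.397] * 6)
# Strongly alternating single/double bonds -> much lower index
homa_alternating = homa([1.33, 1.47] * 3)
```

Metal complexation shifts the ring bond lengths, so recomputing the index from optimized geometries is how the change in aromaticity discussed above is quantified.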

Keywords: antioxidant properties, caffeic acid, metal complexes, spectroscopic methods

Procedia PDF Downloads 191
211 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand

Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat

Abstract:

Precise Point Positioning (PPP) is a technique used to improve accuracy by means of precise satellite orbit and clock correction data, but it is a complicated method with high costs. Currently, several online processing service providers offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services – the Australian Online GPS Processing Service (AUSPOS), CSRS-Precise Point Positioning, and CenterPoint RTX post-processing by Trimble – and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the lowest distance root mean square (DRMS) value, 0.0029, which is good enough for monitoring the movement of tectonic plates. In the second part, we used AUSPOS to process data from the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 (Mw) earthquake occurred north of Sumatra and strongly affected all nearby countries, including Thailand. The earthquake introduced errors into the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large, so the network must be resurveyed regularly to keep the GPS coordinate system accurate. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of a geodetic network consisting of 19 pins in Thailand during October 2013 to November 2017. Finally, the results are displayed on a simulation map produced with the ArcMap program using the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand, which moved 11.04 cm in the south-western direction.
Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., toward the direction observed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be surveyed continuously in order to maintain the accuracy of the geodetic network of Thailand.
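The Inverse Distance Weighting interpolation used for the simulation map can be sketched as follows; the pin coordinates and displacement values below are hypothetical placeholders, not the survey data:

```python
def idw_interpolate(x, y, samples, power=2.0):
    """Inverse Distance Weighting: estimate a value at (x, y) from
    scattered (xi, yi, vi) samples; nearer samples weigh more heavily."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi                 # query point coincides with a sample
        w = d2 ** (-power / 2.0)      # weight = 1 / distance**power
        num += w * vi
        den += w
    return num / den

# Hypothetical pin displacements (cm) at planar coordinates (km)
pins = [(0.0, 0.0, 11.04), (10.0, 0.0, 5.0), (0.0, 10.0, 6.5)]
est = idw_interpolate(2.0, 2.0, pins)
```

The interpolated value always falls within the range of the sample values, which is why IDW is a reasonable choice for smoothing a sparse network of 19 pins into a continuous displacement map.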

Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting

Procedia PDF Downloads 171
210 Modulating Photoelectrochemical Water-Splitting Activity by Charge-Storage Capacity of Electrocatalysts

Authors: Yawen Dai, Ping Cheng, Jian Ru Gong

Abstract:

Photoelectrochemical (PEC) water splitting using semiconductors (SCs) provides a convenient way to convert sustainable but intermittent solar energy into clean hydrogen energy, and it has been regarded as one of the most promising technologies for addressing the energy crisis and environmental pollution in modern society. However, the record energy conversion efficiency of a PEC cell (~3%) is still far below the commercialization requirement (~10%). The sluggish kinetics of the oxygen evolution reaction (OER) half reaction on photoanodes is a significant limiting factor of PEC device efficiency, and electrocatalysts (ECs) are routinely deposited on SCs to accelerate hole injection for the OER. However, an active EC does not guarantee enhanced PEC performance, since the newly formed SC-EC interface complicates the interfacial charge behavior. Herein, α-Fe2O3 photoanodes coated with Co3O4 and CoO ECs are taken as the model system to gain a fundamental understanding of the EC-dependent interfacial charge behavior. Intensity-modulated photocurrent spectroscopy and electrochemical impedance spectroscopy were used to investigate the competition between interfacial charge transfer and recombination, which was found to be dominated by the charge storage capacities of the ECs. The combined results indicate that both ECs can store holes and increase the hole density on the photoanode surface. This is a double-edged sword: the stored holes benefit the multi-hole OER but also aggravate SC-EC interfacial charge recombination through Coulomb attraction, leading to a nonmonotonic variation of PEC performance with increasing surface hole density. Co3O4 has a low hole storage capacity, which brings limited interfacial charge recombination, so the increased surface holes can be efficiently utilized for the OER to generate enhanced photocurrent.
In contrast, CoO has an overlarge hole storage capacity that causes severe interfacial charge recombination, which hinders hole transfer to the electrolyte for the OER. Therefore, the PEC performance of α-Fe2O3 is improved by Co3O4 but decreased by CoO, despite the similar electrocatalytic activity of the two ECs. First-principles calculations were conducted to further reveal how the charge storage capacity depends on the intrinsic properties of the EC, demonstrating that the larger hole storage capacity of CoO compared to Co3O4 is determined by their Co valence states and original Fermi levels. This study proposes a new strategy to manipulate interfacial charge behavior and the resultant PEC performance through the charge storage capacity of ECs, providing insightful guidance for interface design in PEC devices.

Keywords: charge storage capacity, electrocatalyst, interfacial charge behavior, photoelectrochemistry, water-splitting

Procedia PDF Downloads 118
209 The Introduction of the Revolution Einstein’s Relative Energy Equations in Even 2n and Odd 3n Light Dimension Energy States Systems

Authors: Jiradeach Kalayaruan, Tosawat Seetawan

Abstract:

This paper studied the energy of natural systems by looking at the overall image throughout the universe. The energy of natural systems was developed from Einstein's energy equation. The researchers used new ideas called even 2n and odd 3n light dimension energy states systems, which were developed from Einstein's relativity energy theory equation. In this study, the major methodology the researchers used was the basic principle ideas or beliefs of some religions, such as Buddhism, Christianity, Hinduism, Islam, or Tao, in order to reach new discoveries. The basic beliefs of each religion – Nivara, God, Ether, Atman, and Tao, respectively – were greatly influential ideas for the researchers, used extensively in the study to form new ideas from philosophy. Since the philosophy of each religion was alive with deep insight into the physical nature of relative energy, it connected the basic beliefs to light dimension energy states systems. Unfortunately, Einstein's original relative energy equation showed only even 2n light dimension energy states systems (if n = 1,…,∞). But with these advanced ideas, the researchers multiplied light dimension energy by Einstein's original relative energy equation and obtained a new idea of theoretical physics in odd 3n light dimension energy states systems (if n = 1,…,∞), because, from the basic principle ideas or beliefs of the philosophy of each religion, one has to add the media light dimension energy into Einstein's original relative energy equation. Consequently, the simple picture in deep insight showed that one could touch the light dimension energy of Nivara, God, Ether, Atman, and Tao by light dimension energy. Since light dimension energy was transferred by Nivara, God, Ether, Atman and Tao, the researchers obtained the new equation of odd 3n light dimension energy states systems.
Moreover, the researchers expect to be able to solve overview problems of all light dimension energy in all nature relative energy, which is developed from Einstein's relative energy equation. The finding of the study was called 'super nature relative energy' (in odd 3n light dimension energy states systems (if n = 1,…,∞)). From the new ideas above, one can perform the summation of even 2n and odd 3n light dimension energy states systems over all natural light dimension energy states systems. In the future, the researchers expect the new idea to be used in theoretical physics, where it would be useful to the development of quantum mechanics, engineering, the medical profession, transportation, communication, scientific inventions, technology, etc.

Keywords: 2n light dimension energy states systems effect, Ether, even 2n light dimension energy states systems, nature relativity, Nivara, odd 3n light dimension energy states systems, perturbation points energy, relax point energy states systems, stress perturbation energy states systems effect, super relative energy

Procedia PDF Downloads 319
208 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in several noise assessment procedures in which noise assessments by different laboratories were performed simultaneously, and we identified some significant differences in noise assessment results between laboratories in Slovenia. Although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods for predictive noise modelling of planned projects. We analyzed the input data, methods and results of predictive noise modelling for two planned industrial projects, each performed independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing sources greatly increased the comparability of the modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable.
Differences in noise modelling results between laboratories were below 5 dBA, which was the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict the noise emissions of planned projects, since, owing to the complexity of the procedure, the formulae are not applied strictly; 2) noise measurements are an important tool for minimizing the noise assessment errors of planned projects and, in the case of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) national guidelines should be drawn up on the appropriate data, methods, noise source digitization, acoustic model validation, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 213
207 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators

Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang

Abstract:

High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a cone-shaped ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space charge forces within the electron bunch, ultra-high vacuum conditions to preserve the photocathode quantum efficiency, no field emission to prevent gas load when field-emitted electrons impact the vacuum chamber, and finally no voltage breakdown for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200 kV dc with a 10 cm long insulator, and a larger version at 300 kV dc with a 20 cm long insulator. Plans to develop a third photogun operating at 400 kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point junction screening electrodes, mechanical polishing to achieve a mirror-like surface finish, and high voltage conditioning procedures with Kr gas to extinguish field emission.

Keywords: electron guns, high voltage techniques, insulators, vacuum insulation

Procedia PDF Downloads 95
206 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative to the traditional Fiber Reinforced Polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also better meets the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as among the most suitable methods for the simulation of TRCs. They aim to incorporate information pertaining to micro-scale constituent behavior, meso-scale behavior, and macro-scale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be introduced implicitly in the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is introduced explicitly in the macro-scale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches.
Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study is presented of recent results and advances and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
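As one concrete, deliberately simple instance of the analytical homogenization over a representative volume element mentioned above, the Voigt and Reuss rules of mixtures bound the effective modulus of a two-phase composite. The phase moduli and reinforcement fraction below are hypothetical, not values from the reviewed literature:

```python
def rve_moduli(e_reinf, e_matrix, vf):
    """Simplest analytical homogenization over an RVE: the Voigt
    (parallel loading, upper) and Reuss (series loading, lower) bounds
    on the effective elastic modulus of a two-phase composite with
    reinforcement volume fraction vf."""
    voigt = vf * e_reinf + (1.0 - vf) * e_matrix
    reuss = 1.0 / (vf / e_reinf + (1.0 - vf) / e_matrix)
    return voigt, reuss

# Hypothetical TRC-like phases: glass textile (72 GPa), mortar (25 GPa)
upper, lower = rve_moduli(72.0, 25.0, vf=0.05)
```

Real TRC homogenization must also handle anisotropy and nonlinear hardening, which is exactly why the more elaborate computational approaches surveyed above exist; the bounds merely illustrate the idea of replacing fine-scale detail with an effective macro-scale property.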

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 87
205 Spectroscopic Studies and Reddish Luminescence Enhancement with the Increase in Concentration of Europium Ions in Oxy-Fluoroborate Glasses

Authors: Mahamuda Sk, Srinivasa Rao Allam, Vijaya Prakash G.

Abstract:

Eu3+ ions at different concentrations were doped into oxy-fluoroborate glasses of composition 60 B2O3-10 BaF2-10 CaF2-15 CaF2-(5-x) Al2O3-x Eu2O3, where x = 0.1, 0.5, 1.0 and 2.0 mol%. The glasses were prepared by the conventional melt quenching technique and characterized through absorption, photoluminescence (PL), decay, color chromaticity and confocal measurements. The absorption spectra of all the glasses consist of six peaks corresponding to the transitions 7F0→5D2, 7F0→5D1, 7F1→5D1, 7F1→5D0, 7F0→7F6 and 7F1→7F6, respectively. The experimental oscillator strengths, with and without thermal corrections, were evaluated from the absorption spectra. Judd-Ofelt (JO) intensity parameters (Ω2 and Ω4) were evaluated from the photoluminescence spectra of all the glasses. PL spectra of all the glasses were recorded at excitation wavelengths of 395 nm (conventional excitation source) and 410 nm (diode laser) to observe the intensity variation in the PL spectra. All the spectra consist of five emission peaks corresponding to the transitions 5D0→7FJ (J = 0, 1, 2, 3 and 4). Surprisingly, no concentration quenching is observed in the PL spectra. Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum intensity for the transition 5D0→7F2 (612 nm), in the bright red region. The JO parameters derived from the photoluminescence spectra were used to evaluate the essential radiative properties, such as the transition probability (A), radiative lifetime (τR), branching ratio (βR) and peak stimulated emission cross-section (σse), for the 5D0→7FJ (J = 0, 1, 2, 3 and 4) transitions of the Eu3+ ions. The decay of the 5D0 fluorescent level of Eu3+ ions in the title glasses is found to be single exponential for all the studied Eu3+ ion concentrations. A marginal increase in the lifetime of the 5D0 level is noticed as the Eu3+ ion concentration increases from 0.1 mol% to 2.0 mol%.
Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum values of branching ratio, stimulated emission cross-section and quantum efficiency for the transition 5D0→7F2 (612 nm) in the bright red region. The color chromaticity coordinates were also evaluated to confirm the reddish luminescence from these glasses; these coordinates fall exactly in the bright red region. Confocal images were also recorded to confirm the reddish luminescence. From all the results obtained in the present study, it is suggested that the glass with 2.0 mol% Eu3+ ion concentration is suitable for emitting bright red laser light.

Keywords: Europium, Judd-Ofelt parameters, laser, luminescence

Procedia PDF Downloads 219
204 Life Time Improvement of Clamp Structural by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, the process of eliminating unnecessary parts and qualifying parts before assembly is important. Thus, a clamp was designed and fabricated as a holding fixture for the testing process. Testing by trial and error consumes a long time; consequently, simulation was used to improve the part and reduce the time required. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs in it. Hence, simulation was used to study the behavior of stress and compressive force and to improve the clamp life expectancy across all 27 candidate designs, excluding repeated designs. The design combinations were generated following the full factorial rules of the Six Sigma methodology. Six Sigma is a well-structured method for improving the quality level by detecting and reducing the variability of a process; the defect rate thereby decreases while the process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS modeled the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm was recorded. The ANSYS settings were verified by a mesh convergence study, and the percentage error relative to the experimental result was checked; the error must not exceed the acceptable range. The design improvement therefore focuses on the angle, radius, and length that reduce stress while keeping the force within the acceptable range. Fatigue analysis was then performed in ANSYS to guarantee that the lifetime is extended.
The simulation settings were also confirmed by comparison with the actual clamp in order to observe the difference in fatigue between both designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. This study provides settings precise and reliable enough to serve as a reference methodology for future designs. Through the combination and adaptation of the Six Sigma method, finite element analysis, fatigue analysis and linear regression analysis, which lead to accurate calculation, this project is expected to save up to 60 million dollars annually.

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 217
203 Improved Visible Light Activities for Degrading Pollutants on ZnO-TiO2 Nanocomposites Decorated with C and Fe Nanoparticles

Authors: Yuvraj S. Malghe, Atul B. Lavand

Abstract:

In recent years, semiconductor photocatalytic degradation processes have attracted a lot of attention and are widely used for the destruction of organic pollutants present in wastewater. Among the various semiconductors, titanium dioxide (TiO2) is the most popular photocatalyst due to its excellent chemical stability, non-toxicity, relatively low cost and high photo-oxidation power. Zinc oxide (ZnO), with a band gap energy of 3.2 eV, is known to be a suitable alternative to TiO2 due to its high quantum efficiency; however, it corrodes in acidic media. Unfortunately, both TiO2 and ZnO are active only under UV light because of their wide band gaps. Sunlight consists of about 5-7% UV light, 46% visible light and 47% infrared radiation. In order to utilize the major portion of sunlight (the visible spectrum), it is necessary to modify the band gap of TiO2 as well as ZnO. This can be done in several ways, such as semiconductor coupling or doping the material with metals or non-metals. Doping TiO2 with transition metals like Fe and Co, or with non-metals such as N, C or S, extends its absorption from the UV to the visible region. In the present work, pure, carbon (C)-doped, and carbon-iron (C, Fe) co-doped nanosized ZnO-TiO2 nanocomposites were synthesized using the reverse microemulsion method. These composites were characterized using X-ray diffraction (XRD), energy dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), UV-visible spectrophotometry and X-ray photoelectron spectroscopy (XPS). The visible light photocatalytic activities of the synthesized nanocomposites were investigated for the degradation of an aqueous malachite green (MG) solution.
The C, Fe co-doped ZnO-TiO2 nanocomposite exhibits the best photocatalytic activity, showing a threefold increase. The effects of the amount of catalyst, pH and concentration of the MG solution on the photodegradation rate were studied, as were the stability and reusability of the photocatalyst.

Keywords: malachite green, nanocomposite, photocatalysis, titanium dioxide, zinc oxide

Procedia PDF Downloads 269
202 Yield and Physiological Evaluation of Coffee (Coffea arabica L.) in Response to Biochar Applications

Authors: Alefsi D. Sanchez-Reinoso, Leonardo Lombardini, Hermann Restrepo

Abstract:

Colombian coffee is recognized worldwide for its mild flavor and aroma. Its cultivation generates a large amount of waste, such as fresh pulp, which leads to environmental, health, and economic problems. Obtaining biochar (BC) by pyrolysis of coffee pulp and incorporating it into the soil can complement the mineral nutrition of the crop. The objective was to evaluate the effect of the application of BC obtained from coffee pulp on the physiology and agronomic performance of the Castillo variety coffee crop (Coffea arabica L.). The research was carried out as a field experiment in Tolima, using a three-year-old commercial coffee crop. Four doses of BC (0, 4, 8 and 16 t ha-1) and four levels of chemical fertilization (CF) (0%, 33%, 66% and 100% of the nutritional requirements) were evaluated. Three groups of variables were recorded during the experiment: i) physiological parameters such as gas exchange, the maximum quantum yield of PSII (Fv/Fm), biomass, and water status; ii) physical and chemical characteristics of the soil in a commercial coffee crop; and iii) physicochemical and sensory parameters of roasted beans and coffee beverages. The results indicated a positive effect in plants treated with 8 t ha-1 BC and fertilization levels of 66% and 100%. In addition, the application of 16 t ha-1 BC increased the soil pH and microbial respiration and reduced the apparent density and state of aggregation of the soil compared to 0 t ha-1 BC. Applications of 8 and 16 t ha-1 BC with 66-100% chemical fertilization registered greater sensitivity to the aromatic compounds of roasted coffee beans in the electronic nose. Amendments of BC between 8 and 16 t ha-1 and CF between 66% and 100% increased the content of total soluble solids (TSS), reduced the pH, and increased the titratable acidity in beverages made from roasted coffee beans.
In conclusion, 8 t ha-1 BC from coffee pulp can be an alternative to supplement the nutrition of coffee seedlings and trees. Applications between 8 and 16 t ha-1 BC support coffee soil management strategies and promote the use of this solid waste. BC as a complement to chemical fertilization showed a positive effect on the aromatic profile of roasted coffee beans and on cup quality attributes.

Keywords: crop yield, cup quality, mineral nutrition, pyrolysis, soil amendment

Procedia PDF Downloads 83
201 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study

Authors: Anna Ching-Yu Wong

Abstract:

Introduction: Textbooks play a sizable portion of the overall cost of higher education students. It is a board consent that open education resources (OER) reduce the te4xtbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much. This study presents an approach for calculating OER savings by using SUNY Canton NON-OER courses (N=233) to estimate the potentially textbook savings for one semester – Fall 2022. The purpose in collecting data is to understand how much potentially saved from using OER materials and to have a record for future further studies. Literature Reviews: In the past years, researchers identified the rising cost of textbooks disproportionately harm students in higher education institutions and how much an average cost of a textbook. For example, Nyamweya (2018) found that on average students save $116.94 per course when OER adopted in place of traditional commercial textbooks by using a simple formula. Student PIRGs (2015) used reports of per-course savings when transforming a course from using a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented at the 2016 Open Education Conference on multiple cost-savings studies and concluded $100 was reasonable per-course savings estimates. Ruth (2018) calculated an average cost of a textbook was $79.37 per-course. Hilton, et al (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: Identify NON-OER courses from UcanWeb Class Schedule. Step two: View textbook lists for the classes (Campus bookstore prices). 
Step three: Calculate the average textbook price by averaging the new and used book prices. Step four: Multiply the average textbook price by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies with the size of the college and the number of enrolled students.
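The four-step estimate above can be sketched in a few lines of code. The course prices and enrollments below are purely illustrative, not SUNY Canton bookstore data:

```python
# Sketch of the four-step OER savings estimate described in the abstract.
# All prices and enrollments are hypothetical examples.

def average_textbook_price(new_price: float, used_price: float) -> float:
    """Step three: average the new-book and used-book bookstore prices."""
    return (new_price + used_price) / 2

def course_savings(new_price: float, used_price: float, enrollment: int) -> float:
    """Step four: multiply the average price by the number of students."""
    return average_textbook_price(new_price, used_price) * enrollment

# Hypothetical non-OER courses: (new price, used price, enrollment)
courses = [
    (150.00, 114.90, 30),
    (120.50, 90.00, 25),
]

# Total potential savings across all listed courses
total = sum(course_savings(n, u, e) for n, u, e in courses)
```

With these example inputs, the first course's average price happens to match the study's $132.45 average, and `total` is the sum of per-course savings across the list.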

Keywords: textbook savings, open textbooks, textbook costs assessment, open access

Procedia PDF Downloads 49
200 Visitor Management in the National Parks: Recreational Carrying Capacity Assessment of Çıralı Coast, Turkey

Authors: Tendü H. Göktuğ, Gönül T. İçemer, Bülent Deniz

Abstract:

National parks, which are rich in natural and cultural resource values and are protected in the spirit of sustainable development, are among the most important recreation areas, with demand growing by the day. Increasing or unplanned recreational use negatively affects both resource values and visitor satisfaction. The intent of national park management is to protect natural and cultural resource values while also providing visitors with a quality recreational experience. In this context, current efforts to improve tourism and recreation planning and visitor management have focused on recreational carrying capacity analysis. The aim of this study is to analyze the recreational carrying capacity of Çıralı Coast in the Bey Mountains Coastal National Park, to compare the results of the analysis with the current pattern of use, and to develop alternative management strategies. In the first phase of the study, the annual and daily visitation, the geographic, bio-physical, and managerial characteristics of the park, and the types of recreational use and recreational areas were analyzed. In addition, ecological observations were carried out in order to determine recreation-based pressures on the ecosystems. On-site questionnaires were administered to a sample of 284 respondents in August 2015-2016 to collect data concerning demographics and visit characteristics. In the second phase of the study, the coastal area was separated into four different usage zones, and the methodology proposed by Cifuentes (1992) was used for the capacity analyses. This method enables the calculation of physical, real, and effective carrying capacities by combining environmental, ecological, climatic, and managerial parameters in a formula. The estimated numbers at the three levels of carrying capacity were compared with the current numbers of the national park's visitors.
The study determined that current recreational use in the north of the beach was causing ecological pressures, and that the current visitor numbers in the south of the beach were much higher than the estimated numbers. Based on these results, management strategies were defined and appropriate management tools were developed in accordance with these strategies. The authors are grateful for the financial support of this project by The Scientific and Technological Research Council of Turkey (No: 114O344).
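For readers unfamiliar with the Cifuentes (1992) method, the three capacity levels can be sketched as below. All parameter values are illustrative placeholders, not the actual Çıralı Coast figures, and real applications use site-specific correction factors:

```python
# Minimal sketch of the three-level carrying capacity calculation in the
# Cifuentes (1992) method. All numbers are illustrative, not Çıralı data.

def physical_cc(area_m2: float, area_per_visitor_m2: float,
                open_hours: float, visit_hours: float) -> float:
    """PCC = (usable area / area required per visitor) * rotation factor,
    where the rotation factor is daily open time over average visit time."""
    rotation_factor = open_hours / visit_hours
    return (area_m2 / area_per_visitor_m2) * rotation_factor

def real_cc(pcc: float, correction_factors: list) -> float:
    """RCC reduces PCC by site-specific correction factors (each in [0, 1]),
    e.g. for rainfall, erosion, or temporary closures."""
    rcc = pcc
    for cf in correction_factors:
        rcc *= cf
    return rcc

def effective_cc(rcc: float, management_capacity: float) -> float:
    """ECC scales RCC by the fraction of the required management capacity
    (staff, infrastructure) that is actually available."""
    return rcc * management_capacity

pcc = physical_cc(area_m2=20000, area_per_visitor_m2=5,
                  open_hours=12, visit_hours=4)
rcc = real_cc(pcc, [0.8, 0.9])
ecc = effective_cc(rcc, 0.7)
```

By construction, ECC ≤ RCC ≤ PCC, which is why the three levels are compared against current visitor numbers in turn.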

Keywords: Çıralı Coast, national parks, recreational carrying capacity, visitor management

Procedia PDF Downloads 253
199 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces and the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires so few resources that it can even run on a common desktop PC or laptop. Individual parameters can be adjusted using sliders, and the simulation result appears in an adjacent window, changing in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the computational complexity, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs.
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new manufacturing methods such as self-optimization can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
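The metamodeling workflow described above can be illustrated with a toy example: sample a fast reduced model over a parameter grid, fit a cheap polynomial surrogate, and query the surrogate to locate an optimum on the process map. The abstract does not give the actual reduced ablation model or quality criteria, so a stand-in quadratic is used here:

```python
import numpy as np

# Stand-in for a fast reduced ablation model: maps two process parameters
# (e.g. laser power, feed rate) to a quality criterion. The real model in
# the paper is physics-based; this quadratic is purely illustrative.
def reduced_model(power, feed):
    return (power - 3.0) ** 2 + 2.0 * (feed - 1.5) ** 2

# Generate a data set over the parameter grid with the reduced model ...
powers = np.linspace(1.0, 5.0, 21)
feeds = np.linspace(0.5, 2.5, 21)
P, F = np.meshgrid(powers, feeds)
Q = reduced_model(P, F)

# ... and fit a polynomial metamodel q ~ c0 + c1*p + c2*f + c3*p^2 + c4*f^2
A = np.column_stack([np.ones(P.size), P.ravel(), F.ravel(),
                     P.ravel() ** 2, F.ravel() ** 2])
coeffs, *_ = np.linalg.lstsq(A, Q.ravel(), rcond=None)

# The metamodel can now be queried instantly, e.g. to locate the optimum
# on the process map by brute-force evaluation of the surrogate.
surrogate = (A @ coeffs).reshape(Q.shape)
best = np.unravel_index(np.argmin(surrogate), Q.shape)
```

Because the surrogate is cheap to evaluate, sweeps over parameter pairs (the "process maps" mentioned above) and simultaneous multi-parameter optimization become trivially fast.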

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 197
198 A Modelling of Main Bearings in the Two-Stroke Diesel Engine

Authors: Marcin Szlachetka, Rafal Sochaczewski, Lukasz Grabowski

Abstract:

This paper presents the results of load simulations of the main bearings in a two-stroke diesel engine. A model of the engine lubrication system with connections of its main lubrication nodes, i.e., the connection of the main bearings in the engine block with the crankshaft, the connection of the crankpins with the connecting rods, and the connection of the piston pin with the piston, was created for our calculations, performed using AVL EXCITE Designer. The analysis covers the loads, given as a pressure distribution in the hydrodynamic oil film, the temperature distribution on the main bush surfaces for the specified radial clearance values, as well as the impact of the gas force on the minimum oil film thickness in the main bearings, depending on crankshaft rotational speed and the oil temperature in the bearings. One of the main goals of the research was to determine whether the minimum oil film thickness at which fluid friction occurs can be achieved at each crankshaft speed. Our model calculates various oil film parameters, i.e., its thickness, the pressure distribution within it, and the change in oil temperature. Additionally, it enables an analysis of the oil temperature distribution on the surfaces of the bearing seats. This allows verifying the selected bearing clearances both under normal engine operating conditions and under extreme ones that show a significant temperature increase above the limit value. The research was conducted for several engine crankshaft speeds ranging from 1000 rpm to 4000 rpm. The oil pressure in the bearings ranged from 2 to 5 bar, depending on engine speed, and the oil temperature ranged from 90 to 120 °C. A main bearing clearance of 0.025 mm was adopted for the calculation and analysis. Oil classified as SAE 5W-30 was used for the simulations. The paper discusses selected research results referring to several specific operating points and different temperatures of the lubricating oil in the bearings.
The results show that, for the investigated main bearing bushes, the values fall within the limit ranges despite the bearing oil temperature reaching 120 °C. Even when the bearings are loaded with the maximum pressure, no excessive temperature rise occurs on the bush surfaces: the oil temperature increases by 17 °C, reaching 137 °C at a speed of 4000 rpm. The minimum film thickness at which fluid friction occurs was achieved at every operating point and at every engine crankshaft speed. Acknowledgement: This work was realized in cooperation with the Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.

Keywords: diesel engine, main bearings, opposing pistons, two-stroke

Procedia PDF Downloads 120
197 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly

Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale

Abstract:

Accurate thrust measurement is required for aircraft during takeoff and after a ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if it is insufficient, the takeoff can be aborted and an accident avoided. After a ski-jump, the thrust produced by the engine must be known because the horizontal speed of the aircraft is less than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, of which the two side mounts transfer the engine thrust to the airframe. The third mount takes only the weight component, not any thrust component. In the present method of thrust estimation, the two side mounts are strain gauged. The strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested on a universal testing machine to determine the equivalent elasticity of the assembly. This elasticity value is used in the analytical approach for estimating the engine thrust. The estimated thrust is compared with the test-bed load cell thrust data. The experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: The strain gauges are mounted on the tapered portion of the engine mount sleeve, at two diametrically opposite locations. Both strain gauges on the sleeve lie in the horizontal plane. In this way, they pick up no strain due to the weight of the engine (except a negligible strain due to the material's Poisson's ratio) or due to hoop stress. Only the third-mount strain gauge shows strain when the engine is not running, i.e.,
strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts. The strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side of the sleeve showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated from the strain gauges shows better accuracy at the higher power settings than at the lower ones: the accuracy of the estimated thrust at the maximum power setting is 99.7%, whereas at the lower power setting it is 78%.
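The core analytical step, converting a measured side-mount strain to a force through the equivalent elasticity obtained on the testing machine, can be sketched roughly as below. The stiffness, cross-section, and strain values are hypothetical placeholders, and the real mount geometry would need the full analysis described in the abstract:

```python
# Simplified uniaxial sketch of the strain-to-thrust conversion. All numbers
# (equivalent elasticity, cross-section, strains) are hypothetical examples,
# not the actual engine-mount data from the study.

def mount_force(strain: float, equivalent_elasticity_pa: float,
                cross_section_m2: float) -> float:
    """F = E_eq * A * epsilon (uniaxial Hooke's law for the mount sleeve)."""
    return equivalent_elasticity_pa * cross_section_m2 * strain

def estimated_thrust(strains, e_eq=70e9, area=4e-4):
    """Sum the axial force magnitudes carried by the two side mounts.
    The forward gauge reads compressive (negative) strain and the rear
    gauge tensile (positive), so magnitudes are summed here."""
    return sum(abs(mount_force(s, e_eq, area)) for s in strains)

# Hypothetical gauge readings (m/m) at some power setting
thrust_n = estimated_thrust([-350e-6, 340e-6])
```

In practice the conversion would also account for the mount geometry and the bending contribution the two opposed gauges reveal; this sketch only shows where the measured equivalent elasticity enters the estimate.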

Keywords: engine mounts, finite elements analysis, strain gauge, stress

Procedia PDF Downloads 457
196 Development of Positron Emission Tomography (PET) Tracers for the in-Vivo Imaging of α-Synuclein Aggregates in α-Synucleinopathies

Authors: Bright Chukwunwike Uzuegbunam, Wojciech Paslawski, Hans Agren, Christer Halldin, Wolfgang Weber, Markus Luster, Thomas Arzberger, Behrooz Hooshyar Yousefi

Abstract:

There is a need to develop a PET tracer that enables the diagnosis of alpha-synucleinopathies (Parkinson's disease [PD], dementia with Lewy bodies [DLB], multiple system atrophy [MSA]) and the tracking of their progression in living subjects over time. Alpha-synuclein aggregates (a-syn), which are present at all stages of disease progression, for instance in PD, are a suitable target for in vivo PET imaging. For this reason, we have developed some promising a-syn tracers based on a diarylbisthiazole (DABTA) scaffold. The precursors were synthesized via a modified Hantzsch thiazole synthesis and then radiolabeled via one- or two-step radiofluorination methods. The ligands were initially screened using a combination of molecular dynamics and quantum/molecular mechanics approaches in order to calculate their binding affinity to a-syn (in silico binding experiments). Experimental in vitro binding assays were also performed. The ligands were further screened in other experiments, such as log D measurement, in vitro plasma protein binding and plasma stability, and biodistribution and brain metabolite analyses in healthy mice. Radiochemical yields ranged from 30% to 72%. Molecular docking revealed possible binding sites in a-syn and the free energy of binding to those sites (-28.9 to -66.9 kcal/mol), which correlated with the high binding affinity of the DABTAs to a-syn (Ki as low as 0.5 nM) and their selectivity (> 100-fold) over Aβ and tau, which usually co-exist with a-syn in some pathologies. The log D values ranged from 2.34 to 2.88, which correlated with a free protein fraction of 0.28% to 0.5%. Biodistribution experiments revealed that the tracers are taken up in the brain (5.6-7.3 %ID/g) at 5 min post-injection (p.i.) and cleared out (values as low as 0.39 %ID/g were obtained at 120 min p.i.). Analyses of the mouse brain at 20 min p.i. revealed almost no radiometabolites in the brain in most cases.
It can be concluded that the in silico study presents a new avenue for the rational development of radioligands with suitable features. The results obtained so far are promising and encourage us to further validate the DABTAs in autoradiography, immunohistochemistry, and in vivo imaging in non-human primates and humans.

Keywords: alpha-synuclein aggregates, alpha-synucleinopathies, PET imaging, tracer development

Procedia PDF Downloads 214
195 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation

Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan

Abstract:

Micro-beams are among the most common components of nano-electromechanical systems (NEMS) and micro-electromechanical systems (MEMS). For this reason, static bending, buckling, and free vibration analyses of micro-beams have been the subject of many studies, and micro-beams restrained by elastic foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are used when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and the boundary conditions are derived according to Hamilton's principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analyses; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the use of C₀-continuous shape functions, so shear locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, the power-law parameter, the aspect ratio, and the coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature.
The performance characteristics of the presented formulation were evaluated against other numerical methods, such as the generalized differential quadrature method (GDQM). It was found that similar convergence characteristics were obtained with less computational burden. Moreover, the formulation also provides a direct calculation of the micro-scale-related contributions to the structural response.
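Since the abstract does not give the details of the mixed formulation, a faithful reimplementation would be speculative. As a simpler stand-in, the sketch below uses a standard displacement-based two-node Timoshenko element with reduced (one-point) shear integration, which likewise avoids shear locking with C₀ shape functions, resting on a Winkler-Pasternak (two-parameter) foundation. The symbols EI, kGA, kw, and kp are the usual bending stiffness, shear stiffness, and foundation moduli; the functional grading and couple-stress terms of the paper are omitted:

```python
import numpy as np

def timoshenko_pasternak_deflection(EI, kGA, L, q, kw=0.0, kp=0.0, n=100):
    """Static bending of a simply supported Timoshenko beam on a
    two-parameter (Winkler kw + Pasternak shear layer kp) foundation under
    a uniform load q. Two-node elements, linear shape functions, one-point
    (reduced) shear integration to avoid shear locking.
    Nodal DOFs: [w_i, theta_i]. Returns the midspan deflection."""
    ndof = 2 * (n + 1)
    K = np.zeros((ndof, ndof))
    f = np.zeros(ndof)
    h = L / n
    for e in range(n):
        i = 2 * e
        # bending stiffness (exact, since theta' is constant per element)
        kb = (EI / h) * np.array([[0, 0, 0, 0],
                                  [0, 1, 0, -1],
                                  [0, 0, 0, 0],
                                  [0, -1, 0, 1.0]])
        # shear stiffness, 1-point rule: gamma = w' - theta at the midpoint
        B = np.array([-1 / h, -0.5, 1 / h, -0.5])
        ks = kGA * h * np.outer(B, B)
        # Winkler springs on w (consistent) and Pasternak layer on w'
        kwm = (kw * h / 6) * np.array([[2, 0, 1, 0],
                                       [0, 0, 0, 0],
                                       [1, 0, 2, 0],
                                       [0, 0, 0, 0.0]])
        kpm = (kp / h) * np.array([[1, 0, -1, 0],
                                   [0, 0, 0, 0],
                                   [-1, 0, 1, 0],
                                   [0, 0, 0, 0.0]])
        K[i:i+4, i:i+4] += kb + ks + kwm + kpm
        f[i] += q * h / 2       # consistent nodal load on the w DOFs
        f[i+2] += q * h / 2
    # simply supported: w = 0 at both ends, rotations free
    free = [d for d in range(ndof) if d not in (0, ndof - 2)]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return u[2 * (n // 2)]      # w at the midspan node
```

With the foundation moduli set to zero, the midspan deflection converges to the classical Timoshenko result 5qL⁴/(384EI) + qL²/(8kGA); switching on kw or kp stiffens the response, mirroring the foundation-coefficient studies reported above.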

Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method

Procedia PDF Downloads 132