Search results for: linear guides
175 The Effect of Manure Loaded Biochar on Soil Microbial Communities
Authors: T. Weber, D. MacKenzie
Abstract:
This paper describes an advanced simulation environment for electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation captures the behaviour of non-linear dynamic systems with the required observer structure, running as a parallel real-time simulation based on a state-space representation. The proposed model also covers electrodynamic effects, including ionising effects and the eddy-current distribution. With the proposed method, the spatial distribution of the electromagnetic fields in such systems can be calculated in real time; the spatial temperature distribution can also be used for further purposes. With this system, uncertainties and disturbances can be determined, which provides a more precise estimate of the system states and, additionally, an estimate of the ionising disturbances that arise from radiation effects in space systems. The results also show that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects alone. Electronic systems can be damaged by impacts of charged particle flux in space or in a radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeV·cm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit; ionising radiation can activate an additional parasitic thyristor. Without protection and monitoring, the resulting short circuit between semiconductor elements can destroy the device. Single-Event Burnout (SEB), on the other hand, increases the current between drain and source of a MOSFET and destroys the component within a short time. A Single-Event Gate Rupture (SEGR) can likewise destroy the dielectric of a semiconductor. In order to react to these processes, the presence of ionising radiation and its dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects in the test system. A Peltier element is used to evaluate dynamic temperature increases (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of charged particle flux. All available sensors are also used to calibrate the spatial distributions: from the measured values and the known locations of the sensors, the entire distribution in space can be reconstructed retroactively and more accurately. From this information, the type of ionisation and its direct effect on the system can be determined, and preventive processes, up to a shutdown, can be activated. The results show possibilities for performing faster and higher-quality simulations, independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
Keywords: cattle, biochar, manure, microbial activity
Procedia PDF Downloads 103
174 Kinematic of Thrusts and Tectonic Vergence in the Paleogene Orogen of Eastern Iran, Sechangi Area
Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat
Abstract:
The eastern Iranian ranges form a Z-shaped sigmoidal outcrop with a N-S-trending general strike on satellite images. They have long been known as the Sistan suture zone and have recently been identified as the product of an orogenic event referred to either as the Paleogene or the Sistan orogen. The flysch sedimentary basin of eastern Iran was filled by a huge volume of fine-grained Eocene turbiditic sediments, smaller amounts of pelagic deposits and Cretaceous ophiolitic slices, which are entirely remnants of older accretionary prisms that appeared in a fold-thrust belt developed above a subduction zone under the Lut/Afghan block, a portion of the Cimmerian superterrane. In these ranges, there are Triassic sedimentary and carbonate sequences (equivalent to the Nayband and Shotori Formations) along with scattered outcrops of Permian limestones (equivalent to the Jamal limestone) and greenschist-facies metamorphic rocks, probably belonging to the basement of the Lut block, which have tectonic contacts with younger rocks. Moreover, the younger Eocene detrital-volcanic rocks were also thrust onto the Cretaceous or younger turbiditic deposits. The first-generation folds (parallel folds) and thrusts with slaty cleavage appeared parallel to the NE edge of the Lut block. Structural analysis shows that the dominant vergence of the thrusts is toward the southeast, so that the Permo-Triassic units of the Lut block have been thrust onto younger rocks, including older (probably Jurassic) granites. Additional structural studies show that the regional transport direction in this deformation event is from northwest to southeast, that is, from the outside to the inside of the orogen in the Sechangi area. Younger thrusts were either formed directly as a result of the second deformation event, or they are older thrusts that were reactivated and folded, so that two or more sets of slickenlines can often be recognized on the thrust planes. These younger thrusts are redistributed in directions nearly perpendicular to the edge of the Lut block and parallel to the axial surfaces of the NW-trending second-generation large-scale folds (radial folds). Some of these younger thrusts follow the out-of-the-syncline thrust system. Both the axial planes of these folds and the associated penetrative shear cleavage extend toward the northwest and dip either northeast or southwest, parallel to the younger thrusts. Large-scale buckling under a layer-parallel stress field created this deformation event. Such consecutive, mutually perpendicular deformation events cannot be explained by the simple linear orogen models presented for eastern Iran so far and are more consistent with the oroclinal buckling model.
Keywords: thrust, tectonic vergence, orocline buckling, sechangi, eastern iranian ranges
Procedia PDF Downloads 78
173 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization
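As an illustration of the recovery step described in this abstract, the short Python sketch below performs Tikhonov-regularized (MAP) recovery of a densely sampled waveform from eight samples using a PCA prior built from a training set. It is only a schematic of the general idea: the synthetic pulse shape, dimensions, noise level and variable names are assumptions, not the J-PET code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic "training" waveforms (stand-ins for registered signals) ---
n_t = 200                                   # samples per waveform
t = np.linspace(0.0, 20.0, n_t)             # ns
def pulse(a, t0, tau_r, tau_f):
    s = a * (1 - np.exp(-(t - t0) / tau_r)) * np.exp(-(t - t0) / tau_f)
    s[t < t0] = 0.0
    return s

train = np.array([pulse(rng.uniform(0.8, 1.2), rng.uniform(2, 4),
                        rng.uniform(0.4, 0.7), rng.uniform(2.5, 4.0))
                  for _ in range(500)])

# --- PCA prior from the training set ---
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 8                                       # number of principal components kept
B = Vt[:k].T                                # (n_t, k) PCA basis
prior_var = (S[:k] ** 2) / (train.shape[0] - 1)   # variance of each component

# --- Measurement: 8 samples (4 voltage thresholds, leading + trailing edge) ---
true = pulse(1.0, 3.0, 0.5, 3.0)
sample_idx = np.sort(rng.choice(n_t, size=8, replace=False))
A = np.zeros((8, n_t)); A[np.arange(8), sample_idx] = 1.0
sigma_noise = 0.02
y = A @ true + sigma_noise * rng.standard_normal(8)

# --- Tikhonov-regularized (MAP) recovery in the PCA basis ---
M = B.T @ A.T @ A @ B / sigma_noise**2 + np.diag(1.0 / prior_var)
c_hat = np.linalg.solve(M, B.T @ A.T @ (y - A @ mean) / sigma_noise**2)
recovered = mean + B @ c_hat

print("RMS recovery error:", np.sqrt(np.mean((recovered - true) ** 2)))
```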
Procedia PDF Downloads 445
172 Influence of Surface Fault Rupture on Dynamic Behavior of Cantilever Retaining Wall: A Numerical Study
Authors: Partha Sarathi Nayek, Abhiparna Dasgupta, Maheshreddy Gade
Abstract:
Earth retaining structures play a vital role in stabilizing unstable road cuts and slopes in mountainous regions. Retaining structures located in seismically active regions like the Himalayas may experience moderate to severe earthquakes. An earthquake produces two kinds of ground motion: permanent quasi-static displacement (fault rupture) on the fault rupture plane, and transient vibration traveling a long distance. There has been extensive research work to understand the dynamic behavior of retaining structures subjected to transient ground motions. However, understanding of the effect caused by fault rupture phenomena on retaining structures is limited. The presence of shallow crustal active faults and natural slopes in the Himalayan region further highlights the need to study the response of retaining structures subjected to fault rupture phenomena. In this paper, an attempt has been made to understand the dynamic response of a cantilever retaining wall subjected to surface fault rupture. For this purpose, a 2D finite element model consisting of a retaining wall, backfill and foundation has been developed using Abaqus 6.14 software. The backfill and foundation materials are modeled as per the Mohr-Coulomb failure criterion, and the wall is modeled as linear elastic. In the present study, the interaction between backfill and wall is modeled as ‘surface-surface contact.’ The entire simulation process is divided into three steps, i.e., the initial step, the gravity load step, and the fault rupture step. The interaction property between wall and soil and fixed boundary conditions on all the boundary elements are applied in the initial step. In the next step, gravity load is applied, and the boundary elements are allowed to move in the vertical direction to incorporate the settlement of soil due to the gravity load. In the final step, surface fault rupture is applied to the wall-backfill system. For this purpose, the foundation is divided into two blocks, namely, the hanging wall block and the footwall block. A finite fault rupture displacement is applied to the hanging wall part while the footwall bottom boundary is kept fixed. Initially, a numerical analysis is performed considering the reverse fault mechanism with a dip angle of 45°. The simulated results are presented in terms of contour maps of the permanent displacements of the wall-backfill system. These maps highlight that surface fault rupture can induce permanent displacement in both horizontal and vertical directions, which can significantly influence the dynamic behavior of the wall-backfill system. Further, the influence of fault mechanism, dip angle, and surface fault rupture position is also investigated in this work.
Keywords: surface fault rupture, retaining wall, dynamic response, finite element analysis
Procedia PDF Downloads 106
171 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communication and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are the simplicity of fabrication, low manufacturing cost, and the combination of free-space power and scanning without a phase shifter. Active integrated antenna modeling couples the electromagnetic model and the transport model, which is affected at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is one of the most common and powerful numerical techniques used in solving electromagnetic problems. Within this class of numerical techniques, MoM is the dominant technique for solving Maxwell's and the transport integral equations for an active integrated antenna. Here, the equivalent circuit is introduced to develop an integral-method formulation based on transposing the field problem into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent, as circuits, the integral equations that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of the developed method is to investigate antenna parameters such as the input impedance, the current density distribution and the electric field distribution. In this work, we propose a global EM modeling of the GaAs MESFET transistor using an integral method. We begin by describing the modeled structure, which allows us to define an equivalent EM scheme translating the electromagnetic equations considered. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation where the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the current density distribution and the electric field distribution. From the electromagnetic calculations, we present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study that maps out the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements in order to provide a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
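The final algebraic step described above, i.e. projecting the integral equations onto test functions to obtain a linear matrix equation whose unknowns are the current-density amplitudes, can be sketched in a few lines of Python. The impedance matrix below is a random placeholder rather than the actual MESFET/waveguide operator; only the solve-and-extract-input-impedance structure is meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                   # number of test/basis functions (assumed)

# Placeholder impedance (coupling) matrix and excitation vector; in the actual
# MOM-GEC formulation these come from projecting the boundary-condition
# operator and the source onto the chosen test functions.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = Z + n * np.eye(n)                    # keep the toy system well conditioned
V = np.zeros(n, dtype=complex)
V[0] = 1.0                               # unit excitation on the feed/source mode

# Solve Z I = V for the current-density amplitudes
I = np.linalg.solve(Z, V)

# Input impedance seen at the excited (feed) port: Zin = V0 / I0
Zin = V[0] / I[0]
print(f"Zin = {Zin.real:.3f} + {Zin.imag:.3f}j (arbitrary units)")

# Convergence check: repeat with an increasing number of basis functions
# and monitor how Zin stabilises (as done for the guide-mode numbers above).
```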
Procedia PDF Downloads 198
170 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests
Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski
Abstract:
Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. The biological tissue is considered to be a viscoelastic material, but many studies based on the nanoindentation method have focused only on its elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude, but also on its duration and time course. The Depth-Sensing Indentation (DSI) technique has been used to examine creep in polymers, metals and composites. In indentation tests on biological samples, the mechanical properties are most frequently determined for animal tissues (ox, monkey, pig, rat, mouse, bovine). However, there are only rare reports of studies of the viscoelastic properties of bone at the microstructural level. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g. the Burgers model, linear model, two-dashpot Kelvin model, Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. A further aim of this research was the assessment of the material properties of bone structures, having in mind the energy aspects of the indenter load-depth curve obtained in the loading/unloading cycle. The influence of different holding times on the results for trabecular bone was considered. As a result, indentation creep (CIT), hardness (HM, HIT, HV) and elasticity were obtained. Human trabecular bone samples (n=21; mean age 63±15 yrs) were taken from femoral heads replaced during hip alloplasty and removed from alcohol 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 different hold times: t1=0.1 s, t2=1 s, t3=10 s, t4=100 s and t5=1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN. The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease of the hardness parameters (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and of elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase of the elastic (Welastic(t1)=4.11·10⁻⁷±4.2·10⁻⁸ N·m, Welastic(t2)=4.12·10⁻⁷±6.4·10⁻⁸ N·m, Welastic(t3)=4.71·10⁻⁷±6.0·10⁻⁹ N·m, Welastic(t4)=4.33·10⁻⁷±5.5·10⁻⁹ N·m, Welastic(t5)=5.11·10⁻⁷±7.4·10⁻⁸ N·m) and inelastic (Winelastic(t1)=1.05·10⁻⁶±1.2·10⁻⁷ N·m, Winelastic(t2)=1.07·10⁻⁶±7.6·10⁻⁸ N·m, Winelastic(t3)=1.26·10⁻⁶±1.9·10⁻⁷ N·m, Winelastic(t4)=1.56·10⁻⁶±1.9·10⁻⁷ N·m, Winelastic(t5)=1.67·10⁻⁶±2.6·10⁻⁷ N·m) reaction of the material. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1) = 0.08±0.01%, CIT(t2) = 0.7±0.1%, CIT(t3) = 3.7±0.3%, CIT(t4) = 12.2±1.5%, CIT(t5) = 13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experiments. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit purely elastic-inelastic indentation responses. Viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation.
Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
Keywords: bone, creep, indentation, mechanical properties
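For reference, the quantities reported above (HIT, EIT and the indentation creep CIT) follow from the standard Oliver-Pharr and indentation-creep relations. The Python sketch below evaluates them for one load-hold-unload cycle; the ideal-Vickers area function, the diamond-indenter constants and the numerical inputs are textbook or illustrative assumptions, not data from this study.

```python
import numpy as np

def oliver_pharr(P_max, h_max, S, h1, h2, nu_sample=0.3,
                 E_ind=1141e9, nu_ind=0.07, eps=0.75):
    """Indentation quantities from a load-hold-unload cycle (Vickers tip).

    P_max : peak load [N]; h_max : depth at peak load [m]
    S     : unloading contact stiffness dP/dh [N/m]
    h1,h2 : depth at start / end of the constant-load hold [m]
    """
    h_c = h_max - eps * P_max / S                   # contact depth
    A_p = 24.5 * h_c**2                             # projected area (ideal Vickers)
    H_IT = P_max / A_p                              # indentation hardness
    E_r = np.sqrt(np.pi) * S / (2 * np.sqrt(A_p))   # reduced modulus
    E_IT = (1 - nu_sample**2) / (1/E_r - (1 - nu_ind**2) / E_ind)
    C_IT = 100.0 * (h2 - h1) / h1                   # indentation creep [%]
    return H_IT, E_IT, C_IT

# Illustrative numbers only (roughly the order of magnitude of trabecular bone)
H, E, C = oliver_pharr(P_max=0.5, h_max=9.0e-6, S=2.0e5, h1=9.0e-6, h2=9.3e-6)
print(f"HIT = {H/1e6:.0f} MPa, EIT = {E/1e9:.1f} GPa, CIT = {C:.1f} %")
```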
Procedia PDF Downloads 172
169 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity
Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido
Abstract:
Since prehistoric times, plants have constituted an essential source of biologically active substances in folk medicine. One example of such medicinal plants is Ruta graveolens L. For a long time, the Ruta g. herb has been known for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g. rutin, quercetin), coumarins (e.g. bergapten, umbelliferone), phenolic acids (e.g. rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the content of the produced substances is highly dependent on environmental factors such as temperature, humidity, or soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g. coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or the finish line. Therefore, Ruta graveolens is a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of the new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase to the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the successive fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts the flow of the mobile phase. The described effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer with a controlled velocity (by a moving pipette driven by a 3D machine). The delivery rate of the solvent to the adsorbent layer is equal to or lower than that of conventional development. Therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so a higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution practically without the so-called gradient delay. In the study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise gradient reversed-phase thin-layer chromatography. The fingerprints obtained under different chromatographic conditions will be compared. The advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.
Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens
Procedia PDF Downloads 288
168 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise
Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris
Abstract:
Parkinson’s disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world’s population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise on PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest strap heart rate monitor (Polar Electro Oy, Kempele). The device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user friendly device. The Fitbit Charge 4 (Fitbit Inc) is a potentially comfortable, user-friendly solution since it is a wrist-worn heart rate monitor. Polar H10 has been used in large trials, and for our purposes, we treated it as the gold standard for the beat-to-beat period (R-R interval) assessment. In previous literature, it has been shown that Fitbit Charge 4 has comparable accuracy to Polar H10 in healthy subjects. It has yet to be determined if the Fitbit is as accurate as the Polar H10 in subjects with PD or in clinical populations, generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were collected simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr <=2) were enrolled in a 6 months exercise training program designed for PD patients. Subjects participated in 3 one-hour exercise sessions per week. They wore both Fitbit and Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias in the HR via Fitbit under rest (5min) and intensive exercise (20min) by comparing the mean HR during each of the periods to the respective means measured by the Polar (HRFitbit – HRPolar). We also measured the sensitivity and specificity of Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual’s theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high intensity exercise, with an overestimation by Fitbit in both conditions. The mean bias of Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high intensity exercise sessions was 97.14 %. The correlation between the two devices was non-linear, suggesting a saturation tendency of Fitbit to saturate at high values of HR. Conclusion: The performance of Fitbit Charge 4 is comparable to Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest strap technology in future similar studies of clinical populations.Keywords: fitbit, heart rate measurements, parkinson’s disease, wrist-wearable devices
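A minimal Python sketch of the comparison described in the Methods is given below: the per-session mean bias (Fitbit minus Polar) and the sensitivity/specificity of the Fitbit for flagging sessions above 70% of the theoretical maximum HR. The numbers are made up for illustration, and the function is an assumption about how such session means could be processed, not the study's analysis code.

```python
import numpy as np

def compare_sessions(hr_fitbit, hr_polar, hr_max):
    """Per-session mean bias and high-intensity detection vs. the Polar H10.

    hr_fitbit, hr_polar : arrays of session-mean HR [bpm], one value per session
    hr_max              : theoretical maximum HR of the subject (e.g. 220 - age)
    """
    hr_fitbit, hr_polar = np.asarray(hr_fitbit), np.asarray(hr_polar)
    bias = hr_fitbit - hr_polar                      # HR_Fitbit - HR_Polar
    thr = 0.70 * hr_max                              # intensive-exercise threshold
    intense_polar = hr_polar >= thr                  # "truth" from the chest strap
    intense_fitbit = hr_fitbit >= thr
    tp = np.sum(intense_fitbit & intense_polar)
    fn = np.sum(~intense_fitbit & intense_polar)
    tn = np.sum(~intense_fitbit & ~intense_polar)
    fp = np.sum(intense_fitbit & ~intense_polar)
    sensitivity = tp / (tp + fn) if tp + fn else np.nan
    specificity = tn / (tn + fp) if tn + fp else np.nan
    return bias.mean(), sensitivity, specificity

# Toy example (made-up numbers, not study data)
polar  = [72, 95, 128, 131, 140, 88]
fitbit = [74, 96, 133, 138, 147, 90]
print(compare_sessions(fitbit, polar, hr_max=170))
```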
Procedia PDF Downloads 108
167 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method
Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh
Abstract:
In the process of recovering oil in weak sandstone formations, the strength of sandstones around the wellbore is weakened due to the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management therefore, there has been need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized Discrete Element Method (DEM), Particle Flow Code (PFC3D) which represents sands as granular individual elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of the weak sandstone, and the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic tri-axial test on a weak oil sandstone sample was first simulated at a confining stress of 1MPa to calibrate and validate the parallel bond models of PFC3D using a 10m height and 10m diameter solid cylindrical model. The effect of the confining stress on the number of bonds failure was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the tri-axial test, simulations without and with fluid flow were carried out to check on the effect of Darcy flow on bonds failure using the same model geometry. The fluid flow network comprised of every four particles connected with tetrahedral flow pipes with a central pore or flow domain. Parametric studies included the effects of confining stress, and fluid pressure; as well as validating flow rate – permeability relationship to verify Darcy’s fluid flow law. The effect of model size scaling on sanding was also investigated using 4m height, 2m diameter model. The parallel bond model successfully calibrated the sample’s strength of 4.4MPa, showing a sharp peak strength before strain-softening, similar to the behavior of real cemented sandstones. There seems to be an exponential increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. For the parametric studies, flow rate has a linear relationship with permeability at constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM PFC3D is a promising tool to studying the micromechanical behavior of cemented sandstones.Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure
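The flow rate-permeability check mentioned in the parametric study is simply Darcy's law, Q = kAΔP/(μL). A small Python sketch, with purely illustrative values, is shown below.

```python
import numpy as np

def darcy_flow_rate(k, A, dP, mu, L):
    """Darcy's law: volumetric flow rate Q = k * A * dP / (mu * L).

    k  : intrinsic permeability [m^2]      A : cross-sectional area [m^2]
    dP : pressure drop [Pa]                mu: dynamic viscosity [Pa.s]
    L  : sample length [m]
    """
    return k * A * dP / (mu * L)

# At constant pressure head, Q should scale linearly with k (as observed
# in the PFC3D parametric study). Illustrative values only.
A, dP, mu, L = np.pi * 1.0**2, 1.0e6, 1.0e-3, 4.0
for k in [1e-13, 2e-13, 4e-13, 8e-13]:
    print(f"k = {k:.0e} m^2  ->  Q = {darcy_flow_rate(k, A, dP, mu, L):.3e} m^3/s")
```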
Procedia PDF Downloads 322
166 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg Alloys
Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı
Abstract:
The directional solidification process is generally used for processes such as homogeneous compound production, single crystal growth, and refining (zone refining). The two most important parameters that control eutectic structures are the temperature gradient and the growth rate, which are called solidification parameters. The solidification behavior and microstructure characteristics are an interesting topic due to their effects on the properties and performance of alloys containing eutectic compositions. The solidification behavior of multicomponent and multiphase systems is an important parameter for determining various properties of these materials. Research has mostly been conducted on the solidification of pure materials or alloys containing two phases; however, there are very few studies in the literature on multiphase reactions and microstructure formation of multicomponent alloys during solidification. Because of this situation, it is important to study the microstructure formation and the thermodynamical, thermophysical and microstructural properties of these alloys. The production process is difficult due to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With an increasing amount of Mg in Al alloys, the specific weight decreases and the strength shows a slight increase, while ductility is lowered due to the formation of the β-Al8Mg5 phase. For this reason, the production, examination and development of high-Mg-containing alloys will initiate the production of new advanced engineering materials. The original value of this research can be described as obtaining high-Mg-containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification with various growth rates at a constant temperature gradient; and establishing the relationship between solidification rate and microstructural, mechanical, electrical and thermal properties. Therefore, within the scope of this research, several ternary or quaternary Al alloy compositions containing > 30% Mg were determined, and it was planned to investigate the effects of the directional solidification rate on the mechanical, electrical and thermal properties of these alloys. Within the scope of the research, the influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity and thermal conductivity of directionally solidified high-Mg-containing Al-32.2Mg-0.37Si, Al-30Mg-12Zn, Al-32Mg-1.7Ni, Al-32.2Mg-0.37Fe, Al-32Mg-1.7Ni-0.4Si and Al-33.3Mg-0.35Si-0.11Fe (wt.%) alloys will be investigated over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient.
The work is planned as follows: (a) directional solidification of Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si and Al-Mg-Si-Fe alloys within a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system; (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and Scanning Electron Microscopy (SEM); (c) measurement of the microhardness and tensile strength of the directionally solidified alloys; (d) measurement of the electrical conductivity by the four-point probe technique at room temperature; (e) measurement of the thermal conductivity by the linear heat flow method at room temperature.
Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity
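Steps (d) and (e) reduce to simple one-line relations: σ = I·L/(V·A) for a four-terminal measurement on a bar-shaped sample and k = Q·L/(A·ΔT) for steady-state linear heat flow. The Python sketch below applies them with illustrative values only; the sample geometry and readings are assumptions, not results of this work.

```python
import numpy as np

def electrical_conductivity_four_point(I, V, probe_spacing, area):
    """Four-terminal (Kelvin) measurement on a bar-shaped sample:
    sigma = I * L / (V * A), with L the distance between the voltage probes."""
    return I * probe_spacing / (V * area)

def thermal_conductivity_linear_heat_flow(Q, length, area, dT):
    """Steady-state linear heat flow: k = Q * L / (A * dT)."""
    return Q * length / (area * dT)

# Illustrative numbers only (not measured values from this work)
area = np.pi * (4.0e-3) ** 2                  # 8 mm diameter rod
sigma = electrical_conductivity_four_point(I=0.5, V=1.0e-5,
                                            probe_spacing=10e-3, area=area)
k = thermal_conductivity_linear_heat_flow(Q=1.5, length=30e-3, area=area, dT=8.0)
print(f"sigma = {sigma:.3e} S/m, k = {k:.1f} W/(m.K)")
```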
Procedia PDF Downloads 260
165 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam
Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck
Abstract:
The fast increase in quantity and the complex composition have made waste of electrical and electronic equipment (or e-waste) one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries, where both a formal e-waste management system and the necessary statistical data for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered electrical and electronic equipment, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registration/invisibility and (in most cases) illicit nature of this flow make it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposing a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (i.e. the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps to improve the quality of the input data for modeling (i.e. it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan to take its changes over time into account), through which the quality of the e-waste estimation can be improved. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue in the future. In 2035, a total of 9.51 million TVs are predicted to be discarded. Moreover, the estimation of the non-registered TV inflow shows that it might, on average, have contributed about 15% of the total TVs sold on the Vietnamese market over the whole period from 2002 to 2013. To tackle potential uncertainties associated with estimation models and input data, sensitivity analysis has been applied. The results show that both the estimation of waste and that of the non-registered inflow depend on two parameters, i.e. the number of TVs used per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant unadjusted lifespan is replaced by the dynamic adjusted lifespan. The effect of these two parameters on the amount of waste TV generation for each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow into domestic consumption. It can therefore be further improved in the future with more knowledge and data.
Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam
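The logic of the Sale-Stock-Lifespan calculation and of the non-registered-inflow gap can be sketched in Python as follows. The sales series, the Weibull lifespan parameters and the "observed stock" used here are invented placeholders; the snippet only illustrates the accounting identity (inflow = stock change + waste) that underlies the approach.

```python
import numpy as np
from scipy.stats import weibull_min

# Registered TV sales per year (illustrative placeholder data, units: million)
years = np.arange(2000, 2014)
sales = np.linspace(1.0, 3.5, len(years))

# Lifespan profile modelled as a Weibull distribution (shape/scale assumed);
# in the Sale-Stock-Lifespan approach this profile can also be made dynamic.
shape, scale = 2.2, 9.0
def p_discard(age):
    """Probability that a unit sold `age` years ago is discarded this year."""
    return weibull_min.cdf(age + 1, shape, scale=scale) - \
           weibull_min.cdf(age, shape, scale=scale)

def waste_in_year(t):
    ages = t - years[years <= t]
    return np.sum(sales[years <= t] * p_discard(ages))

waste = np.array([waste_in_year(t) for t in years])
stock = np.cumsum(sales) - np.cumsum(waste)      # registered units still in use

# Required total inflow to sustain an observed in-use stock (e.g. TVs per
# household x number of households); the gap to registered sales is the
# non-registered inflow. Numbers below are purely illustrative.
observed_stock = stock * 1.18                    # pretend surveys see 18% more
required_inflow = np.diff(observed_stock, prepend=0.0) + waste
non_registered_share = 1.0 - sales / required_inflow
print(f"mean non-registered share 2000-2013: {non_registered_share.mean():.0%}")
```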
Procedia PDF Downloads 246
164 The Impact of a Leadership Change on Individuals' Behaviour and Incentives: Evidence from the Top Tier Italian Football League
Authors: Kaori Narita, Juan de Dios Tena Horrillo, Claudio Detotto
Abstract:
Decisions on the replacement of leaders are significant and highly prevalent in any organization and concern many of its stakeholders, whether the leader of a political party or the CEO of a firm, as indicated by the high media coverage of such events. This merits an investigation into the consequences and implications of a leadership change for the performance and behavior of organizations and their workers. Sport economics provides a fruitful field to explore these issues due to the high frequency of managerial changes in professional sports clubs and the transparency and regularity of observations of team performance and players’ abilities. Much of the existing research on managerial change focuses on how this affects the performance of an organization. However, scarcely any attention has been paid to the consequences of such events for the behavior of individuals within the organization. Changes in the behavior and attitudes of a group of workers due to a managerial change could be of great interest in management science, psychology, and operational research. On the other hand, these changes cannot be observed in the final outcome of the organization, as this is affected by many other unobserved shocks, for example, the stress level of workers who need to deal with a difficult situation. To fill this gap, this study presents the first attempt to evaluate the impact of managerial change on players’ behaviors such as attack intensity, aggressiveness, and effort. The data used in this study are from the top tier Italian football league (“Serie A”), where an average of 13 within-season replacements of head coaches was observed over the seasons from 2000/2001 to 2017/18. The preliminary estimation employs Pooled Ordinary Least Squares (POLS) and club-season Fixed Effects (FE) in order to assess the marginal effect of having a new manager on the number of shots, corners and red/yellow cards, after controlling for home-field advantage and the ex ante abilities and current league positions of a team and its opponent. The results from this preliminary estimation suggest that the teams do not show a significant difference in their behaviors before and after the managerial change. To build on these preliminary results, other methods, including propensity score matching and non-linear model estimates, will be used. Moreover, the study will further investigate these issues by considering other measures of attack intensity, aggressiveness, and effort, such as possession, the number of fouls and the athletic performance of players, respectively. Finally, the study will investigate whether these results vary with the characteristics of the new head coach, for example, their age and experience as a manager and as a player. Thus far, this study suggests that certain behaviours of individuals in an organisation are not immediately affected by a change in leadership. To confirm this preliminary finding and reach a more solid conclusion, further investigation will be conducted in the aforementioned manner, and the results will be elaborated at the conference.
Keywords: behaviour, effort, manager characteristics, managerial change, sport economics
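A minimal Python sketch of the preliminary estimation described above (POLS and club-season fixed effects via dummy variables) is given below using statsmodels; the data frame, variable names and toy data are assumptions, not the Serie A data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy match-level panel (placeholder data; the study uses Serie A 2000/01-2017/18)
rng = np.random.default_rng(2)
n = 600
df = pd.DataFrame({
    "shots":         rng.poisson(12, n).astype(float),
    "new_manager":   rng.integers(0, 2, n),      # 1 if team recently changed coach
    "home":          rng.integers(0, 2, n),      # home-field advantage control
    "ability_diff":  rng.normal(0, 1, n),        # ex-ante ability vs. opponent
    "position_diff": rng.normal(0, 1, n),        # current league-table positions
    "club_season":   rng.integers(0, 40, n),     # club-season identifier
})

# Pooled OLS
pols = smf.ols("shots ~ new_manager + home + ability_diff + position_diff",
               data=df).fit(cov_type="HC1")

# Club-season fixed effects via dummy variables (within-estimator equivalent)
fe = smf.ols("shots ~ new_manager + home + ability_diff + position_diff "
             "+ C(club_season)", data=df).fit(cov_type="HC1")

print(pols.params["new_manager"], fe.params["new_manager"])
```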
Procedia PDF Downloads 134
163 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration
Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose
Abstract:
It remains unclear how language speakers are able to perceive phonological contrasts that do not exist on their own. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. For speakers whose first language does not distinguish vowel length, contrastive duration is usually misperceived, e.g., Mandarin speakers. Two alternative hypotheses for how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model does not have a clear prediction about the impact of tonal type. Mandarin speakers will likely be not able to perceive vowel length as well as Japanese native speakers do, but the performance might not correlate to tonal type because the prosody of their language is distinctive, which requires users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, including naive non-Japanese speakers and beginner learners, were recruited to participate in an AX discrimination task involving two Japanese sound stimuli that contain a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sound the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position; initial/medial/final), as it would be likely to affect the perceptual difficulty, as seen in previous studies, and the second factor contrasted two pitch types (accent type): one with accentual change that could be distinguished with the lexical tones in Mandarin (the different condition), with the other group having no tonal distinction but only differing in vowel length (the same condition). The overall results showed that a significant main effect of accent type by applying a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers tend to more successfully recognize vowel-length differences when the long vowel counterpart takes on a tone that exists in Mandarin. The interaction between the accent type and the syllabic position is also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the different conditions are more difficult to recognize in the word-final case relative to the initial condition. The second statistical model, which compares naive speakers to beginners, was conducted with logistic regression to test the effects of the participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with the cue-based approaches; (2) an exposure effect was observed: the beginner group achieved a higher accuracy for long vowel perception, which implied the exposure effect despite the short period of language learning experience.Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration
Procedia PDF Downloads 220
162 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, with the damage increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyze how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms, brown firms (which produce using a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they do not have incentives to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperation equilibrium) are ineffective because the exchange rate would move to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
Keywords: climate change, general equilibrium, optimal taxation, monetary policy
Procedia PDF Downloads 160
161 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet
Authors: Zeradam Yeshiwas, A. Krishnaia
Abstract:
The aim of this study is to verify the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 steel. The chemical composition of drawing quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, 0.04 percent phosphorus, and the remainder iron with negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions through the linear incremental progression of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10 and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone shaped specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. A Taylor-Hobson Surtronic 3+ surface roughness tester (profilometer) was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ); in this instrument, a small tip is dragged across a surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. For the Taguchi design with three factors and three levels per factor, the standard orthogonal array L9 (3³) was selected using the array selection table. Since the lowest value of Rₐ is desired for surface roughness improvement, the ‘‘smaller-the-better’’ equation was used for the calculation of the S/N ratio. The finishing roughness parameter Rₐ was measured for each combination of the control factors defined by the experimental design; four roughness measurements were taken for each component and averaged. Analysis of the effect of each control factor on the surface roughness was performed with an ‘‘S/N response table’’. The optimum surface roughness was obtained at a feed rate of 1500 mm/min, with a tool diameter of 12 mm, and with a step depth of 0.5 mm. The ANOVA result shows that step depth is the most significant factor affecting surface roughness (91.1 %).
Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior
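The S/N calculation used above is the standard "smaller-the-better" form, S/N = -10·log10(mean(y²)), followed by a response table of mean S/N per factor level. The Python sketch below reproduces this procedure on placeholder Rₐ values; the factor levels and roughness numbers are illustrative assumptions, not the measured data.

```python
import numpy as np
import pandas as pd

# L9 (3^3) layout with a mean Ra per run (placeholder values, in micrometres)
runs = pd.DataFrame({
    "feed": [1000, 1000, 1000, 1250, 1250, 1250, 1500, 1500, 1500],
    "tool": [8, 10, 12, 8, 10, 12, 8, 10, 12],
    "step": [0.2, 0.35, 0.5, 0.35, 0.5, 0.2, 0.5, 0.2, 0.35],
    "Ra":   [1.9, 1.6, 1.3, 1.7, 1.2, 1.8, 1.1, 1.7, 1.5],
})

# "Smaller-the-better" signal-to-noise ratio: S/N = -10*log10(mean(y^2))
runs["SN"] = -10.0 * np.log10(runs["Ra"] ** 2)

# S/N response table: mean S/N at each level of each control factor;
# the level with the highest S/N is the (locally) optimal setting.
for factor in ["feed", "tool", "step"]:
    print(factor, runs.groupby(factor)["SN"].mean().round(2).to_dict())
```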
Procedia PDF Downloads 62
160 The Efficiency Analysis in the Health Sector: Marmara Region
Authors: Hale Kirer Silva Lecuna, Beyza Aydin
Abstract:
Health is one of the main components of human capital and sustainable development, and it is very important for economic growth. Health economics, an indisputable part of the science of economics, has five stages in general: health and development, financing of health services, economic regulation in health, allocation of resources, and efficiency of health services. A well-developed and efficient health sector plays a major role in increasing the level of development of countries. The most crucial pillars of the health sector are hospitals, which are divided into public and private. The main purpose of hospitals is to provide more efficient services; therefore, the aim is to meet patients’ satisfaction by increasing service quality. Health-related studies in Turkey date back to the Ottoman and Seljuk Empires. More recently, Turkey applied 'Health Sector Transformation Programs' under different titles between 2003 and 2010. Our aim in this paper is to measure how effective these transformation programs are for the health sector, to see how much they can increase the efficiency of hospitals over the years, to see the return on investments, to make comments and suggestions on the results, and to provide a new reference for the literature. Within this framework, the public and private hospitals in Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, Istanbul, Kirklareli, Kocaeli, Sakarya, Tekirdağ, and Yalova will be examined using Data Envelopment Analysis (DEA) for the years between 2000 and 2019. DEA is a linear programming-based technique which gives relatively good results in multivariate studies. DEA basically estimates an efficiency frontier and makes comparisons against it. Constant returns to scale and variable returns to scale are the two most commonly used DEA models, and both come in input-oriented and output-oriented forms. To analyze the data, the number of personnel, the number of specialist physicians, the number of practitioners, the number of beds, and the number of examinations are used as input variables; the number of surgeries, the in-patient ratio, and the crude mortality rate are used as output variables. Eleven hospitals belonging to the Marmara region were included in the study. These hospitals were found to operate efficiently in only 7 provinces (Balıkesir, Bilecik, Bursa, Edirne, İstanbul, Kırklareli, Yalova) in 2001, when no transformation program had yet been implemented. After the transformation program was implemented, for example in 2014 and 2016, 10 hospitals (Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, İstanbul, Kocaeli, Kırklareli, Tekirdağ, Yalova) were found to be efficient. In 2015, inefficient results were observed for Sakarya, Tekirdağ and Yalova; however, since these values are closer to 1 after the transformation program, we can say that the program has had positive effects. For Sakarya alone, no efficient result was achieved in any year. Overall, the results show that the transformation program has a positive effect on the efficiency of hospitals.
Keywords: data envelopment analysis, efficiency, health sector, Marmara region
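The DEA computation can be illustrated with a generic input-oriented, constant-returns-to-scale (CCR) envelopment model solved as a linear program. The Python sketch below uses toy hospital data and scipy's linprog; it is an assumption-laden illustration of the method, not the study's data or software (the variable-returns-to-scale variant would simply add the convexity constraint sum(lambda) = 1 as an equality).

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.

    X : (m, n) inputs, Y : (s, n) outputs, columns = DMUs (hospitals).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],
    lam >= 0.  Decision vector z = [theta, lam_1 .. lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[:, [j0]], X]                    # X lam - theta x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]             # -Y lam <= -y0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy data: 5 hospitals, 2 inputs (beds, physicians), 2 outputs (surgeries, exams)
X = np.array([[100, 120,  90, 150, 110],
              [ 30,  45,  25,  60,  35]], dtype=float)
Y = np.array([[2000, 2100, 1900, 2500, 2300],
              [ 900, 1000,  700, 1200, 1150]], dtype=float)
for j in range(X.shape[1]):
    print(f"hospital {j}: theta = {dea_ccr_input(X, Y, j):.3f}")
```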
Procedia PDF Downloads 130
159 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have suffered due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and better overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision-tree models in both speed and memory consumption while still offering high accuracy. With its complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
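A minimal day-ahead LGBM baseline of the kind evaluated here can be sketched as below, using lag and calendar features and RMSE on a held-out period. The synthetic load series, the split date and the hyperparameters are assumptions, and the "resilient" concept-drift-detecting variant (which would trigger retraining when drift is flagged) is not shown.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

# Hourly household load with calendar and lag features (synthetic placeholder)
rng = np.random.default_rng(3)
idx = pd.date_range("2019-01-01", periods=24 * 500, freq="H")
load = (0.4 + 0.3 * np.sin(2 * np.pi * idx.hour / 24)
        + 0.1 * (idx.dayofweek >= 5) + 0.05 * rng.standard_normal(len(idx)))
df = pd.DataFrame({"load": load}, index=idx)
df["hour"] = df.index.hour
df["dow"] = df.index.dayofweek
df["lag_24"] = df["load"].shift(24)      # same hour yesterday
df["lag_168"] = df["load"].shift(168)    # same hour last week
df = df.dropna()

split = df.index < "2020-03-01"          # e.g. pre-pandemic vs. lockdown period
X_cols = ["hour", "dow", "lag_24", "lag_168"]
model = lgb.LGBMRegressor(n_estimators=400, learning_rate=0.05, num_leaves=31)
model.fit(df.loc[split, X_cols], df.loc[split, "load"])

pred = model.predict(df.loc[~split, X_cols])
rmse = np.sqrt(np.mean((pred - df.loc[~split, "load"]) ** 2))
print(f"day-ahead RMSE on the held-out period: {rmse:.3f} kWh")
```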
Procedia PDF Downloads 105
158 Carbohydrate Intake and Physical Activity Levels Modify the Association between FTO Gene Variants and Obesity and Type 2 Diabetes: First Nutrigenetics Study in an Asian Indian Population
Authors: K. S. Vimal, D. Bodhini, K. Ramya, N. Lakshmipriya, R. M. Anjana, V. Sudha, J. A. Lovegrove, V. Mohan, V. Radha
Abstract:
Gene-lifestyle interaction studies have been carried out in various populations. However, to date there are no studies in an Asian Indian population. Hence, we examined whether lifestyle factors such as diet and physical activity modify the association between fat mass and obesity–associated (FTO) gene variants and obesity and type 2 diabetes (T2D) in an Asian Indian population. We studied 734 unrelated T2D and 884 normal glucose-tolerant (NGT) participants randomly selected from the Chennai Urban Rural Epidemiology Study (CURES) in Southern India. Obesity was defined according to the World Health Organization Asia Pacific Guidelines (non-obese, BMI < 25 kg/m2; obese, BMI ≥ 25 kg/m2). Six single nucleotide polymorphisms (SNPs) in the FTO gene (rs9940128, rs7193144, rs8050136, rs918031, rs1588413 and rs11076023) identified from recent genome-wide association studies for T2D were genotyped by polymerase chain reaction-restriction fragment length polymorphism and direct sequencing. Dietary assessment was carried out using a validated food frequency questionnaire and physical activity was based upon the self-report. Interaction analyses were performed by including the interaction terms in the model. A joint likelihood ratio test of the main SNP effects and the SNP-diet/physical activity interaction effects was used in the linear regression analyses to maximize statistical power. Statistical analyses were performed using STATA version 13. There was a significant interaction between FTO SNP rs8050136 and carbohydrate energy percentage (Pinteraction=0.04) on obesity, where the ‘A’ allele carriers of the SNP rs8050136 had 2.46 times higher risk of obesity than those with ‘CC’ genotype (P=3.0x10-5) among individuals in the highest tertile of carbohydrate energy percentage. Furthermore, among those who had lower levels of physical activity, the ‘A’ allele carriers of the SNP rs8050136 had 1.89 times higher risk of obesity than those with ‘CC’ genotype (P=4.0x10-5). We also found a borderline interaction between SNP rs11076023 and carbohydrate energy percentage (Pinteraction=0.08) on T2D, where the ‘A’ allele carriers in the highest tertile of carbohydrate energy percentage, had 1.57 times higher risk of T2D than those with ‘TT’ genotype (P=0.002). There was also a significant interaction between SNP rs11076023 and physical activity (Pinteraction=0.03) on T2D. No further significant interactions between SNPs and macronutrient intake or physical activity on obesity and T2D were observed. In conclusion, this is the first study to provide evidence for a gene-diet and gene-physical activity interaction on obesity and T2D in an Asian Indian population. These findings suggest that the association between FTO gene variants and obesity and T2D is influenced by carbohydrate intake and physical activity levels. Greater understanding of how FTO gene influences obesity and T2D through dietary and exercise interventions will advance the development of behavioral intervention and personalised lifestyle strategies predicted to reduce the development of metabolic diseases in ‘A’ allele carriers of both SNPs in this Asian Indian population.Keywords: dietary intake, FTO, obesity, physical activity, type 2 diabetes, Asian Indian.
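The interaction analyses above were run in STATA; the sketch below shows the same idea in Python/statsmodels: a logistic model for obesity with an SNP-by-carbohydrate-energy interaction term, compared to the main-effects model with a joint likelihood ratio test. Column names (obese, risk_allele_carrier, carb_energy_pct, age, sex) are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# df: one row per participant with hypothetical columns:
# obese (0/1), risk_allele_carrier (0/1 for rs8050136 'A' carriers),
# carb_energy_pct, age, sex.
def snp_by_carb_interaction(df: pd.DataFrame):
    base = smf.logit("obese ~ risk_allele_carrier + carb_energy_pct + age + C(sex)",
                     data=df).fit(disp=0)
    inter = smf.logit("obese ~ risk_allele_carrier * carb_energy_pct + age + C(sex)",
                      data=df).fit(disp=0)
    # Likelihood ratio test of the interaction term.
    lr_stat = 2 * (inter.llf - base.llf)
    df_diff = inter.df_model - base.df_model
    p_value = stats.chi2.sf(lr_stat, df_diff)
    return inter, lr_stat, p_value
```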
Procedia PDF Downloads 531
157 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin
Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng
Abstract:
The residual antibiotics in the environment will pose a threat to the environment and human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater is very important. The main sources of NOR pollution are the agricultural, pharmaceutical industry and hospital wastewater. The total consumption of NOR in China can reach 5440 tons per year. It is found that neither animals nor humans can totally absorb and metabolize NOR, resulting in the excretion of NOR into the environment. Therefore, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of the wastewater treatment plant for NOR is limited (it is reported that the average removal efficiency of NOR in the wastewater treatment plant is only 68%); (2) NOR entering the environment will lead to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, the removal and detection technologies of NOR are applied separately, which leads to a cumbersome operation process. The development of simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) Adsorption-flocculation technology promotes the detection technology (the enrichment effect on the material surface improves the detection ability); (2) The integration of adsorption-flocculation technology and detection technology reduces the material cost and makes the operation easier; (3) FTIR detection technology endows the water treatment agent with the ability of molecular recognition and semi-quantitative detection for pollutants. Thus, it is of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining NOR removal method with the semi-quantitative detection method. A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation technology and semi-quantitative detection technology. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate of NOR (96.92%), wide pH adaptability (pH=4.0-10.0) and strong ion interference resistance (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints; the intensity of characteristic peaks roughly reflects the concentration information to some extent. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability of NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water.Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin
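The abstract reports a Langmuir-model maximum capacity of 417.4 mg/g at 25 °C. As a minimal illustration of how such a value is obtained, the sketch below fits the Langmuir isotherm q = q_max·K_L·C_e/(1 + K_L·C_e) to batch equilibrium data with SciPy; the C_e/q_e pairs are placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # q = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g.
Ce = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)
qe = np.array([90, 160, 250, 330, 370, 395, 405], dtype=float)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[400.0, 0.05])
print(f"q_max = {qmax:.1f} mg/g, K_L = {KL:.3f} L/mg")
```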
Procedia PDF Downloads 76
156 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal
Authors: Pedro B. Antunes, Paulo J. Ramísio
Abstract:
Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources, which have accumulated on the road surface, are washed away by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols. Organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events, under different weather, traffic and salt deposition conditions, over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, the meteorological, hydrological and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, which is an innovative feature of the monitoring program. The SDR, variable throughout the year, appears to show a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer, in spite of the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles and the platform altitude also seem to be relevant. High salinity in the runoff was confirmed, increasing the concentrations of the water quality parameters analyzed, with clear seawater signatures. In order to estimate the correlations and patterns of the different water quality parameters and of variables related to weather, road section and salt deposition, the study included exploratory data analysis using different techniques (e.g. Pearson correlation coefficients, Cluster Analysis and Principal Component Analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations to salinity (including organic matter associated parameters) and total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to be very dependent on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of this monitoring case study in a coastal zone, it was shown that SDR, associated with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help to estimate the specific nature of the runoff and related water quality parameters.
Keywords: coastal zones, monitoring, road runoff pollution, salt deposition
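A minimal sketch of the exploratory data analysis named above (Pearson correlations, hierarchical clustering of the parameters into two groups, and PCA on the samples) using pandas, SciPy and scikit-learn; the column names are illustrative assumptions, not the monitored parameter list.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# df: one row per runoff sample, hypothetical columns such as
# ["salinity", "chloride", "cod", "tss", "zn", "cu", "adp_hours", "wind_speed"]
def explore_runoff(df: pd.DataFrame, n_components=2):
    corr = df.corr(method="pearson")                  # pollutant inter-correlations
    # Cluster parameters on a 1 - |r| distance (salinity-related vs TSS-related groups).
    condensed = squareform((1 - corr.abs()).values, checks=False)
    labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
    clusters = dict(zip(corr.columns, labels))
    # PCA on standardised samples.
    scores = PCA(n_components=n_components).fit_transform(
        StandardScaler().fit_transform(df))
    return corr, clusters, scores
```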
Procedia PDF Downloads 239
155 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis
Authors: Wang Shuai, Su Rong, K. J.Tseng, V. Viswanathan, S. Ramakrishna
Abstract:
The More-Electric-Aircraft concept in the aircraft industry places an increasing demand on embedded starter/generators (ESG). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of these challenges, squirrel cage induction machines (SCIM) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components, and low torque ripple. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resulting non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions, and the corresponding observations have been elaborated. The developed mathematical procedures also proved to be an effective framework for optimization among electromagnetic, thermal and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resulting machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the available space for embedding such machines inside an engine is usually short in length, an axial air gap arrangement appears more appealing than its radial gap counterpart. The aforementioned approach has been adopted in case studies of designing series of AFIMs and RFIMs, respectively, with increasing power ratings, and the following observations were obtained. Under the strict rotor diameter limitation, the AFIM extended axially to provide the increased slot areas, while the RFIM expanded radially with the same axial length. Beyond certain power ratings, the AFIM led to a long cylinder geometry, while the RFIM topology resulted in the desired short disk shape. Besides the different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradation in power factor, torque ripple and rated slip with increased power ratings. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline to assist potential users in deciding between AFIM and RFIM for relevant applications.
Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine
Procedia PDF Downloads 455
154 Forests, the Sanctuaries to Specialist and Rare Wild Native Bees at the Foothills of Western Himalayas
Authors: Preeti Virkar, V. P. Uniyal, Vinod Kumar Bhatt
Abstract:
With a 50% decline in managed honey bee hives across Europe and America, farmers and landscape managers are turning to native wild bees for their essential ecosystem service of pollination. Wild bee populations are also under threat due to rapid land use changes driven by anthropogenic activities. With an escalating human population expected to reach 9.0 billion by 2050, human-induced land use changes are predicted to further deteriorate the habitats of numerous species by the turn of this century. The status of bees is uncertain, especially in the tropical regions of the world, which complicates assessment of the global pollinator decline and of the essential services bees provide to wild and managed flora. Our investigation compares wild native bee diversity and status in forests and agroecosystems in the Doon Valley landscape, situated at the foothills of the Himalayan ranges, Uttarakhand, India. We ask whether (1) natural habitats are a refuge for richer and rarer bee communities than agroecosystems, (2) agroecosystems closer to natural habitats support richer bee communities than those farther away, and (3) polyculture farms support richer bee communities than monocultures. Data were collected using observation and pan-trap sampling from February to May, 2012 to 2014. We recorded 43 species of bees in the Doon Valley, belonging to five families: Megachilidae, Apidae, Andrenidae, Halictidae and Colletidae. A multinomial model approach was used to classify the bees across the two habitats; forests were shown to support a greater proportion of specialist species (26%, n = 11) than agroecosystems (7%, n = 3). The valley had many species categorized as rare (58%, n = 25) and very few generalists (9%, n = 4). A linear regression model run on our data demonstrated higher bee diversity in agroecosystems in close proximity to forests (H' for < 200 m = 1.60) compared to those further away (H' for > 600 m = 0.56) (R² = 0.782, SE = 0.148, p value = 0.004). Organic agriculture supported significantly greater species richness in comparison to conventional farms (Mann-Whitney U test, n1 = 33, n2 = 35; P = 0.001). Forest ecosystems are a refuge for rare specialist groups and support bee communities in nearby agroecosystems. The findings of our investigation demonstrate the importance of natural habitats as a potential refuge for rare native wild bee pollinators. Polyculture in the valley behaves similarly to natural habitats and supports diverse bee communities in comparison to conventional monocultures. Our study suggests that farming communities adopt diverse organic agriculture systems to attract wild pollinators beneficial for better crop production. Forests are sanctuaries for bees to nest, forage, and breed. Therefore, our findings also suggest that landscape managers should not only preserve protected areas but also enhance floral diversity in semi-natural and urban areas.
Keywords: native bees, pollinators, polyculture, agroecosystem, natural habitat, diversity, monoculture, specialists, generalists
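A minimal sketch of the two tests reported above: a linear regression of Shannon diversity (H') on distance to forest, and a Mann-Whitney U comparison of organic versus conventional farms. All numbers below are placeholders standing in for the field data.

```python
import numpy as np
from scipy import stats

# Hypothetical site-level data: distance of each farm to the forest edge (m)
# and Shannon diversity (H') of the bees recorded there.
distance_m = np.array([50, 120, 180, 250, 320, 400, 480, 560, 650, 720], float)
shannon_h = np.array([1.62, 1.55, 1.48, 1.30, 1.12, 0.95, 0.80, 0.70, 0.60, 0.52])

reg = stats.linregress(distance_m, shannon_h)
print(f"slope={reg.slope:.4f}, R^2={reg.rvalue**2:.3f}, p={reg.pvalue:.4f}")

# Hypothetical species-richness samples for organic vs conventional farms.
organic = np.array([12, 15, 11, 14, 13, 16, 12, 15])
conventional = np.array([7, 9, 8, 6, 10, 7, 8, 9])
u, p = stats.mannwhitneyu(organic, conventional, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.4f}")
```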
Procedia PDF Downloads 217
153 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis
Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński
Abstract:
The application of carbon materials in the branches of the electrochemical industry shows an increasing tendency each year due to the many interesting properties they possess. These are, among others, a well-developed specific surface, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity and chemical resistance. All these properties allow for their effective use, among others in supercapacitors, which can store electric charges of the order of 100 F due to carbon electrodes constituting the capacitor plates. Coals (including expanded graphite, carbon black, graphite carbon fibers, activated carbon) are commonly used in electrochemical methods of removing oil derivatives from water after tanker disasters, e.g. phenols and their derivatives by their electrochemical anodic oxidation. Phenol can occupy practically the entire surface of carbon material and leave the water clean of hydrophobic impurities. Regeneration of such electrodes is also not complicated, it is carried out by electrochemical methods consisting in unblocking the pores and reducing resistances, and thus their reactivation for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, while due to the limited capacity it offers (372 mAh g-1), new solutions are sought that meet both capacitive, efficiency and economic criteria. Increasingly, biodegradable materials, green materials, biomass, waste (including agricultural waste) are used in order to reuse them and reduce greenhouse effects and, above all, to meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, wheat, rice, and corn waste, e.g. from agricultural, paper and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (including chemical, thermal, electrochemical). Starch is a biodegradable polysaccharide that consists of polymeric units such as amylose and amylopectin that build an ordered (linear) and amorphous (branched) structure of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of coal in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and a lack of understanding of material synthesis. In the context of catalytic applications, the integrity of carbon environmental management properties and parameters such as metal conductivity range and bond sequence management should be characterized. Such data, along with surface and textured information, can form the basis for the provision of network support services.Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell
Procedia PDF Downloads 174
152 Altering the Solid Phase Speciation of Arsenic in Paddy Soil: An Approach to Reduce Rice Grain Arsenic Uptake
Authors: Supriya Majumder, Pabitra Banik
Abstract:
The fate of arsenic (As) in the soil-plant environment is a critical emerging issue with threatening implications for human health. Assessing the dynamics of As among soil solid components indicates its potential availability for plant uptake. In the present study, we applied an improved Sequential Extraction Procedure (SEP) to identify the solid-phase speciation of As in paddy soil under variable soil environmental conditions during two consecutive seasons of rice cultivation. We coupled gradients of water management practices with fertilizer amendments to assess changes in the partitioning of As through a field experiment during the monsoon and post-monsoon seasons using two rice cultivars. Water management regimes were varied based on the method of rice cultivation: conventional (waterlogged) versus the System of Rice Intensification, SRI (saturated). Fertilizer amendments comprised the nutrient treatments absolute control, NPK-RD, NPK-RD + calcium silicate, NPK-RD + ferrous sulfate, farmyard manure (FYM), FYM + calcium silicate, FYM + ferrous sulfate, vermicompost (VC), VC + calcium silicate, and VC + ferrous sulfate. After harvest, soil samples were sequentially extracted to estimate the partitioning of As among the different fractions: exchangeable (F1), specifically sorbed (F2), As bound to amorphous Fe oxides (F3), As bound to crystalline Fe oxides (F4), organic matter (F5) and the residual phase (F6). Results showed that the major proportions of As were found in F3, F4 and F6, whereas F1 exhibited the lowest proportion of total soil As. Among the nutrient treatment-mediated changes in As fractions, the application of organic manure and ferrous sulfate was found to significantly restrict the release of As from the exchangeable phase. Meanwhile, the conventional practice produced a much higher release of As from F1 compared to SRI, which may substantially increase the environmental risk. In contrast, the SRI practice was found to retain a significantly higher proportion of As in the F2, F3, and F4 phases, resulting in restricted mobilization of As. This was clearly reflected in rice grain As bioavailability, with reductions in grain As concentration of 33% and 55% under SRI relative to the conventional treatment (p < 0.05) during the monsoon and post-monsoon seasons, respectively. A prediction of rice grain As bioavailability based on a linear regression model was also performed. Results demonstrated that rice grain As concentration was positively correlated with the As concentration in F1 and negatively correlated with F2, F3, and F4, with a satisfactory level of variation explained (p < 0.001). Finally, we conclude that F1, F2, F3 and F4 are the major soil As fractions that may critically govern the potential availability of As in soil, and we suggest that rice cultivation under the SRI treatment carries a particularly low risk of As availability in soil. Such comprehensive information may be useful for adopting management practices for rice grown in contaminated soil, particularly with respect to environmental concerns.
Keywords: arsenic, fractionation, paddy soil, potential availability
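A minimal sketch of the grain-As prediction step described above: an ordinary least squares regression of grain As on the sequentially extracted fractions F1-F4 using statsmodels. The data frame layout and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per plot, with hypothetical columns for grain As (mg/kg)
# and the sequentially extracted soil fractions F1-F4 (mg/kg).
def fit_grain_as_model(df: pd.DataFrame):
    model = smf.ols("grain_as ~ F1 + F2 + F3 + F4", data=df).fit()
    # Expected signs from the study: positive coefficient on F1 (exchangeable),
    # negative coefficients on F2-F4 (specifically sorbed / Fe-oxide bound).
    return model

# usage: print(fit_grain_as_model(df).summary())
```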
Procedia PDF Downloads 123
151 The Increasing Trend in Research Among Orthopedic Residency Applicants is Significant to Matching: A Retrospective Analysis
Authors: Nickolas A. Stewart, Donald C. Hefelfinger, Garrett V. Brittain, Timothy C. Frommeyer, Adrienne Stolfi
Abstract:
Orthopedic surgery is currently considered one of the most competitive specialties that medical students can apply to for residency training. As evidenced by increasing United States Medical Licensing Examination (USMLE) scores, overall grades, and publication, presentation, and abstract numbers, this specialty is becoming increasingly competitive. The recent change of USMLE Step 1 scores to pass/fail has created additional challenges for medical students planning to apply for orthopedic residency. Until now, these scores have been a tool used by residency programs to screen applicants as an initial factor in determining the strength of their application. With USMLE Step 1 converting to a pass/fail grading criterion, the question remains as to what will take its place on the ERAS application. The primary objective of this study is to determine the trends in the number of research projects, abstracts, presentations, and publications among orthopedic residency applicants. Secondly, this study seeks to determine whether there is a relationship between the number of research projects, abstracts, presentations, and publications and match rates. The researchers utilized the National Resident Matching Program's Charting Outcomes in the Match between 2007 and 2022 to identify mean publication and research project numbers for allopathic and osteopathic US orthopedic surgery senior applicants. A paired t-test was performed on the mean number of publications and research projects for matched and unmatched applicants. Additionally, simple linear regressions within matched and unmatched applicants were used to determine the association between year and the numbers of abstracts, presentations, publications, and research projects. To determine whether the increase in the numbers of abstracts, presentations, publications, and research projects differs significantly between matched and unmatched applicants, an analysis of covariance was used with an interaction term added to the model, which represents the test for the difference between the slopes of the two groups. The data show that from 2007 to 2022, the average number of research publications increased from 3 to 16.5 for matched orthopedic surgery applicants. The paired t-test had a significant p-value of 0.006 for the number of research publications between matched and unmatched applicants. In conclusion, the average number of publications for orthopedic surgery applicants increased significantly for matched and unmatched applicants from 2007 to 2022. Moreover, this increase has accelerated in recent years, as evidenced by an increase of only 1.5 publications from 2007 to 2011 versus 5.0 publications from 2018 to 2022. The number of abstracts, presentations, and publications is a significant factor in an applicant's likelihood of successfully matching into an orthopedic residency program. With USMLE Step 1 being converted to pass/fail, the researchers expect that students and program directors will place increased importance on additional factors that can help applicants stand out. This study demonstrates that research will be a primary component in stratifying future orthopedic surgery applicants, and it suggests that the growth in the average number of research publications will continue to accelerate. Further study is required to determine whether this growth is sustainable.
Keywords: publications, orthopedic surgery, research, residency applications
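A minimal sketch of the slope-difference test described above: an OLS model of mean publications on year with a year-by-matched interaction (the ANCOVA interaction term), where the interaction coefficient tests whether the publication trend differs between matched and unmatched applicants. The data frame layout is assumed, built from the yearly Charting Outcomes figures.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per Charting Outcomes year and applicant group, hypothetical
# columns: year, matched (0/1), mean_publications.
def slope_difference_test(df: pd.DataFrame):
    # The year:matched coefficient is the difference between the two groups' slopes.
    model = smf.ols("mean_publications ~ year * matched", data=df).fit()
    return model.params["year:matched"], model.pvalues["year:matched"]
```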
Procedia PDF Downloads 131
150 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visit, and ambulatory care was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific disease-related health conditions have a higher probability of healthcare use and incur greater costs than those without specific comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In a specific referent case, hypertension was costliest (44% had this condition with an additional annual cost of $715 [$678/$753]). 
If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
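A minimal sketch of the two model types described (logistic regression for the probability of any healthcare use, and a GLM with gamma family and log link for total cost), plus the back-of-the-envelope arithmetic behind the $2.5 billion figure. Column names, covariates, and the Canadian adult population figure are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per adult living with obesity, hypothetical columns:
# any_use (0/1), total_cost ($), hypertension (0/1), age, sex.
def cost_models(df: pd.DataFrame):
    use = smf.logit("any_use ~ hypertension + age + C(sex)", data=df).fit(disp=0)
    cost = smf.glm("total_cost ~ hypertension + age + C(sex)",
                   data=df[df["total_cost"] > 0],
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    return use, cost

# Rough population-level arithmetic behind the ~$2.5B estimate (assumed inputs):
adults_canada = 30_000_000          # assumed adult population
obesity_rate = 0.26                 # from the abstract
hypertension_share = 0.44           # from the abstract
extra_cost_per_person = 715         # $/year, referent case from the abstract
total = adults_canada * obesity_rate * hypertension_share * extra_cost_per_person
print(f"~${total / 1e9:.1f} billion per year")
```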
Procedia PDF Downloads 148
149 Phytochemical and Vitamin Composition of Wild Edible Plants Consumed in South West Ethiopia
Authors: Abebe Yimer, Sirawdink Fikereyesus Forsido, Getachew Addis, Abebe Ayelign
Abstract:
Background: Oxidative stress is an important health problem as it induces chronic diseases such as cancer, cardiovascular disease, diabetes, and neurodegenerative disease. Plant-sourced natural antioxidants have gained attention as synthetic antioxidants negatively impact human health. Wild edible plants are a cheap source of dietary medicine, mainly in rural communities in south-west Ethiopia and elsewhere in the country. Thus, the study aimed to determine the total phenol, flavonoid, antioxidant, vitamin C, and beta-carotene content of the wild edible plants Solanum nigrum L., Vigna membranacea A. Rich, Dioscorea praehensilis Benth., Trilepisium madagascariense D.C., and Cleome gynandra L. Methods: Methanol was used to extract samples of the oven-dried edible plants. Total phenolic content (TPC) was determined using the Folin-Ciocalteu method, whereas total flavonoid content (TFC) was determined using the aluminium chloride colorimetric method. Antioxidant activities were evaluated in vitro using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays. Additionally, beta-carotene was assessed using a spectrophotometric technique, whilst vitamin C was determined using a titration approach. Results: Total flavonoid content ranged from 0.85±0.03 to 11.25±0.01 mg CE/g in D. praehensilis Benth. tuber and C. gynandra L., respectively. Total phenolic content varied from 0.25±0.06 GAE/g in D. praehensilis Benth. tuber to 35.73±2.52 GAE/g in S. nigrum L. leaves. In the DPPH test, the highest antioxidant value (87.65%) was obtained in S. nigrum L. leaves, whereas the lowest antioxidant activity (50.12%) was found in D. praehensilis Benth. tuber. Similarly, in the FRAP assay, D. praehensilis Benth. tuber showed the lowest reducing potential (49.16±2.13 mM Fe2+/100 g), whilst the highest reducing potential was found in S. nigrum L. leaves (188.12±1.13 mM Fe2+/100 g). The beta-carotene content ranged from 11.81±0.00 mg/100g in D. praehensilis Benth. tubers to 34.49±0.95 mg/100g in V. membranacea A. Rich leaves. The concentration of vitamin C ranged from 10.00±0.61 mg/100g in D. praehensilis Benth. tubers to 45±1.80 mg/100g in V. membranacea A. Rich leaves. The results showed high positive linear correlations between the TPC and TFC of the WEPs (r = 0.828), as well as between FRAP and total phenolic content (r = 0.943) and between FRAP and vitamin C (r = 0.928). Conclusion: These findings showed that the total phenolic and flavonoid contents of Solanum nigrum L. and Cleome gynandra L., respectively, are abundant. These plants may serve as a natural supply of dietary antioxidants, which may be useful in preventing oxidative stress. The study's findings also showed that Vigna membranacea A. Rich leaves are a cheap source of vitamin C and beta-carotene for people who consume these wild greens. Additional research on the in vivo antioxidant activity, toxicological analysis, and promotion of these wild food plants for agricultural production should be considered.
Keywords: antioxidant activity, beta-carotene, flavonoids, phenolic content, and vitamin C
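A small sketch of the DPPH percent-inhibition calculation and the Pearson correlations reported above (TPC vs TFC, FRAP vs TPC, FRAP vs vitamin C). The absorbance readings and per-plant values are placeholders, not the study's measurements.

```python
import numpy as np
import pandas as pd

def dpph_inhibition(abs_control: float, abs_sample: np.ndarray) -> np.ndarray:
    # % inhibition = (A_control - A_sample) / A_control * 100
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical per-plant assay results.
data = pd.DataFrame({
    "TPC":  [35.7, 20.1, 12.4, 5.6, 0.25],        # GAE/g
    "TFC":  [9.8, 11.25, 6.1, 2.4, 0.85],         # mg CE/g
    "FRAP": [188.1, 150.3, 110.5, 75.0, 49.2],    # mM Fe2+/100 g
    "VitC": [30.0, 45.0, 25.0, 15.0, 10.0],       # mg/100 g
})
print(data.corr(method="pearson").round(3))
print(dpph_inhibition(0.820, np.array([0.101, 0.205, 0.310])))
```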
Procedia PDF Downloads 102
148 The Association between Attachment Styles, Satisfaction of Life, Alexithymia, and Psychological Resilience: The Mediational Role of Self-Esteem
Authors: Zahide Tepeli Temiz, Itir Tari Comert
Abstract:
Attachment patterns based on early emotional interactions between infant and primary caregiver continue to be influential in adult life, in terms of the mental health and behaviors of individuals. Several studies reveal that infant-caregiver relationships, beyond shaping attachment styles, influence affect regulation, coping with stressful and negative situations, general life satisfaction, and self-image in adulthood. The present study aims to examine the relationships between university students' attachment styles and their self-esteem, alexithymic features, life satisfaction, and level of resilience. In line with this aim, the hypothesis that attachment styles (anxious and avoidant) predict life satisfaction, self-esteem, alexithymia, and psychological resilience was tested. Additionally, Structural Equation Modeling (SEM) was conducted to investigate the mediational role of self-esteem in the relationship between attachment styles and alexithymia, life satisfaction, and resilience. This model was examined with path analysis. The research sample consists of 425 university students receiving education in several regions of Turkey. Participants who signed the informed consent completed the Demographic Information Form, Experiences in Close Relationships-Revised, Rosenberg Self-Esteem Scale, The Satisfaction with Life Scale, Toronto Alexithymia Scale, and Resilience Scale for Adults. According to the results, the anxious and avoidant dimensions of insecure attachment predicted self-esteem scores and alexithymia in a positive direction. On the other hand, these dimensions of attachment predicted life satisfaction in a negative direction. The results of the linear regression analysis indicated that anxious and avoidant attachment styles did not predict resilience. This result does not support the theory and research indicating a relationship between attachment style and psychological resilience. The results of the path analysis revealed the mediational role of self-esteem in the relation between anxious and avoidant attachment styles and life satisfaction. In addition, the SEM analysis indicated an indirect effect of attachment styles on alexithymia and resilience, in addition to their direct effects. These findings support the research hypothesis concerning the mediating role of self-esteem. Attachment theorists suggest that early attachment experiences, including supportive and responsive family interactions, have an effect on resilience to harmful situations in adult life, on the ability to identify, describe, and regulate emotions, and on general satisfaction with life. Several studies examining the relationship between attachment styles and life satisfaction, alexithymia, and psychological resilience draw attention to the mediational role of self-esteem. The results of this study support the theory that attachment patterns, with the mediation of self-image, influence the emotional, cognitive, and behavioral regulation of a person throughout adulthood. Therefore, it is thought that any intervention aimed at repairing the attachment relationship will increase self-esteem, life satisfaction, and resilience on the one hand, and decrease alexithymic features on the other.
Keywords: alexithymia, anxious attachment, avoidant attachment, life satisfaction, path analysis, resilience, self-esteem, structural equation
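The study fitted the full path model with SEM. As a simpler illustration of the mediation idea, the sketch below computes one indirect path (anxious/avoidant attachment to self-esteem to life satisfaction) with ordinary regressions and the product-of-coefficients approach; the column names are assumed, and a dedicated SEM package would be needed to reproduce the reported model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per participant, hypothetical columns:
# anxious, avoidant, self_esteem, life_satisfaction.
def indirect_effect(df: pd.DataFrame):
    # Path a: attachment -> mediator; Path b: mediator -> outcome (controlling for attachment).
    a_model = smf.ols("self_esteem ~ anxious + avoidant", data=df).fit()
    b_model = smf.ols("life_satisfaction ~ self_esteem + anxious + avoidant",
                      data=df).fit()
    a = a_model.params["anxious"]
    b = b_model.params["self_esteem"]
    direct = b_model.params["anxious"]
    return {"indirect (a*b)": a * b, "direct": direct, "total": a * b + direct}
```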
Procedia PDF Downloads 195
147 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslide is a geomorphic process that plays an essential role in the evolution of the hill-slope and long-term landscape evolution. But its abrupt nature and the associated catastrophic forces of the process can have undesirable socio-economic impacts, like substantial economic losses, fatalities, ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1person /100 sq. Km and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final produced landslide susceptibility maps (LSMs) with better accuracy could be used for land-use planning to prevent future losses. Dharamshala, a part of North-western Himalaya, is one of the fastest-growing tourism hubs with a total population of 30,764 according to the 2011 census and is amongst one of the hundred Indian cities to be developed as a smart city under PM’s Smart Cities Mission. A total of 209 landslide locations were identified in using high-resolution linear imaging self-scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using information value (Info Val), and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method, in which information values were calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixel per parameter (S/N). Using this information values all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on ‘mean and neighbour’ strategy for the construction of fuzzifier (input) and defuzzifier (output) membership function (MF) structure, and the FR method is used for formulating if-then rules. Two types of membership structures were utilized for membership function Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSI for BG and TT were obtained applying membership function and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
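A minimal sketch of the bivariate information value computation described above, (Si/Ni)/(S/N) per factor class, applied to NumPy arrays standing in for a reclassified causative-factor raster and the landslide inventory mask; the array shapes and class codes are illustrative, and the natural log of the ratio is also commonly used as the information value.

```python
import numpy as np

def information_value(factor_classes: np.ndarray, landslide_mask: np.ndarray) -> dict:
    """factor_classes: integer raster of reclassified parameter classes.
    landslide_mask: boolean raster of mapped landslide pixels (same shape)."""
    S = landslide_mask.sum()          # total landslide pixels
    N = factor_classes.size           # total pixels for the parameter
    iv = {}
    for cls in np.unique(factor_classes):
        in_cls = factor_classes == cls
        Si = landslide_mask[in_cls].sum()   # landslide pixels in this class
        Ni = in_cls.sum()                   # pixels in this class
        iv[int(cls)] = (Si / Ni) / (S / N) if Ni and S else 0.0
    return iv

# The landslide susceptibility index (LSI) is then the per-pixel sum of the
# reclassified (information-value) rasters across all causative parameters.
```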
Procedia PDF Downloads 127
146 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market, and all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little data publicly available and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns and leads to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory networks for the prediction of corporate bond prices is discussed. Long Short-Term Memory networks (LSTM) have been widely used in the literature for various sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory function that traditional neural networks lack. In this study, a simple LSTM, a stacked LSTM, and a Masked LSTM-based model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which improved the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the Masked LSTM outperformed its two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored further within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
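A minimal Keras sketch of the lagged-window LSTM setup described (sliding windows of 3/7/14 past prices predicting the next one), shown here as a stacked LSTM; the EMD preprocessing, masking variant, and technical indicators are omitted, and the layer sizes, window length, and variable names are assumptions rather than the paper's configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(prices: np.ndarray, window: int):
    # Sliding windows: use `window` past prices to predict the next one.
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    return X[..., np.newaxis], y            # shape (samples, timesteps, features)

def build_stacked_lstm(window: int) -> keras.Model:
    model = keras.Sequential([
        layers.LSTM(64, return_sequences=True, input_shape=(window, 1)),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical usage with a scaled bond price series and a 14-day window:
# X, y = make_windows(scaled_prices, window=14)
# model = build_stacked_lstm(14)
# model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)
# rmse = np.sqrt(np.mean((model.predict(X_test).ravel() - y_test) ** 2))
```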
Procedia PDF Downloads 136