Search results for: electrical property
150 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), a method based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising way to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the difference in EHG signal features between term labor and preterm labor. A free-access database was used with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for recognition of the two groups of recordings was provided. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference for the other features between the term-labor and preterm-labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
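For readers who want to reproduce part of the feature set named above, the sketch below illustrates the preprocessing and a few of the listed features for a single-channel EHG recording. The sampling rate, window length and the synthetic input are illustrative assumptions, not taken from the paper, and the snippet is a minimal sketch rather than the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def ehg_features(x, fs=20.0):
    """Minimal sketch of a few EHG features named in the abstract
    (RMS, zero-crossing count, peak/mean/median frequency).
    `fs` is an assumed sampling rate, not stated in the abstract."""
    # Band-pass Butterworth filter, 0.08-4 Hz, as described in the abstract
    b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)

    rms = np.sqrt(np.mean(xf ** 2))            # root mean square
    zc = int(np.sum(xf[:-1] * xf[1:] < 0))     # zero-crossing number

    f, pxx = welch(xf, fs=fs, nperseg=min(len(xf), 1024))  # power spectral density
    peak_freq = f[np.argmax(pxx)]                           # frequency of the spectral peak
    mean_freq = np.sum(f * pxx) / np.sum(pxx)               # spectral centroid
    cum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum, cum[-1] / 2.0)]    # splits spectral power in half

    return {"rms": rms, "zero_crossings": zc, "peak_freq": peak_freq,
            "mean_freq": mean_freq, "median_freq": median_freq}

if __name__ == "__main__":
    # Synthetic surrogate signal (random noise stands in for a 30-minute EHG trace)
    rng = np.random.default_rng(0)
    print(ehg_features(rng.standard_normal(20 * 60 * 30)))
```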
149 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor
Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang
Abstract:
To avoid the cross-talk issue of only resistive random access memory (RRAM) cell, one transistor and one resistor (1T1R) architecture with a TiO₂-based RRAM cell connected with solution barium zirconate nickelate (BZN) organic thin film transistor (OTFT) device is successfully demonstrated. The OTFT were fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The barium acetate, zirconium n-propoxide, and nickel II acetylacetone were synthesized by using the sol-gel method. After the BZN solution was completely prepared using the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated on the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. As for the manufacture of 1T1R configuration, the RRAM device was fabricated directly on drain electrodes of TFT device. A simple metal/insulator/metal structure, which consisting of Al/TiO₂/Au structures, was fabricated. First, Au was deposited to be a bottom electrode of RRAM device by RF magnetron sputtering system. Then, the TiO₂ layer was deposited on Au electrode by sputtering. Finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with the low threshold voltage of −1.1 V, good saturation mobility of 5 cm²/V s, and low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form 1T1R configuration with low power consumption of 1.3 μW, the low operation current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current with the different distribution of the internal oxygen vacancies used in the RRAM and 1T1R devices. Also, this phenomenon can be well explained by the proposed mechanism model. It is promising to make the 1T1R possible for practical applications of low-power active matrix flat-panel displays.Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel
148 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes
Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen
Abstract:
Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so as to obtain the same electric potential. Nevertheless, elevated temperatures are reached at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The high-pressure gases produced open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampere equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis allows determination of the threshold values of voltage and current density not to be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, therefore preventing the occurrence of outgassing. In addition, it provides an indication of the minimal number of bonding points. It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to any type of joint other than adhesive ones for wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades
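As a hedged sketch of the weakly coupled formulation described above (the exact boundary conditions and material laws used by the authors are not given in the abstract), the electro-quasi-static current problem and the transient heat conduction problem can be written as

\[
\nabla \cdot \mathbf{J} = 0, \qquad \mathbf{J} = \sigma \mathbf{E}, \qquad \mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t},
\]
\[
\rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + q_J, \qquad q_J = \mathbf{J} \cdot \mathbf{E},
\]

where \(\sigma\), \(k\), \(\rho\) and \(c_p\) are the electrical conductivity, thermal conductivity, density and specific heat of each material, \(V\) and \(\mathbf{A}\) are the scalar and vector potentials (the \(\partial \mathbf{A}/\partial t\) term carries the induced field from the Maxwell-Ampere equation), and the Joule heat \(q_J\) computed by the current sub-model is passed one way into the thermal sub-model, which is what "weakly coupled" means here.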
147 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through the food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism in streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Also, canopy cover percentages are estimated in situ with a spherical densitometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out in October and December 2015 and February 2016 (the next will be in April, June and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 °C (Oct) and 20.7 +/- 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 uS/cm; BR: 5.5 +/- 0.7 uS/cm; CA: 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 ug/L (BR), 66.6 ug/L (ES) and 51.5 ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 ug/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR) and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded the gross primary production, with negative net primary production). The GPP varied from 0-0.4 g/m².d (in Oct and Dec) and the R varied from 0.9-22.7 g/m².d (Oct) and from 0.9-7 g/m².d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of the ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism in pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
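For reference, the one-station approach mentioned above models the diel DO signal recorded at a single site. A common hedged form (the authors' exact parameterisation is not given in the abstract) is

\[
\frac{dC}{dt} = \mathrm{GPP}(t) - \mathrm{ER} + k_2 \left( C_s(T) - C \right),
\]

where \(C\) is the measured DO concentration, \(C_s(T)\) the temperature-dependent saturation concentration, \(k_2\) the reaeration coefficient obtained from the SF6 tracer, \(\mathrm{GPP}(t)\) the light-dependent gross primary production and \(\mathrm{ER}\) the ecosystem respiration; GPP, ER and hence NPP = GPP − ER are estimated by fitting this balance to the 10-minute DO time series.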
146 Utilizing Temporal and Frequency Features in Fault Detection of Electric Motor Bearings with Advanced Methods
Authors: Mohammad Arabi
Abstract:
The development of advanced technologies in the field of signal processing and vibration analysis has enabled more accurate analysis and fault detection in electrical systems. This research investigates the application of temporal and frequency features in detecting faults in electric motor bearings, aiming to enhance fault detection accuracy and prevent unexpected failures. The use of methods such as deep learning algorithms and neural networks in this process can yield better results. The main objective of this research is to evaluate the efficiency and accuracy of methods based on temporal and frequency features in identifying faults in electric motor bearings to prevent sudden breakdowns and operational issues. Additionally, the feasibility of using techniques such as machine learning and optimization algorithms to improve the fault detection process is also considered. This research employed an experimental method and random sampling. Vibration signals were collected from electric motors under normal and faulty conditions. After standardizing the data, temporal and frequency features were extracted. These features were then analyzed using statistical methods such as analysis of variance (ANOVA) and t-tests, as well as machine learning algorithms like artificial neural networks and support vector machines (SVM). The results showed that using temporal and frequency features significantly improves the accuracy of fault detection in electric motor bearings. ANOVA indicated significant differences between normal and faulty signals. Additionally, t-tests confirmed statistically significant differences between the features extracted from normal and faulty signals. Machine learning algorithms such as neural networks and SVM also significantly increased detection accuracy, demonstrating high effectiveness in timely and accurate fault detection. This study demonstrates that using temporal and frequency features combined with machine learning algorithms can serve as an effective tool for detecting faults in electric motor bearings. This approach not only enhances fault detection accuracy but also simplifies and streamlines the detection process. However, challenges such as data standardization and the cost of implementing advanced monitoring systems must also be considered. Utilizing temporal and frequency features in fault detection of electric motor bearings, along with advanced machine learning methods, offers an effective solution for preventing failures and ensuring the operational health of electric motors. Given the promising results of this research, it is recommended that this technology be more widely adopted in industrial maintenance processes.Keywords: electric motor, fault detection, frequency features, temporal features
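The sketch below illustrates the general workflow described above, namely temporal and frequency features extracted from vibration segments and fed to an SVM, using scikit-learn. The feature choices, window length, sampling rate and synthetic labels are illustrative assumptions, not the study's exact protocol or data.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def segment_features(seg, fs):
    """Temporal and frequency features for one vibration window (illustrative set)."""
    rms = np.sqrt(np.mean(seg ** 2))
    peak = np.max(np.abs(seg))
    crest = peak / rms                          # crest factor
    kurt = kurtosis(seg)                        # impulsiveness indicator
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    dom_freq = freqs[np.argmax(spec)]           # dominant spectral line
    centroid = np.sum(freqs * spec) / np.sum(spec)
    return [rms, peak, crest, kurt, dom_freq, centroid]

# Synthetic stand-in data: class 0 = "normal", class 1 = "faulty" (more impulsive)
fs, n_seg, seg_len = 12_000, 200, 2048
rng = np.random.default_rng(1)
X, y = [], []
for label in (0, 1):
    for _ in range(n_seg):
        seg = rng.standard_normal(seg_len)
        if label == 1:
            seg[::400] += 6.0                   # periodic impacts mimic a bearing defect
        X.append(segment_features(seg, fs))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```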
145 Characteristics of the Rocks Glacier Deposits in the Southern Carpathians, Romania
Authors: Petru Urdea
Abstract:
As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it works in interconnection with other subsystems, like the glacial, cliff, rocky slope and talus slope subsystems, which are sources of sediments. One characteristic is that for long periods of time it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers were identified. The vast majority of these rock glaciers, 74%, are talus rock glaciers, and 26% are debris rock glaciers. The area occupied by granites and granodiorites hosts 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the specifics of the deposits, everything bearing the imprint of the particular way the rocks respond to physical weathering processes, all in a periglacial regime. If in the domain of granites and granodiorites the blocks are large (of metric order, even 10 m³), in the domain of the metamorphic rocks only gneisses can yield similar sizes. Amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists and phyllites crop out in much smaller blocks, of decimetric order, mostly in the form of slabs. In the case of rock glaciers made up of large blocks, with a structure of the open-work type, the density and volume of voids between the blocks are greater, the smaller debris generating more compact structures with fewer voids. All this influences the thermal regime, associated with a certain type of air circulation during the seasons and the emergence of permafrost formation conditions. The rock glaciers are fed by rock falls, rock avalanches, debris flows and avalanches, so that the structure is heterogeneous, which is also reflected in the detailed topography of the rock glaciers. This heterogeneity is also influenced by the spatial assembly of the rock bodies in the supply area and, an element that cannot be omitted, the behavior of the rocks during periglacial weathering. The production of small gelifracts determines the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser and those at depth are finer, their characteristics being detectable by applying geophysical methods. The electrical resistivity tomography (ERT) and georadar (GPR) investigations carried out in the Făgăraş, Retezat and Parâng Mountains, each with a different lithological specificity, allowed the identification of some differentiations, including the presence of permafrost bodies.
Keywords: rock glacier deposits, structure, lithology, permafrost, Southern Carpathians, Romania
144 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing
Authors: Kedar Hardikar, Joe Varghese
Abstract:
Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly, to protecting electronic components by forming a "Faraday cage." The reliability requirements for the conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the conductive adhesive. The degradation of the adhesive is dependent upon the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature and high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to enable projection of test data and observed failures to predict field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of linear degradation in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in an electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, it is seen that the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. This method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the use of the conventional linear degradation approach can overestimate or underestimate the field performance. This work provides guidelines on the suitability of the linear degradation approximation for such varied applications.
Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model
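To make the consequence of the square-root-of-time behaviour concrete, consider a hedged illustration (not the authors' exact model): if the measured degradation grows as \(D(t) = A\sqrt{t}\), where the rate constant \(A\) carries the usual temperature and humidity acceleration, then the time to reach a fixed failure threshold \(D_f\) is \(t_f = (D_f/A)^2\), and the acceleration factor between test and field becomes

\[
AF = \frac{t_{f,\mathrm{field}}}{t_{f,\mathrm{test}}} = \left( \frac{A_{\mathrm{test}}}{A_{\mathrm{field}}} \right)^{2},
\]

i.e. the square of the ratio one would use under a linear-in-time assumption. Applying a linear-degradation acceleration factor to a square-root-of-time mechanism therefore over- or under-estimates field life, exactly as the abstract warns.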
143 Prismatic Bifurcation Study of a Functionally Graded Dielectric Elastomeric Tube Using Linearized Incremental Theory of Deformations
Authors: Sanjeet Patra, Soham Roychowdhury
Abstract:
In recent times, functionally graded dielectric elastomer (FGDE) has gained significant attention within the realm of soft actuation due to its dual capacity to exert highly localized stresses while maintaining its compliant characteristics on application of electro-mechanical loading. Nevertheless, the full potential of dielectric elastomer (DE) has not been fully explored due to their susceptibility to instabilities when subjected to electro-mechanical loads. As a result, study and analysis of such instabilities becomes crucial for the design and realization of dielectric actuators. Prismatic bifurcation is a type of instability that has been recognized in a DE tube. Though several studies have reported on the analysis for prismatic bifurcation in an isotropic DE tube, there is an insufficiency in studies related to prismatic bifurcation of FGDE tubes. Therefore, this paper aims to determine the onset of prismatic bifurcations on an incompressible FGDE tube when subjected to electrical loading across the thickness of the tube and internal pressurization. The analysis has been conducted by imposing two axial boundary conditions on the tube, specifically axially free ends and axially clamped ends. Additionally, the rigidity modulus of the tube has been linearly graded in the direction of thickness where the inner surface of the tube has a lower stiffness than the outer surface. The static equilibrium equations for deformation of the axisymmetric tube are derived and solved using numerical technique. The condition for prismatic bifurcation of the axisymmetric static equilibrium solutions has been obtained by using the linearized incremental constitutive equations. Two modes of bifurcations, corresponding to two different non-circular cross-sectional geometries, have been explored in this study. The outcomes reveal that the FGDE tubes experiences prismatic bifurcation before the Hessian criterion of failure is satisfied. It is observed that the lower mode of bifurcation can be triggered at a lower critical voltage as compared to the higher mode of bifurcation. Furthermore, the tubes with larger stiffness gradient require higher critical voltages for triggering the bifurcation. Moreover, with the increase in stiffness gradient, a linear variation of the critical voltage is observed with the thickness of the tube. It has been found that on applying internal pressure to a tube with low thickness, the tube becomes less susceptible to bifurcations. A thicker tube with axially free end is found to be more stable than the axially clamped end tube at higher mode of bifurcation.Keywords: critical voltage, functionally graded dielectric elastomer, linearized incremental approach, modulus of rigidity, prismatic bifurcation
142 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System
Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li
Abstract:
The solid oxide fuel cell (SOFC) is a promising green technology that can achieve a high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner to convert the chemical energy into thermal energy by combustion. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during operation of the SOFC power generation system. For an afterburner of the SOFC system, temperature control with good thermal uniformity is important. A burner with a well-designed geometry can usually achieve satisfactory performance. To design an afterburner for an SOFC system, computational fluid dynamics (CFD) simulation is a suitable tool. In this paper, the hydrogen combustion characteristics in an afterburner with simple geometry are studied using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels to improve the heat and flow field synergy in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studied the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner with three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion. The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed in a way that significantly contributes to the heat transfer, i.e. the field synergy angle should be as small as possible. In the study cases, the averaged synergy angle of the burner is about 85°, 84°, and 81°, respectively.
Keywords: afterburner, combustion, field synergy, solid oxide fuel cell
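For clarity, the field synergy angle quoted above is the local angle between the velocity vector and the temperature gradient, commonly defined as

\[
\beta = \arccos \frac{\mathbf{U} \cdot \nabla T}{\lvert \mathbf{U} \rvert \, \lvert \nabla T \rvert},
\]

and the reported values (about 85°, 84° and 81°) are domain-averaged angles. The smaller the average \(\beta\), the better the velocity and temperature-gradient fields cooperate and the stronger the convective heat transfer, which is why the design goal is to make \(\beta\) as small as possible.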
141 Policy Initiatives That Increase Mass-Market Participation of Fuel Cell Electric Vehicles
Authors: Usman Asif, Klaus Schmidt
Abstract:
In recent years, the development of alternate fuel vehicles has helped to reduce carbon emissions worldwide. As the number of vehicles will continue to increase in the future, the energy demand will also increase. Therefore, we must consider automotive technologies that are efficient and less harmful to the environment in the long run. Battery Electric Vehicles (BEVs) have gained popularity in recent years because of their lower maintenance, lower fuel costs, and lower carbon emissions. Nevertheless, BEVs show several disadvantages, such as slow charging times and lower range than traditional combustion-powered vehicles. These factors keep many people from switching to BEVs. The authors of this research believe that these limitations can be overcome by using fuel cell technology. Fuel cell technology converts chemical energy into electrical energy from hydrogen power and therefore serves as fuel to power the motor and thus replacing heavy lithium batteries that are expensive and hard to recycle. Also, in contrast to battery-powered electric vehicle technology, Fuel Cell Electric Vehicles (FCEVs) offer higher ranges and lower fuel-up times and therefore are more competitive with electric vehicles. However, FCEVs have not gained the same popularity as electric vehicles due to stringent legal frameworks, underdeveloped infrastructure, high fuel transport, and storage costs plus the expense of fuel cell technology itself. This research will focus on the legal frameworks for hydrogen-powered vehicles, and how a change in these policies may affect and improve hydrogen fueling infrastructure and lower hydrogen transport and storage costs. These policies may also facilitate reductions in fuel cell technology costs. In order to attain a better framework, a number of countries have developed conceptual roadmaps. These roadmaps have set out a series of objectives to increase the access of FCEVs to their respective markets. This research will specifically focus on policies in Japan, Europe, and the USA in their attempt to shape the automotive industry of the future. The researchers also suggest additional policies that may help to accelerate the advancement of FCEVs to mass-markets. The approach was to provide a solid literature review using resources from around the globe. After a subsequent analysis and synthesis of this review, the authors concluded that in spite of existing legal challenges that have hindered the advancement of fuel-cell technology in the automobile industry in the past, new initiatives that enhance and advance the very same technology in the future are underway.Keywords: fuel cell electric vehicles, fuel cell technology, legal frameworks, policies and regulations
140 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle
Authors: Aloke Bapli, Debabrata Seth
Abstract:
Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon having several physiochemical properties such as high mechanical strength, high surface area, strong thermal and electrical conductivity makes it an important candidate in various modern applications such as drug delivery, supercapacitors, sensors etc. GO has been used in the photothermal treatment of cancers and Alzheimer’s disease etc. The main idea to choose GO in our work is that it is a surface active molecule, it has a large number of hydrophilic functional groups such as carboxylic acid, hydroxyl, epoxide on its surface and in basal plane. So it can easily interact with organic fluorophores through hydrogen bonding or any other kind of interaction and easily modulate the photophysics of the probe molecules. We have used different spectroscopic techniques for our work. The Ground-state absorption spectra and steady-state fluorescence emission spectra were measured by using UV-Vis spectrophotometer from Shimadzu (model-UV-2550) and spectrofluorometer from Horiba Jobin Yvon (model-Fluoromax 4P) respectively. All the fluorescence lifetime and anisotropy decays were collected by using time-correlated single photon counting (TCSPC) setup from Edinburgh instrument (model: LifeSpec-II, U.K.). Herein, we described the photophysics of a hydrophilic molecule 7-(n,n׀-diethylamino) coumarin-3-carboxylic acid (7-DCCA) in the reverse micelles containing GO. It was observed that photophysics of dye is modulated in the presence of GO compared to photophysics of dye in the absence of GO inside the reverse micelles. Here we have reported the solvent relaxation and rotational relaxation time in GO containing reverse micelle and compare our work with normal reverse micelle system by using 7-DCCA molecule. Normal reverse micelle means reverse micelle in the absence of GO. The absorption maxima of 7-DCCA were blue shifted and emission maxima were red shifted in GO containing reverse micelle compared to normal reverse micelle. The rotational relaxation time in GO containing reverse micelle is always faster compare to normal reverse micelle. Solvent relaxation time, at lower w₀ values, is always slower in GO containing reverse micelle compare to normal reverse micelle and at higher w₀ solvent relaxation time of GO containing reverse micelle becomes almost equal to normal reverse micelle. Here emission maximum of 7-DCCA exhibit bathochromic shift in GO containing reverse micelles compared to that in normal reverse micelles because in presence of GO the polarity of the system increases, as polarity increases the emission maxima was red shifted an average decay time of GO containing reverse micelle is less than that of the normal reverse micelle. In GO containing reverse micelle quantum yield, decay time, rotational relaxation time, solvent relaxation time at λₑₓ=375 nm is always higher than λₑₓ=405 nm, shows the excitation wavelength dependent photophysics of 7-DCCA in GO containing reverse micelles.Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation
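As a reference for how the solvent and rotational relaxation times quoted above are usually obtained from TCSPC data (the authors' exact fitting functions are not given in the abstract), the time-resolved emission maxima and polarized decays are commonly reduced to

\[
C(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)}, \qquad
r(t) = \frac{I_{\parallel}(t) - G\, I_{\perp}(t)}{I_{\parallel}(t) + 2G\, I_{\perp}(t)},
\]

where \(\nu(t)\) is the emission maximum (in wavenumbers) of the time-resolved emission spectrum and \(C(t)\) decays with the solvent relaxation time, while \(I_{\parallel}\) and \(I_{\perp}\) are the parallel and perpendicular polarized intensities, \(G\) is the instrument correction factor, and \(r(t)\) decays with the rotational relaxation time of 7-DCCA inside the reverse micelle.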
139 Making Beehives More 'Intelligent' - The Case of Capturing, Reducing, and Managing Bee Pest Infestation in Hives through Modification of Hive Entrance Holes and the Installation of Multiple In-Hive Bee Pest Traps
Authors: Prince Amartey
Abstract:
Bees are clever creatures; thus, capturing bees implies that the hives are intelligent in the sense that they have all of the required conditions to attract and trap the bees. If the hive goes above and beyond to keep the bees in the hive and to keep the activities of in-hive pests to a minimum so that the bees can develop to their maximum potential, the hive is becoming, or is, more 'intelligent'. Some bee pests, such as small hive beetles, are endemic to Africa; however, the way we now extract honey, by cutting off the combs and pressing for honey, prevents the spread of these bees' insect enemies. However, as we move toward commercialization, when freshly harvested combs are returned to the hives following the adoption of frame and other systems, there is a need to consider strategies to manage the accompanying pest problems that arise with unprotected combs. The techniques for making hives more 'intelligent' are thus all the more important at present, given that the African apicultural business does not wish to encourage the use of pesticides in the hives. These include changing the hive's entrance holes in order to improve the bees' own mechanism for defending the entry sites, as well as capturing pests by setting exterior and in-hive traps to prevent pest infiltration into hives by any means feasible. Material and Methods: The following five (5) mechanisms are proposed to make the hives more 'intelligent'. i. The use of modified frames with five (5) beetle traps positioned horizontally on the vertical 'legs' to catch the beetles along the combs' surfaces (multiple in-hive bee pest traps). ii. Baited bioelectric frame traps, which have both vertical sections of the frame covered with a 3 mm mesh that allows pest entry but not bees; the pests are attracted by strips of honey comb, open brood and pollen on metal plates inserted horizontally on the vertical 'legs' of the frames, and an electrical 'mine' system is in place that electrocutes the pests as they step on the wires in the trap to enter the frame. iii. The ten rounded hive entrance holes are adapted so that the bees are able to police the entrances and prevent the entry of pests; the holes are arranged in two rows, one on top of the other. Results, Discussions and Conclusions: The techniques implemented decrease pest ingress, while the in-hive traps capture those pests that do gain entry into the hives. Furthermore, the stand alteration traps larvae and stops their growth into adults. As beekeeping commercialization grows throughout Africa, these initiatives will minimize insect infestation in hives and necessarily enhance honey output.
Keywords: bee pests, modified frames, multiple beetle trap, baited bioelectric frame traps
138 Real-Time Working Environment Risk Analysis with Smart Textiles
Authors: Jose A. Diaz-Olivares, Nafise Mahdavian, Farhad Abtahi, Kaj Lindecrantz, Abdelakram Hafid, Fernando Seoane
Abstract:
Despite new recommendations and guidelines for the evaluation of occupational risk assessments and their prevention, work-related musculoskeletal disorders are still one of the biggest causes of work activity disruption, productivity loss, sick leave and chronic work disability. It affects millions of workers throughout Europe, with a large-scale economic and social burden. These specific efforts have failed to produce significant results yet, probably due to the limited availability and high costs of occupational risk assessment at work, especially when the methods are complex, consume excessive resources or depend on self-evaluations and observations of poor accuracy. To overcome these limitations, a pervasive system of risk assessment tools in real time has been developed, which has the characteristics of a systematic approach, with good precision, usability and resource efficiency, essential to facilitate the prevention of musculoskeletal disorders in the long term. The system allows the combination of different wearable sensors, placed on different limbs, to be used for data collection and evaluation by a software solution, according to the needs and requirements in each individual working environment. This is done in a non-disruptive manner for both the occupational health expert and the workers. The creation of this solution allows us to attend different research activities that require, as an essential starting point, the recording of data with ergonomic value of very diverse origin, especially in real work environments. The software platform is here presented with a complimentary smart clothing system for data acquisition, comprised of a T-shirt containing inertial measurement units (IMU), a vest sensorized with textile electronics, a wireless electrocardiogram (ECG) and thoracic electrical bio-impedance (TEB) recorder and a glove sensorized with variable resistors, dependent on the angular position of the wrist. The collected data is processed in real-time through a mobile application software solution, implemented in commercially available Android-based smartphones and tablet platforms. Based on the collection of this information and its analysis, real-time risk assessment and feedback about postural improvement is possible, adapted to different contexts. The result is a tool which provides added value to ergonomists and occupational health agents, as in situ analysis of postural behavior can assist in a quantitative manner in the evaluation of work techniques and the occupational environment.Keywords: ergonomics, mobile technologies, risk assessment, smart textiles
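As a minimal illustration of the kind of real-time postural indicator such a system can compute (an assumption chosen for illustration, not the project's actual algorithm), the sketch below estimates trunk inclination from a torso-mounted IMU accelerometer and reports the fraction of time spent above an ergonomic threshold:

```python
import numpy as np

def trunk_inclination_deg(acc):
    """Angle between the sensor's vertical axis and gravity, in degrees.
    `acc` is an (N, 3) array of accelerometer samples in any consistent unit;
    assumes the z-axis points along the trunk when standing upright."""
    norm = np.linalg.norm(acc, axis=1)
    cos_theta = np.clip(acc[:, 2] / norm, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def fraction_above_threshold(acc, threshold_deg=60.0):
    """Fraction of samples with trunk inclination above an illustrative threshold."""
    angles = trunk_inclination_deg(acc)
    return float(np.mean(angles > threshold_deg))

# Synthetic example: mostly upright samples followed by a period of forward bending
rng = np.random.default_rng(2)
upright = np.column_stack([rng.normal(0, 0.05, 500),
                           rng.normal(0, 0.05, 500),
                           np.full(500, 9.81)])
bent = np.column_stack([np.full(200, 9.3),
                        rng.normal(0, 0.05, 200),
                        np.full(200, 3.0)])
print(f"time above threshold: {fraction_above_threshold(np.vstack([upright, bent])):.0%}")
```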
137 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption
Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda
Abstract:
The irrigation systems of traditional agriculture, such as gravity-fed irrigation, produce a great waste of water because, generally, there is no control over the amount of water supplied in relation to the water needed. The AIRON Project tries to solve this problem by implementing an IoT-based system to sensor the irrigation plots so that the state of the crops and the amount of water used for irrigation can be known remotely. The IoT system consists of a sensor network that measures the humidity of the soil, the weather conditions (temperature, relative humidity, wind and solar radiation) and the irrigation water flow. The communication between this network and a central gateway is conducted by means of long-range wireless communication that depends on the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The problem with this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the external batteries' use time. An ESP32 SOC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa hat. The second plot is located 18 km away from the gateway, a range that hampers the use of LoRa technology. In order to establish reliable communication in this case, the long-term evolution (LTE) standard is used, which makes it possible to reach much greater distances by using the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect sensor data. All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms that are currently under development should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots by using Commercial-Off-The-Shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electrical power at the site.Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming
136 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multiple-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current of as low as 35 nA, compared to a conventional PD dark current of 1 µA with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V. The responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly and circularly confines light into an ultra-small volume in the form of whispering-gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundaries to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated with a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on microdisks with radii of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser applications or other passive optical components, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk essentially supports many higher-order modes in the radial direction. However, for detector applications, this is not an issue because the local light absorption is mode-insensitive. Light power carried by all modes is expected to be converted into photocurrent. Another benefit of using a microdisk is that the power circulation inside avoids the need to introduce a reflector. A complete simulation model with all involved materials taken into account is established to study the promise of microdisk structures for photodetectors by using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions to further improve the device performance are also discussed.
Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors
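To put the reported responsivity in context (a hedged estimate: the operating wavelength is not stated in the abstract, so 1550 nm, typical for Ge-on-Si photodetectors, is assumed), the external quantum efficiency follows from

\[
R = \frac{I_{ph}}{P_{in}} = \frac{\eta\, q\, \lambda}{h c} \approx \eta \cdot \frac{\lambda\,[\mu\mathrm{m}]}{1.24}\ \mathrm{A/W}
\quad \Rightarrow \quad
\eta \approx \frac{1.24 \times 0.6}{1.55} \approx 0.48,
\]

i.e. the measured 0.6 A/W corresponds to an external quantum efficiency of roughly 48% under this wavelength assumption.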
135 Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires
Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan
Abstract:
Metallic nanowires with sub-micron and hundreds-of-nanometers diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nano-scale metallic nanowires is tedious and requires sophisticated and careful experimentation to be performed within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Nanoscale devices are also needed for placing the nanowires, loading them with the intended conditions, and obtaining the load-deflection data during the deformation within the high-powered microscopy environment, all of which pose significant challenges. Even picking the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Mechanical characterizations through experimental methods for such nanowires are still very limited. Various techniques at different levels of fidelity, resolution, and induced errors have been attempted by materials science and nanomaterials researchers. The methods for determining the load and deflection within the nanoscale devices also pose a significant problem. The state of the art is thus still in its infancy. All these factors result in, and are reflected by, the wide differences in the characterization curves and the reported properties in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress-strain characteristic curves for nickel nanowires. Nickel nanowires in the diameter range of 220-270 nm were obtained in our laboratory via electrodeposition, a solution-based, template method followed in the present work for growing 1-D nickel nanowires. Process variables such as the presence and intensity of a magnetic field and varying electrical current density during the electrodeposition process were found to influence the morphological and physical characteristics, including the crystal orientation and size of the grown nanowires. To further understand the correlation and influence of the electrodeposition process variables and the associated structural features of our grown nickel nanowires on their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, testing methodology, nanoscale testing device, load-deflection characteristics, microscopy images of failure progression, and the subsequent stress-strain curves are discussed and presented.
Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel
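The conversion from a measured load-deflection record to an engineering stress-strain curve is straightforward; the sketch below shows it for a cylindrical nanowire, with the diameter taken from the reported 220-270 nm range and a hypothetical gauge length and made-up load-deflection points used purely for illustration (neither is given in the abstract).

```python
import numpy as np

def stress_strain(load_uN, elongation_nm, diameter_nm=250.0, gauge_length_um=5.0):
    """Engineering stress (GPa) and strain (-) for a cylindrical nanowire.
    diameter_nm sits in the reported 220-270 nm range; gauge_length_um is
    a hypothetical value for illustration only."""
    area_m2 = np.pi * (diameter_nm * 1e-9 / 2.0) ** 2           # cross-sectional area
    stress_gpa = (np.asarray(load_uN) * 1e-6) / area_m2 / 1e9   # uN -> N, Pa -> GPa
    strain = np.asarray(elongation_nm) * 1e-9 / (gauge_length_um * 1e-6)
    return stress_gpa, strain

# Illustrative (made-up) load-deflection points, e.g. digitised from an in-SEM test
load = [0.0, 20.0, 40.0, 60.0, 80.0]        # micro-newtons
elong = [0.0, 12.0, 26.0, 45.0, 80.0]       # nanometres
sigma, eps = stress_strain(load, elong)
E_est = (sigma[1] - sigma[0]) / (eps[1] - eps[0])   # slope of the initial segment
print("stress (GPa):", np.round(sigma, 2))
print("strain:", np.round(eps, 4))
print("apparent initial modulus (GPa):", round(float(E_est), 1))
```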
134 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context
Authors: Anjula Negi, Sanjay Gupta
Abstract:
Paradigms studied inform inadequacies of financial resources, be it to finance metro rails for construction or to meet operational revenues or to derive profits in the long term. Funding sustainability is far and wide for much-needed public transport modes, like urban rail or metro rails, to be successfully operated. India embarks upon a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India’s metro rail policy promotes multiple methods of financing, including private-sector investments and public-private-partnership. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, urban rail is a revolution noted by many researchers as future mobility. Researchers in this study deep dive by way of literature review and empirical assessments into factors that can lead to the adoption of LVC mechanisms. It is understood that the adoption of LVC methods is in the nascent stages in India. Research posits numerous challenges being faced by metro rail agencies in raising funding and for incremental value capture. A few issues pertaining to land-based financing, inter alia: are long-term financing, inter-institutional coordination, economic/ market suitability, dedicated metro funds, land ownership issues, piecemeal approach to real estate development, property development legal frameworks, etc. The question under probe is what are the parameters that can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. Researchers have studied current forms of LVC mechanisms at various metro rails of the country. This study is significant as little research is available on the adoption of LVC, which is applicable to the Indian context. Transit agencies, State Government, Urban Local Bodies, Policy makers and think tanks, Academia, Developers, Funders, Researchers and Multi-lateral agencies may benefit from this research to take ahead LVC mechanisms in practice. The study deems it imperative to explore and understand key parameters that impact the adoption of LVC. Extensive literature review and ratification by experts working in the metro rails arena were undertaken to arrive at parameters for the study. Stakeholder consultations in the exploratory factor analysis (EFA) process were undertaken for principal component extraction. 43 seasoned and specialized experts participated in a semi-structured questionnaire to scale the maximum likelihood on each parameter, represented by various types of stakeholders. Empirical data was collected on chosen eighteen parameters, and significant correlation was extracted for output descriptives and inferential statistics. Study findings reveal these principal components as institutional governance framework, spatial planning features, legal frameworks, funding sustainability features and fiscal policy measures. In particular, funding sustainability features highlight sub-variables of beneficiaries to pay and use of multiple revenue options towards success in LVC adoption. Researchers recommend incorporation of these variables during early stage in design and project structuring for success in adoption of LVC. 
This, in turn, leads to improvements in the revenue sustainability of a public transport asset and helps in making informed transport policy decisions.
Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy
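For readers unfamiliar with the extraction step, a minimal sketch of an exploratory factor analysis over survey responses is shown below, using scikit-learn. The five-factor solution mirrors the five principal components reported above, but the data, item indices and rotation setting are illustrative assumptions, not the study's instrument or results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for 43 expert responses on 18 Likert-scale parameters
rng = np.random.default_rng(3)
responses = rng.integers(1, 6, size=(43, 18)).astype(float)

# Standardise items, then extract five latent factors (varimax rotation assumed
# available in the installed scikit-learn version, >= 0.24)
Z = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
scores = fa.fit_transform(Z)          # 43 x 5 factor scores
loadings = fa.components_.T           # 18 x 5 item loadings

# With real survey data, items loading strongly (|loading| > 0.4) on a factor
# would be grouped and named, e.g. "funding sustainability features"
for k in range(5):
    strong = np.where(np.abs(loadings[:, k]) > 0.4)[0]
    print(f"factor {k + 1}: items {strong.tolist()}")
```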
133 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29-32% Nd, 64.2-68.5% Fe and 1-1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts toward a circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions. As a result, these directly affect the recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, which contained 23.6% Nd, 60.3% Fe and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C to enable selective leaching of neodymium in the subsequent leaching step using H₂SO₄ at 2.5 M over 24 h. The leachate was then subjected to drying and roasting at 700-800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main composition, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures. After the acid leaching and drying steps, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture. However, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
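A quick, hedged mass balance helps place the recovery target: using the reported 23.6 wt% Nd in the waste and standard atomic masses (Nd 144.24, O 16.00 g/mol), the theoretical ceiling on the Nd₂O₃ yield is

\[
m_{\mathrm{Nd_2O_3}} \le 0.236 \times \frac{M_{\mathrm{Nd_2O_3}}}{2\,M_{\mathrm{Nd}}}
= 0.236 \times \frac{336.5}{288.5} \approx 0.275\ \mathrm{kg\ per\ kg\ of\ waste},
\]

so each kilogram of sintered scrap can give at most about 275 g of neodymium oxide; actual yields depend on the leaching and precipitation efficiencies, which the abstract does not quantify.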
132 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional method used in the construction industries often resulted in significant rework since most of the decisions were taken onsite under the pressure of project deadlines and also due to the improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in most of the world, it is still in the early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, the small and medium-sized construction companies are still reluctant to implement BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all other advantages of BIM, pre-fabrication is chosen for this paper because it plays a vital role in creating an impact on time as well as cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which enables the breakthrough of the skepticism factor among the small scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives a first-hand experience for the workers at the site on the BIM model so that they can make much use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study is to create a belief in the implementation of BIM models, which was succeeded by the execution of offshore prefabrication in the second case study. Based on the case studies, the cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time, ensures minimal or no wastes, better accuracy, less problem-solving at the construction site. It is also observed that this process requires more planning time, better communication, and coordination between different disciplines such as mechanical, electrical, plumbing, architecture, etc., which was the major obstacle for successful implementation. This paper was carried out in the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 113131 The Effects of Qigong Exercise Intervention on the Cognitive Function in Aging Adults
Authors: D. Y. Fong, C. Y. Kuo, Y. T. Chiang, W. C. Lin
Abstract:
Objectives: Qigong is an ancient Chinese practice in pursuit of a healthier body and a more peaceful mindset. It emphasizes the restoration of vital energy (Qi) in body, mind, and spirit. The practice combines gentle movements and mild breathing, which help practitioners reach a state of tranquility. On account of these features of Qigong, we first use a cross-sectional methodology to compare the differences among varied levels of Qigong practitioners in cognitive function with event-related potentials (ERP) and electroencephalography (EEG). Second, we use a longitudinal methodology to explore the effects on Qigong trainees with pretest and posttest measurements of ERP and EEG. The current study adopts the Attentional Network Test (ANT) to examine the participants' cognitive function; aging-related research has demonstrated a declining trend in cognition in older adults, and exercise might ameliorate this deterioration. Qigong exercise integrates physical posture (muscle strength), breathing technique (aerobic ability), and focused intention (attention), so the researchers hypothesize that it might improve cognitive function in aging adults. Method: Sixty participants were involved in this study, including 20 young adults (21.65±2.41 y) with normal physical activity (YA), 20 Qigong experts (60.69±12.42 y) with over 7 years of Qigong practice experience (QE), and 20 normal and healthy adults (52.90±12.37 y) with no Qigong practice experience as the experimental group (EG). The EG participants took Qigong classes twice a week, 2 hours per session, for 24 weeks, in order to examine the effect of the Qigong intervention on cognitive function. ANT tasks (alerting network, orienting network, and executive control) were adopted to evaluate participants' cognitive function via the ERP P300 component and P300 amplitude topography. Results: Behavioral data: 1. The reaction time (RT) of YA was faster than that of the other two groups, and EG was faster than QE in the cue and flanker conditions of the ANT task. 2. The RT at posttest was faster than at pretest in EG in the cue and flanker conditions. 3. There was no difference among the three groups on the orienting, alerting, and executive control networks. ERP data: 1. P300 amplitude in QE was larger than in EG at the Fz electrode in the orienting, alerting, and executive control networks. 2. P300 amplitude in EG was larger at pretest than at posttest on the orienting network. 3. P300 latency revealed no difference among the three groups in the three networks. Conclusion: Taken together, these findings provide neuro-electrical evidence that older adults involved in Qigong practice may develop a more comprehensive compensatory mechanism that also benefits behavioral performance.Keywords: Qigong, cognitive function, aging, event-related potential (ERP)
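For readers unfamiliar with the ERP measure reported here, the sketch below illustrates, under assumed data shapes and a hypothetical 250–500 ms window, how a P300 amplitude could be quantified from baseline-corrected epochs at a single electrode such as Fz; it is not the authors' analysis pipeline.

```python
import numpy as np

def p300_amplitude(epochs, times, window=(0.25, 0.50)):
    """Mean ERP amplitude in a post-stimulus window (a common P300 measure).

    epochs : array (n_trials, n_samples) of baseline-corrected EEG at one
             electrode (e.g. Fz), in microvolts.
    times  : array (n_samples,) of epoch time points in seconds.
    window : assumed P300 latency window in seconds.
    """
    erp = epochs.mean(axis=0)                          # average over trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()                            # mean amplitude in window

# Hypothetical example: 40 trials, 1 s epochs sampled at 500 points.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 500)
epochs = rng.normal(0.0, 5.0, size=(40, times.size))
epochs += 8.0 * np.exp(-((times - 0.35) / 0.08) ** 2)  # simulated P300 bump
print(f"P300 mean amplitude: {p300_amplitude(epochs, times):.2f} uV")
```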
Procedia PDF Downloads 393130 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: Woranittha Kritsarikan
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29–32% Nd, 64.2–68.5% Fe and 1–1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end of life. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process based on oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550–800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then subjected to drying and roasting at 700–800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycled product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄). Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks became more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained unconverted; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently removed by water leaching and filtration, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally obtain neodymium oxide.Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
Procedia PDF Downloads 177129 Hydrological Challenges and Solutions in the Nashik Region: A Multi Tracer and Geochemistry Approach to Groundwater Management
Authors: Gokul Prasad, Pennan Chinnasamy
Abstract:
The degradation of groundwater resources, attributed to factors such as excessive abstraction and contamination, has emerged as a global concern. This study delves into the stable isotopes of water (δ18O and δ2H) in a hard-rock aquifer situated in the Upper Godavari watershed, an agriculturally rich region in India underlain by basalt. The high groundwater draft (> 90%) poses significant risks; comprehending groundwater sources, flow patterns, and their environmental impacts is pivotal for researchers and water managers. The region has faced five droughts in the past 20 years, four of which are categorized as medium. Recharge rates are variable and contribute only minimally to groundwater. The rainfall pattern shows vast variability, with the region receiving seasonal monsoon rainfall for just four months and experiencing minimal rainfall for the rest of the year. This research closely monitored monsoon precipitation inputs and examined spatial and temporal fluctuations in δ18O and δ2H in both groundwater and precipitation. By discerning individual recharge events during monsoons, it became possible to identify periods when evaporation led to groundwater quality deterioration, characterized by elevated salinity and stable isotope values in the return flow. The locally derived meteoric water line (LMWL) (δ2H = 6.72 * δ18O + 1.53, r² = 0.6) provided valuable insights into the groundwater system. The leftward shift of the Nashik LMWL relative to the global meteoric water line (GMWL) indicated groundwater evaporation (-33 ‰), supported by spatial variations in electrical conductivity (EC) data. Groundwater in the eastern and northern watershed areas exhibited higher salinity (> 3000 µS/cm), extending over more than 40% of the area, compared to the western and southern regions, owing to geological disparities (alluvium vs basalt). The findings emphasize meteoric precipitation as the primary groundwater source in the watershed. However, spatial variations in isotope values and chemical constituents indicate other contributing factors, including evaporation, groundwater source type, and natural or anthropogenic (specifically agricultural and industrial) contaminants. Therefore, the study recommends focused hydrogeochemistry and isotope analysis in areas with strong agricultural and industrial influence for the development of holistic groundwater management plans to protect the quantity and quality of the groundwater aquifers.Keywords: groundwater quality, stable isotopes, salinity, groundwater management, hard-rock aquifer
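As an illustration of how a local meteoric water line such as the one reported here (δ2H = 6.72·δ18O + 1.53, r² = 0.6) can be derived, the sketch below fits a least-squares line to paired precipitation isotope values and also computes the deuterium excess; the data array is hypothetical, not the Nashik dataset.

```python
import numpy as np

# Hypothetical paired precipitation samples (per mil, VSMOW).
d18O = np.array([-7.2, -6.1, -5.4, -4.8, -3.9, -3.1, -2.6, -1.8])
d2H  = np.array([-46.0, -40.5, -33.0, -31.5, -24.0, -19.5, -15.0, -10.5])

# Ordinary least-squares fit: d2H = slope * d18O + intercept (the LMWL).
slope, intercept = np.polyfit(d18O, d2H, 1)

# Coefficient of determination r^2.
pred = slope * d18O + intercept
r2 = 1.0 - np.sum((d2H - pred) ** 2) / np.sum((d2H - d2H.mean()) ** 2)

# Deuterium excess relative to the GMWL slope of 8; lower values suggest
# evaporative enrichment of the water before or during recharge.
d_excess = d2H - 8.0 * d18O

print(f"LMWL: d2H = {slope:.2f} * d18O + {intercept:.2f}, r^2 = {r2:.2f}")
print("d-excess per sample:", np.round(d_excess, 1))
```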
Procedia PDF Downloads 48128 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating product and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Notable findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
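To make the frontal alpha asymmetry (FAA) measure concrete, the sketch below computes the commonly used index ln(right alpha power) − ln(left alpha power) from two frontal channels using a Welch power spectrum; the channel names (F3/F4), sampling rate, and 8–13 Hz band are assumptions for illustration, not details taken from the reviewed studies.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density of one EEG channel in the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """FAA index: ln(right alpha power) - ln(left alpha power).
    Because alpha power is inversely related to cortical activity, positive
    values are usually read as relatively greater left-frontal activity,
    i.e. a more approach-oriented (positive) response."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Hypothetical 10 s recordings from F3 (left) and F4 (right) at 250 Hz.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
f3 = 4.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
f4 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
print(f"FAA = {frontal_alpha_asymmetry(f3, f4, fs):.3f}")
```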
Procedia PDF Downloads 112127 An Integrated Multisensor/Modeling Approach Addressing Climate Related Extreme Events
Authors: H. M. El-Askary, S. A. Abd El-Mawla, M. Allali, M. M. El-Hattab, M. El-Raey, A. M. Farahat, M. Kafatos, S. Nickovic, S. K. Park, A. K. Prasad, C. Rakovski, W. Sprigg, D. Struppa, A. Vukovic
Abstract:
A clear distinction between weather and climate is a necessity because, while they are closely related, there are still important differences. Climate change is identified when we compute the statistics of the observed changes in weather over space and time. In this work we show how the changing climate contributes to the frequency, magnitude and extent of different extreme events, using a multi-sensor approach with some synergistic modeling activities. We explore satellite observations of dust over North Africa, the Gulf Region and the Indo-Gangetic basin, as well as dust versus anthropogenic pollution events over the Delta region in Egypt and over Seoul, through remote sensing, and analyze the influence of dust and haze on aerosol optical properties. The impact of dust on the retreat of the glaciers in the Himalayas is also presented. In this study we also focus on the identification and monitoring of a massive dust plume that blew off the western coast of Africa towards the Atlantic on October 8th, 2012, right before the development of Hurricane Sandy. There is evidence that dust aerosols played a non-trivial role in the cyclogenesis process of Sandy. Moreover, a special dust event, "An American Haboob" in Arizona, is discussed, as it was predicted hours in advance thanks to the great improvements in numerical land-atmosphere modeling, computing power and remote sensing of dust events. We therefore performed a full numerical simulation of that event using the coupled atmospheric-dust model NMME-DREAM, after generating a mask of the potentially dust-productive regions using land cover and vegetation data obtained from satellites. Climate change also contributes to the deterioration of different marine habitats. In that regard we also present work dealing with change detection analysis of marine habitats near the city of Hurghada, Red Sea, Egypt. The motivation for this work came from the fact that coral reefs at Hurghada have undergone significant decline. They are damaged, displaced, polluted, stepped on, and blasted off, in addition to the effects of climate change on the reefs. One of the most pressing issues affecting reef health is mass coral bleaching, which results from an interaction between human activities and climatic changes. In another location, California, we have observed highly variable amounts of precipitation across many timescales, from the hourly to the climate timescale. Frequently, heavy precipitation occurs, causing damage to property and life (floods, landslides, etc.). These extreme events, this variability, and the lack of good medium- to long-range predictability of precipitation are already a challenge to those who manage wetlands, coastal infrastructure, agriculture and fresh water supply. Adding to the current challenges for long-range planning is the issue of climate change. It is known that La Niña and El Niño affect precipitation patterns, which in turn are entwined with global climate patterns. We have studied the ENSO impact on precipitation variability over different climate divisions in California. On the other hand, the Nile Delta has lately experienced a rise in the groundwater table as well as waterlogging, bogging and soil salinization. These impacts pose a major threat to the heritage of the Delta region and its existing communities. There has been an ongoing effort to address those vulnerabilities by looking into many adaptation strategies.Keywords: remote sensing, modeling, long range transport, dust storms, North Africa, Gulf Region, India, California, climate extremes, sea level rise, coral reefs
Procedia PDF Downloads 488126 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides clean and renewable electrical energy covering 17% and 76% of the supply of Brazil and Paraguay, respectively. The plant started generating in 1984. It counts on 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from its stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield, using data from 2001 to 2016. These data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been carried out, the first assessment being made by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. From the analysis of each monitoring station, strong increasing tendencies in sediment yield over the last 14 years were clearly noticed, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. Both packages follow the Borland and Miller methodology (empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus assigning credibility to this most recent lifespan estimate.Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
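The mass-balance cross-check reported above can be sketched with a simple annual sediment budget: trapped load = inflowing load × trap efficiency, deposited volume = trapped mass / deposit bulk density, and silted fraction = accumulated volume / reservoir capacity. The numbers below are hypothetical placeholders, not Itaipu data, and the full Borland and Miller area-reduction computation, which distributes the deposited volume over depth zones, is omitted here.

```python
# Minimal annual sediment-budget sketch (hypothetical inputs, not Itaipu data).
annual_inflow_mass_t = 30.0e6      # sediment inflow reaching the reservoir, t/yr
trap_efficiency = 0.90             # fraction of inflowing sediment retained
deposit_bulk_density_t_m3 = 1.1    # bulk density of consolidated deposit, t/m3
reservoir_capacity_m3 = 29.0e9     # total storage volume, m3
years_of_operation = 32

trapped_mass_t = annual_inflow_mass_t * trap_efficiency
annual_deposit_m3 = trapped_mass_t / deposit_bulk_density_t_m3
silted_fraction = annual_deposit_m3 * years_of_operation / reservoir_capacity_m3

print(f"Annual deposit volume : {annual_deposit_m3 / 1e6:.1f} hm3/yr")
print(f"Silted after {years_of_operation} yr : {silted_fraction:.1%}")
```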
Procedia PDF Downloads 274125 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and the FinFET is one of the most promising structures. In particular, the double-gate 2D Fin field effect transistor has the benefit of suppressing short-channel effects (SCE) and functioning well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current and current on-off ratio. In this paper, device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of central importance for understanding field-effect transistors and underpins transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and the transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, but there is no significant effect of source and drain length on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m with a source and drain extension length of 5 nm.Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
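As a companion to the parameter extraction described above, the short sketch below shows one common way to obtain transconductance (gm = dId/dVg) and a threshold voltage estimate from an Id-Vg sweep via linear extrapolation at the maximum-gm point; the synthetic curve and extraction choices are illustrative assumptions, not the MuGFET output.

```python
import numpy as np

# Hypothetical Id-Vg sweep at low drain bias (e.g. Vd = 50 mV), for illustration.
vg = np.linspace(0.0, 1.0, 201)                    # gate voltage, V
id_ = 1e-6 * np.log1p(np.exp((vg - 0.35) / 0.05))  # smooth turn-on, A/um

# Transconductance gm = dId/dVg (numerical derivative).
gm = np.gradient(id_, vg)

# Threshold voltage by linear extrapolation at the maximum-gm point:
# extend the tangent of Id(Vg) at max gm down to Id = 0.
k = np.argmax(gm)
vth = vg[k] - id_[k] / gm[k]

ion, ioff = id_[-1], id_[0]
print(f"Vth ~ {vth:.3f} V, peak gm = {gm.max():.2e} S/um, Ion/Ioff = {ion / ioff:.1e}")
```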
Procedia PDF Downloads 61124 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
The present article aims to describe the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is carried out by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Inefficient contracting of this amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming at more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first one, IIE, applies when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second one, IIOC, applies when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. For this, the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The ECI analysis demonstrated a decrease in contracting efficiency, a behaviour that was already occurring before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to the agents that their contracted values are not adequate to maintain the provision of service to their users. The IIOC is also relevant, as it shows distributors that their contracted values are overestimated.Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
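The logic of the two incentives can be illustrated with a stylized check of contracted versus verified demand; the tolerance value and the flagged amounts below are hypothetical placeholders and do not reproduce the actual calculation rules of NR No. 666/2015.

```python
def contracting_check(contracted_mw, verified_mw, tolerance=0.05):
    """Stylized (hypothetical) version of the efficiency check:
    - excess        : verified usage above the contracted amount plus tolerance
    - over_contract : contracted amount above the verified usage plus tolerance
    Returns the flagged amount in MW for each situation."""
    upper = contracted_mw * (1 + tolerance)
    lower = verified_mw * (1 + tolerance)
    excess = max(0.0, verified_mw - upper)           # would trigger an IIE-like charge
    over_contract = max(0.0, contracted_mw - lower)  # would trigger an IIOC-like charge
    return excess, over_contract

# Hypothetical examples (MW of transmission system usage).
for c, v in [(100.0, 112.0), (130.0, 100.0), (100.0, 102.0)]:
    e, o = contracting_check(c, v)
    print(f"contracted={c:6.1f}  verified={v:6.1f}  excess={e:5.1f}  over-contracting={o:5.1f}")
```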
Procedia PDF Downloads 121123 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR was conducted to restore cardiac activity according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of the patients' condition after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest. Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). If necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 54122 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents
Authors: A. Kesraoui, M. Seffen
Abstract:
Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but a major drawback of this treatment is its high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential use of these materials is important in order to propose an alternative to the generally expensive adsorption processes used to remove organic compounds. Indeed, these materials are very abundant in nature and are low cost. The biosorption process is certainly effective at removing pollutants, but its kinetics are slow. Improving biosorption rates is a challenge in making this process competitive with oxidation and with adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original and very interesting means of accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding the adsorbent to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a source that delivers an alternating voltage of 2 to 15 V; it is connected to a voltmeter that allows us to read the voltage. Two zinc electrodes were immersed in a 150 mL cell, with a distance of 4 cm between them. Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. On the other hand, we studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and onto the hybrid material (Luffa cylindrica-ZnO). The results showed that the alternating current accelerated the biosorption of methylene blue onto both Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. In order to improve the removal of phenol, we coupled the alternating current with biosorption onto two adsorbents: Luffa cylindrica and the hybrid material (Luffa cylindrica-ZnO). In fact, the alternating current succeeded in improving the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity and by reducing the processing time.Keywords: adsorption, alternating current, dyes, modeling
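For context, the quantities usually tracked in such batch experiments are the adsorbed amount per gram of adsorbent, qt = (C0 − Ct)·V/m, and the removal percentage; the sketch below applies these textbook relations to hypothetical concentration readings and is not the authors' data.

```python
# Batch adsorption bookkeeping: qt = (C0 - Ct) * V / m and percent removal.
# All concentration values below are hypothetical, for illustration only.
c0 = 50.0          # initial pollutant concentration, mg/L
volume_l = 0.150   # solution volume, L (150 mL batch reactor)
mass_g = 0.10      # adsorbent mass, g

readings = [(5, 41.0), (15, 33.5), (30, 27.0), (60, 22.5), (120, 20.0)]  # (min, mg/L)

for t_min, ct in readings:
    qt = (c0 - ct) * volume_l / mass_g      # adsorbed amount, mg/g
    removal = 100.0 * (c0 - ct) / c0        # removal efficiency, %
    print(f"t = {t_min:3d} min   qt = {qt:5.1f} mg/g   removal = {removal:4.1f} %")
```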
Procedia PDF Downloads 160121 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene emerges as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology owing to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar and N2) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 µA/cm2 is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states are dumped in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS2/RGO composites are due to a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials emerge as potential candidates for future field emission applications.Keywords: graphene, layered material, field emission, plasma, doping
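Field enhancement factors such as those quoted above are conventionally obtained from the slope of the Fowler-Nordheim plot, ln(J/E²) versus 1/E, as β = −B·φ^(3/2)/slope with B ≈ 6.83 × 10⁹ eV^(−3/2) V m⁻¹. The sketch below applies that relation to a synthetic J-E curve with an assumed work function of 5 eV, so the numbers are illustrative only and do not reproduce the reported values.

```python
import numpy as np

B = 6.83e9          # Fowler-Nordheim constant, eV^(-3/2) * V / m
phi_ev = 5.0        # assumed emitter work function, eV (illustrative)

# Synthetic J-E data (E in V/m, J in A/m^2) generated with beta_true = 3000.
beta_true, a = 3000.0, 1e-10
e_applied = np.linspace(2e6, 5e6, 30)                  # 2-5 V/um applied field
e_local = beta_true * e_applied
j = a * e_local**2 * np.exp(-B * phi_ev**1.5 / e_local)

# FN plot: ln(J/E^2) vs 1/E is linear with slope = -B * phi^(3/2) / beta.
x = 1.0 / e_applied
y = np.log(j / e_applied**2)
slope, _ = np.polyfit(x, y, 1)
beta_fit = -B * phi_ev**1.5 / slope
print(f"Field enhancement factor beta ~ {beta_fit:.0f}")
```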
Procedia PDF Downloads 361