Search results for: electrical machines
67 Sensitivity Improvement of Optical Ring Resonator for Strain Analysis with the Direction of Strain Recognition Possibility
Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah
Abstract:
Optical sensors are attractive due to their precision, low power consumption, and intrinsic immunity to electromagnetic interference. Among waveguide optical sensors, cavity-based devices stand out for their high Q-factor. Micro ring resonators have been investigated as a potential platform for applications ranging from biosensors to pressure sensors, thanks to a sensitive ring structure that responds to any small change in the refractive index. Furthermore, these micron-size structures can be arranged in arrays, so that each ring resonates at a specific wavelength and can be addressed in this way. Another exciting application is applying a strain to the ring, turning it into an optical strain gauge; traditional strain gauges are based on piezoelectric materials, require electrical wiring when arranged in arrays, and are roughly fifty times larger. Any physical element that affects the waveguide cross-section, the waveguide elasto-optic properties, or the ring circumference can play a role; among these, a change in ring size has the largest effect. Here, an engineered ring structure is investigated to study the effect of strain on the resonance wavelength shift and its potential for more sensitive strain devices. Such devices can measure strain when mounted on the surface of interest. The idea is to change the "O"-shaped ring to a "C"-shaped ring with a small opening, starting from 2π/360, i.e., one degree. We used the MODE solutions module of Lumerical software to investigate the effect of changing the ring opening and the shift induced by the applied strain. The designed ring is a 3 µm radius silicon-on-insulator ring, which can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The wavelength shifts from a 1-degree opening of the ring to a 6-degree opening have been investigated. Opening the ring by 1 degree reduces the quality factor from 3000 to 300, an order-of-magnitude Q-factor reduction. Assuming a strain that widens the opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction from 300 to 280. A ring resonator quality factor can reach up to 10⁸, so an order-of-magnitude reduction is negligible. The resonance showed a blue shift, with wavelengths of 1581, 1579, 1578, and 1575 nm for 1-, 2-, 4- and 6-degree openings, respectively. This design can determine the direction of the induced strain by placing the opening on different parts of the ring; by addressing the corresponding wavelength, the direction can be identified precisely. This opens a significant opportunity to locate cracks and characterize surface mechanical properties very specifically and precisely. The idea can also be implemented on polymer ring resonators, which can be made on a flexible substrate and are very sensitive to any strain that brings the two ends of the ring at the slit closer together or further apart.
Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis
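The reported blue shift follows from the ring resonance condition, which ties the resonance wavelengths to the effective optical path length. A minimal sketch of this relation is shown below; the effective index and the way the opening shortens the path are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def resonance_wavelengths(n_eff, path_length_um, band=(1.5, 1.6)):
    """Return resonance wavelengths (um) in `band` satisfying m * lambda = n_eff * L."""
    m_max = int(np.floor(n_eff * path_length_um / band[0]))
    m_min = int(np.ceil(n_eff * path_length_um / band[1]))
    orders = np.arange(m_min, m_max + 1)
    return n_eff * path_length_um / orders

# Assumed values for illustration only (not taken from the abstract):
n_eff = 2.45            # effective index of the SOI waveguide mode
radius_um = 3.0         # ring radius reported in the study
for opening_deg in (1, 2, 4, 6):
    # A "C"-shaped ring: the opening removes a small arc from the circumference.
    L = 2 * np.pi * radius_um * (1 - opening_deg / 360.0)
    lams = resonance_wavelengths(n_eff, L)
    print(opening_deg, np.round(lams * 1e3, 1), "nm")  # wavelengths decrease (blue shift) as the opening grows
```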
Procedia PDF Downloads 125
66 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell’s equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range, at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results together with the measured data were used as reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction but these inaccuracies tended to decrease as the electrical size of objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criteria. It was evident that the bistatic RCS prediction performance of PO and GO depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
Procedia PDF Downloads 256
65 Superoleophobic Nanocellulose Aerogel Membrane as Bioinspired Cargo Carrier on Oil by Sol-Gel Method
Authors: Zulkifli, I. W. Eltara, Anawati
Abstract:
Understanding the complementary roles of surface energy and roughness on natural nonwetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces—those that display contact angles greater than 150 degrees with organic liquids having appreciably lower surface tensions than that of water—are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity, namely re-entrant surface curvature in the form of overhang structures. The overhangs can be realized as fibers. Superoleophobic surfaces are appealing, for example, for antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical applications, which in turn impairs the liquid repellency. Other studies have demonstrated that such aqueous nanofibrillar gels are unexpectedly robust, allowing the formation of highly porous aerogels by direct water removal through freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness, and they provide flexible hierarchically porous templates for functionalities, e.g., electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during preparation, unlike in typical aerogel preparations. The aerogel used in the current work is an ultra-lightweight solid material composed of native cellulose nanofibers. The native cellulose nanofibers are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and extensively hydrogen-bonded polysaccharide chains. We demonstrate that, when coated by a sol-gel method, a superoleophobic nanocellulose aerogel is capable of supporting a weight nearly three orders of magnitude larger than the weight of the aerogel itself. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale along the perimeter of the carrier, and at the microscopic scale along the cellulose nanofibers, preventing soaking of the aerogel and thus ensuring buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using unmodified cellulose nanofibers and using carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, the aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, their gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces.
Keywords: superoleophobic, nanocellulose, aerogel, sol-gel
Procedia PDF Downloads 350
64 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the field of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single scale zero mean stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters like the Root Mean Square (RMS) height and the correlation length. Then, the main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function, and as a consequence, backscattering models have often failed to predict correctly backscattering. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian process each one having a spatial scale. Multiscale roughness is characterized by two parameters, the first one is proportional to the RMS height, and the other one is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe more correctly natural surfaces. We characterize the soil surfaces and sub-surfaces by a three layers geo-electrical model. The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model by using the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface. These three layers were modeled by an effective medium characterized by an apparent effective dielectric constant taking into account the presence of air pockets in the soil. We have adopted the 2D multiscale three layers small perturbations model including, firstly air pockets in the soil sub-structure, and then a vegetable canopy in the soil surface structure, that is to simulate the radar backscattering. A sensitivity analysis of backscattering coefficient dependence on multiscale roughness and new soil moisture has been performed. Later, we proposed to change the dielectric constant of the multilayer medium because it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure with respect to the multiscale roughness parameters and the apparent dielectric constant, was carried out. Finally, we proposed to study the behavior of the backscattering coefficient of the radar on a soil having a vegetable layer in its surface structure.Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
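The multiscale roughness description above relies on the bi-dimensional wavelet transform (Mallat algorithm). A minimal illustrative sketch of such a 2D multiresolution decomposition is given below, using the PyWavelets library on a synthetic rough surface; the wavelet family, number of levels, and the synthetic surface are assumptions for illustration only, not the authors' configuration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
surface = rng.normal(scale=1.0, size=(256, 256))   # synthetic 2D roughness profile (placeholder for a real surface)

# Mallat-style multiresolution decomposition into approximation + (horizontal, vertical, diagonal) details
coeffs = pywt.wavedec2(surface, wavelet="db2", level=4)

approx = coeffs[0]
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):   # ordered from coarsest to finest scale
    # RMS of the detail coefficients gives a roughness measure per spatial scale
    rms = np.sqrt(np.mean(cH**2 + cV**2 + cD**2))
    print(f"scale level {lvl}: detail RMS = {rms:.3f}")
print("coarse approximation shape:", approx.shape)
```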
Procedia PDF Downloads 122
63 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis
Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński
Abstract:
The application of carbon materials in the branches of the electrochemical industry shows an increasing tendency each year due to the many interesting properties they possess. These include, among others, a well-developed specific surface area, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity, and chemical resistance. All these properties allow for their effective use, for example in supercapacitors, which can reach capacitances of the order of 100 F thanks to the carbon electrodes constituting the capacitor plates. Carbon materials (including expanded graphite, carbon black, graphite carbon fibers, and activated carbon) are commonly used in electrochemical methods of removing oil derivatives, e.g., phenols and their derivatives, from water after tanker disasters by electrochemical anodic oxidation. Phenol can occupy practically the entire surface of a carbon material and leave the water clean of hydrophobic impurities. Regeneration of such electrodes is also not complicated; it is carried out by electrochemical methods that unblock the pores and reduce resistances, thus reactivating the electrodes for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but due to the limited capacity it offers (372 mAh g⁻¹), new solutions are sought that meet capacitive, efficiency, and economic criteria. Increasingly, biodegradable materials, green materials, biomass, and waste (including agricultural waste) are used in order to reuse them and reduce greenhouse effects and, above all, to meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, and wheat, rice, and corn waste, e.g., from agricultural, paper, and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (chemical, thermal, or electrochemical). Starch is a biodegradable polysaccharide that consists of polymeric units, amylose and amylopectin, which build the ordered (linear) and amorphous (branched) structures of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nanostructured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and by a limited understanding of material synthesis. In the context of catalytic applications, the relevant properties of the carbon material, such as its electrical conductivity and the configuration of its surface bonds, should be characterized. Such data, along with surface and textural information, can form the basis for the design of carbon-supported catalytic systems.
Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell
Procedia PDF Downloads 172
62 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation
Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang
Abstract:
Wafer probing tests play an important role in semiconductor manufacturing, in accordance with the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during wafer probing is regarded as an essential issue in identifying the known good die. The probe card can be integrated with multiple probe needles, which are classified as vertical, cantilever, and micro-electro-mechanical systems (MEMS) types. Among these, the vertical probe has several advantages compared with other probe types, including maintainability, high probe density, and feasibility for high-speed wafer testing. In the present study, the mechanical response of the wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated by experiment and a transient dynamic simulation approach. Because the deformation of the vertical cobra probe is governed by both bending and buckling mechanisms, a stable correlation between contact force and overdrive (OD) length must be carefully verified. Moreover, an adequate OD length and corresponding contact force are needed to pierce the native oxide layer of the Al pad while preventing probing-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and simulation. In the wafer probing configuration, the contact between the probe needle and the tested object introduces large deformation and severe mesh distortion, which causes numerical divergence; for this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation work to overcome this issue. The results revealed a slight difference between simulation and measurement at an OD of 40 μm, whereas the simulated scratch depths are almost identical to the measured ones at higher OD lengths up to 70 μm. This can be attributed to unstable probe contact at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable once the scratch depth exceeds 30% of the pad thickness. The splash of the Al pad is observed by AFM, with the splashed Al debris accumulating on a specific side; this phenomenon is successfully reproduced in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, with corresponding scratch depths of 31.4% and 47.1% of the Al pad thickness, respectively. The approach demonstrated in this study contributes to analyzing the mechanical response of the wafer probing test configuration under large-strain conditions and to assessing the geometric designs and material selections of probe needles, in order to meet the requirements of high-resolution, high-speed wafer-level probing tests for thinned-wafer applications.
Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation
Procedia PDF Downloads 54
61 Destructive and Nondestructive Characterization of Advanced High Strength Steels DP1000/1200
Authors: Carla M. Machado, André A. Silva, Armando Bastos, Telmo G. Santos, J. Pamies Teixeira
Abstract:
Advanced high-strength steels (AHSS) are increasingly being used in automotive components. The use of AHSS sheets plays an important role in reducing weight, as well as increasing the resistance to impact in vehicle components. However, the large-scale use of these sheets becomes more difficult due to the limitations during the forming process. Such limitations are due to the elastically driven change of shape of a metal sheet during unloading and following forming, known as the springback effect. As the magnitude of the springback tends to increase with the strength of the material, it is among the most worrisome problems in the use of AHSS steels. The prediction of strain hardening, especially under non-proportional loading conditions, is very limited due to the lack of constitutive models and mainly due to very limited experimental tests. It is very clear from the literature that in experimental terms there is not much work to evaluate deformation behavior under real conditions, which implies a very limited and scarce development of mathematical models for these conditions. The Bauschinger effect is also fundamental to the difference between kinematic and isotropic hardening models used to predict springback in sheet metal forming. It is of major importance to deepen the phenomenological knowledge of the mechanical and microstructural behavior of the materials, in order to be able to reproduce with high fidelity the behavior of extension of the materials by means of computational simulation. For this, a multi phenomenological analysis and characterization are necessary to understand the various aspects involved in plastic deformation, namely the stress-strain relations and also the variations of electrical conductivity and magnetic permeability associated with the metallurgical changes due to plastic deformation. Aiming a complete mechanical-microstructural characterization, uniaxial tensile tests involving successive cycles of loading and unloading were performed, as well as biaxial tests such as the Erichsen test. Also, nondestructive evaluation comprising eddy currents to verify microstructural changes due to plastic deformation and ultrasonic tests to evaluate the local variations of thickness were made. The material parameters for the stable yield function and the monotonic strain hardening were obtained using uniaxial tension tests in different material directions and balanced biaxial tests. Both the decrease of the modulus of elasticity and Bauschinger effect were determined through the load-unload tensile tests. By means of the eddy currents tests, it was possible to verify changes in the magnetic permeability of the material according to the different plastically deformed areas. The ultrasonic tests were an important aid to quantify the local plastic extension. With these data, it is possible to parameterize the different models of kinematic hardening to better approximate the results obtained by simulation with the experimental results, which are fundamental for the springback prediction of the stamped parts.Keywords: advanced high strength steel, Bauschinger effect, sheet metal forming, springback
Procedia PDF Downloads 226
60 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds
Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel
Abstract:
Carbon nanotubes (CNTs) represent a combination of lightness and nanoscopic size with high tensile strength, excellent thermal and electrical conductivity. By now, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in the Ullmann-type coupling with aryl halides toward formation of C-N and C-O bonds. The results indicated that the stability of the catalyst was much improved and the elaborated catalytic system was efficient and recyclable. However, CNTs have not been considered as the substrate itself in the Ullmann-type reactions. But if successful, this functionalization would open new areas of CNT chemistry leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperature as high as 200 oC, often in the presence of stoichiometric amounts of copper reagent and with activated aryl halides. However, a small amount of organic additive (e.g. diamines, amino acids, diols, 1,10-phenanthroline) can be applied in order to increase the solubility and stability of copper catalyst, and at the same time to allow performing the reaction under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of copper salt and the appropriate chelator. Our research is focused on the application of Ullmann-type reaction for the covalent functionalization of CNTs. Firstly, CNTs were chlorinated by using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves formation of several chemical species (ICl, Cl2 and I2Cl6), but the most reactive is the dimer. The fact (that the dimer is the main individual in CCl4) is the reason for high reactivity and possibly high functionalization levels of CNTs. This method, indeed, yielded a notable amount of chlorine onto the MWCNT surface. The next step was the reaction of CNT-Cl with three substrates: aniline, iodobenzene and phenol for the formation C-N, C-C and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. As the CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker MWCNTs (in-house) synthesized in our laboratory using catalytic chemical vapour deposition (c-CVD). In-house CNTs had diameter ranging between 60-70 nm and length up to 300 µm. Since classical Ullmann reaction was found as suffering from poor yields, we have investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide and N,N-dimethylformamide) on the coupling of substrates. Owing to the fact that the aryl halides show the reactivity order of I>Br>Cl>F, we have also investigated the effect of iodine presence on CNT surface on reaction yield. In this case, in first step we have used iodine monochloride instead of iodine trichloride. Finally, we have used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene for the control of CNT dispersion.Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction
Procedia PDF Downloads 167
59 Study of White Salted Noodles Air Dehydration Assisted by Microwave as Compared to Conventional Air Dried Process
Authors: Chiun-C. R. Wang, I-Yu Chiu
Abstract:
Drying is the most difficult and critical step to control in dried salted noodle production. Microwave drying has the specific advantage of rapid and uniform heating due to the penetration of microwaves into the body of the product. A microwave-assisted facility offers a quick and energy-saving method of food dehydration compared with the conventional air-drying method of noodle preparation. Recently, numerous studies of the rheological characteristics of pasta or spaghetti have been carried out with microwave-assisted and conventional air dryers, and many agricultural products have been dried successfully. There is, however, very little research on the physicochemical characteristics and cooking quality of microwave-assisted air-dried salted noodles. The purpose of this study was to compare the effects of conventional air drying and microwave-assisted air drying on the physicochemical properties and eating quality of rice bran noodles. Three microwave power levels (0.5 kW, 0.75 kW, and 1.0 kW), combined with 50 °C hot air, were applied for the dehydration of rice bran noodles. Three proportions of rice bran, ranging from 0 to 20%, were incorporated into the salted noodle formulation. The appearance, optimum cooking time, cooking yield and losses, texture profile analysis, and sensory evaluation of the rice bran noodles were measured. The results indicated that the high-power (1.0 kW) microwave treatment caused partial burning and porosity on the surface of the rice bran noodles. However, no significant surface difference appeared between the low-power (0.5 kW) microwave-assisted noodles and the control set. The optimum cooking time decreased as higher microwave power was applied or a higher proportion of rice bran was incorporated. Noodles with the higher proportion of rice bran (20%) or dried at higher microwave power showed higher color intensity and higher cooking losses than conventionally air-dried noodles. Meanwhile, noodles dried at higher microwave power showed larger air cells inside and slight burnt stripes on the surface. The firmness of cooked rice bran noodles decreased slightly when they were dried by the high-power microwave-assisted method. The shearing force, tensile strength, elasticity, and texture profiles of cooked rice noodles decreased as the proportion of rice bran increased. The sensory evaluation indicated that conventionally dried noodles obtained higher springiness, cohesiveness, and overall acceptability than noodles dried with high-power (1.0 kW) microwave assistance. However, low-power (0.5 kW) microwave-assisted dried noodles showed sensory attributes and acceptability comparable to conventionally dried noodles. Moreover, firmness, springiness, and cohesiveness decreased, while stickiness increased, as the proportion of rice bran in the salted noodles increased. These results suggest that incorporating a lower proportion of rice bran and drying with low-power microwave assistance can produce a shorter cooking time and acceptable cooked noodle quality compared with conventional drying.
Keywords: white salted noodles, microwave-assisted air drying processing, cooking yield, appearance, texture profiles, scanning electron microscopy, sensory evaluation
Procedia PDF Downloads 492
58 Laboratory Indices in Late Childhood Obesity: The Importance of DONMA Indices
Authors: Orkide Donma, Mustafa M. Donma, Muhammet Demirkol, Murat Aydin, Tuba Gokkus, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Obesity in childhood establishes a ground for adulthood obesity. Especially morbid obesity is an important problem for the children because of the associated diseases such as diabetes mellitus, cancer and cardiovascular diseases. In this study, body mass index (BMI), body fat ratios, anthropometric measurements and ratios were evaluated together with different laboratory indices upon evaluation of obesity in morbidly obese (MO) children. Children with nutritional problems participated in the study. Written informed consent was obtained from the parents. Study protocol was approved by the Ethics Committee. Sixty-two MO girls aged 129.5±35.8 months and 75 MO boys aged 120.1±26.6 months were included into the scope of the study. WHO-BMI percentiles for age-and-sex were used to assess the children with those higher than 99th as morbid obesity. Anthropometric measurements of the children were recorded after their physical examination. Bio-electrical impedance analysis was performed to measure fat distribution. Anthropometric ratios, body fat ratios, Index-I and Index-II as well as insulin sensitivity indices (ISIs) were calculated. Girls as well as boys were binary grouped according to homeostasis model assessment-insulin resistance (HOMA-IR) index of <2.5 and >2.5, fasting glucose to insulin ratio (FGIR) of <6 and >6 and quantitative insulin sensitivity check index (QUICKI) of <0.33 and >0.33 as the frequently used cut-off points. They were evaluated based upon their BMIs, arms, legs, trunk, whole body fat percentages, body fat ratios such as fat mass index (FMI), trunk-to-appendicular fat ratio (TAFR), whole body fat ratio (WBFR), anthropometric measures and ratios [waist-to-hip, head-to-neck, thigh-to-arm, thigh-to-ankle, height/2-to-waist, height/2-to-hip circumference (C)]. SPSS/PASW 18 program was used for statistical analyses. p≤0.05 was accepted as statistically significance level. All of the fat percentages showed differences between below and above the specified cut-off points in girls when evaluated with HOMA-IR and QUICKI. Differences were observed only in arms fat percent for HOMA-IR and legs fat percent for QUICKI in boys (p≤ 0.05). FGIR was unable to detect any differences for the fat percentages of boys. Head-to-neck C was the only anthropometric ratio recommended to be used for all ISIs (p≤0.001 for both girls and boys in HOMA-IR, p≤0.001 for girls and p≤0.05 for boys in FGIR and QUICKI). Indices which are recommended for use in both genders were Index-I, Index-II, HOMA/BMI and log HOMA (p≤0.001). FMI was also a valuable index when evaluated with HOMA-IR and QUICKI (p≤0.001). The important point was the detection of the severe significance for HOMA/BMI and log HOMA while they were evaluated also with the other indices, FGIR and QUICKI (p≤0.001). These parameters along with Index-I were unique at this level of significance for all children. In conclusion, well-accepted ratios or indices may not be valid for the evaluation of both genders. This study has emphasized the limiting properties for boys. This is particularly important for the selection process of some ratios and/or indices during the clinical studies. Gender difference should be taken into consideration for the evaluation of the ratios or indices, which will be recommended to be used particularly within the scope of obesity studies.Keywords: anthropometry, childhood obesity, gender, insulin sensitivity index
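The grouping above uses standard insulin sensitivity indices with the stated cut-off points (HOMA-IR 2.5, FGIR 6, QUICKI 0.33). A minimal sketch of how these published formulas can be computed is given below; the fat mass index formula and the example values are added assumptions for illustration, and the study-specific DONMA Index-I and Index-II are not reproduced here.

```python
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def fgir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Fasting glucose to insulin ratio."""
    return glucose_mg_dl / insulin_uU_ml

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """QUICKI = 1 / (log10(insulin [uU/mL]) + log10(glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def fat_mass_index(fat_mass_kg: float, height_m: float) -> float:
    """FMI = fat mass (kg) / height^2 (m^2), analogous to BMI."""
    return fat_mass_kg / height_m ** 2

# Hypothetical example values, not taken from the study population:
glucose, insulin = 90.0, 15.0
print("HOMA-IR:", round(homa_ir(glucose, insulin), 2), "(cut-off 2.5)")
print("FGIR:   ", round(fgir(glucose, insulin), 2), "(cut-off 6)")
print("QUICKI: ", round(quicki(glucose, insulin), 3), "(cut-off 0.33)")
print("FMI:    ", round(fat_mass_index(25.0, 1.45), 2), "kg/m^2")
```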
Procedia PDF Downloads 355
57 Groundwater Arsenic Contamination in Gangetic Jharkhand, India: Risk Implications for Human Health and Sustainable Agriculture
Authors: Sukalyan Chakraborty
Abstract:
Arsenic contamination in groundwater has been a matter of serious concern worldwide. Globally, arsenic contaminated water has caused serious chronic human diseases and in the last few decades the transfer of arsenic to human beings via food chain has gained much attention because food represents a further potential exposure pathway to arsenic in instances where crops are irrigated with high arsenic groundwater, grown in contaminated fields or cooked with arsenic laden water. In the present study, the groundwater of Sahibganj district of Jharkhand has been analysed to find the degree of contamination and its probable associated risk due to direct consumption or irrigation. The present study area comprising of three blocks, namely Sahibganj, Rajmahal and Udhwa in Sahibganj district of Jharkhand state, India, situated in the western bank of river Ganga has been investigated for arsenic contamination in groundwater, soil and crops predominantly growing in the region. Associated physicochemical parameters of groundwater including pH, temperature, electrical conductivity (EC), total dissolved solids (TDS), dissolved oxygen (DO), oxidation reduction potential (ORP), ammonium, nitrate and chloride were assessed to understand the mobilisation mechanism and chances of arsenic exposure from soil to crops and further into the food chain. Results suggested the groundwater to be dominantly Ca-HCO3- type with low redox potential and high total dissolved solids load. Major cations followed the order of Ca ˃ Na ˃ Mg ˃ K. The concentration of major anions was found in the order of HCO3− > Cl− > SO42− > NO3− > PO43− varied between 0.009 to 0.20 mg L-1. Fe concentrations of the groundwater samples were below WHO permissible limit varying between 54 to 344 µg L-1. Phosphate concentration was high and showed a significant positive correlation with arsenic. As concentrations ranged from 7 to 115 µg L-1 in premonsoon, between 2 and 98 µg L-1 in monsoon and 1 to 133µg L-1 in postmonsoon season. Arsenic concentration was found to be much higher than the WHO or BIS permissible limit in majority of the villages in the study area. Arsenic was also seen to be positively correlated with iron and phosphate. PCA results demonstrated the role of both geological condition and anthropogenic inputs to influence the water quality. Arsenic was also found to increase with depth up to 100 m from the surface. Calculation of carcinogenic and non-carcinogenic effects of the arsenic concentration in the communities exposed to the groundwater for drinking and other purpose indicated high risk with an average of more than 1 in a 1000 population. Health risk analysis revealed high to very high carcinogenic and non-carcinogenic risk for adults and children in the communities dependent on groundwater of the study area. Observation suggested the groundwater to be considerably polluted with arsenic and posing significant health risk for the exposed communities. The mobilisation mechanism of arsenic also could be identified from the results suggesting reductive dissolution of Fe oxyhydroxides due to high phosphate concentration from agricultural input arsenic release from the sediments along river Ganges.Keywords: arsenic, physicochemical parameters, mobilisation, health effects
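Carcinogenic and non-carcinogenic risk figures of the kind reported above are typically derived from the USEPA chronic daily intake framework. The sketch below illustrates that calculation for drinking-water exposure to arsenic; the exposure parameters, oral slope factor, and reference dose are standard literature assumptions, not values reported by the authors.

```python
def chronic_daily_intake(c_mg_l, ir_l_day, ef_days_yr, ed_yr, bw_kg, at_days):
    """CDI (mg/kg/day) = (C * IR * EF * ED) / (BW * AT)."""
    return (c_mg_l * ir_l_day * ef_days_yr * ed_yr) / (bw_kg * at_days)

# Assumed exposure parameters for an adult (illustrative only):
C = 0.115                      # arsenic concentration, mg/L (115 ug/L, upper premonsoon value)
IR, EF, ED, BW = 2.5, 365, 30, 60.0
AT_noncarc = ED * 365          # averaging time for non-carcinogenic effects
AT_carc = 70 * 365             # lifetime averaging time for carcinogenic risk

RfD = 3.0e-4                   # oral reference dose for inorganic As, mg/kg/day (USEPA IRIS)
SF = 1.5                       # oral slope factor for inorganic As, (mg/kg/day)^-1 (USEPA IRIS)

hq = chronic_daily_intake(C, IR, EF, ED, BW, AT_noncarc) / RfD     # hazard quotient
risk = chronic_daily_intake(C, IR, EF, ED, BW, AT_carc) * SF       # excess lifetime cancer risk

print(f"Hazard quotient: {hq:.1f} (>1 indicates potential non-carcinogenic risk)")
print(f"Cancer risk: {risk:.1e} (i.e., more than 1 in 1000 at this concentration)")
```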
Procedia PDF Downloads 226
56 Carbon Nanotubes (CNTs) as Multiplex Surface Enhanced Raman Scattering Sensing Platforms
Authors: Pola Goldberg Oppenheimer, Stephan Hofmann, Sumeet Mahajan
Abstract:
Owing to its fingerprint molecular specificity and high sensitivity, surface-enhanced Raman scattering (SERS) is an established analytical tool for chemical and biological sensing capable of single-molecule detection. A strong Raman signal can be generated from SERS-active platforms given the analyte is within the enhanced plasmon field generated near a noble-metal nanostructured substrate. The key requirement for generating strong plasmon resonances to provide this electromagnetic enhancement is an appropriate metal surface roughness. Controlling nanoscale features for generating these regions of high electromagnetic enhancement, the so-called SERS ‘hot-spots’, is still a challenge. Significant advances have been made in SERS research, with wide-ranging techniques to generate substrates with tunable size and shape of the nanoscale roughness features. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able and addressable SERS substrates with high enhancements is of profound interest for miniaturised sensing devices. Carbon nanotubes (CNTs) have been concurrently, a topic of extensive research however, their applications for plasmonics has been only recently beginning to gain interest. CNTs can provide low-cost, large-active-area patternable substrates which, coupled with appropriate functionalization capable to provide advanced SERS-platforms. Herein, advanced methods to generate CNT-based SERS active detection platforms will be discussed. First, a novel electrohydrodynamic (EHD) lithographic technique will be introduced for patterning CNT-polymer composites, providing a straightforward, single-step approach for generating high-fidelity sub-micron-sized nanocomposite structures within which anisotropic CNTs are vertically aligned. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements with each of the EHD-CNTs individual structural units functioning as an isolated sensor. Further, gold-functionalized VACNTFs are fabricated as SERS micro-platforms. The dependence on the VACNTs’ diameters and density play an important role in the Raman signal strength, thus highlighting the importance of structural parameters, previously overlooked in designing and fabricating optimized CNTs-based SERS nanoprobes. VACNTs forests patterned into predesigned pillar structures are further utilized for multiplex detection of bio-analytes. Since CNTs exhibit electrical conductivity and unique adsorption properties, these are further harnessed in the development of novel chemical and bio-sensing platforms.Keywords: carbon nanotubes (CNTs), EHD patterning, SERS, vertically aligned carbon nanotube forests (VACNTF)
Procedia PDF Downloads 330
55 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) uses event-related (de)synchronization (ERS/ ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise measurements on EEG signals, methods based on band-pass filters defined by a specific frequency band (i.e., 8 – 30Hz), such as the Infinity Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that define the imagined motion. The CSP effectiveness depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested to EEG signals classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in IM-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the IM-based BCI filtering stage that implements SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals in a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The structure of the SBCSP contemplates dividing the band of interest, initially defined between 0 and 40Hz, into a set of 33 sub-bands spanning specific frequency bands which are processed in parallel each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, and then used as a training vector of an SVM global classifier. Initially, the public EEG data set IIa of the BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, because it has a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost in relation to the application of filtering methods based on IIR filters, suggesting FFT efficiency when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of FFT in EEG signal filtering applied to the context of IM-based BCI implementing SBCSP. Tests with other data sets are currently being performed to reinforce such conclusions.Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
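As a rough illustration of the processing chain described above (FFT-based frequency decomposition, per-sub-band CSP filtering, LDA scoring, and a global SVM on the score vector), a minimal sketch on synthetic data is given below. The sub-band layout, number of CSP components, and the scikit-learn classifiers are assumptions for illustration, not the authors' exact implementation (which uses 33 sub-bands and a Bayesian meta-classifier).

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def csp_filters(X, y, n_comp=4):
    """CSP spatial filters from trials X (n_trials, n_ch, n_samp) and binary labels y."""
    covs = [np.mean([np.cov(t) for t in X[y == c]], axis=0) for c in (0, 1)]
    w, V = eigh(covs[0], covs[0] + covs[1])          # generalized eigendecomposition
    idx = np.argsort(w)
    pick = np.r_[idx[:n_comp // 2], idx[-n_comp // 2:]]
    return V[:, pick].T

def log_var_features(W, X):
    Z = np.einsum("cd,tds->tcs", W, X)               # spatially filtered trials
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

def sub_band_trials(X, fs, band):
    """Band-pass by zeroing FFT bins outside `band` (Hz) instead of IIR filtering."""
    F = np.fft.rfft(X, axis=2)
    f = np.fft.rfftfreq(X.shape[2], 1 / fs)
    F[:, :, (f < band[0]) | (f >= band[1])] = 0
    return np.fft.irfft(F, n=X.shape[2], axis=2)

# Synthetic data standing in for EEG trials (illustrative only)
rng = np.random.default_rng(0)
fs, X, y = 250, rng.standard_normal((80, 22, 500)), rng.integers(0, 2, 80)
bands = [(lo, lo + 4) for lo in range(4, 40, 4)]     # assumed sub-band layout

scores = []
for band in bands:
    Xb = sub_band_trials(X, fs, band)
    W = csp_filters(Xb, y)
    lda = LinearDiscriminantAnalysis().fit(log_var_features(W, Xb), y)
    scores.append(lda.decision_function(log_var_features(W, Xb)))
svm = SVC().fit(np.column_stack(scores), y)          # global classifier on the sub-band score vector
print("training accuracy:", svm.score(np.column_stack(scores), y))
```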
Procedia PDF Downloads 128
54 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data such as the critical temperature, critical pressure, critical volume, and critical compressibility of high-melting metals like niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions include diamond anvil devices, two-stage gas guns, or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressure is another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid-phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much as possible of the liquid phase and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine the thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed. The second step will be to perform experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm. To increase the accuracy of the temperature deduction, the spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce the density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, the volume expansion is then converted into density data. The liquid density behavior obtained in this way is compared to existing literature data and provides another independent source of experimental data. In a second step, the newly determined off-critical liquid-phase density was used as input data for the estimation of niobium's critical point. The approach used heuristically takes into account the crossover from mean-field to Ising behavior, as well as the non-linearity of the diameter of the phase diagram.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
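Because longitudinal expansion is suppressed at these heating rates, the liquid density follows directly from the measured radial expansion. A minimal sketch of that conversion is given below; the example diameter ratios and the room-temperature density value are placeholders, not measured data from the study.

```python
import numpy as np

RHO_ROOM = 8570.0   # assumed room-temperature density of niobium, kg/m^3

def liquid_density(d_ratio):
    """Density from radial expansion only: rho(T) = rho_0 * (d0 / d(T))^2.

    Valid when longitudinal expansion is inhibited, so the volume scales
    with the square of the wire diameter.
    """
    return RHO_ROOM / d_ratio**2

# Hypothetical diameter ratios d(T)/d0 from the FWHM of the shadow-image intensity profiles
d_ratio = np.array([1.00, 1.04, 1.08, 1.12, 1.16])
print(np.round(liquid_density(d_ratio), 1))   # kg/m^3, decreasing as the wire expands
```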
Procedia PDF Downloads 217
53 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems
Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille
Abstract:
Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most researches in this field deal with minor incidents that affect a large number of trains due to cascading effects. They focus on timetables, rolling stock and crew duties, but do not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, voltage along the catenary lines, temperatures, etc. The optimization problem to solve has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a phase of sensitivity analysis in order to analyze the behavior of the system and help the decision making process and/or more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the outputs variation. Then, factor fixing allows calibrating the input variables which do not influence the outputs. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indexes are used for factor prioritization and factor fixing. The approach is tested in the case of a simple railway system, with a nominal traffic running on a single track line. The considered incident is the loss of a feeding power substation, which limits the power available and the train speed. Rescheduling is needed and the variables to be adjusted are the trains departure times, train speed reduction at a given position and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing to more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted, according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. Pareto front is also built.Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable
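For the factor prioritization and factor fixing steps described above, generalized Sobol indices can be estimated from simulation statistics. A minimal sketch using the SALib library on a stand-in scalar model is shown below; the input variables, their ranges, and the toy output function merely mimic the rescheduling inputs (departure spacing, speed reduction, number of trains) and are assumptions, not the authors' multiphysics railway simulator.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Assumed problem definition mimicking the rescheduling variables
problem = {
    "num_vars": 3,
    "names": ["departure_spacing_s", "speed_reduction_pct", "n_trains"],
    "bounds": [[120, 900], [0, 50], [4, 12]],
}

def surrogate_model(x):
    """Toy stand-in for the dynamic railway simulation output (e.g., total delay)."""
    spacing, speed_red, n_trains = x
    return n_trains * (1.0 / spacing) * 1e4 + 0.5 * speed_red + 2.0 * n_trains

X = saltelli.sample(problem, 1024)                 # quasi-random Saltelli design
Y = np.apply_along_axis(surrogate_model, 1, X)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:22s} first-order: {s1:5.2f}   total-order: {st:5.2f}")
```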
Procedia PDF Downloads 398
52 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal
Authors: C. Bateira, J. Fernandes, A. Costa
Abstract:
The Douro Demarcated Region (DDR) is a Port wine production region. In NE Portugal, the strong incision of the Douro valley has produced very steep slopes organized in agricultural terraces, which have undergone an intense and deep transformation in order to mechanize the work. The old terrace system, based on vertical stone retaining walls, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to it. The evaluation relies on physically based mathematical models and a validation process based on an inventory of past embankment failures. We used the shallow landslide stability model SHALSTAB, based on physical parameters such as cohesion (c′), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ρ), slope angle (α), and contributing areas computed with the Multiple Flow Direction (MFD) method. A terraced area can only be analysed with these models if very detailed information representative of the terrain morphology is available, since both the slope angle and the contributing areas depend on it. This was achieved using digital elevation models (DEM) of very high resolution (40 cm pixels), derived from a set of photographs taken from a flight at 100 m altitude with a 12 cm pixel resolution. The slope angle is obtained from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM, supported by the observation that the interflow, although not coincident with the surface flow, shows an important similarity to it. Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was used as the basis to model the real internal flow; we therefore assumed that the contributing area at 1 m resolution modelled by MFD is representative of the internal flow of the area. To address this, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, at several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken from a flight at 5 km altitude. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from the DEM of 40 cm resolution with the MFD map from the DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, an Accuracy (ACC) of 0.53, a Precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards
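SHALSTAB couples an infinite-slope stability criterion with a steady-state hydrological model and classifies each cell by the critical ratio of effective recharge to soil transmissivity (q/T). A minimal per-cell sketch of that criterion is given below; the parameter values are illustrative assumptions, and the formula follows the commonly cited Montgomery and Dietrich formulation rather than the exact configuration used by the authors.

```python
import numpy as np

def shalstab_log_q_over_t(slope_rad, contrib_area_m2, cell_width_m,
                          soil_depth_m, cohesion_pa, phi_rad,
                          rho_soil=1700.0, rho_water=1000.0, g=9.81):
    """Critical q/T (m^-1) of the SHALSTAB criterion, returned as log10.

    q/T = sin(theta) * (b/a) * [ rho_s/rho_w * (1 - tan(theta)/tan(phi))
          + c' / (rho_w * g * z * cos(theta)^2 * tan(phi)) ]
    Lower values flag cells that destabilize under smaller recharge.
    """
    a_over_b = contrib_area_m2 / cell_width_m
    term_friction = (rho_soil / rho_water) * (1 - np.tan(slope_rad) / np.tan(phi_rad))
    term_cohesion = cohesion_pa / (rho_water * g * soil_depth_m
                                   * np.cos(slope_rad) ** 2 * np.tan(phi_rad))
    q_over_t = np.sin(slope_rad) / a_over_b * (term_friction + term_cohesion)
    return np.log10(np.clip(q_over_t, 1e-12, None))   # clipped: negative values mean unconditionally unstable

# Illustrative terrace-embankment cell (assumed parameters):
print(shalstab_log_q_over_t(slope_rad=np.radians(35), contrib_area_m2=50.0,
                            cell_width_m=1.0, soil_depth_m=1.5,
                            cohesion_pa=2000.0, phi_rad=np.radians(32)))
```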
Procedia PDF Downloads 176
51 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent, in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained a great secret of life. Another is the pairs of mirror-image molecules – enantiomers. They are identical in atomic composition and therefore indistinguishable in the scalar physical properties. Yet, they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in their scalar properties, the chirality of a molecule reveals itself in interaction with the surrounding of a certain chirality, or more generally, a broken mirror-symmetry. In this work, we report on a system for chiral molecule detection, in which the mirror-symmetry is doubly broken, first by asymmetric structuring a nanopatterned plasmonic surface than by the incidence of circularly polarized light (CPL). In this system, the incident circularly-polarized light induces a surface plasmon polariton (SPP) wave, propagating along the asymmetric plasmonic surface. This SPP field itself is chiral, evanescently bound to a near-field zone on the surface (~10nm thick), but with an amplitude greatly intensified (by up to 104) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic-oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification. The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing a 1064nm CW laser light at the sample - a gold grating microchip submerged in an approximately 1.82M solution of either L-arabinose or D-arabinose and water. A measurement of the current output was then recorded under both rights and left circularly polarized lights. Measurements were recorded at various angles of incidence to optimize the coupling between the spin-momentums of the incident light and that of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for the right CPL are subtracted from those for the left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and the other enantiomer to right CPL. In sum, this work reports on the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and integration ability with read-out electronic circuits for data processing and interpretation.Keywords: Chirality, detection, molecule, spin
Procedia PDF Downloads 91
50 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)
Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger
Abstract:
Located on the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter the saltwater intrusion taking place in the coastal aquifer of Korba. The first intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unanticipated beneficial effect was also recorded: a direct, localized recharge of the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the dam reservoir gave an estimate of the annual leakage volume, but the dynamic processes and a sound quantification of the recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and water quality. The present work focuses on simulating the recharge process to confirm this hypothesis, establish a sound quantification of the water supply to the coastal aquifer, and extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys, based on 68 electrical resistivity soundings, were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern reservoir banks were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric data were used to examine the dynamic aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using the Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map (February 2019) was established to estimate the permanent flow related to the different reservoir levels. Secondly, piezometric data for the 2019-2021 period were used for the transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m a.s.l. The good agreement between the computed flow through the recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume imposed by the localized recharge. The simulation results indicate a potential for storage of up to 17 mm/year in existing wells, under gravity-feed conditions, during reservoir level increases over the three years of operation. The Lebna dam groundwater flow model thus characterized a spatiotemporal relation between groundwater and surface water. Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction
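The reported threshold behaviour (leakage switching on only for reservoir stages above roughly 16 m a.s.l.) can be pictured with a simple head-dependent leakage relation of the kind used as a boundary condition in groundwater models. The sketch below is not the calibrated MODFLOW model of the study; the conductance value and the stage list are assumed placeholders.

```python
import numpy as np

# Stage-dependent leakage: Q = C * (h_res - h_threshold) above the threshold, zero below it.
C = 450.0            # leakage conductance, m^2/day (assumed)
h_threshold = 16.0   # m a.s.l., stage below which leakage vanishes (from the study)

def leakage(h_res):
    return np.where(h_res > h_threshold, C * (h_res - h_threshold), 0.0)

stages = np.array([14.0, 15.5, 16.0, 17.0, 18.5, 20.0])   # m a.s.l. (illustrative)
for h, q in zip(stages, leakage(stages)):
    print(f"stage {h:5.1f} m a.s.l. -> leakage {q:8.1f} m3/day")
```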
Procedia PDF Downloads 137
49 Li-Ion Batteries vs. Synthetic Natural Gas: A Life Cycle Analysis Study on Sustainable Mobility
Authors: Guido Lorenzi, Massimo Santarelli, Carlos Augusto Santos Silva
Abstract:
The growth of non-dispatchable renewable energy sources in the European electricity generation mix is promoting the research of technically feasible and cost-effective solutions to make use of the excess energy, produced when the demand is low. The increasing intermittent renewable capacity is becoming a challenge to face especially in Europe, where some countries have shares of wind and solar on the total electricity produced in 2015 higher than 20%, with Denmark around 40%. However, other consumption sectors (mainly transportation) are still considerably relying on fossil fuels, with a slow transition to other forms of energy. Among the opportunities for different mobility concepts, electric (EV) and biofuel-powered vehicles (BPV) are the options that currently appear more promising. The EVs are targeting mainly the light duty users because of their zero (Full electric) or reduced (Hybrid) local emissions, while the BPVs encourage the use of alternative resources with the same technologies (thermal engines) used so far. The batteries which are applied to EVs are based on ions of Lithium because of their overall good performance in energy density, safety, cost and temperature performance. Biofuels, instead, can be various and the major difference is in their physical state (liquid or gaseous). In this study gaseous biofuels are considered and, more specifically, Synthetic Natural Gas (SNG) produced through a process of Power-to-Gas consisting in an electrochemical upgrade (with Solid Oxide Electrolyzers) of biogas with CO2 recycling. The latter process combines a first stage of electrolysis, where syngas is produced, and a second stage of methanation in which the product gas is turned into methane and then made available for consumption. A techno-economic comparison between the two alternatives is possible, but it does not capture all the different aspects involved in the two routes for the promotion of a more sustainable mobility. For this reason, a more comprehensive methodology, i.e. Life Cycle Assessment, is adopted to describe the environmental implications of using excess electricity (directly or indirectly) for new vehicle fleets. The functional unit of the study is 1 km and the two options are compared in terms of overall CO2 emissions, both considering Cradle to Gate and Cradle to Grave boundaries. Showing how production and disposal of materials affect the environmental performance of the analyzed routes is useful to broaden the perspective on the impacts that different technologies produce, in addition to what is emitted during the operational life. In particular, this applies to batteries for which the decommissioning phase has a larger impact on the environmental balance compared to electrolyzers. The lower (more than one order of magnitude) energy density of Li-ion batteries compared to SNG implies that for the same amount of energy used, more material resources are needed to obtain the same effect. The comparison is performed in an energy system that simulates the Western European one, in order to assess which of the two solutions is more suitable to lead the de-fossilization of the transport sector with the least resource depletion and the mildest consequences for the ecosystem.Keywords: electrical energy storage, electric vehicles, power-to-gas, life cycle assessment
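For readers unfamiliar with how a per-kilometre figure is assembled in such a comparison, the sketch below allocates production and end-of-life burdens over the vehicle lifetime and adds the use-phase emissions for the 1 km functional unit. All numbers are illustrative placeholders, not the inventory data of this study.

```python
# Minimal cradle-to-grave comparison per functional unit (1 km driven).
def gco2_per_km(embodied_kg, end_of_life_kg, lifetime_km, use_phase_g_per_km):
    """Allocate production and disposal burdens over the vehicle lifetime and
    add the use-phase emissions per km."""
    return (embodied_kg + end_of_life_kg) * 1000.0 / lifetime_km + use_phase_g_per_km

ev  = gco2_per_km(embodied_kg=9000, end_of_life_kg=1200, lifetime_km=180_000,
                  use_phase_g_per_km=30)   # battery EV charged with low-carbon excess power
sng = gco2_per_km(embodied_kg=6500, end_of_life_kg=600, lifetime_km=180_000,
                  use_phase_g_per_km=95)   # gas vehicle fuelled with synthetic natural gas

print(f"EV : {ev:6.1f} gCO2-eq/km")
print(f"SNG: {sng:6.1f} gCO2-eq/km")
```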
Procedia PDF Downloads 177
48 Ultrasonic Atomizer for Turbojet Engines
Authors: Aman Johri, Sidhant Sood, Pooja Suresh
Abstract:
This paper suggests a new and more efficient method of atomizing fuel in the combustor nozzle of a high-bypass turbofan engine, using ultrasonic vibrations. Since atomization of the fuel just before the spray is injected into the combustion chamber is a crucial aspect of the functioning of a propulsion system, the technology suggested in this paper, together with the experimental analysis of the system components, is expected to assist in complete and rapid combustion of the fuel in the combustor module of the engine. Current propulsion systems use carburetors, atomization nozzles and apertures in air intake pipes for atomization. The idea of this paper is to deploy a new hybrid technology, namely the Ultrasound Field Effect (UFE), to effectively atomize fuel before it enters the combustion chamber, as a viable and effective method to increase efficiency and improve upon existing designs. The Ultrasound Field Effect is applied axially, on diametrically opposite ends of an atomizer tube that fits over the combustor nozzle, where the fuel enters and exits under a pre-defined pressure. The ultrasound energy vibrates the fuel particles up to a breakup frequency. On reaching this frequency, the fuel particles start disintegrating into smaller-diameter particles, perpendicular to the axis of application of the field, from the parent boundary layer of fuel flowing over the baseplate. These broken-up fuel droplets then undergo the swirling effect of the original nozzle design, with a higher breakup ratio than before. A significant reduction in the size of the fuel particles eventually results in an increase in the propulsive efficiency of the engine. Moreover, the ultrasound atomizer operates within a controlled frequency range such that the effects of overheating and induced vibrations are least felt on the overall performance of the engine. The design of an electrical manifold for the multiple-nozzle system over a typical can-annular combustor is developed alongside this study, such that the product can be installed and removed easily for maintenance and repair, allows easy access for inspection, and transmits the least amount of vibrational energy to the surface of the combustor. Since near-field ultrasound is used, the vibrations are easily controlled, thereby successfully reducing vibrations on the outer shell of the combustor. An experimental analysis of the effect of ultrasonic vibrations on flowing jet turbine fuel is carried out using an ultrasound generator probe, and an effective decrease in droplet size across a constant diameter, away from the boundary layer of the flow, is observed visually under ultraviolet light. The choice of material for the ultrasound inducer tube and crystal, along with the operating ranges of temperature, pressure, and frequency of the Ultrasound Field Effect, are also studied in this paper, while taking into account the losses incurred due to constant vibrations and thermal loads on the tube surface. Keywords: atomization, ultrasound field effect, titanium mesh, breakup frequency, parent boundary layer, baseplate, propulsive efficiency, jet turbine fuel, induced vibrations
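The abstract does not state its droplet-size model; as a hedged illustration of how drive frequency sets the droplet scale in ultrasonic atomization, the sketch below uses the classical Lang capillary-wave correlation with assumed kerosene-like fuel properties. Both the correlation choice and the property values are assumptions, not results from this work.

```python
import math

def lang_droplet_diameter(surface_tension, density, frequency):
    """Median droplet diameter from ultrasonic (capillary-wave) atomization,
    using the classical Lang correlation D = 0.34 * (8*pi*sigma / (rho*f**2))**(1/3)."""
    return 0.34 * (8.0 * math.pi * surface_tension / (density * frequency ** 2)) ** (1.0 / 3.0)

# Illustrative properties for a kerosene-type jet fuel (assumed values).
sigma = 0.025      # N/m
rho = 800.0        # kg/m^3
for f_khz in (40, 120, 1000, 2400):
    d = lang_droplet_diameter(sigma, rho, f_khz * 1e3)
    print(f"{f_khz:5d} kHz -> median droplet ~ {d * 1e6:6.1f} um")
```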
Procedia PDF Downloads 239
47 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection
Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda
Abstract:
In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, their practical limitations, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be pursued via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by changing the entire terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. The electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimate of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth-enhancement materials are commercially available. Many are composed of carbon-based materials or clays like bentonite. These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and do have very low resistivities, but they also feature corrosion issues. Typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete. This prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. In order to show this, a practical example is explained here in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity on the order of 1 kOhm·m. Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards
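To illustrate the orders of magnitude discussed above, the sketch below evaluates the classical single-rod earthing formula and contrasts the naive circuit-theory parallel combination with a screened estimate. The rod dimensions, number of rods, and utilization factor are illustrative assumptions, not values from the paper.

```python
import math

def rod_resistance(rho, length, diameter):
    """Earth resistance of a single vertical rod (Dwight's formula):
    R = rho / (2*pi*L) * (ln(4L/d) - 1)."""
    return rho / (2.0 * math.pi * length) * (math.log(4.0 * length / diameter) - 1.0)

rho = 1000.0          # soil resistivity, ohm*m (as in the example above)
L, d = 3.0, 0.016     # 3 m rod, 16 mm diameter (assumed)
R1 = rod_resistance(rho, L, d)

n = 6                 # rods connected in parallel
eta = 0.75            # assumed utilization (screening) factor for the chosen spacing
print(f"single rod:            {R1:6.1f} ohm")
print(f"naive parallel (R1/n): {R1 / n:6.1f} ohm  (circuit-theory underestimate)")
print(f"with screening factor: {R1 / (n * eta):6.1f} ohm")
```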
Procedia PDF Downloads 137
46 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors
Authors: Galatee Levadoux, Trevor Benson, Chris Worrall
Abstract:
With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and more reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum and then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the said resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the measured complex admittance. Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves similar to those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor has shown the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL has shown that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimetres. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially. Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades
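As a rough illustration of what is tracked during cure, the sketch below evaluates a lumped-element complex admittance Y*(ω) = G + jωC for assumed resin properties at three stages of cure over the 1 Hz to 10 kHz range mentioned above. The cell constant, permittivities, and conductivities are illustrative assumptions, not measured values from this work.

```python
import numpy as np

# Lumped-cell model of the embedded-electrode sensor: Y*(w) = G + j*w*C,
# with G = sigma * k_cell and C = eps0 * eps_r * k_cell. Values are illustrative.
eps0 = 8.854e-12
k_cell = 0.02                      # m, assumed cell constant of the electrode array
freqs = np.logspace(0, 4, 5)       # 1 Hz ... 10 kHz, the range used in the study

def admittance(freq, eps_r, sigma):
    omega = 2.0 * np.pi * freq
    return sigma * k_cell + 1j * omega * eps0 * eps_r * k_cell

for label, eps_r, sigma in [("uncured resin", 8.0, 1e-6),
                            ("gelled resin ", 5.0, 1e-8),
                            ("cured resin  ", 3.5, 1e-10)]:
    Y = admittance(freqs, eps_r, sigma)
    print(label, ["%.2e S" % abs(y) for y in Y])
```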
Procedia PDF Downloads 165
45 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry
Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard
Abstract:
Wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns, and hence to device defectivity. This issue becomes more and more important with shrinking transistor sizes and mainly concerns high-aspect-ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While the low-frequency acoustic reflectometry principle is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of both experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without a PFTS hydrophobic treatment. In the untreated-surface case, the acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated-surface case, the acoustic reflection is total with water (no liquid in the DTI). Impalement of the liquid occurs at a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low-surface-tension liquids are then detectable with this method. Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor
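The contrast the method relies on can be estimated from a simple normal-incidence impedance-mismatch calculation at the trench-bottom interface; the sketch below uses textbook material constants and is only an order-of-magnitude illustration, not the FDTD model of the study.

```python
# Normal-incidence pressure reflection coefficient at the silicon / trench-bottom
# interface, r = (Z2 - Z1) / (Z2 + Z1), with acoustic impedance Z = rho * c.
# A dry (air-filled) trench bottom reflects almost totally; a wetted bottom does not.
materials = {            # (density kg/m^3, longitudinal speed m/s) - textbook values
    "silicon": (2329.0, 8433.0),
    "air":     (1.2,    343.0),
    "water":   (1000.0, 1480.0),
    "ethanol": (789.0,  1144.0),
}

def reflection(medium):
    z_si = materials["silicon"][0] * materials["silicon"][1]
    z_m = materials[medium][0] * materials[medium][1]
    return (z_m - z_si) / (z_m + z_si)

for m in ("air", "water", "ethanol"):
    print(f"Si / {m:7s}: r = {reflection(m):+.3f}")
```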
Procedia PDF Downloads 326
44 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter
Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh
Abstract:
Solar energy is one of the remarkable renewable energy sources, with particular characteristics such as being unlimited, non-polluting, and freely accessible. Generally, solar energy can be exploited in thermal and photovoltaic (PV) forms. The installation cost of a PV system is very high. Additionally, because of its dependence on environmental conditions such as solar radiation and ambient temperature, the electrical power generated by this system is unpredictable, and without power electronic devices there is no guarantee of maximum power delivery at its output. Maximum power point tracking (MPPT) should be used to achieve the maximum power of a PV string. MPPT is one of the essential parts of the PV system; without it, it would be impossible to extract the maximum power from the PV string, and high losses would be caused in the PV system. One of the noticeable challenges in the MPPT problem is partial shading conditions (PSC). Under PSC, the output photocurrent of the PV module under the shadow is less than the PV string current. The difference between these currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the PV module under the shadow. This condition is called the hot-spot phenomenon. An anti-parallel diode is inserted across the PV module to prevent this phenomenon. This diode is known as the bypass diode. Due to the action of the bypass diodes under PSC, the P-V curve of the PV string has several peaks. The peak of the P-V curve that yields the maximum available power is the global peak. Model-based Global MPPT (GMPPT) methods can estimate the optimal point faster than other GMPPT approaches. Centralized, modular, and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT on a PV string. There are some problems with the centralized structure, such as current-mismatch losses in the PV string, loss of the power of the shaded modules because they are bypassed by the bypass diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter. In this structure, as the power demanded from the PV string increases, the number of DC-DC converters used in the PV system increases as well. As a result, the cost of the modular structure is very high. We can implement model-based GMPPT through a multi-input interleaved boost DC-DC converter to increase the power extraction from the PV string and reduce hot-spot and current-mismatch errors in a PV string under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other structures mentioned, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, overcoming notable problems such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches. Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions
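A deliberately simplified sketch of why several peaks appear under partial shading, and how a global scan picks the true maximum, is given below. The per-module curve is an ad hoc approximation (not the full single-diode model, and not the model-based estimator of the paper), and all electrical values are illustrative assumptions.

```python
import numpy as np

# Each module's I-V curve is approximated by V = Voc*(1 - (I/Isc)**m); a module whose
# photocurrent is exceeded is taken over by its bypass diode (-0.5 V drop).
Voc, m = 40.0, 8.0                       # per-module open-circuit voltage and shape factor
Isc = np.array([9.0, 9.0, 5.0, 3.0])     # module photocurrents under partial shading (A)

def module_voltage(i, isc):
    return Voc * (1.0 - (i / isc) ** m) if i < isc else -0.5

# Scan the whole string current range and keep the global power maximum.
currents = np.linspace(0.01, Isc.max() - 0.01, 2000)
voltages = np.array([sum(module_voltage(i, isc) for isc in Isc) for i in currents])
power = np.clip(voltages, 0.0, None) * currents

k = int(np.argmax(power))
print(f"Global MPP ~ {power[k]:.0f} W at {voltages[k]:.0f} V, {currents[k]:.2f} A")
```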
Procedia PDF Downloads 127
43 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada
Authors: Wendy De Gomez
Abstract:
The University of Waterloo is situated in central Canada, in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held Canada's top spot as the most innovative university and has been consistently ranked among the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government's more than 100 domestic innovation policies, which have contributed to the country's 15th-place global ranking in the World Intellectual Property Organization's (WIPO) 2022 Global Innovation Index. Yet undoubtedly, it is the University of Waterloo's unique characteristics that propel its innovative creativity forward. This paper will provide a contextual definition of innovation in higher education and then demonstrate the five operational attributes that contribute to the University of Waterloo's innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher-education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography. Specifically, the paper investigates the network-structure effect of the Toronto-Waterloo high-tech corridor and the resultant industrial relationships built there. The second attribute is University Policy 73, Intellectual Property Rights. This creator-owned policy grants all ownership to the creator/inventor regardless of the use of University of Waterloo property or funding. Essentially, by incentivizing IP ownership for all researchers, further commercialization and entrepreneurship are fostered. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre, in the dedicated research and technology park, and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies. Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo's co-operative education program contributes 31% of all co-op participants to the Canadian economy. Home to the world's largest co-operative education program, Waterloo reports that over 7,000 employers from around the world recruit its students for short- and long-term placements, directly contributing to the students' ability to learn and optimize essential employment skills before they graduate. Finally, the students themselves at Waterloo are exceptional. The entrance average ranges from the low 80s to the mid-90s depending on the program. In computer, electrical, mechanical, mechatronics, and systems design engineering, to have a 66% chance of acceptance, the applicant's average must be 95% or above. Singly, none of these five attributes could account for the university's outstanding track record of innovative creativity, but when bundled into a 1,000-acre, 100-building main campus with 6 academic faculties, 40,000+ students, and over 1,300 world-class faculty, the recipe for success becomes quite evident. Keywords: IP policy, higher education, economy, innovation
Procedia PDF Downloads 69
42 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) describing Fick's law of diffusion, the MacInnes and Ohm equations, and other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second-order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration. Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
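A minimal sketch of the kind of scheme being compared, a Crank-Nicolson step applied to the 1D electrolyte diffusion equation with zero-flux boundaries, is shown below. The grid, diffusivity, time step, and initial profile are illustrative assumptions, not the parameters used in the study, and the dense linear solve would normally be replaced by a tridiagonal one.

```python
import numpy as np

# Crank-Nicolson for dc/dt = D * d2c/dx2 with zero-flux (reflecting) boundaries.
N, L, D, dt = 50, 1e-4, 2.5e-10, 1.0          # nodes, domain [m], diffusivity [m^2/s], step [s]
dx = L / (N - 1)
r = D * dt / (2.0 * dx ** 2)

# Second-difference matrix; ghost-node reflection gives the factor 2 at the boundaries.
A = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
A[0, 1] = A[-1, -2] = 2.0
M_left = np.eye(N) - r * A        # (I - r*A) c_new = (I + r*A) c_old
M_right = np.eye(N) + r * A

c = np.full(N, 1000.0)            # mol/m^3, uniform initial concentration
c[: N // 2] += 200.0              # step perturbation to relax

for _ in range(200):              # 200 s of simulated diffusion
    c = np.linalg.solve(M_left, M_right @ c)

print(f"relaxed to uniform mean {c.mean():.1f} mol/m^3 (expected 1100.0); "
      f"spread {c.max() - c.min():.2e}")
```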
Procedia PDF Downloads 21
41 A Flexible Piezoelectric - Polymer Composite for Non-Invasive Detection of Multiple Vital Signs of Human
Authors: Sarah Pasala, Elizabeth Zacharias
Abstract:
Vital sign monitoring is crucial for both everyday health and medical diagnosis. A significant factor in assessing a human's health is their vital signs, which include heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings. Vital sign monitoring has been the focus of many system and method innovations recently. Piezoelectrics are materials that convert mechanical energy into electrical energy and can be used for vital sign monitoring. Piezoelectric energy harvesters that are stretchable and flexible can detect very low-frequency excitations such as airflow and heartbeat. Current advancements in piezoelectric materials and flexible sensors have made it possible to create wearable and implantable medical devices that can continuously monitor physiological signals in humans. However, because of their non-biocompatible nature, they also produce a large amount of e-waste and require another surgery to remove the implant. This paper presents a biocompatible and flexible piezoelectric composite material for wearable and implantable devices that offers a high-performance platform for seamless and continuous monitoring of human physiological signals and tactile stimuli. It also addresses the issues of e-waste and secondary surgery. A lead-free piezoelectric, SrBi₄Ti₄O₁₅ (SBT), is found to be suitable for this application because its properties can be tailored by suitable substitutions and also by varying the synthesis temperature protocols. In the present work, rare-earth-modified SrBi₄Ti₄O₁₅ has been synthesized and studied. Coupling factors are calculated from the resonant (fr) and anti-resonant (fa) frequencies. It is observed that samarium substitution in SBT increases the Curie temperature and improves the dielectric and piezoelectric properties. From impedance spectroscopy studies, relaxation and non-Debye-type behaviour are observed. The composite of bioresorbable poly(L-lactide) and the lead-free, rare-earth-modified bismuth-layered ferroelectric leads to a flexible piezoelectric device for the non-invasive measurement of vital signs such as heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings, as well as pulse signals in near-surface arteries. These composites are suitable for detecting slight movements of the muscles and joints. The lead-free, rare-earth-modified bismuth-layered ferroelectric-polymer composite is synthesized using ball milling and the solid-state double-sintering method. XRD studies indicated the two phases in the composite. SEM studies revealed the grain size to be uniform and in the range of 100 nm. The electromechanical coupling factor is improved. The elastic constants are calculated, and the mechanical flexibility is found to be improved compared with the single-phase rare-earth-modified bismuth-layered piezoelectric. The results indicate that this composite is suitable for the non-invasive detection of multiple vital signs of humans. Keywords: composites, flexible, non-invasive, piezoelectric
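The coupling-factor calculation mentioned above is commonly done with the effective-coupling relation k_eff² = (fa² − fr²)/fa² taken from an impedance sweep; the sketch below applies it to made-up resonance/anti-resonance pairs. The frequencies are illustrative assumptions, not measured values from this work.

```python
import math

def k_eff(f_resonance, f_antiresonance):
    """Effective electromechanical coupling factor from the resonance (fr) and
    anti-resonance (fa) frequencies of an impedance sweep:
    k_eff**2 = (fa**2 - fr**2) / fa**2."""
    return math.sqrt((f_antiresonance ** 2 - f_resonance ** 2) / f_antiresonance ** 2)

# Illustrative frequencies (kHz) for an unmodified and a rare-earth-modified sample.
print(f"unmodified : k_eff = {k_eff(432.0, 441.0):.3f}")
print(f"Sm-modified: k_eff = {k_eff(428.0, 447.0):.3f}")
```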
Procedia PDF Downloads 37
40 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating the carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is very cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials are meaningless for the purpose of encouraging carbon-reduction design without a feedback mechanism, because an engineering project is not designed based on raw materials but rather on building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in the building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities. Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
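The dynamic EUI idea reduces the use-phase prediction to a few multiplications and additions; the sketch below shows that arithmetic with invented coefficients. The floor area, EUI values, efficiency levels, and grid emission factor are illustrative assumptions, not the calibrated figures used in the BCFES-LCBA.

```python
# Use-phase energy and carbon from baseline EUIs scaled by designed efficiency levels.
floor_area_m2 = 12_000
service_life_years = 60
grid_factor_kgco2_per_kwh = 0.50

baseline_eui = {"air_conditioning": 45.0, "lighting": 25.0, "equipment": 30.0}   # kWh/m2/yr
efficiency_level = {"air_conditioning": 0.80, "lighting": 0.70, "equipment": 0.95}

annual_kwh = floor_area_m2 * sum(baseline_eui[k] * efficiency_level[k] for k in baseline_eui)
use_phase_tco2 = annual_kwh * grid_factor_kgco2_per_kwh * service_life_years / 1000.0

print(f"predicted use-phase energy : {annual_kwh / 1000.0:,.0f} MWh/yr")
print(f"use-phase carbon footprint : {use_phase_tco2:,.0f} tCO2e over {service_life_years} yr")
```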
Procedia PDF Downloads 138
39 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications
Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford
Abstract:
While it is challenging to fabricate electronic devices close to atomic dimensions in conventional top-down lithography, molecular electronics is promising to help maintain the exponential increase in component densities via using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. They are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well-defined and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations, which are believed to be attributed to molecular vibrations. Molecules are more isolated than semiconductor dots, and so have a discrete phonon spectrum. When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum orbital alignment of molecules to the Fermi energy in the leads is essential. To explore it, a drop of ionic liquid is employed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of Fermi energy to HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1-10nm) of the molecules and good flexibility of the SAM lead to the scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors, etc.Keywords: molecular electronics, Coulomb blokade, electron-phonon coupling, self-assembled monolayer
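For a sense of the energy scales behind the Coulomb-blockade behaviour described above, the back-of-envelope sketch below estimates the charging energy for an assumed total dot capacitance of a roughly 1 nm molecule and compares it with thermal smearing; the capacitance value is an assumption, not a measured parameter of these devices.

```python
# Orthodox-model charging energy E_C = e^2 / (2*C_total) for a molecular quantum dot.
e = 1.602176634e-19        # elementary charge, C
kB = 1.380649e-23          # Boltzmann constant, J/K
C_total = 2.0e-19          # total dot capacitance, F (assumed for a ~1 nm molecule)

E_C = e ** 2 / (2.0 * C_total)
kT_room = kB * 300.0

print(f"charging energy            : {E_C / e * 1e3:.0f} meV")
print(f"Coulomb-diamond half-height: ~{E_C / e * 1e3:.0f} mV in source-drain bias")
print(f"kT at 300 K                : {kT_room / e * 1e3:.1f} meV "
      f"(blockade resolvable: {E_C > 10 * kT_room})")
```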
Procedia PDF Downloads 63
38 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education
Authors: Clea Southall
Abstract:
Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools - delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising of four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity, before focusing on individual neurons. Students were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function. They viewed animal brains and created ‘pipe-cleaner neurons’ which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing their confidence in delivering future lessons on the nervous system.Keywords: education, neuroscience, primary school, undergraduate
Procedia PDF Downloads 209