Search results for: transparent electrodes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 837


117 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex

Authors: Kwedi L. M. Nsah, Hisao Uchiki

Abstract:

Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is based on an organic crystal containing lanthanide ions. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)₃ and bpm. The transparent white crystals formed showed brilliant red luminescence under a 405 nm laser. Photoluminescence spectra were recorded at room and cryogenic temperatures (300–14 K). The luminescence spectrum of [Eu(fod)₃(μ-bpm)Eu(fod)₃] showed the characteristic Eu(III) emission transitions in the range 570–630 nm, due to deactivation of the ⁵D₀ emissive state to the ⁷Fⱼ levels. For the application of the dinuclear Eu³⁺ complex to a q-bit device, attention was focused on the ⁵D₀–⁷F₀ transition around 580 nm. The presence of the ⁵D₀–⁷F₀ transition at room temperature revealed that at least one europium site has no inversion center. Since this line is not split by the crystal field, any multiplicity observed is due to a multiplicity of Eu³⁺ sites. For a q-bit element, a narrower line width of the ⁵D₀ → ⁷F₀ PL band of the Eu³⁺ ion is preferable, and cooling over the 300–14 K range was applied to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurements, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K. Upon cooling, the ⁵D₀–⁷F₀ peak red-shifted by 15 cm⁻¹, the line moving towards lower wavenumber. An emission spectrum in the ⁵D₀–⁷F₀ transition region was acquired to verify the line width; at 14 K, the peak was three times as intense as at room temperature. The temperature dependence of the ⁵D₀ state of Eu(fod)₃(μ-bpm)Eu(fod)₃ was strongest in the vicinity of 60 K to 100 K. Thermal quenching was observed above 100 K, where the intensity began to decrease slowly with increasing temperature; this quenching of the Eu³⁺ emission with increasing temperature is caused by energy migration. Thus, 100 K is an appropriate temperature for the observation of the ⁵D₀–⁷F₀ emission peak. A europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K, the Eu³⁺ complex has good thermal stability, and this temperature is appropriate for the observation of the ⁵D₀–⁷F₀ emission peak. Sintering the sample above 600 °C could also be considered, but the Eu³⁺ ion can then be reduced to Eu²⁺, which is why cryogenic measurement is preferable to other methods.
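
The reported trend (a nearly constant ⁵D₀ emission below about 100 K and thermal quenching above it) is commonly summarized with a single-barrier Arrhenius quenching model, I(T) = I₀ / (1 + A·exp(−Eₐ/k_BT)). The sketch below fits that model to made-up intensity data; the synthetic values and the resulting activation energy are illustrative assumptions, not results from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant, eV/K

def quenching(T, I0, A, Ea):
    """Single-barrier Arrhenius quenching: I(T) = I0 / (1 + A*exp(-Ea/(kB*T)))."""
    return I0 / (1.0 + A * np.exp(-Ea / (kB * T)))

# Hypothetical integrated 5D0->7F0 intensities (arbitrary units) over 14-300 K.
T = np.linspace(14.0, 300.0, 25)
rng = np.random.default_rng(0)
I = quenching(T, 3.0, 8.0, 0.035) + rng.normal(0.0, 0.02, T.size)

popt, _ = curve_fit(quenching, T, I, p0=(3.0, 5.0, 0.03))
print("I0 = %.2f a.u., A = %.1f, Ea = %.0f meV" % (popt[0], popt[1], popt[2] * 1e3))
```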

Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine

Procedia PDF Downloads 153
116 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications

Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian

Abstract:

The growth of large food supply chains has created demand for improved end-to-end traceability of food products, which has led to companies being increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. Although the technology is already widely used, several technological and economic issues must be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID is its sensitivity to environmental factors, including the packaging material and the contents of the package. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, the impact of food products on the reading range of RFID tags still needs further investigation. To the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and the properties of food and beverages. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and of different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles, which were filled with different concentrations of water-based solutions of sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetrapak® were used as packaging materials commonly found in the beverage industry. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined in an anechoic chamber. First, the best place to attach the tag on each packaging material was investigated using empty and water-filled bottles. Then, the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were measured at a fixed frequency of 915 MHz. In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test bottles filled with wine, orange juice, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance showed no correlation with the pH or conductivity of the solution. Additionally, the packaging material played a significant role in tag performance, and each tag performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for the prediction of tag performance based on the packaging material and its contents. Further investigations, including more tags and food products, are needed to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties.
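
The correlation check described above (tag performance against solution pH and conductivity) can be expressed as a simple per-tag analysis; the sketch below uses made-up read-range, pH and conductivity values purely as placeholders for the measured data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical measurements for one tag/bottle combination at 915 MHz.
ph   = np.array([2.5, 3.1, 4.0, 5.5, 6.8, 7.0, 7.4])    # solution pH
cond = np.array([12.0, 8.5, 6.0, 3.2, 1.1, 0.8, 0.5])   # conductivity, mS/cm
trr  = np.array([1.9, 2.3, 2.1, 2.6, 2.4, 2.2, 2.5])    # theoretical read range, m

for name, x in (("pH", ph), ("conductivity", cond)):
    r, p = pearsonr(x, trr)
    print(f"TRR vs {name}: r = {r:+.2f}, p = {p:.3f}")
```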

Keywords: smart food packaging, supply chain management, food waste, radio frequency identification

Procedia PDF Downloads 92
115 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, containing 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregressive (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. The statistical significance of each feature for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed a significant difference between term labor and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings. The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they do provide the most promising indicators for preterm labor.
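
A few of the listed features (the 0.08–4 Hz band-pass preprocessing, root mean square, zero-crossing count and median frequency) can be sketched as follows. This is an illustrative outline on a synthetic signal, not the authors' code; the 20 Hz sampling rate and the filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

fs = 20.0  # assumed EHG sampling rate, Hz

def preprocess(x, low=0.08, high=4.0, order=4):
    """Zero-phase band-pass Butterworth filter (0.08-4 Hz)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def features(x):
    rms = np.sqrt(np.mean(x ** 2))
    zero_crossings = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    f, pxx = welch(x, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum, cum[-1] / 2.0)]
    return rms, zero_crossings, median_freq

# Synthetic stand-in for one 30-minute EHG channel.
t = np.arange(0.0, 1800.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
rms, zc, mdf = features(preprocess(raw))
print(f"RMS = {rms:.3f}, zero crossings = {zc}, median frequency = {mdf:.2f} Hz")
```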

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 538
114 The Prediction of Reflection Noise and Its Reduction by Shaped Noise Barriers

Authors: I. L. Kim, J. Y. Lee, A. K. Tekile

Abstract:

In consequence of the very high urbanization rate of Korea, traffic noise damage in areas congested with population and facilities is steadily increasing. Current environmental noise data for the major cities of the country show that noise levels exceed the standards set for both daytime and nighttime. This research is a comparative analysis in search of the soundproof panel shape and design factors that can minimize reflected noise. In addition to the normal flat-type panel, the reflection noise reduction of swelling-type, combined swelling-and-curved-type, and screen-type panels was evaluated. The noise source model Nord 2000, which often provides more abundant information than models developed for similar purposes, was used in the study to determine the overall noise level. Based on vehicle categorization in Korea, the noise levels for varying frequency from the different heights of the sound source (the directivity heights of the Harmonoise model) were calculated for the simulation. Each simulation was made using the ray-tracing method. The noise level was also calculated using the noise prediction program SoundPlan 7.2 for comparison. Noise levels were predicted at receiving points of 15 m (R1), 30 m (R2), and 2 m at the middle of the road (R3). When the noise barriers were designed by shape and the prediction program was run with the noise source placed on the 2nd of the 6 lanes considered, on the noise barrier side, the reflection noise slightly decreased or increased for all noise barriers. At R1, the screen-type barriers showed no reduction effect under any condition, whereas the swelling-type showed a decrease of 0.7–1.2 dB, the best reduction effect among the tested noise barriers. Compared to the other forms of noise barriers, the swelling-type was therefore considered the most suitable for reducing reflected noise; however, since a slight increase was predicted at R2, further research based on a more sophisticated categorization of the related design factors is necessary. Moreover, as swellings are difficult to produce and the modules are smaller than those of other panels, swelling-type noise barriers are challenging to install. If these problems are solved, their range of application will not be more limited than that of other types of noise barriers. Hence, when a swelling-type noise barrier is installed in a downtown region where the amount of traffic increases every day, it will both secure visibility through the transparent walls and diminish the noise pollution due to reflection. Moreover, when decorated with shapes and designs, noise barriers will be more visually attractive than a flat-type barrier and thus, beyond the purely physical soundproofing function of the panels, will help alleviate the psychological burden associated with noise.
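
The "overall noise level" at a receiver combines the direct and reflected contributions energetically (incoherent summation in decibels); the sketch below illustrates only that bookkeeping step with made-up levels, not the Nord 2000 source model or the ray tracing itself.

```python
import numpy as np

def combine_levels(levels_db):
    """Energetic (incoherent) summation of sound pressure levels in dB."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(levels_db) / 10.0)))

# Hypothetical contributions at a receiver: direct path plus one barrier reflection.
direct_db, reflected_db = 62.0, 55.0
total = combine_levels([direct_db, reflected_db])
print(f"total = {total:.1f} dB; the reflection adds {total - direct_db:.1f} dB")
```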

Keywords: reflection noise, shaped noise barriers, sound proof panel, traffic noise

Procedia PDF Downloads 490
113 Influence of Sewage Sludge on Agricultural Land Quality and Crop

Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu

Abstract:

Since the accumulation of large quantities of sewage sludge is producing serious environmental problems, numerous environmental specialists are looking for solutions to this problem. The sewage sludge obtained by treatment of municipal wastewater may be used as fertiliser on agricultural soils because such sludge contains large amounts of nitrogen, phosphorus and organic matter. In many countries, sewage sludge is used instead of chemical fertilizers in agriculture, this being the most feasible way to reduce the increasingly large quantities of sludge. The use of sewage sludge on agricultural soils is allowed only with strict monitoring of its physical and chemical parameters, because heavy metals are present in varying amounts in sewage sludge. Exceeding the maximum permitted quantities of harmful substances may pollute agricultural soils, which may then have to be set aside, because plants can take up the heavy metals present in the soil and these metals will most probably reach humans and animals through food. The sewage sludge analyzed for the present paper was taken from the wastewater treatment plant (WWTP) of Galati, Romania. The physico-chemical parameters determined were: pH (upH), total organic carbon (TOC) (mg L⁻¹), N-total (mg L⁻¹), P-total (mg L⁻¹), N-NH₄ (mg L⁻¹), N-NO₂ (mg L⁻¹), N-NO₃ (mg L⁻¹), Fe-total (mg L⁻¹), Cr-total (mg L⁻¹), Cu (mg L⁻¹), Zn (mg L⁻¹), Cd (mg L⁻¹), Pb (mg L⁻¹), Ni (mg L⁻¹). The determination methods were electrometric (pH, conductivity, TDS), using a portable HANNA HI 9828 multiparameter meter with dedicated electrodes, and spectrophotometric, using a Spectroquant NOVA 60 Merck spectrophotometer with specific Merck parameter kits. The tests showed that the analysed sludge is low in heavy metals and falls within the legal limits, the measured quantities of metals being much lower than the maximum allowed. The results of the tests made to determine the content of nutrients in the sewage sludge showed that the existing nutrients may be used to increase the fertility of agricultural soils. Other tests were carried out on land to which sewage sludge was applied in order to establish the maximum quantity of sludge that may be used without becoming a source of pollution. The tests were made on three plots: a first plot to which neither sludge nor chemical fertilizers were applied, a second plot to which only sewage sludge was applied, and a third plot to which small amounts of chemical fertilizers were applied in addition to sewage sludge. The results showed that production increases when the soil is treated with sludge and small amounts of chemical fertilizers. Based on the results of the present research, a fertilization plan has been suggested. This plan should be reconsidered each year based on the crops planned, the yields proposed, the agrochemical indications, the sludge analysis, etc.

Keywords: agricultural use, crops, physico–chemical parameters, sewage sludge

Procedia PDF Downloads 263
112 A Study of a Diachronic Relationship between Two Weak Inflection Classes in Norwegian, with Emphasis on Unexpected Productivity

Authors: Emilija Tribocka

Abstract:

This contribution presents parts of an ongoing study of the diachronic relationship between two weak verb classes in Norwegian, the a-class (cf. the paradigm of ‘throw’: kasta – kastar – kasta – kasta) and the e-class (cf. the paradigm of ‘buy’: kjøpa – kjøper – kjøpte – kjøpt). The study investigates inflection class shifts between the two classes with Old Norse, the ancestor of Modern Norwegian, as a starting point. Examination of the inflection of 38 verbs in four chosen dialect areas (106 places of attestation) demonstrates that shifts from the a-class to the e-class are widespread to varying degrees in three of the four investigated areas and are more common than shifts in the opposite direction. The diachronic productivity of the e-class is unexpected for several reasons. There is general agreement that type frequency is an important factor influencing productivity. The a-class (53% of all weak verbs) was more type-frequent in Old Norse than the e-class (42% of all weak verbs). Thus, given the type frequency, the expansion of the e-class is unexpected. Furthermore, in the ‘core’ areas of expanded e-class inflection, the shifts disregard phonological principles, creating forms with uncomfortable consonant clusters, e.g., fiskte instead of fiska, the preterit of fiska ‘fish’. Later on, these forms may be contracted, i.e., fiskte > fiste. In this contribution, two factors influencing the shifts are presented: phonological form and token frequency. Verbs with a stem ending in a consonant cluster, particularly when the cluster ends in -t, hardly ever shift to the e-class. As a matter of fact, verbs with this structure belonging to the e-class in Old Norse shift to the a-class in Modern Norwegian; e.g., the ON e-class verb skipta ‘change’ shifts to the a-class. This shift occurs as a result of the lack of morpho-phonological transparency between the stem and the preterit suffix of the e-class, -te. As there is a phonological fusion between a stem ending in -t and the suffix beginning in -t, the transparent a-class inflection is chosen. Token frequency also plays an important role in the shifts in some dialects. In one of the investigated areas, the most token-frequent verbs of the ON e-class remain in the e-class (e.g., høyra ‘hear’, leva ‘live’, kjøpa ‘buy’), while less frequent verbs may shift to the a-class. Furthermore, the results indicate that the shift from the a-class to the e-class occurs in some of the most token-frequent verbs of the ON a-class in this area, e.g., lika ‘like’, lova ‘promise’, svara ‘answer’. The latter is unexpected, as frequent items tend to remain stable. This study presents a case of unexpected productivity, demonstrating that minor patterns can grow and outdo major patterns. Thus, type frequency is not the only factor that determines productivity. The study addresses the role of phonological form and token frequency in the spread of inflection patterns.

Keywords: inflection class, productivity, token frequency, phonological form

Procedia PDF Downloads 36
111 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor one-resistor (1T1R) architecture, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT), is successfully demonstrated. The OTFTs were fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The BZN precursor solution was synthesized from barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate using the sol-gel method. After the BZN solution was completely prepared by the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and the drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. As for the manufacture of the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure, consisting of an Al/TiO₂/Au stack, was fabricated. First, Au was deposited as the bottom electrode of the RRAM device by an RF magnetron sputtering system. Then, the TiO₂ layer was deposited on the Au electrode by sputtering. Finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, together with the different distribution of internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon can also be well explained by the proposed mechanism model. These results make the 1T1R promising for practical applications in low-power active-matrix flat-panel displays.
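
The quoted threshold voltage and saturation mobility are typically extracted from the slope of √|I_D| versus V_GS in saturation, using μ_sat = 2L·slope²/(W·C_i). The sketch below does this with made-up transfer-curve data; the channel dimensions come from the abstract, while C_i and the current values are assumptions for illustration only.

```python
import numpy as np

# Channel geometry from the abstract; Ci is an assumed gate capacitance per area.
W, L = 1500e-4, 150e-4      # channel width / length, cm
Ci = 30e-9                  # assumed BZN dielectric capacitance, F/cm^2

# Hypothetical p-type transfer curve in saturation: VGS (V) and |ID| (A).
vgs = np.array([-2.0, -3.0, -4.0, -5.0, -6.0])
ids = np.array([0.4e-6, 1.5e-6, 3.4e-6, 6.0e-6, 9.4e-6])

# sqrt(|ID|) is linear in VGS in saturation, with slope^2 = W*mu*Ci/(2L).
slope, intercept = np.polyfit(vgs, np.sqrt(ids), 1)
mu_sat = 2.0 * L * slope ** 2 / (W * Ci)
vth = -intercept / slope
print(f"mu_sat = {mu_sat:.2f} cm^2/V.s, Vth = {vth:.2f} V")
```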

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 329
110 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River

Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán

Abstract:

Human dependence on plastic products has led to global pollution with plastic particles ranging in size from 0.001 to 5 millimeters, which are called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with high energy, and the abundance will be overestimated if the sediment analyzed comes from places where the river flows with lower energy. This bias can generate an error greater than 300% of the MP value reported for the same river and should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is under-represented when the sediment analyzed is sand, and over-represented if the sediment analyzed is silt or clay. The present investigation establishes a protocol aimed at incorporating sample granulometry to calibrate MP quantification and eliminate over- or under-representation bias (hereafter, granulometric bias). A total of 30 samples were collected, five samples within each of six work zones. The slope of the sampling points was less than 8 degrees, referred to as low-slope areas according to the Van Zuidam slope classification. During sampling, blanks were used to estimate possible contamination by MPs. Samples were dried at 60 degrees Celsius for three days. A flotation technique was employed to isolate the MPs using sodium metatungstate solution with a density of 2 g/mL. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with rose bengal at a concentration of 200 mg/L and were subsequently dried in an oven at 60 degrees Celsius for 1 hour, then identified and photographed in a stereomicroscope under the following conditions: eyepiece magnification 10x, zoom magnification (zoom knob) 4x, objective lens magnification 0.35x, for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was calculated using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was calculated, revealing a clear trend: as the amount of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
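
The proposed granulometric calibration amounts to weighting the raw MP count by the grain-size composition of each sample; the sketch below shows one simple way to express this, with made-up sieve-fraction masses and per-fraction counts. The normalisation scheme is an illustrative assumption, not the authors' published formula.

```python
# Hypothetical sieve-fraction masses (g per 100 g of dry sediment) and MP counts.
fraction_mass = {"2 mm": 20.0, "1 mm": 18.0, "500 um": 22.0,
                 "250 um": 15.0, "125 um": 13.0, "<63 um": 12.0}
mp_count = {"2 mm": 2, "1 mm": 3, "500 um": 4,
            "250 um": 5, "125 um": 6, "<63 um": 11}

total_mass = sum(fraction_mass.values())
raw_abundance = sum(mp_count.values()) / total_mass * 100.0   # items per 100 g

# Abundance per gram of fines, since MPs accumulate preferentially in <63 um material.
fines_share = fraction_mass["<63 um"] / total_mass
fines_abundance = mp_count["<63 um"] / fraction_mass["<63 um"]

print(f"raw abundance: {raw_abundance:.1f} items/100 g")
print(f"fines share: {fines_share:.0%}, abundance in fines: {fines_abundance:.2f} items/g")
```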

Keywords: microplastics, pollution, sediments, Tena River

Procedia PDF Downloads 49
109 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate its diagnostic utility. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, the influence of medication could be excluded. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in the resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, with an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
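
Theta-phase gamma-amplitude coupling is commonly computed from the instantaneous theta phase and the gamma amplitude envelope of each channel; the sketch below uses a mean-vector-length estimator on a synthetic signal. The sampling rate, filter settings and choice of estimator are assumptions, since the abstract does not specify how TGC was computed.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0  # assumed EEG sampling rate, Hz

def bandpass(x, lo, hi, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def tgc_mean_vector_length(x):
    """Mean-vector-length estimate of theta-phase gamma-amplitude coupling."""
    theta_phase = np.angle(hilbert(bandpass(x, 4.0, 8.0)))
    gamma_amp = np.abs(hilbert(bandpass(x, 30.0, 80.0)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Synthetic 60 s channel with gamma bursts locked to the theta peak.
t = np.arange(0.0, 60.0, 1.0 / fs)
theta = np.sin(2 * np.pi * 6.0 * t)
gamma = (1.0 + theta) * np.sin(2 * np.pi * 50.0 * t)
eeg = theta + 0.3 * gamma + 0.2 * np.random.default_rng(0).normal(size=t.size)
print(f"TGC (mean vector length) = {tgc_mean_vector_length(eeg):.4f}")
```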

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 117
108 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids

Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout

Abstract:

Graphene has emerged as a promising material for applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology because of the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, a high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO₂, O₂, Ar and N₂) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior, with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm² at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS₂/RGO and SnS₂/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm² is found to be 3.5, 2.3 and 2 V/μm for WS₂, RGO and the WS₂/RGO composite, respectively. The enhanced field emission behavior observed for the WS₂/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 µA/cm² is drawn at an applied field of 4.1 V/μm from a few layers of the WS₂/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS₂ and RGO, where graphene-like states are dumped in the region of the WS₂ fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm² is significantly lower (almost half the value) for the SnS₂/RGO nanocomposite (2.65 V/µm) than for pristine SnS₂ nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS₂ and ~3700 for the SnS₂/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS₂/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS₂/RGO composites are due to a substantial lowering of the work function of SnS₂ when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
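
The field enhancement factor β quoted above comes from the slope of the Fowler–Nordheim plot, ln(J/E²) versus 1/E, through β = −B·φ^(3/2)/slope. A minimal sketch with made-up J–E data and an assumed emitter work function of 5 eV is shown below; the numbers are placeholders, not the measured data.

```python
import numpy as np

B = 6830.0   # Fowler-Nordheim constant, V*eV^(-3/2)*um^(-1), for E in V/um
phi = 5.0    # assumed emitter work function, eV

# Hypothetical emission data: applied field E (V/um), current density J (uA/cm^2).
E = np.array([2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
J = np.array([1.0, 8.0, 40.0, 140.0, 380.0, 800.0])

# FN plot: ln(J/E^2) is linear in 1/E with slope -B*phi^(3/2)/beta.
slope, _ = np.polyfit(1.0 / E, np.log(J / E ** 2), 1)
beta = -B * phi ** 1.5 / slope
print(f"FN slope = {slope:.1f}, field enhancement factor beta = {beta:.0f}")
```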

Keywords: graphene, layered material, field emission, plasma, doping

Procedia PDF Downloads 343
107 Development and Obtaining of Solid Dispersions to Increase the Solubility of Efavirenz in Anti-HIV Therapy

Authors: Salvana P. M. Costa, Tarcyla A. Gomes, Giovanna C. R. M. Schver, Leslie R. M. Ferraz, Cristovão R. Silva, Magaly A. M. Lyra, Danilo A. F. Fonte, Larissa A. Rolim, Amanda C. Q. M. Vieira, Miracy M. Albuquerque, Pedro J. Rolim-neto

Abstract:

Efavirenz (EFV) is considered one of the most widely used anti-HIV drugs. However, it is classified as a class II drug (poorly soluble, highly permeable) according to the biopharmaceutical classification system, presenting absorption problems in the gastrointestinal tract and therefore inadequate bioavailability for its therapeutic action. This study aimed to overcome these barriers by developing and obtaining solid dispersions (SD) in order to increase the bioavailability of EFV. For the development of SD with EFV, theoretical and practical studies were initially performed. The first step was the choice of a carrier. For this, various criteria were analyzed, such as the glass transition temperature of the polymer, the intra- and intermolecular hydrogen-bond interactions between drug and polymer, and the miscibility between the polymer and EFV. The obtainment method for the SD was chosen by analyzing which method is the most established in both industry and the literature. Subsequently, the drug and carrier concentrations in the dispersions were chosen. In order to obtain SD presenting the drug in its amorphous form, the SD were analyzed by X-ray diffraction (XRD) as they were obtained. SD are more stable the higher the amount of polymer present in the formulation. With this assumption, an SD containing 10% of drug was initially prepared, and this proportion was then increased until the XRD showed the presence of EFV in its crystalline form; beyond this point, SD with a higher drug concentration were not produced. PVP K30, PVPVA 64 and Soluplus were thus selected as carriers, since hydrogen bonds can form between EFV and these polymers, which have hydrogen-acceptor groups capable of interacting with the hydrogen-donor group of the drug. It is also worth mentioning that the films obtained, regardless of the concentration used, were homogeneous and transparent. Thus, it can be said that EFV is miscible in the three polymers used in the study. The SD and physical mixtures (PM) with these polymers were prepared by the solvent method. The EFV diffraction profile showed main peaks at around 2θ of 6.24°, in addition to other minor peaks at 14.34°, 17.08°, 20.3°, 21.36° and 25.06°, evidencing its crystalline character. Furthermore, the polymers showed an amorphous nature, as evidenced by the absence of peaks in their XRD patterns. The XRD patterns of the PM showed the profile of the drug overlapping that of the polymer, indicating the presence of EFV in its crystalline form. Regardless of the proportion of drug used in the SD, all the samples showed the same characteristics, with no EFV diffraction peaks, demonstrating the amorphous behavior of the products. Thus, the polymers effectively enabled the formation of amorphous SD, probably due to the potential hydrogen bonds between them and the drug. Moreover, the XRD analysis showed that the polymers were able to maintain the drug in its amorphous form at concentrations of up to 80% drug.

Keywords: amorphous form, Efavirenz, solid dispersions, solubility

Procedia PDF Downloads 544
106 A Comparison of qCON/qNOX to the Bispectral Index as Indices of Antinociception in Surgical Patients Undergoing General Anesthesia with Laryngeal Mask Airway

Authors: Roya Yumul, Ofelia Loani Elvir-Lazo, Sevan Komshian, Ruby Wang, Jun Tang

Abstract:

BACKGROUND: An objective means of monitoring the antinociceptive effects of perioperative medications has long been desired as a way to provide anesthesiologists with information regarding a patient's level of antinociception and preclude untoward autonomic responses and reflexive muscular movements from painful stimuli intraoperatively. To this end, electroencephalogram (EEG)-based tools including BIS and qCON were designed to provide information about the depth of sedation, while qNOX was produced to inform on the degree of antinociception. The goal of this study was to compare the reliability of qCON/qNOX to BIS as specific indicators of response to nociceptive stimulation. METHODS: Sixty-two patients undergoing general anesthesia with LMA were included in this study. Institutional Review Board (IRB) approval was obtained, and informed consent was acquired prior to patient enrollment. Inclusion criteria included American Society of Anesthesiologists (ASA) class I-III, 18 to 80 years of age, and either gender. Exclusion criteria included the inability to consent. Withdrawal criteria included conversion to an endotracheal tube and EEG malfunction. BIS and qCON/qNOX electrodes were simultaneously placed on all patients prior to induction of anesthesia and were monitored throughout the case, along with other perioperative data, including patient response to noxious stimuli. All intraoperative decisions were made by the primary anesthesiologist without influence from qCON/qNOX. Student's t-test, prediction probability (PK), and ANOVA were used to statistically compare the relative ability of each index to detect nociceptive stimuli. Twenty patients were included in the preliminary analysis. RESULTS: A comparison of overall intraoperative BIS, qCON and qNOX indices demonstrated no significant difference between the three measures (N=62, p > 0.05). Meanwhile, index values for qNOX (62±18) were significantly higher than those for BIS (46±14) and qCON (54±19) immediately preceding patient responses to nociceptive stimulation in a preliminary analysis (N=20, p=0.0408). Notably, certain hemodynamic measurements demonstrated an increase in response to painful stimuli (MAP increased from 74±13 mm Hg at baseline to 84±18 mm Hg during noxious stimuli [p=0.032], and HR from 76±12 bpm at baseline to 80±13 bpm during noxious stimuli [p=0.078], respectively). CONCLUSION: In this observational study, BIS and qCON/qNOX provided comparable information on patients' level of sedation throughout the course of an anesthetic. Meanwhile, increases in qNOX values demonstrated a superior correlation with an imminent response to stimulation relative to all other indices.

Keywords: antinociception, BIS, general anesthesia, LMA, qCON/qNOX

Procedia PDF Downloads 113
105 Natural Monopolies and Their Regulation in Georgia

Authors: Marina Chavleishvili

Abstract:

Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that would support the efficient operation of the sector. Methodology: In order to highlight the market tendencies of natural monopolies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase their market share. According to the productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are quite a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies. The rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significance of the role of the Antimonopoly Service in the efficient management of natural monopolies. The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the electrical power consumed, need revision. The effects of electricity price discrimination are important, particularly segmentation across different seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacities and maintenance costs. If the price of electricity in winter is higher than in summer, electricity consumption will decrease in winter: consumers will start to use electricity more economically, which will allow extra capacities to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not towards reducing but towards increasing the gains of consumers and producers. Overall, the optimal management of the given fields will allow for improving well-being throughout the country. In the article, conclusions are drawn and recommendations are developed to deliver effective policies and regulations for natural monopolies in Georgia.

Keywords: monopolies, natural monopolies, regulation, antimonopoly service

Procedia PDF Downloads 63
104 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie

Authors: Olubisi Friday Oluduro

Abstract:

The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this temperature increase to 1.5 degrees Celsius. An advancement on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goal of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. For it to achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching a global peak of GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse effects of climate change. The Agreement also contains comprehensive provisions on the support to be provided to developing countries, which includes finance, technology transfer and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that suit them best (Nationally Determined Contributions) and providing for a mechanism of assessing progress and increasing global ambition over time through a regular global stocktake. Despite the seemingly global reach of the Agreement, it has been fraught with manifold limitations threatening its very capability to produce any meaningful result. Some of these obvious limitations, such as the non-participation of the United States and the non-payment of funds into the various coffers for appropriate strategic purposes, were the very cause of the failure of its predecessor, the Kyoto Protocol. They have left the developing countries all the more threatened, being more vulnerable than the developed countries, which are really responsible for the climate change scourge. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the situation since the Agreement was concluded, and ascertain whether the developing countries have been better or worse off since then, examining why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.

Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework

Procedia PDF Downloads 141
103 Paradigms of Sustainability: Roles and Impact of Communication in the Fashion System

Authors: Elena Pucci, Margherita Tufarelli, Leonardo Giliberti

Abstract:

As a central issue for future human and social development, sustainability is becoming a recurring theme in the fashion industry as well, where the need to explore new possible directions aimed at achieving sustainability goals and communicating them is rising. Scholars have devoted attention to the overall environmental impact of the textile and fashion industry, which, having emerged as one of the world's most polluting, today concretely faces the need to take the path of sustainability in both products and production processes. Every day we witness the impact of our consumption, showing that the concept of sustainability is as vast as it is complex: with a sometimes ambiguous definition, sustainability can concern projects, products, companies, sales, packaging, and supply chains in relation to the proximity of the actors, as well as traceability, raw material procurement, and disposal. However, in its primary meaning, sustainability is the ability to maintain specific values and resources for future generations. The contribution aims to address sustainability in the fashion system as a layered problem that requires substantial changes at different levels: in the fashion product (materials, production processes, timing, distribution, and disposal), in the functioning of the system (life cycle, impact, needs, communication), and, last but not least, in the practice of fashion design, which should conceive durable, low-obsolescence and possibly demountable products. Moreover, consumers play a central role in the growing awareness, together with an increasingly strong sensitivity towards the environment and sustainable clothing. Since it is also a market demand, undertaking significant efforts to achieve total transparency and sustainability in all production and distribution processes is becoming fundamental for the fashion system. Sustainability is not to be understood as purely environmental but as the pursuit of collective well-being in relation to conscious production, human rights, and social dignity, with the aim of achieving intelligent, resource- and environmentally friendly production and consumption patterns. Assuming sustainability is a layered problem makes the role of communication crucial to convey scientific or production-specific content so that people can obtain and interpret the information needed to make related decisions. Hence, if it is true that "what designers make becomes the future we inhabit", design is facing a great and challenging responsibility. The fashion industry needs a system of rules able to assess the sustainability of products, one that is transparent and easily interpreted by consumers, identifying and enhancing virtuous practices. There are still complex and fragmented value chains that make it extremely difficult for brands and manufacturers to know the history of their products, to identify exactly where the risks lie, and to respond to the growing demand from consumers and civil society for responsible and sustainable production practices in the fashion industry.

Keywords: fashion design, fashion system, sustainability, communication, complexity

Procedia PDF Downloads 102
102 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and the making of steel alloys, owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe–Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe–Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current run at A/O = 1:1, with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite (syn-MoO₃) structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesized MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.
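
The reported two-stage counter-current figures are consistent with a stage-wise mass balance at constant distribution ratio; a minimal sketch using the Kremser relation is given below, where the distribution ratio is back-calculated from the single-contact extraction (~85% at A/O = 1:1) and the cascade is assumed ideal.

```python
def fraction_extracted(D, stages, phase_ratio=1.0):
    """Kremser relation: fraction extracted after N ideal counter-current stages."""
    E = D * phase_ratio                 # extraction factor, E = D * (O/A)
    if abs(E - 1.0) < 1e-12:
        return stages / (stages + 1.0)
    return 1.0 - (E - 1.0) / (E ** (stages + 1) - 1.0)

# Distribution ratio inferred from the reported single-contact extraction (~85%).
single_contact = 0.85
D = single_contact / (1.0 - single_contact)

for n in (1, 2, 3):
    print(f"{n} stage(s): {fraction_extracted(D, n):.1%} Mo extracted")
```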

Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 147
101 The Structuring of Economic of Brazilian Innovation and the Institutional Proposal to the Legal Management for Global Conformity to Treat the Technological Risks

Authors: Daniela Pellin, Wilson Engelmann

Abstract:

Brazil has sought to accelerate its development through technology and innovation as a response to global influences, which it has absorbed into its internal management practices. To this end, it enacted the Brazilian Law of Innovation 13.243/2016. However, because the Law overestimates economic aspects, its application will not take stakeholders and technological risks into account, since these receive no legal treatment. Economic exploitation and technological risks must be controlled within the limits of the democratic system, in order to achieve better social development and to help economic agents make decisions that conform with global directions. The research understands this to be a problem that must be faced, given the social particularities of the country, because the North American Triple Helix Theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, it is necessary to adjust the management of the Law so that it also serves social and democratic interests and increases the country's development. For this, the Government will have to adopt practices, side by side with universities, civil society and companies, such as informational transparency, the pursuit of partnerships, the creation of a Comfort Letter document to prepare and ensure the operation, the joint elaboration of a Manual of Good Practices, accountability and data dissemination. Universities, in turn, must promote informational transparency, draw up partnership contracts, generate revenue and develop information. In addition, civil society must analyze the proposals received and discuss them in order to give its opinion on them. Finally, companies have to provide public and transparent information about investments, economic benefits, risks and the innovations produced. As a general objective, the research intends to demonstrate that the efficient deployment of the helix will be possible if the innovative decision-making process follows this institutional logic. As specific objectives, the American influence must undergo some modifications to better suit the economic-legal incentives and to strengthen the development of the social system. The hypothesis is that an institutional model for application to the legal system can be elaborated based on the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society, as proposed by the researchers. The method of approach will be systemic-constructivist, with a bibliographical review, data collection and analysis, and the construction of an institutional and democratic model for the management of the Law.

Keywords: development, governance of law, institutionalization, triple helix

Procedia PDF Downloads 122
100 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and the making of steel alloys, owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe–Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe–Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage counter-current run at A/O = 1:1, with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite (syn-MoO₃) structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesized MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.

Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery

Procedia PDF Downloads 249
99 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era in which we are now living, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress are some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity by subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing the insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 129
98 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels

Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand

Abstract:

The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane or PEM to maintain sufficiently high proton conductivity. On the other hand, too much liquid water present in the cathode can cause “flooding” (that is, the pore space is filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blockage water liquid bridge/plug (concave and convex forms), slug/plug flow and film flow. Some of the flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, and film) of detected liquid water in the test microchannels and yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The potential benefit of this software is that it allows the user to obtain measurements from images of small objects in a more precise and systematic way. The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
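The water-detection step described above lends itself to a compact illustration. The sketch below is a hypothetical Python analogue (the authors' algorithm was written in MATLAB) of the image-processing idea: liquid water is segmented from a normalised grayscale frame by thresholding, connected regions are labelled, each region is classified by a crude size/shape heuristic (droplet, slug/plug or film), and the liquid area fraction is reported. The threshold and the classification cut-offs are illustrative assumptions, not the authors' parameters.

import numpy as np
from scipy import ndimage

def analyse_frame(gray, channel_width_px, liquid_threshold=0.45):
    """Segment liquid water in a normalised grayscale frame (values in [0, 1]) and
    return the liquid area fraction plus a crude structure label for each blob."""
    liquid = gray < liquid_threshold            # darker pixels taken as liquid (assumption)
    labels, n_blobs = ndimage.label(liquid)     # connected liquid regions
    liquid_fraction = liquid.mean()             # projected area fraction occupied by liquid

    structures = []
    for i in range(1, n_blobs + 1):
        ys, xs = np.nonzero(labels == i)
        height = ys.max() - ys.min() + 1        # extent across the channel
        length = xs.max() - xs.min() + 1        # extent along the channel
        if height >= 0.9 * channel_width_px and length >= 3 * channel_width_px:
            structures.append("film")
        elif height >= 0.9 * channel_width_px:
            structures.append("slug/plug")      # bridges the channel cross-section
        else:
            structures.append("droplet")
    return liquid_fraction, structures

# Synthetic 60 x 400 px frame: one dark liquid bridge on a bright background
frame = np.ones((60, 400))
frame[:, 150:190] = 0.2
fraction, kinds = analyse_frame(frame, channel_width_px=60)
print(fraction, kinds)                                # -> 0.1 ['slug/plug']
print("area-based void fraction:", 1.0 - fraction)    # under a 2D projection assumption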

Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing

Procedia PDF Downloads 283
97 Roads and Agriculture: Impacts of Connectivity in Peru

Authors: Julio Aguirre, Yohnny Campana, Elmer Guerrero, Daniel De La Torre Ugarte

Abstract:

A well-developed transportation network is a necessary condition for a country to derive full benefits from good trade and macroeconomic policies. Road infrastructure plays a key role in the economic development of rural areas of developing countries, where agriculture is the main economic activity. The ability to move agricultural production from the place of production to the market, and then to the place of consumption, greatly influences the economic value of farming activities and of the resources involved in the production process, i.e., labor and land. Consequently, investment in transportation networks contributes to enhancing or overcoming the natural advantages or disadvantages that topography and location have imposed on the agricultural sector. This is of particular importance when dealing with countries, like Peru, with great topographic diversity. The objective of this research is to estimate the impacts of road infrastructure on the performance of the agricultural sector. Specific variables of interest are changes in travel time, shifts from production for self-consumption to production for the market, changes in farmers' income, and impacts on the diversification of the agricultural sector. In the study, a cross-section model with instrumental variables is the central methodological instrument. The data are obtained from agricultural and transport geo-referenced databases, and the instrumental variable specification utilized is based on the Kruskal algorithm. The results show that the expansion of road connectivity reduced farmers' travel time by an average of 3.1 hours and that the proportion of output sold in the market increased by up to 40 percentage points. The increase in connectivity also produced an unexpected increase in the districts' index of diversification of agricultural production. The results are robust to the inclusion of year and region fixed effects, and to controls for geography (i.e., slope and altitude), population variables, and mining activity. Other results are also very eloquent. For example, a clear positive impact can be seen in access to local markets, but this does not necessarily correlate with an increase in the production of the sector. This can be explained by the fact that agricultural development requires not only the provision of roads but also additional complementary infrastructure and investments intended to provide the conditions under which producers can offer quality products (improved management practices, timely maintenance of irrigation infrastructure, transparent management of water rights, among other factors). Therefore, complementary public goods are needed to enhance the effects of roads on the welfare of the population, beyond enabling them to increase their access to markets.
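For readers less familiar with the estimation strategy, the sketch below shows a generic two-stage least-squares routine in Python (numpy only). The variable names (market_share, road_access, the Kruskal-based instrument, the control matrix) are placeholders standing in for the study's geo-referenced data, and the fixed effects mentioned above are omitted for brevity.

import numpy as np

def two_stage_least_squares(y, x_endog, z_instr, controls):
    """Generic 2SLS: instrument the endogenous regressor, then regress the
    outcome on its fitted values plus the exogenous controls."""
    n = len(y)
    ones = np.ones((n, 1))
    # First stage: endogenous regressor on instrument + controls
    Z = np.column_stack([ones, z_instr, controls])
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    # Second stage: outcome on fitted regressor + controls
    X = np.column_stack([ones, x_hat, controls])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta          # beta[1] is the coefficient of interest

# Hypothetical usage with placeholder district-level arrays:
# road_access = endogenous connectivity measure
# kruskal_iv  = instrument built from the Kruskal minimum-spanning-tree network
# geography   = matrix of controls (slope, altitude, population, mining)
# effect = two_stage_least_squares(market_share, road_access, kruskal_iv, geography)[1]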

Keywords: agriculture development, market access, road connectivity, regional development

Procedia PDF Downloads 175
96 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application

Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski

Abstract:

Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressing. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption properties, which allow for the absorption of wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers ensuring biocompatibility and synthetic ones that provide adequate mechanical strength is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to poor strength properties, alginate-based hydrogel materials are often enhanced by the addition of another polymer such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation methods of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound healing material, with particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on a chemical cross-linking method using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as a crosslinking agent and ammonium persulfate as an initiator. In vitro degradation tests of the SA/PVA/AV hydrogels were carried out in Phosphate-Buffered Saline (pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid. All specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (dimensions 300 x 8 mm). A 0.1 M NaCl solution, with a flow rate of 0.65 ml/min, was used as the mobile phase. Three injections with a volume of 50 µl were made for each sample. The thermogravimetric data of the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples with masses of about 10 mg were weighed separately into Al2O3 crucibles and heated from 30°C to 900°C at a scanning rate of 10 °C∙min−1 under a nitrogen atmosphere. Based on the conducted research, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol) and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples immersed in PBS as well as in distilled water did not degrade throughout the whole incubation time.

Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)

Procedia PDF Downloads 142
95 Liquid Food Sterilization Using Pulsed Electric Field

Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas

Abstract:

Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but serious disadvantages such as a reduction in food quality, flavour, taste, colour, etc. have been observed because of the heat treatment, which has led to the development of alternatives to pasteurization such as treatment using UV radiation, high pressure, nuclear irradiation, pulsed electric field, etc. In recent years the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF uses a very high electric field for a short time for the inactivation of microorganisms, for which a high voltage pulsed power source is required. Pulsed power sources used for PEF treatments are usually in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high power switches with only turn-on capability and, therefore, discharge the total energy stored in the capacitor bank. These pulses have a sudden onset and, therefore, a high rate of rise but have a very slow decay, which yields extra heat that is ineffective in microbial inactivation. Square pulses can be produced by the incomplete discharge of a capacitor with the help of a switch having both on/off control or by using a pulse forming network. In this work, a pulsed power-based system was designed with the help of high voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid food such as fruit juices. The high voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements. The liquid food is treated in a chamber where the pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed by subjecting orange juice inoculated with Escherichia coli bacteria to the treatment. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF treated. The voltage was varied to obtain a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
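To make the treatment parameters concrete, the sketch below works through the basic arithmetic in Python. Only the 15 kV/cm peak field, the 200 µs total treatment time and the 30% count reduction come from the abstract; the 1 cm electrode gap and the 2 µs pulse width are illustrative assumptions.

import math

gap_cm = 1.0                     # assumed electrode spacing in the treatment chamber
peak_field_kV_per_cm = 15.0      # from the abstract
required_voltage_kV = peak_field_kV_per_cm * gap_cm
print(f"Pulse amplitude needed: {required_voltage_kV:.0f} kV")

pulse_width_us = 2.0             # assumed width of each square pulse
total_treatment_us = 200.0       # from the abstract
n_pulses = total_treatment_us / pulse_width_us
print(f"Pulses delivered: {n_pulses:.0f}")

surviving_fraction = 1.0 - 0.30  # 30 % reduction in bacterial count
log_reduction = -math.log10(surviving_fraction)
print(f"Log reduction: {log_reduction:.2f}")   # about 0.15 log10 cycles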

Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch

Procedia PDF Downloads 152
94 State Violence: The Brazilian Amnesty Law and the Fight Against Impunity

Authors: Flavia Kroetz

Abstract:

From 1964 to 1985, Brazil was ruled by a dictatorial regime that, under the discourse of fighting terrorism and subversion, implemented cruel and atrocious practices against anyone who opposed the State ideology. At the same time, several Latin American countries faced dictatorial periods and experienced State repression through apparatuses of violence institutionalized in the very governmental structure. Despite the correspondence between the repressive methods adopted by authoritarian regimes in States such as Argentina, Chile, El Salvador, Peru and Uruguay, the mechanisms of democratic transition adopted with the end of each dictatorship were significantly different. While some States have found ways to deal with past atrocities through serious and transparent investigations of the crimes perpetrated in the name of repression, in others, as in Brazil, a culture of impunity remains rooted in society, manifesting itself in the widespread disbelief of the population in governmental and democratic institutions. While Argentina, Chile, Peru and Uruguay are convincing examples of the possibility and importance of the prosecution of crimes such as torture, forced disappearance and murder committed by the State, El Salvador demonstrates the complete failure to punish or at least remove from power the perpetrators of serious crimes against civilians and political opponents. In a scenario of widespread violations of human rights, State violence becomes entrenched within society as a daily and even necessary practice. In Brazil, a lack of political and judicial will sustains the impunity of those who, during the military regime, committed serious crimes against human rights under the authority of the State. If the reproduction of violence is a direct consequence of the culture of denial and the rejection of everyone considered to be different, ‘the other’, then the adoption of transitional mechanisms that underpin the historical and political contexts of the time seems essential. Such mechanisms must strengthen democracy through the effective implementation of the rights to memory and to truth, the right to justice and reparations for victims and their families, as well as institutional changes in order to remove from power those who, when in power, could not distinguish between legality and authoritarianism. Against this background, this research analyses the importance of transitional justice for the restoration of democracy, considering the adoption of amnesty laws as a strategy to preclude criminal prosecution of offenses committed during dictatorial regimes. The study investigates the scope of Law No 6.683/79, the Brazilian amnesty law, which, according to a 2010 decision of the Brazilian Constitutional Supreme Court, granted amnesty to those responsible for political crimes and related crimes committed between September 2, 1961 and August 15, 1979. Was the purpose of this Law to grant amnesty for violent crimes committed by the State? If so, is it possible to recognize the legitimacy of a Congress composed of indirectly elected politicians controlled by the dictatorship?

Keywords: amnesty law, criminal justice, dictatorship, state violence

Procedia PDF Downloads 422
93 Fabrication of Aluminum Nitride Thick Layers by Modified Reactive Plasma Spraying

Authors: Cécile Dufloux, Klaus Böttcher, Heike Oppermann, Jürgen Wollweber

Abstract:

Hexagonal aluminum nitride (AlN) is a promising candidate for several wide band gap compound semiconductor applications such as deep UV light emitting diodes (UVC LEDs) and fast power transistors (HEMTs). To date, bulk AlN single crystals are still commonly grown by physical vapor transport (PVT). Single crystalline AlN wafers obtained from this process could offer suitable substrates for the defect-free growth of ultimately active AlGaN layers; however, these wafers still suffer from small sizes, limited delivery quantities and high prices. Although there is already an increasing interest in the commercial availability of AlN wafers, comparatively cheap Si, SiC or sapphire are still predominantly used as substrate materials for the deposition of active AlGaN layers. Nevertheless, due to a lattice mismatch of up to 20%, the obtained material shows high defect densities and is, therefore, less suitable for high power devices as described above. Therefore, the use of AlN with specially adapted properties for optical and sensor applications could be promising for mass market products, which seem to fulfill fewer requirements. To respond to the demand for suitable AlN target material for the growth of AlGaN layers, we have designed an innovative technology based on reactive plasma spraying. The goal is to produce coarse-grained AlN boules with an N-terminated columnar structure and high purity. In this process, aluminum is injected into a microwave-stimulated nitrogen plasma. AlN, as the product of the reaction between the aluminum and the plasma-activated N2, is deposited onto the target. We used an aluminum filament as the initial material to minimize oxygen contamination during the process. The material was guided through the nitrogen plasma so that the mass turnover was 10 g/h. To avoid any impurity contamination by erosion of the electrodes, an electrode-less discharge was used for the plasma ignition. The pressure was maintained at 600-700 mbar, so the plasma reached a temperature high enough to vaporize the aluminum, which subsequently reacted with the surrounding plasma. The obtained products consist of thick polycrystalline AlN layers with a diameter of 2-3 cm. The crystallinity was determined by X-ray crystallography. The grain structure was systematically investigated by optical and scanning electron microscopy. Furthermore, we performed Raman spectroscopy to provide evidence of stress in the layers. This paper will discuss the effects of process parameters such as microwave power and deposition geometry (specimen holder, radiation shields, ...) on the topography, crystallinity, and stress distribution of the AlN.
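A simple stoichiometric bound on the deposition rate can be derived from the 10 g/h aluminum feed; the short Python sketch below assumes complete conversion of Al to AlN, which the abstract does not claim, so the figure is an upper limit only.

# Upper bound on AlN production from the 10 g/h Al feed, assuming complete conversion
M_Al, M_AlN = 26.98, 40.99        # molar masses in g/mol
al_feed_g_per_h = 10.0

mol_al_per_h = al_feed_g_per_h / M_Al
aln_g_per_h = mol_al_per_h * M_AlN
print(f"Maximum AlN deposition rate: {aln_g_per_h:.1f} g/h")   # ~15.2 g/h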

Keywords: aluminum nitride, polycrystal, reactive plasma spraying, semiconductor

Procedia PDF Downloads 263
92 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials

Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova

Abstract:

Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered to be important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Besides, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM) in biomaterials, which may fall by a further order of magnitude in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes and, for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15–30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a comprehensive solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid-based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissue, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for theoretical prediction and interpretation of some properties of the developed materials, and obtained analytical signals in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).

Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system

Procedia PDF Downloads 378
91 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil

Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Furthermore, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This abstract describes the use of a Cu foil with high surface free energy (SFE) and a high density of nano-scale surface kinks (a rough surface) for CVD graphene growth, which is the opposite of the modern use of smooth catalytic surfaces for high-quality graphene growth; however, this controllable rough morphology opens a new era of fast synthesis (less than 50 s, with a short annealing process) of graphene as a continuous film, compared with the conventional, longer process (30 min growth). The experiments showed that the high-SFE condition and the surface kinks on the Cu(100) crystal plane of the catalytic Cu surface facilitated the synthesis of graphene with a highly monolayer and continuous nature, because they promote the adsorption of C species at high concentration, which in turn enables faster nucleation and growth of graphene. The fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no or negligible formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form the rough, high-SFE (54.92 mJ m-2) Cu foil. This surface was used to grow graphene by the CVD technique at 1000 °C for 50 s. The kink-like high-SFE sites introduced on the Cu(100) crystal plane facilitated faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared to other, smoother and lower-SFE Cu surfaces, such as the smoother surface prepared by the redeposition of evaporating Cu atoms during annealing (RRMS of 13.3 nm). Even though the high-SFE condition was favorable for synthesizing graphene with a monolayer and continuous nature, it failed to maintain a clean (the surface contained amorphous C clusters) and defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the film and overcome the previously mentioned problems. Different CVD atmospheres, such as CH4 and H2, were used, and a negligible change in the graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, because the ID/IG ratio of the graphene was reduced to 0.21 after post-annealing in H2 gas. In addition to the change in graphene defectiveness, the FE-SEM images show a reduction of the C cluster contamination of the surface. High-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but they fail to provide defect-free graphene. Furthermore, a plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce the defectiveness.
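Since the layer number and defect density are judged from the Raman intensity ratios quoted above, a minimal Python sketch of that evaluation is given below; the band positions and window width are the values commonly used for graphene, not parameters reported here, and the file name is a placeholder.

import numpy as np

def raman_ratios(shift_cm, intensity, window=50.0):
    """Estimate ID/IG and I2D/IG from a baseline-corrected Raman spectrum by taking
    the maximum intensity in a window around each characteristic band."""
    def band_max(center):
        mask = np.abs(shift_cm - center) <= window
        return intensity[mask].max()
    i_d, i_g, i_2d = band_max(1350.0), band_max(1580.0), band_max(2700.0)
    return i_d / i_g, i_2d / i_g

# Hypothetical usage with a measured spectrum stored as two columns:
# shift, counts = np.loadtxt("graphene_raman.txt", unpack=True)
# id_ig, i2d_ig = raman_ratios(shift, counts)
# A low ID/IG (e.g. the 0.21 reported after H2 post-annealing) indicates few defects;
# a high I2D/IG (e.g. the reported 2.42) is consistent with predominantly monolayer graphene.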

Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy

Procedia PDF Downloads 224
90 Making the Neighbourhood: Analyzing Mapping Procedures to Deal with Plurality and Conflict

Authors: Barbara Roosen, Oswald Devisch

Abstract:

Spatial projects are often contested. Despite participatory trajectories in official spatial development processes, citizens often engage through their power to say no. Participatory mapping helps to produce more legible and democratic ways of decision-making. It has proven its value in producing a multitude of knowledges and views, allowing individuals, community groups and local stakeholders to imagine desired and undesired futures and giving them the rhetorical power to present their views throughout the development process. From this perspective, mapping works as a social process in which individuals and groups share their knowledge, learn from each other and negotiate their relationship with each other as well as with space and power. In this way, these processes eventually aim to activate communities to intervene cooperatively in real problems. However, these are fragile and bumpy processes, sometimes leading to (local) conflict and intractable situations. The heterogeneous subjectivities and knowledge that become visible during the mapping process, and which are contested by members of the community, are often the first trigger. This paper discusses a participatory mapping project conducted in a residential subdivision in Flanders to provide a deeper understanding of how, or under which conditions, the mapping process could moderate discordant situations amongst inhabitants, local organisations and local authorities towards a more constructive outcome. In our opinion, this implies a thorough documentation and presentation of the different steps of the mapping process in order to design and moderate an open and transparent dialogue. The mapping project ‘Make the Neighbourhood’ was set up in the aftermath of a socio-spatial design intervention in the neighbourhood that led to polarization within the community. To start negotiation between the diverse claims that came to the fore, we co-created a map of the desired future of the neighbourhood together with local organisations and inhabitants, as a way to engage them in the development of a new spatial development plan for the area. This mapping initiative set up a new ‘common’ goal or concern, as a first step to bridge the gap that we experienced between different sociocultural groups, between bottom-up and top-down initiatives, and between professionals and non-professionals. An atlas of elements (materials), an atlas of actors with different roles, and an atlas of ways of cooperation and organisation form the working and building material of the future neighbourhood map, assembled in two co-creation sessions. Firstly, we will consider how the mapping procedures articulate the plurality of claims and agendas. Secondly, we will elaborate upon how social relations and spatialities are negotiated and reproduced during the different steps of the map making. Thirdly, we will reflect on the role of the rules, format, and structure of the mapping process in moderating negotiations between strongly divided claims. To conclude, we will discuss the challenges of visualizing the different steps of the mapping process as a strategy to moderate tense negotiations in a more constructive direction in the context of spatial development processes.

Keywords: conflict, documentation, participatory mapping, residential subdivision

Procedia PDF Downloads 180
89 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can also be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability of predicting the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
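The ring-by-ring comparison against the 5 mm clinical bound reduces to a short calculation; a hedged Python sketch is shown below with entirely hypothetical ring-centre coordinates (the abstract reports the longitudinal and transverse components separately, whereas the sketch uses the in-plane distance for brevity).

import numpy as np

def ring_deviation(rings_experiment, rings_simulation, bound_mm=5.0):
    """Mean and maximum distance between corresponding ring centres taken from the
    overlaid experimental and computational images (coordinates in mm)."""
    exp = np.asarray(rings_experiment, dtype=float)
    sim = np.asarray(rings_simulation, dtype=float)
    distances = np.linalg.norm(exp - sim, axis=1)     # per-ring deviation
    return distances.mean(), distances.max(), bool((distances <= bound_mm).all())

# Hypothetical ring centres (x = longitudinal, y = transverse, in mm):
measured  = [(0.0, 0.0), (12.1, 1.4), (24.3, 2.9), (36.0, 3.8)]
predicted = [(0.4, 0.2), (12.8, 1.1), (23.6, 3.4), (36.9, 3.5)]
mean_d, max_d, within_bound = ring_deviation(measured, predicted)
print(f"mean = {mean_d:.2f} mm, max = {max_d:.2f} mm, within 5 mm: {within_bound}")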

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 170
88 Harvesting Value-added Products Through Anodic Electrocatalytic Upgrading Intermediate Compounds Utilizing Biomass to Accelerating Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers concurrently produce H₂ and O₂, demanding additional gas separation procedures to prevent contamination of the H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a product of significant value at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity via electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the numerous biomass refinery technologies. Therefore, we propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Four representative biomass targets, including glucose, ethanol, benzyl, furfural, and 5-hydroxymethylfurfural (HMF), were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst. The potential required to support 50 mA cm-2 is considerably lower (by ~380 mV) than the potential required for the OER. All four compounds can be oxidized to yield liquid byproducts with economic benefit. The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than that for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm-2). In contrast, the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favorable glucose oxidation in comparison to the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
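The energy argument behind the 1.44 V(RHE) versus 1.82 V(RHE) comparison can be made explicit with a back-of-the-envelope Python sketch; it ignores Faradaic and ohmic losses and simply converts cell voltage into electrical energy per kilogram of H₂.

F = 96485.0            # Faraday constant, C/mol
M_H2 = 2.016e-3        # molar mass of H2, kg/mol

def energy_per_kg_h2(cell_voltage_V):
    joules_per_mol = 2 * F * cell_voltage_V    # two electrons transferred per H2
    return joules_per_mol / M_H2 / 3.6e6       # convert J/kg to kWh/kg

coupled = energy_per_kg_h2(1.44)        # glucose oxidation at the anode
conventional = energy_per_kg_h2(1.82)   # conventional OER anode
print(f"{coupled:.1f} vs {conventional:.1f} kWh per kg of H2")
print(f"Relative electricity saving: {1 - coupled / conventional:.0%}")   # ~21 %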

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 71