Search results for: simultaneous perturbation stochastic approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1746

456 Estimation of Seismic Ground Motion and Shaking Parameters Based on Microtremor Measurements at Palu City, Central Sulawesi Province, Indonesia

Authors: P. S. Thein, S. Pramumijoyo, K. S. Brotopuspito, J. Kiyono, W. Wilopo, A. Furukawa, A. Setianto

Abstract:

In this study, we estimated seismic ground motion parameters based on microtremor measurements at Palu City. Several earthquakes have struck along the Palu-Koro Fault in recent years; the Mw 6.3 event of January 23, 2005 (USGS epicenter) caused several casualties. We conducted a microtremor survey to estimate the strong ground motion distribution during the earthquake, and from this survey we produced maps of peak ground acceleration, peak ground velocity, seismic vulnerability index, and ground shear strain for Palu City. We performed single-station microtremor observations at 151 sites in Palu City, and microtremor array investigations at 8 sites to gain a representative determination of the subsurface soil structure. The array observations indicate that Palu City rests on relatively soft soil with Vs ≤ 300 m/s; the predominant periods from horizontal-to-vertical spectral ratios (HVSRs) are in the range of 0.4 to 1.8 s, and the corresponding frequencies are in the range of 0.7 to 3.3 Hz. Strong ground motions of the Palu area were predicted based on the empirical stochastic Green's function method. Peak ground acceleration and velocity exceed 400 gal and 30 kine in some areas, which would with high probability cause severe damage to buildings. The microtremor survey results showed that hilly areas had a low seismic vulnerability index and ground shear strain, whereas the coastal alluvium is composed of material with a high seismic vulnerability index and ground shear strain.
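
The predominant periods quoted above come from horizontal-to-vertical spectral ratios. As an illustrative sketch only (not the authors' processing chain, which involves field records and smoothing choices; the segment count and synthetic data below are assumptions), an HVSR curve can be computed from three-component records by averaging segment amplitude spectra:

```python
import numpy as np

def avg_amp_spectrum(x, nseg):
    """Average amplitude spectrum of x over nseg equal-length segments."""
    segs = np.split(x[: len(x) // nseg * nseg], nseg)
    return np.mean([np.abs(np.fft.rfft(s)) for s in segs], axis=0)

def hvsr(ns, ew, ud, fs, nseg=10):
    """Horizontal-to-vertical spectral ratio from three-component
    microtremor records (NS, EW, UD) sampled at fs Hz."""
    n = len(ud) // nseg
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Geometric mean of the two horizontal components is one common choice
    horiz = np.sqrt(avg_amp_spectrum(ns, nseg) * avg_amp_spectrum(ew, nseg))
    vert = avg_amp_spectrum(ud, nseg)
    return freqs[1:], horiz[1:] / vert[1:]          # drop the DC bin

# Synthetic check: a site resonance amplifying the horizontals at 2 Hz
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
res = 5.0 * np.sin(2 * np.pi * 2.0 * t)             # 2 Hz amplification
ns = rng.standard_normal(t.size) + res
ew = rng.standard_normal(t.size) + res
ud = rng.standard_normal(t.size)
f, curve = hvsr(ns, ew, ud, fs)
f0 = f[np.argmax(curve)]                            # predominant frequency, Hz
```

The predominant period is then 1/f0 (0.5 s for this synthetic site, inside the 0.4-1.8 s range reported above).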

Keywords: Palu-Koro fault, microtremor, peak ground acceleration, peak ground velocity, seismic vulnerability index

Procedia PDF Downloads 401
455 Correlation between Polysaccharides Molecular Weight Changes and Pectinases Gene Expression during Papaya Ripening

Authors: Samira B. R. Prado, Paulo R. Melfi, Beatriz T. Minguzzi, João P. Fabi

Abstract:

Fruit softening is the main change that occurs during papaya (Carica papaya L.) ripening. It is characterized by the depolymerization of cell wall polysaccharides, especially the pectic fractions, which causes cell wall disassembly. However, it is uncertain how the modification of the two main pectin polysaccharide fractions (the water-soluble fraction, WSF, and the oxalate-soluble fraction, OSF) accounts for fruit softening. The aim of this work was to correlate molecular weight changes of the WSF and OSF with the gene expression of pectin-solubilizing enzymes (pectinases) during papaya ripening. Papaya fruits obtained from a producer were harvested and stored under specific conditions, and divided into five groups according to days after harvesting. Cell walls from all groups of papaya pulp were isolated and fractionated (WSF and OSF). Expression profiles of pectinase genes were obtained according to the MIQE guidelines (Minimum Information for publication of Quantitative real-time PCR Experiments). The results showed an increased yield and a decreased molecular weight throughout ripening for both the WSF and OSF. Gene expression data support that papaya softening is achieved by up-regulation of polygalacturonases (PGs), whose action might have been facilitated by the constant action of pectinesterases (PMEs). Moreover, the BGAL1 gene was up-regulated during ripening with simultaneous galactose release, suggesting that galactosidases (GALs) could also account for pulp softening. The data suggest that solubilization of galacturonans and depolymerization of cell wall components were caused mainly by the action of PGs and GALs.

Keywords: carica papaya, fruit ripening, galactosidases, plant cell wall, polygalacturonases

Procedia PDF Downloads 414
454 A Comparative Study of the Mechanical Properties of Polytetrafluoroethylene Materials Synthesized by Non-Conventional and Conventional Techniques

Authors: H. Lahlali, F. El Haouzi, A. M. Al-Baradi, I. El Aboudi, M. El Azhari, A. Mdarhri

Abstract:

Polytetrafluoroethylene (PTFE) is a high-performance thermoplastic polymer with exceptional physical and chemical properties, such as a high melting temperature, high thermal stability, and very good chemical resistance. Nevertheless, manufacturing PTFE is problematic due to its high melt viscosity (10¹² Pa·s). In practice, it is by now well established that this property poses a serious problem when classical methods, particularly hot pressing and high-temperature extrusion, are used to synthesize dense PTFE materials. In this framework, we use here a new process, namely spark plasma sintering (SPS), to elaborate PTFE samples from micrometric particle powder. It consists of applying an electric current and pressure simultaneously and directly to the sample powder. By controlling the processing parameters of this technique, a series of PTFE samples is easily obtained in remarkably short times, as reported in an earlier work. Our central goal in the present study is to understand how the non-conventional SPS process affects the mechanical properties at room temperature. To this end, a second, commercial series of PTFE samples synthesized by extrusion is also investigated. The tensile mechanical properties are found to be superior for the first (SPS) set of samples. However, this trend is not observed in the results obtained from compression testing. The observed macro-behaviors are correlated with physical properties of the two series of samples, such as their crystallinity and density. Upon close examination of these properties, we believe the SPS technique can be seen as a promising way to elaborate polymers of high molecular mass without compromising their mechanical properties.

Keywords: PTFE, extrusion, Spark Plasma Sintering, physical properties, mechanical behavior

Procedia PDF Downloads 299
453 Sustainable Resource Use as a Means of Preserving the Integrity of the Eco-System and Environment

Authors: N. Hedayat, E. Karamifar

Abstract:

Sustainable food and fiber production is emerging as an irresistible option in agrarian planning. Although one should not underestimate the successes of the Green Revolution in enhancing crop production, its adverse environmental and ecosystem consequences have also been remarkable. The aim of this paper is to identify ways of improving crop production to ensure agricultural sustainability and environmental integrity. Systematic observations are used for data collection on intensive farming, deforestation, and the implications of industrial pollutants for agricultural sustainability at national and international levels. These were analyzed within a comparative analytical model of data interpretation. Results show that while multiple factors enhance yield, they simultaneously undermine ecosystem and environmental integrity. Excessive application of agrichemicals has been one of the major causes of pollution of surface and underground water bodies, as well as of soil layers in affected croplands. Rapid deforestation in tropical regions has been the underlying cause of impairment of biodiversity and of the oxygen-generation regime. These, coupled with the production of greenhouse gases, have contributed to global warming and hydrological irregularities. Continuous production of pollutants and effluents has affected marine and land biodiversity through acid rain generated by modern farming and deforestation. Continuous production of greenhouse gases has also affected climatic behavior, manifested in recurring droughts, the contraction of lakes and ponds, and the potential for future flooding of waterways and floodplains.

Keywords: agricultural sustainability, environmental integrity, pollution, eco-system

Procedia PDF Downloads 392
452 Promoting Local Products through One Village One Product and Customer Satisfaction

Authors: Wardoyo, Humairoh

Abstract:

In today's global competition, the world economy depends heavily on high-technology and capital-intensive industries that are mainly owned by well-established, developed economies, such as the United States, the United Kingdom, Japan, and South Korea. Indonesia, as a developing country, is likewise building its economic activities toward becoming an industrial country, although with a slightly different approach. For example, following the concept of one village one product (OVOP) implemented in Japan, Indonesia adopted this concept by promoting local traditional products to improve the incomes of village people and to enhance local economic activities. The objectives of this paper are to analyze how the OVOP program increases local people's income and how it influences customer satisfaction. Behavioral intention to purchase and re-purchase, customer satisfaction, and promotion are key factors for local products to play significant roles in improving local income and the economy of the region. The concepts of OVOP and the key factors that influence the economic activities of local people and the region are described and explained in the paper. The results of a case study based on 300 respondents, customers of a local restaurant in Tangerang City, Banten Province, Indonesia, indicated that local product, service quality, and behavioral intention individually have a significant influence on customer satisfaction, whereas simultaneous tests of the variables indicated a positive and significant influence on behavioral intention through customer satisfaction as the intervening variable.

Keywords: behavioral intention, customer satisfaction, local products, one village one product (OVOP)

Procedia PDF Downloads 286
451 The Log S-fbm Nested Factor Model

Authors: Othmane Zarhali, Cécilia Aubrun, Emmanuel Bacry, Jean-Philippe Bouchaud, Jean-François Muzy

Abstract:

The nested factor model was introduced by Bouchaud et al., where asset return fluctuations are explained by common factors representing the market economic sectors and by residuals (noises) sharing with the factors a common dominant volatility mode, in addition to the idiosyncratic mode proper to each residual. This construction implies that the factor and residual log-volatilities are correlated. Here, we consider the case of a single factor where the only dominant common mode is an S-fbm process (introduced by Peng, Bacry, and Muzy) with Hurst exponent H around 0.11, and where the residuals have, in addition to the previous common mode, idiosyncratic components with Hurst exponents H around 0. The reason for considering this configuration is twofold: to preserve the nested factor model's characteristics introduced by Bouchaud et al., and to propose a framework that reproduces the stylized fact reported by Peng et al., who observed that the Hurst exponents of stock indices are large compared to those of individual stocks. In this work, we show that the log S-fbm nested factor model's construction leads to the Hurst exponent of single stocks being that of the idiosyncratic volatility modes, and the Hurst exponent of the index being that of the common volatility mode. Furthermore, we propose a statistical procedure to estimate the factor Hurst exponent from the stock return dynamics, together with theoretical guarantees, with good results in the limit where the number of stocks N goes to infinity. Last but not least, we show that the factor can be seen as an index constructed from the single stocks weighted by specific coefficients.
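
The abstract's central objects are Hurst exponents of (log-)volatility paths. The authors' estimator, with its theoretical guarantees, is not given in the abstract; as a generic illustration only, a Hurst exponent can be read off the scaling of the second-order structure function, checked here on ordinary Brownian motion, for which H = 1/2 exactly:

```python
import numpy as np

def hurst_structure_function(x, lags):
    """Estimate the Hurst exponent H of a path x from the scaling
    E[|x(t+tau) - x(t)|^2] ~ tau^(2H), via a log-log least-squares fit."""
    m2 = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(m2), 1)
    return slope / 2.0

# Sanity check on Brownian motion (cumulative sum of iid Gaussian steps)
rng = np.random.default_rng(42)
bm = np.cumsum(rng.standard_normal(200_000))
H = hurst_structure_function(bm, lags=np.arange(1, 50))
```

For an S-fbm path with H near 0.1 the same scaling fit applies in principle, though short-range corrections make the estimation far more delicate, which is precisely why a dedicated procedure is proposed in the paper.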

Keywords: hurst exponent, log S-fbm model, nested factor model, small intermittency approximation

Procedia PDF Downloads 32
450 Comparison of the Curvizigzag Incision with Transverse Stewart Incision in Women Undergoing Modified Radical Mastectomy for Carcinoma Breast

Authors: John Joseph S. Martis, Rohanchandra R. Gatty, Aaron Jose Fernandes, Rahul P. Nambiar

Abstract:

Introduction: Surgery for breast cancer is either mastectomy or breast conservation surgery. The most commonly used incision for modified radical mastectomy is the transverse Stewart incision. However, this incision may have the disadvantage of causing disparity between the closure lines of the superior and inferior skin flaps in mastectomy, and can cause overhanging of soft tissue below and behind the axilla. The curvizigzag incision, in principle, may help in this regard and can prevent scar migration beyond the anterior axillary line. This study aims to compare the two incisions. Methods: 100 patients with breast cancer were included in the study after satisfying the inclusion and exclusion criteria. They underwent surgery at Father Muller Medical College, Mangalore, India, between November 2019 and September 2021. The patients were divided into two groups: group A patients underwent modified radical mastectomy with the curvizigzag incision and group B patients with the transverse Stewart incision. Results: Seroma on postoperative days 1 and 2 occurred in 0% of patients in both groups. Seroma on postoperative day 30 was present in 14% of patients in group B. 60% of patients in group B had sag of soft tissue below and behind the axilla, whereas none of the patients in group A had this problem. In 64% of the patients in group B the incision crossed the anterior axillary fold, and 64% of the patients in group B had tension at the incision site during approximation of the skin flaps. Conclusion: The curvizigzag incision is statistically better, with fewer complications, than the transverse Stewart incision for modified radical mastectomy for carcinoma breast.

Keywords: breast cancer, curvizigzag incision, transverse Stewart incision, seroma, modified radical mastectomy

Procedia PDF Downloads 86
449 A Physiological Approach for Early Detection of Hemorrhage

Authors: Rabie Fadil, Parshuram Aarotale, Shubha Majumder, Bijay Guargain

Abstract:

Hemorrhage is the loss of blood from the circulatory system and a leading cause of battlefield and postpartum-related deaths. Early detection of hemorrhage remains the most effective strategy to reduce the mortality rate caused by traumatic injuries. In this study, we investigated the physiological changes via non-invasive cardiac signals at rest and under different hemorrhage conditions simulated through graded lower-body negative pressure (LBNP). Simultaneous electrocardiogram (ECG), photoplethysmogram (PPG), blood pressure (BP), impedance cardiogram (ICG), and phonocardiogram (PCG) signals were acquired from 10 participants (age: 28 ± 6 years, weight: 73 ± 11 kg, height: 172 ± 8 cm). The LBNP protocol consisted of applying -20, -30, -40, -50, and -60 mmHg pressure to the lower half of the body. Beat-to-beat heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were extracted from the ECG and blood pressure. Systolic amplitude (SA), systolic time (ST), diastolic time (DT), and left ventricular ejection time (LVET) were extracted from the PPG during each stage. Preliminary results showed that the application of -40 mmHg, i.e., the moderate-stage simulated hemorrhage, resulted in significant changes in HR (85 ± 4 bpm vs 68 ± 5 bpm, p < 0.01), ST (191 ± 10 ms vs 253 ± 31 ms, p < 0.05), LVET (350 ± 14 ms vs 479 ± 47 ms, p < 0.05), and DT (551 ± 22 ms vs 683 ± 59 ms, p < 0.05) compared to rest, while no change was observed in SA (p > 0.05) as a consequence of LBNP application. These findings demonstrate the potential of cardiac signals in detecting moderate hemorrhage. In the future, we will analyze all the LBNP stages and investigate the feasibility of other physiological signals to develop a predictive machine learning model for early detection of hemorrhage.
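
As a toy illustration of one of the features listed above (this is not the authors' pipeline; the threshold, sampling rate, and synthetic trace are invented), beat-to-beat heart rate can be derived from R-peak locations in an ECG trace:

```python
import numpy as np

def beat_to_beat_hr(ecg, fs, height):
    """Beat-to-beat heart rate: detect R-peaks as local maxima above
    `height`, then convert successive R-R intervals to bpm."""
    mid = ecg[1:-1]
    is_peak = (mid > height) & (mid > ecg[:-2]) & (mid >= ecg[2:])
    peaks = np.where(is_peak)[0] + 1        # indices into the original trace
    rr = np.diff(peaks) / fs                # R-R intervals, seconds
    return 60.0 / rr                        # instantaneous HR, bpm

# Synthetic 75-bpm spike train (R-R = 0.8 s) sampled at 250 Hz for 10 s
fs = 250
ecg = np.zeros(10 * fs)
ecg[::int(0.8 * fs)] = 1.0
hr = beat_to_beat_hr(ecg, fs, height=0.5)   # every value 75.0 bpm here
```

Real ECG requires band-pass filtering and refractory-period logic before peak picking; this sketch only shows the interval-to-bpm conversion.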

Keywords: blood pressure, hemorrhage, lower-body negative pressure, LBNP, machine learning

Procedia PDF Downloads 160
448 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic as well as customized pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
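
The abstract does not spell out its two heuristics, so the sketch below is only a generic greedy baseline for the stated problem (jobs with uniform release dates and deadlines, objective: minimize peak demand): place the largest-energy jobs first, each at the start time that keeps its window's peak lowest. The job data and horizon are invented for the demo:

```python
def schedule_min_peak(jobs, horizon):
    """Greedy heuristic for peak-demand minimization: all jobs share
    release date 0 and deadline `horizon` (uniform release dates and
    deadlines). `jobs` are (power, duration_in_slots) pairs.
    Returns the resulting load profile, one value per time slot."""
    load = [0.0] * horizon
    # Largest energy (power x duration) first, an LPT-style ordering
    for power, duration in sorted(jobs, key=lambda j: -j[0] * j[1]):
        # Feasible start minimizing the peak of the window it would occupy
        best = min(range(horizon - duration + 1),
                   key=lambda s: max(load[s:s + duration]))
        for t in range(best, best + duration):
            load[t] += power
    return load

jobs = [(3.0, 2), (2.0, 2), (2.0, 2), (1.0, 2)]   # (kW, slots), invented
profile = schedule_min_peak(jobs, horizon=8)
peak = max(profile)                                # 3.0: the jobs spread out
```

Greedy placement gives no approximation guarantee in general, which is why the paper's heuristics come with their own analysis.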

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 82
447 Electron-Ion Recombination for Photoionized and Collisionally Ionized Plasmas

Authors: Shahin A. Abdel-Naby, Asad T. Hassan

Abstract:

Astrophysical plasma environments can be classified into collisionally ionized plasmas (CP) and photoionized plasmas (PP). In PP, ionization is caused by an external radiation field, while in CP it is caused by electron collisions. Accurate and reliable laboratory astrophysical data for electron-ion recombination are needed for plasma modeling at low and high temperatures. Dielectronic recombination (DR) is the dominant recombination process in CP for most ions. When a free electron is captured by an ion with simultaneous excitation of its core, a doubly-excited intermediate state may be formed. The doubly excited state relaxes either by electron emission (autoionization) or by radiative decay (photon emission). The DR process takes place when the relaxation occurs to a bound state by photon emission. DR calculations at low temperatures are problematic and challenging, since small uncertainties in the low-energy DR resonance positions can produce huge uncertainties in DR rate coefficients. DR rate coefficients for N²⁺ and O³⁺ ions are calculated using the state-of-the-art multi-configuration Breit-Pauli atomic structure AUTOSTRUCTURE collisional package within the generalized collisional-radiative framework. Level-resolved calculations of RR and DR rate coefficients from the ground and metastable initial states are produced in an intermediate coupling scheme associated with Δn = 0 and Δn = 1 core excitations. DR cross sections for these ions are convoluted with the experimental electron-cooler temperatures to produce DR rate coefficients. Good agreement is found between these rate coefficients and the experimental measurements performed at the CRYRING heavy-ion storage ring for both ions.
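
The final step described above, turning cross sections into rate coefficients, is a convolution of σ(E) with the electron energy distribution. A minimal numerical sketch follows; note that the actual work convolutes with measured electron-cooler distributions, whereas this sketch uses a pure Maxwellian and a toy 1/E cross section chosen because its rate coefficient has a closed form to check against:

```python
import numpy as np

def rate_coefficient(sigma, kT, mass=1.0, emax_factor=200.0, n=200_001):
    """Maxwellian rate coefficient alpha(T) = <sigma v>: convolve a cross
    section sigma(E) with the Maxwell-Boltzmann energy distribution at
    temperature kT (dimensionless units; electron mass = `mass`)."""
    E = np.linspace(1e-8, emax_factor * kT, n)
    v = np.sqrt(2.0 * E / mass)                       # electron speed
    f = 2.0 / np.sqrt(np.pi) * np.sqrt(E) / kT**1.5 * np.exp(-E / kT)
    y = sigma(E) * v * f
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))  # trapezoid

# Toy sigma(E) = 1/E, for which <sigma v> = 2*sqrt(2/(pi*m*kT)) analytically
alpha = rate_coefficient(lambda E: 1.0 / E, kT=1.0)
```

The same routine applied to a resonant DR cross section would simply take a different `sigma`; the numerical subtlety at low kT is resolving narrow low-energy resonances on the energy grid.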

Keywords: atomic data, atomic process, electron-ion collision, plasmas

Procedia PDF Downloads 84
446 Mg and MgN₃ Cluster in Diamond: Quantum Mechanical Studies

Authors: T. S. Almutairi, Paul May, Neil Allan

Abstract:

The geometrical, electronic, and magnetic properties of the neutral Mg center and the MgN₃ cluster in diamond have been studied theoretically in detail by means of an HSE06 Hamiltonian that includes a fraction of the exact exchange term; this is important for a satisfactory picture of the electronic states of open-shell systems. A second batch of calculations with GGA functionals has also been included for comparison, and these support the results from HSE06. The local perturbations of the lattice by the introduced Mg defect are restricted to the first and second shells of atoms before dying out. The formation energy of the single Mg defect calculated with HSE06 and GGA agrees with the previous result. We found that the triplet state with C₃ᵥ symmetry is the ground state of the Mg center, with an energy lower than that of the C₂ᵥ singlet by ~0.1 eV. The recent experimental ZPL (557.4 nm) of the Mg center in diamond is discussed in view of the present work. The analysis of the band structure of the MgN₃ cluster confirms that the MgN₃ defect introduces a shallow donor level in the gap, lying near the conduction band edge. This observation is supported by the EMM, which produces n-type levels shallower than the P donor level. The formation energy of MgN₂ calculated from a 2NV defect (~3.6 eV) is a promising value from which to engineer MgN₃ defects inside diamond. Ion implantation followed by heating to about 1200-1600 °C might induce migration of N-related defects to the localized Mg center. Temperature control is needed in this process to repair the damage and ensure the mobility of V and N, which demands a more precise experimental study.

Keywords: empirical marker method, generalised gradient approximation, Heyd–Scuseria–Ernzerhof screened hybrid functional, zero phonon line

Procedia PDF Downloads 105
445 Co-Culture of Neonate Mouse Spermatogonial Stem Cells with Sertoli Cells: Inductive Role of Melatonin following Transplantation: Adult Azoospermia Mouse Model

Authors: Mehdi Abbasi, Shadan Navid, Mohammad Pourahmadi, M. Majidi Zolbin

Abstract:

We have recently reported that melatonin, as an antioxidant, enhances the efficiency of colonization of spermatogonial stem cells (SSCs). Melatonin as an antioxidant plays a vital role in the development of SSCs in vitro. This study aimed to evaluate the simultaneous effect of Sertoli cells and melatonin on SSC proliferation following transplantation into the testes of adult busulfan-treated azoospermia model mice. SSCs and Sertoli cells were isolated from the testes of three- to six-day-old male mice. To determine purity, flow cytometry with a PLZF antibody was performed. Isolated testicular cells were cultured in αMEM medium in the absence (control group) or presence (experimental group) of Sertoli cells and melatonin extract for 2 weeks. We then transplanted the SSCs by injection into the testes of the azoospermia model mice. Higher viability, proliferation, and Id4 and Plzf expression were observed in the simultaneous presence of Sertoli cells and melatonin in vitro. Moreover, immunocytochemistry results showed higher Oct4 expression in this group. Eight weeks after transplantation, injected cells were localized at the base of the seminiferous tubules in the recipient testes. The number of spermatogonia and the weight of the testes were higher in the experimental group relative to the control group. The results of our study suggest that this new protocol can increase the efficiency of SSC transplantation and can be useful in the treatment of male infertility.

Keywords: colonization, melatonin, spermatogonial stem cell, transplantation

Procedia PDF Downloads 164
444 pH-Responsive Carrier Based on Polymer Particle

Authors: Florin G. Borcan, Ramona C. Albulescu, Adela Chirita-Emandi

Abstract:

pH-responsive drug delivery systems are gaining importance because they deliver the drug at a specific time according to pathophysiological need, resulting in improved therapeutic efficacy and patient compliance. Polyurethane materials are well known for industrial applications (elastomers and foams used in various insulation and automotive products), but they are also versatile biocompatible materials with many applications in medicine, such as artificial skin for premature neonates, membranes in the hybrid artificial pancreas, prosthetic heart valves, etc. This study aimed at the physico-chemical characterization of a drug delivery system based on polyurethane microparticles. The synthesis is based on a polyaddition reaction between an aqueous phase (a mixture of polyethylene glycol M = 200, 1,4-butanediol, and Tween® 20) and an organic phase (lysine diisocyanate in acetone), combined with simultaneous emulsification. Different active agents (omeprazole, amoxicillin, metoclopramide) were used to verify the release profile of the macromolecular particles in media of different pH. Zetasizer measurements were performed using an instrument based on two modules, a Vasco size analyzer and a Wallis zeta potential analyzer (Cordouan Technol., France), on samples kept in solutions of various pH, and the maximum absorbances in the UV-Vis spectra were collected on a UVi Line 9400 spectrophotometer (SI Analytics, Germany). The results of this investigation reveal that these particles are suitable for prolonged release in gastric media, where they can assure an almost constant concentration of the active agents for 1-2 weeks, while they disassemble faster in media with neutral pH, such as intestinal fluid.

Keywords: lysine diisocyanate, nanostructures, polyurethane, Zetasizer

Procedia PDF Downloads 175
443 High Temperature Deformation Behavior of Al0.2CoCrFeNiMo0.5 High Entropy Alloy

Authors: Yasam Palguna, Rajesh Korla

Abstract:

The efficiency of thermally operated systems can be improved by increasing the operating temperature, thereby decreasing fuel consumption and carbon footprint. Hence, there is a continuous need to replace existing materials with new alloys with higher-temperature working capabilities. During the last decade, multi-principal-element alloys, commonly known as high entropy alloys, have been getting more attention because of their superior high-temperature strength along with good high-temperature corrosion and oxidation resistance. The present work focuses on the microstructure and high-temperature tensile behavior of Al0.2CoCrFeNiMo0.5 high entropy alloy (HEA). Wrought Al0.2CoCrFeNiMo0.5 high entropy alloy, produced by vacuum induction melting followed by thermomechanical processing, is tested in the temperature range of 200 to 900 °C. It exhibits very good resistance to softening with increasing temperature up to 700 °C; thereafter there is a rapid decrease in strength, especially beyond 800 °C, which may be due to the simultaneous occurrence of recrystallization and precipitate coarsening. Further, it exhibits superplastic-like behavior, with a uniform elongation of ~275% at 900 °C and a strain rate of 1 × 10⁻³ s⁻¹, which may be due to the presence of fine, stable, equiaxed grains. A strain rate sensitivity of 0.3 was observed, suggesting that solute-drag dislocation glide might be the active mechanism during the superplastic-like deformation. The post-deformation microstructure suggests that cavitation at the sigma phase-matrix interface is the failure mechanism during high-temperature deformation. Finally, the high-temperature properties of the present alloy will be compared with those of contemporary high-temperature materials such as ferritic and austenitic steels and superalloys.

Keywords: high entropy alloy, high temperature deformation, superplasticity, post-deformation microstructures

Procedia PDF Downloads 157
442 High Titer Cellulosic Ethanol Production Achieved by Fed-Batch Prehydrolysis Simultaneous Enzymatic Saccharification and Fermentation of Sulfite Pretreated Softwood

Authors: Chengyu Dong, Shao-Yuan Leu

Abstract:

Cellulosic ethanol production from lignocellulosic biomass can reduce our reliance on fossil fuel, mitigate climate change, and stimulate rural economic development. The relatively low ethanol titers achieved to date (60 g/L) limit the economic viability of lignocellulose-based biorefineries. The ethanol titer can be increased up to 80 g/L by removing nearly all the non-cellulosic materials, but the capital cost of the pretreatment process increases significantly. In this study, a fed-batch prehydrolysis simultaneous saccharification and fermentation (PSSF) process was designed to convert sulfite-pretreated softwood (~30% residual lignin) to high concentrations of ethanol (80 g/L). The liquefaction time of the hydrolysis process was shortened to 24 h by employing the fed-batch strategy. Washing out the spent liquor with water could eliminate the inhibition from the pretreatment spent liquor; however, the ethanol yield from the lignocellulose was reduced, as fermentable sugars were also lost during the process. Fed-batch prehydrolysis of the whole slurry (i.e., liquid plus solid fractions) of pretreated softwood for 24 h, followed by simultaneous saccharification and fermentation at 28 °C, generated an ethanol titer of 80 g/L. The fed-batch strategy is very effective at eliminating the "solid effect" of high-gravity saccharification, so concentrating the cellulose to nearly 90% in the pretreatment process is not a necessary step for high ethanol titers. Detoxification of the pretreatment spent liquor caused sugar loss and consequently reduced the ethanol yield. The tolerance of yeast to inhibitors was better at 28 °C; therefore, reducing the temperature of the subsequent fermentation process is a simple and valid way to achieve high ethanol titers.

Keywords: cellulosic ethanol, sulfite pretreatment, Fed batch PSSF, temperature

Procedia PDF Downloads 360
441 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine

Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka

Abstract:

Next-generation bio-alcohols produced from non-food-based sources such as cellulosic biomass are promising renewable energy sources. The present study investigates the engine performance, combustion characteristics, and emissions of a small single-cylinder direct injection diesel engine fueled by four kinds of next-generation bio-alcohol isomer and diesel fuel blends with a constant blending ratio of 3:7 (by mass). The bio-alcohol isomers tested here are n-butanol and iso-butanol (C4 alcohols), and n-pentanol and iso-pentanol (C5 alcohols). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (exhaust gas recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation), provided by a Roots-blower-type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, compared with diesel fuel operation. It was also found that regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in the ignition delays, the influence of the bio-alcohol isomer blends on the engine performance, combustion characteristics, and emissions is relatively small.

Keywords: alternative fuel, butanol, diesel engine, EGR (Exhaust Gas Recirculation), next generation bio-alcohol isomer blended fuel, pentanol, supercharging

Procedia PDF Downloads 156
440 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of the correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as a reference for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide a means for estimating their uncertainty. The work is supported by Russian Science Foundation grant # 17-13-01466.
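The Casimir-Polder step above reduces to a one-dimensional integral over imaginary frequency, C6 = (3/π)∫₀^∞ α(iω)² dω for identical atoms. A minimal sketch, assuming a hypothetical one-pole (London) model polarizability rather than the paper's CCSD(3) dynamic polarizabilities; all numbers are illustrative:

```python
import numpy as np
from scipy.integrate import quad

def c6_homonuclear(alpha_iw):
    """C6 = (3/pi) * Integral_0^inf alpha(i w)^2 dw for two identical atoms."""
    val, _ = quad(lambda w: alpha_iw(w) ** 2, 0.0, np.inf)
    return 3.0 / np.pi * val

# Hypothetical one-pole (London) model: alpha(i w) = a0 / (1 + (w/w0)^2)
a0, w0 = 139.0, 0.3   # assumed static polarizability / effective frequency (a.u.)
alpha = lambda w: a0 / (1.0 + (w / w0) ** 2)

c6 = c6_homonuclear(alpha)
c6_exact = 0.75 * a0 ** 2 * w0  # closed form for the one-pole model
```

For the one-pole model the integral has the closed form (3/4)·α₀²·ω₀, which makes the numerical quadrature easy to check.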

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 143
439 Isotope Effects on Inhibitors Binding to HIV Reverse Transcriptase

Authors: Agnieszka Krzemińska, Katarzyna Świderek, Vicente Molinier, Piotr Paneth

Abstract:

To understand in detail the interactions between ligands and the enzyme, isotope effects were studied for clinically used drugs that bind in the active site of Human Immunodeficiency Virus Reverse Transcriptase, HIV-1 RT, as well as for a triazole-based inhibitor that binds in the allosteric pocket of this enzyme. The magnitudes and origins of the resulting binding isotope effects were analyzed. Subsequently, binding isotope effects of the same triazole-based inhibitor bound in the active site were analyzed and compared. Together, these results show the different binding origins in the two sites of the enzyme and make it possible to analyze the binding mode and binding site of newly synthesized inhibitors. A typical protocol is described below on the example of the triazole ligand in the allosteric pocket. The triazole was docked into the allosteric cavity of HIV-1 RT with Glide using the extra-precision mode as implemented in the Schroedinger software. The structure of HIV-1 RT was obtained from the Protein Data Bank (PDB ID 2RKI). The pKa values for titratable amino acids were calculated using the PROPKA software, and in order to neutralize the system, 15 Cl⁻ ions were added using the tLEaP package implemented in AMBERTools ver. 1.5. The N-termini and C-termini were also built using tLEaP. The system was placed in a 144 x 160 x 144 Å³ orthorhombic box of water molecules using the NAMD program. Missing parameters for the triazole were obtained at the AM1 level using the Antechamber software implemented in AMBERTools. The energy minimizations were carried out by means of a conjugate gradient algorithm using NAMD. The system was then heated from 0 to 300 K with a temperature increment of 0.001 K. Subsequently, a 2 ns Langevin-Verlet (NVT) MM MD simulation with the AMBER force field implemented in NAMD was carried out. Periodic boundary conditions and cut-offs for the nonbonding interactions (switching radius from 14.5 to 16 Å) were used. After the 2 ns relaxation, 200 ps of QM/MM MD at 300 K were simulated. The triazole was treated quantum mechanically at the AM1 level, the protein was described using AMBER, and the water molecules were described using TIP3P, as implemented in the fDynamo library. Molecules more than 20 Å from the triazole were kept frozen, with cut-offs established on a switching radius from 14.5 to 16 Å. To describe the interactions between the triazole and RT, the free energy of binding was computed using the Free Energy Perturbation method. The change in frequencies from the ligand in solution to the ligand bound in the enzyme was used to calculate the binding isotope effects.
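The final step, turning vibrational frequency changes into a binding isotope effect, is commonly handled with the harmonic Bigeleisen-Mayer formalism. A minimal sketch with hypothetical one-mode frequencies (not the paper's QM/MM values); the directional convention BIE = f(solution)/f(bound) is an assumption of this sketch:

```python
import numpy as np

H_C_K = 1.438777  # hc/k_B in cm*K (second radiation constant)

def rpfr(freqs_light, freqs_heavy, T=300.0):
    """Reduced isotopic partition function ratio from harmonic frequencies
    (cm^-1) via the Bigeleisen-Mayer equation, product over normal modes."""
    u1 = H_C_K * np.asarray(freqs_light) / T
    u2 = H_C_K * np.asarray(freqs_heavy) / T
    return np.prod((u2 / u1)
                   * np.exp((u1 - u2) / 2.0)                # zero-point term
                   * (1.0 - np.exp(-u1)) / (1.0 - np.exp(-u2)))  # excitation term

def binding_isotope_effect(sol_l, sol_h, bound_l, bound_h, T=300.0):
    """BIE as the ratio of isotopic partition function ratios in the two states."""
    return rpfr(sol_l, sol_h, T) / rpfr(bound_l, bound_h, T)

# Hypothetical single C-H/C-D stretch that softens slightly upon binding
bie = binding_isotope_effect([3000.0], [2225.0], [2950.0], [2188.0])
```

Because the mode softens on binding, the zero-point contribution gives a normal (greater than one) effect in this toy example.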

Keywords: binding isotope effects, molecular dynamics, HIV, reverse transcriptase

Procedia PDF Downloads 424
438 Modelling Patient Condition-Based Demand for Managing Hospital Inventory

Authors: Esha Saha, Pradip Kumar Ray

Abstract:

A hospital inventory comprises a large number and great variety of items for the proper treatment and care of patients, such as pharmaceuticals, medical equipment, surgical items, etc. Improper management of these items, i.e. stockouts, may lead to delays in treatment or other fatal consequences, even the death of the patient. Hospitals therefore tend to overstock items to avoid the risk of stockouts, which leads to unnecessary investment, storage difficulties, and greater expiration and wastage. Thus, in such a challenging environment, it is necessary for hospitals to follow an inventory policy that accounts for the stochasticity of demand. Statistical analysis captures the correlation between patient condition, represented by bed occupancy, and the stochastically changing patient demand. To exploit this dependency on bed occupancy, a Markov model is developed that maps changes in demand for hospital inventory to changes in patient condition, represented by movements among bed occupancy states (acute care, rehabilitative, and long-care) during the patient's length of stay in the hospital. An inventory policy is developed for a hospital based on the fulfillment of patient demand, with the objective of minimizing the frequency and quantity of orders of inventoried items. The analytical structure of the model, based on probability calculations, is provided to show the optimal inventory-related decisions. A case study illustrates the development of the hospital inventory model based on patient demand for multiple inpatient pharmaceutical items. A sensitivity analysis is conducted to investigate the impact of inventory-related parameters on the developed optimal inventory policy. The developed model and solution approach may therefore help hospital managers and pharmacists in managing hospital inventory under stochastic demand for inpatient pharmaceutical items.
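The demand mapping described above can be sketched as a discrete-time Markov chain over the three bed occupancy states; the transition matrix and per-state demand rates below are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

# Hypothetical daily transition matrix over patient-condition states
# (acute care, rehabilitative, long-care); rows sum to 1.
P = np.array([[0.70, 0.25, 0.05],
              [0.10, 0.75, 0.15],
              [0.00, 0.05, 0.95]])

# Hypothetical mean daily demand (units of one pharmaceutical) per state
demand_per_state = np.array([4.0, 2.0, 1.0])

def expected_demand(initial, days):
    """Expected total demand over a stay, given an initial state distribution."""
    dist, total = np.asarray(initial, dtype=float), 0.0
    for _ in range(days):
        total += dist @ demand_per_state  # demand accrued under current state mix
        dist = dist @ P                   # condition evolves via the Markov chain
    return total

d = expected_demand([1.0, 0.0, 0.0], days=7)  # patient admitted in acute care
```

Summing such expectations over admitted patients gives the stochastic demand signal an ordering policy would respond to.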

Keywords: bed occupancy, hospital inventory, Markov model, patient condition, pharmaceutical items

Procedia PDF Downloads 314
437 Research on Level Adjusting Mechanism System of Large Space Environment Simulator

Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng

Abstract:

A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation of the spacecraft level will lead to abnormal operation of the spacecraft's thermal control devices during the thermal vacuum test. In order to avoid thermal vacuum test failures, a level adjusting mechanism system was developed in the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. Using data collected from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs in simultaneous movement. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which operate for long periods in the cold, black vacuum environment of the KM8 simulator during thermal vacuum tests. Based on the above methods, the data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the system, which provides important support for the development of spacecraft in China, are ahead of similar equipment elsewhere in the world.
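The PID temperature loop mentioned above can be sketched as follows; the gains, the linear first-order thermal plant, and an actuator that can both add and remove heat are illustrative assumptions, not the KM8 tuning:

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, not the KM8 tuning)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical first-order thermal plant: the instrument relaxes toward the
# cold surroundings while the actuator adds or removes heat.
temp, surroundings, dt = -50.0, -50.0, 1.0
pid = PID(kp=2.0, ki=0.2, kd=0.5, dt=dt)
for _ in range(2000):
    power = pid.update(20.0, temp)      # signed actuator effort
    temp += dt * (0.05 * (surroundings - temp) + 0.01 * power)
```

The integral term is what lets the loop hold the setpoint against the constant heat loss to the surroundings; without it the temperature would settle below 20 °C.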

Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism

Procedia PDF Downloads 237
436 Kinetic Study of Physical Quality Changes on Jumbo Squid (Dosidicus gigas) Slices during Application High-Pressure Impregnation

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares

Abstract:

This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process pressure, being influenced by the pressure level. The working conditions were pressures of 100, 250 and 400 MPa and atmospheric pressure (0.1 MPa), time intervals from 30 to 300 seconds, and a 15% NaCl concentration. The mathematical expressions used to simulate mass transfer of both water and salt were the Newton, Henderson-Pabis, Page and Weibull models, among which the Weibull and Henderson-Pabis models best fitted the water and salt experimental data, respectively. The water diffusivity coefficients varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas those for salt varied from 14.18 to 36.07x10⁻⁹ m²/s under the selected conditions. Finally, as to the quality parameters studied under the range of experimental conditions, the 250 MPa treatment yielded minimum sample hardness, whereas springiness, cohesiveness and chewiness at the 100, 250 and 400 MPa treatments presented statistical differences with respect to unpressurized samples. The colour parameter L* (lightness) increased, while the b* (yellowish) and a* (reddish) parameters decreased with increasing pressure level. The samples thus presented a brighter aspect and a mildly cooked appearance. The results presented in this study support the enormous potential of hydrostatic pressure application as a technique for compound impregnation under high pressure.
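Fitting the Weibull model MR(t) = exp(−(t/β)^α) to impregnation kinetics can be sketched with a nonlinear least-squares fit; the data points below are synthetic and noiseless so the sketch is deterministic, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_mr(t, alpha, beta):
    """Weibull thin-layer model for the dimensionless moisture (or salt) ratio."""
    return np.exp(-(t / beta) ** alpha)

# Synthetic impregnation times (s) and ratios generated from assumed parameters
t = np.array([30, 60, 90, 120, 180, 240, 300], dtype=float)
mr = weibull_mr(t, 0.8, 150.0)  # "true" alpha = 0.8, beta = 150 s (hypothetical)

(alpha_hat, beta_hat), _ = curve_fit(weibull_mr, t, mr, p0=(1.0, 100.0))
```

With real data, the fitted α and β would then feed the usual Weibull-based estimate of the effective diffusivity.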

Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture

Procedia PDF Downloads 334
435 Design of Microwave Building Block by Using Numerical Search Algorithm

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially the high radio frequency bands that offer high information capacity. Field effects become more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here, a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits, avoiding complicated impedance matching and equivalent electrical circuit approximations. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Numerical search algorithms (e.g. the pattern search algorithm) are then used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S parameters after a number of iterations. We have adopted this approach in our current projects and designed many microwave components, including sharp bends, T-branches, Y-branches, and microstrip-to-stripline converters. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the tool kits of microwave designers.
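The search loop described above, polling each segment dimension and shrinking the step when no move improves the fitness, can be sketched as a compass-style pattern search; the quadratic stand-in fitness below replaces a real S-parameter evaluation and its target dimensions are hypothetical:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass-style pattern search: poll +/- each coordinate, shrink on failure."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5   # no polling direction helped: refine the mesh
        it += 1
    return x, fx

# Stand-in fitness: distance of two "segment dimensions" (mm) from ideal values
fitness = lambda p: (p[0] - 1.27) ** 2 + (p[1] - 2.54) ** 2
best, val = pattern_search(fitness, [0.0, 0.0])
```

In the real flow, `fitness` would call a field solver and score the simulated S parameters against the design targets; pattern search needs no gradients, which suits such black-box evaluations.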

Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm

Procedia PDF Downloads 370
434 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil

Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado

Abstract:

During the last twenty years, the construction field in Brazil has evolved significantly in response to its market growth and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and a lack of effort to achieve quality at the construction site. At the same time, much of the information generated in the design and construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a French method known by its Portuguese acronym PEO (preparation for building construction), seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative, and it was carried out through two case studies in the city of Goiania, in Goias, Brazil. The research was divided into two stages, a pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of quality improvement in design management. The analysis showed that the method aims to improve the design and reduce the failures, errors and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible to apply to real estate and building companies. However, companies need to believe in the contribution they can make to the discovery of design failures in conjunction with the other stakeholders forming a construction team. The results of PEO can be maximized by adopting the principles of simultaneous engineering and by inserting new computer technologies that use a three-dimensional model of the building within a BIM process.

Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)

Procedia PDF Downloads 150
433 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival

Authors: Li Yin, Xiaoqin Wang

Abstract:

Background and purpose: Cancer diagnosis is part of a complex stochastic process, in which patients' personal and social characteristics influence the choice of diagnosing method; the diagnosing method, in turn, influences the initial assessment of cancer stage; the initial assessment, in turn, influences the choice of treating method; and the treating method, in turn, influences cancer outcomes such as cancer survival. To evaluate diagnosing methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage is influenced by earlier diagnosis and in turn influences subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via standard parameters, that is, the conditional survival given all stationary covariates, diagnosing methods, cancer stage and prognosis factors, and treating methods. Instead of standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes of cancer diagnosis and treatments. We are able to use familiar methods in the framework of single-point causal inference to accomplish this task. Achievements: We have applied this method to stomach cancer survival data from a clinical study in Sweden. We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments, and the moderation of the causal effect by age and gender.
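In the simplest single-point setting, expressing a causal effect via point effects reduces to the classic g-formula (standardization over the confounder distribution). A sketch on synthetic data, where stage confounds both treatment choice and survival; all probabilities are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic single-point setting: stage L confounds treatment A and survival Y
L = rng.binomial(1, 0.4, n)                      # advanced-stage indicator
A = rng.binomial(1, np.where(L == 1, 0.7, 0.3))  # stage influences treatment choice
p_y = 0.8 - 0.3 * L + 0.1 * A                    # true survival probability
Y = rng.binomial(1, p_y)

def g_formula(a):
    """Standardize E[Y | A=a, L=l] over the marginal distribution of L."""
    return sum(Y[(A == a) & (L == l)].mean() * (L == l).mean() for l in (0, 1))

effect = g_formula(1) - g_formula(0)  # causal risk difference; truth is 0.1 here
```

The naive contrast `Y[A==1].mean() - Y[A==0].mean()` would be biased here, because treated patients are disproportionately advanced-stage; standardization removes that confounding.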

Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect

Procedia PDF Downloads 188
432 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is faced more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some of the consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet were developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we followed several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets was used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, were used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
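The evacuation dynamics can be sketched as a minimal place/transition Petri net; the net below (one room, one corridor, one exit, one token per person) is an illustrative toy, not the PIPE model from the study:

```python
# Minimal place/transition Petri net: people move room -> corridor -> outside.
places = {"room": 30, "corridor": 0, "outside": 0}

# Each transition: (tokens consumed, tokens produced)
transitions = {
    "leave_room": ({"room": 1},     {"corridor": 1}),
    "reach_exit": ({"corridor": 1}, {"outside": 1}),
}

def enabled(name):
    return all(places[p] >= k for p, k in transitions[name][0].items())

def fire(name):
    pre, post = transitions[name]
    for p, k in pre.items():
        places[p] -= k
    for p, k in post.items():
        places[p] += k

steps = 0
while places["outside"] < 30:   # run until everyone has evacuated
    for t in transitions:
        if enabled(t):
            fire(t)
    steps += 1
```

The conservation of the token total is exactly a P-invariant of this net (people are neither created nor destroyed), the kind of structural property the invariants technique verifies symbolically.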

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 193
431 Quantification and Identification of the Main Components of the Biomass of the Microalgae Scenedesmus SP. – Prospection of Molecules of Commercial Interest

Authors: Carolina V. Viegas, Monique Gonçalves, Gisel Chenard Diaz, Yordanka Reyes Cruz, Donato Alexandre Gomes Aranda

Abstract:

To develop the massive cultivation of microalgae, it is necessary to isolate and characterize the species, improving genetic tools in search of specific characteristics. Therefore, the detection, identification and quantification of the compounds that compose Scenedesmus sp. were prerequisites to verify the potential of this microalga. The main objective of this work was to characterize Scenedesmus sp. in terms of its content of ash, carbohydrates, proteins and lipids, as well as to determine the composition of its lipid classes and main fatty acids. The biomass of Scenedesmus sp. showed 15.29 ± 0.23% of ash, and CaO (36.17%) was the main component of this fraction. The total protein and carbohydrate contents of the biomass were 40.74 ± 1.01% and 23.37 ± 0.95%, respectively, proving it to be a potential source of proteins as well as of carbohydrates for the production of ethanol via fermentation. The lipid contents extracted via Bligh & Dyer and in situ saponification were 8.18 ± 0.13% and 4.11 ± 0.11%, respectively. In the lipid extracts obtained via Bligh & Dyer, approximately 50% of the composition consists of fatty compounds, while the other half is an unsaponifiable fraction composed mainly of chlorophylls, phytosterols and carotenes. From the lower yield, it was possible to obtain a selectivity of 92.14% for fatty components (fatty acids and fatty esters), confirmed through infrared spectroscopy. The presence of polyunsaturated acids (~45%) in the lipid extracts indicated the potential of this fraction as a source of nutraceuticals. The results indicate that the biomass of Scenedesmus sp. can become a promising source of polyunsaturated fatty acids, carotenoids and proteins, as well as allowing the simultaneous recovery of different compounds of high commercial value.

Keywords: microalgae, Desmodesmus, lipid classes, fatty acid profile, proteins, carbohydrates

Procedia PDF Downloads 83
430 Alumina Supported Copper-manganese Catalysts for Combustion of Exhaust Gases: Catalysts Characterization

Authors: Krasimir I. Ivanov, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Georgi V. Avdeev, Tatyana T. Tabakova

Abstract:

In recent research, copper and manganese systems were found to be the most active among base catalysts in the oxidation of CO and organic compounds. Mixed copper-manganese oxides have been widely studied in oxidation reactions because of their higher activity at low temperatures in comparison with single oxide catalysts. The results showed that the formation of the spinel CuxMn3−xO4 in the oxidized catalyst is responsible for the activity even at room temperature. That is why most investigations have focused on the hopcalite catalyst (CuMn2O4) as the best copper-manganese catalyst. It is now known that this holds only for CO oxidation, but not for mixtures of CO and VOCs. The purpose of this study is to investigate alumina supported copper-manganese catalysts with different Cu/Mn molar ratios in the oxidation of CO, methanol and dimethyl ether (DME). The catalysts were prepared by impregnation of γ-Al2O3 with copper and manganese nitrates, and the catalytic activity measurements were carried out in continuous flow equipment with a four-channel isothermal stainless steel reactor. Gas mixtures at the input and output of the reactor were analyzed with a gas chromatograph equipped with FID and TCD detectors. The texture characteristics were determined by low-temperature (−196 °C) nitrogen adsorption in a Quantachrome Instruments NOVA 1200e (USA) specific surface area and pore analyzer. Thermal, XRD and TPR analyses were also performed. It was established that the active component of the mixed Cu-Mn/γ-alumina catalysts strongly depends on the Cu/Mn molar ratio. Highly active alumina supported Cu-Mn catalysts for CO, methanol and DME oxidation were synthesized. While hopcalite is the best catalyst for CO oxidation, the best compromise for the simultaneous oxidation of all components is the catalyst with a Cu/Mn molar ratio of 1:5.

Keywords: supported copper-manganese catalysts, CO, VOCs oxidation, combustion of exhaust gases

Procedia PDF Downloads 277
429 The Heating Prosumer: Optimal Simultaneous Use of Heat-Pumps and Solar Panels

Authors: Youssef El Makhrout, Aude Pommeret, Tunç Durmaz

Abstract:

This paper analyses the consequences of a heat pump for the optimal behavior of a prosumer. A theoretical microeconomic model of household heating and electricity consumption is developed to analyze the profitability of installing a solar PV system together with a heat pump, battery storage, and grid use. The aim is to identify the optimal scenario of investment in renewable energy equipment to cover domestic and heating needs. Simulation data for a French house of 170 m² in Chambéry are used in this paper. The house is divided into 5 zones, with 3 heated zones of 89.4 m² occupied by two people. The analysis is based on hourly data for one year, from 00:00 on 01/01/2021 to 23:00 on 31/12/2021. Results indicate that, ignoring equipment costs and financial aid, the most profitable scenario for a household is owning solar panels, a heat pump, and battery storage. However, once costs and the French government's financial aid for energy renovation are taken into account, the change in net economic surplus and the profitability over 20 years are substantial when the household adds a heat pump to existing solar panels. In this scenario, the household can realize a 35.84% improvement in surplus, although this alone cannot cover all installation costs. The household can profit and cover all installation costs after exploiting the financial support available for adopting a heat pump. Investment in a battery is still not profitable because of its high cost and the lack of financial aid for it. Some public policy recommendations are proposed, especially for solar panels and battery storage.
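The hourly arbitration underlying such a prosumer model can be sketched as a greedy dispatch rule: PV serves the load first, surplus charges the battery, deficits draw from the battery, and the remainder is imported from the grid. The capacity, efficiency, and load/PV profile below are hypothetical:

```python
def step(load_kwh, pv_kwh, soc, capacity=10.0, eta=0.9):
    """One hour of greedy prosumer dispatch. Returns (new_soc, grid_import)."""
    net = load_kwh - pv_kwh
    if net <= 0:                              # PV surplus: charge the battery
        soc = min(capacity, soc + (-net) * eta)
        return soc, 0.0
    discharge = min(net, soc)                 # cover the deficit from storage
    return soc - discharge, net - discharge   # the rest is imported from the grid

soc, grid = 5.0, 0.0
hours = [(2.0, 0.0), (1.5, 3.0), (2.5, 0.5)]  # hypothetical (load, PV) in kWh
for load, pv in hours:
    soc, imp = step(load, pv, soc)
    grid += imp
```

An optimizing prosumer would replace this myopic rule with a dispatch chosen against tariffs and forecasts, but the accounting identity per hour is the same.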

Keywords: household’s heating, prosumer, electricity consumption, renewable energy, welfare gain, comfort, solar PV, heat pumps, storage

Procedia PDF Downloads 61
428 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field, though its usage is well known in other areas such as hydrology, meteorology and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event; therefore, this is a good opportunity to test whether it can be applied in this situation. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that no technical details are needed; it can be applied to any part for which we need to know the time to failure, in order to schedule appropriate maintenance while maximizing usage and minimizing costs. In this case, the calculations were made for diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher quality fans to prevent future failures. The results of this method show the approximate time for which the fans will work as they should, and the probability of the fans working longer than a certain estimated time.
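One common extreme-value route to time-to-failure analysis is the Weibull distribution, which arises as the limiting distribution of minima. A sketch on synthetic failure times (the shape, scale, and sample size are illustrative assumptions, not the field-study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic fan failure times (hours); the Weibull distribution is the
# extreme-value law for minima, hence its role in time-to-failure modelling.
true_shape, true_scale = 1.5, 8000.0
failures = stats.weibull_min.rvs(true_shape, scale=true_scale,
                                 size=500, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero
shape, loc, scale = stats.weibull_min.fit(failures, floc=0.0)

# Reliability R(t): probability a fan survives beyond t hours
reliability_6000h = stats.weibull_min.sf(6000.0, shape, loc, scale)
```

The fitted survival function directly answers the maintenance question posed above: the probability that a fan works longer than a given number of hours.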

Keywords: extreme value theory, lifetime, reliability analysis, statistics, time to failure

Procedia PDF Downloads 320
427 Graphene-reinforced Metal-organic Framework Derived Cobalt Sulfide/Carbon Nanocomposites as Efficient Multifunctional Electrocatalysts

Authors: Yongde Xia, Laicong Deng, Zhuxian Yang

Abstract:

Developing cost-effective electrocatalysts for the oxygen reduction reaction (ORR), oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) is vital for energy conversion and storage applications. Herein, we report a simple method for the synthesis of graphene-reinforced cobalt sulfide/carbon nanocomposites and an evaluation of their performance in typical electrocatalytic reactions. Nanocomposites of cobalt sulfide embedded in N, S co-doped porous carbon and graphene (CoS@C/Graphene) were generated via simultaneous sulfurization and carbonization of one-pot synthesized graphite oxide-ZIF-67 precursors. The obtained CoS@C/Graphene nanocomposites were characterized by X-ray diffraction, Raman spectroscopy, thermogravimetric analysis-mass spectrometry, scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and gas sorption. It was found that cobalt sulfide nanoparticles were homogeneously dispersed in the in-situ formed N, S co-doped porous carbon/graphene matrix. The CoS@C/10Graphene composite not only shows excellent electrocatalytic activity toward the ORR, with a high onset potential of 0.89 V, a four-electron pathway and superior durability, maintaining 98% of its current after continuously running for around 5 hours, but also exhibits good performance for the OER and HER, owing to the improved electrical conductivity, the increased number of catalytically active sites, and the connectivity between the electrocatalytically active cobalt sulfide and the carbon matrix. This work offers a new approach to the development of novel multifunctional nanocomposites for the next generation of energy conversion and storage applications.

Keywords: MOF derivative, graphene, electrocatalyst, oxygen reduction reaction, oxygen evolution reaction, hydrogen evolution reaction

Procedia PDF Downloads 42