Search results for: lighting energy efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13305

375 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that generates coagulant species in situ by electrolytic oxidation of sacrificial anode materials driven by an electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Regardless of the reactor type, hydraulic head loss is an important design factor. The present work focuses on the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and by accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests covered laminar, transitional and turbulent flow. 
The observed head loss was also compared to that predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross-section configurations and one multiple concentric annular cross-section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some tests and lower in others, depending also on the value assumed for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that the flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented in the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may affect the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material over time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors to be integrated into new or existing water treatment plants.
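As a first approximation, the wall-friction component of the head loss discussed above can be estimated with the Darcy-Weisbach equation and the hydraulic diameter of the annulus. The sketch below is illustrative only: it uses the circular-pipe laminar friction factor and the smooth-wall Blasius correlation, whereas the paper compares several annulus-specific correlations; all numerical values in the usage example are assumed.

```python
import math

def annulus_head_loss(flow_rate, d_outer, d_inner, length, nu=1.0e-6, g=9.81):
    """Friction head loss (m) in a concentric annular pipe section.

    Hydraulic-diameter approximation with Darcy-Weisbach: laminar
    f = 64/Re (circular-pipe value) and the smooth-wall Blasius
    correlation for turbulent flow. Illustrative only.
    """
    area = math.pi / 4.0 * (d_outer ** 2 - d_inner ** 2)  # annular flow area, m^2
    v = flow_rate / area                                  # mean velocity, m/s
    d_h = d_outer - d_inner                               # hydraulic diameter, m
    re = v * d_h / nu                                     # Reynolds number
    f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25   # friction factor
    return f * (length / d_h) * v ** 2 / (2.0 * g)
```

For a hypothetical section (50/30 mm diameters, 1 m long), a flow of 1e-5 m³/s stays laminar while 1e-3 m³/s is turbulent, and the computed head loss rises accordingly.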

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 218
374 Natural Monopolies and Their Regulation in Georgia

Authors: Marina Chavleishvili

Abstract:

Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that would support the efficient operation of the branch. Methodology: In order to highlight natural monopoly market tendencies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase their market share. According to a productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are a complex and versatile economic element, and it is important to specify and duly control their framework conditions. It is important to determine the pricing policy of natural monopolies. The rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies. 
The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the consumed electrical power, need revision. The effects of electricity price discrimination are important, particularly segmentation across seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacity and maintenance costs. If the price of electricity in winter were higher than in summer, electricity consumption would decrease in winter. Consumers would start to use electricity more economically, which would allow the extra capacity to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not toward reducing but toward increasing the gains of consumers and producers. Overall, the optimal management of the given fields will help improve well-being throughout the country. In the article, conclusions are drawn and recommendations are developed for delivering effective policies and regulations toward natural monopolies in Georgia.
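The seasonal-pricing argument above can be made concrete with a constant-elasticity demand model. This is purely an illustrative sketch: the -0.3 own-price elasticity and the consumption figure are assumed for demonstration, not taken from the paper.

```python
def seasonal_demand(base_kwh, price_ratio, elasticity=-0.3):
    """Constant-elasticity estimate of consumption after a price change.

    base_kwh: consumption at the old tariff; price_ratio: new/old price.
    The -0.3 own-price elasticity is an assumed, illustrative value.
    """
    return base_kwh * price_ratio ** elasticity

# A hypothetical 20% winter surcharge reduces winter consumption,
# easing the extra-capacity problem described in the abstract:
winter_after = seasonal_demand(500.0, 1.2)   # less than the 500 kWh baseline
```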

Keywords: monopolies, natural monopolies, regulation, antimonopoly service

Procedia PDF Downloads 86
373 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer

Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz

Abstract:

Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) has been driven primarily by their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s² lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility to form secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL and single-crystal X-ray diffraction methods. Considering the primary Pb–O bonds, Pb1 and Pb3 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. 
The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increased: Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the stereoactivity of the Pb(II) lone pair was confirmed by DFT calculations. The 2D structure was expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, as confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between metal centres, making the polymer a promising luminescent compound.

Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions

Procedia PDF Downloads 145
372 Experimental Characterisation of Composite Panels for Railway Flooring

Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho

Abstract:

Railway transportation is considered the most economical and sustainable way to travel. However, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions, ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means (train), and passengers satisfied with the level of travel comfort. These requirements are in line with the European Strategic Agenda for the 2020 rail sector, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system has one of the greatest impacts on passenger safety and comfort. It is also one of the most time-consuming systems to install on the train, and it contributes significantly to the weight (mass) of all interior systems. Additionally, it has a strong impact on manufacturing costs. In the development phase, railway flooring is usually designed with software that allows several solutions to be drawn and calculated in a short period of time. After obtaining the best solution, considering the goals previously defined, experimental data is still necessary and required. This experimental phase is so significant that its outcome can prompt revision of the designed solution. 
This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway application. The mechanical tests were made both for unaged specimens and for specimens that underwent some type of aging, i.e. heat, cold and humidity cycles or freezing/thawing cycles. These conditioning regimes aim to simulate not only the effect of time but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (panel), these were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For individual characterisation of the components, more specifically the covering, the following tests were made: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test and dimensional stability. The main conclusion was that experimental characterisation contributes greatly to understanding the behaviour of the materials, both individually and assembled. This knowledge contributes to increasing the quality of premium solutions. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.

Keywords: durability, experimental characterization, mechanical tests, railway flooring system

Procedia PDF Downloads 155
371 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model

Authors: S. I. Mukhin, S. Seidov, A. Mukherjee

Abstract:

The Dicke model is a key tool for the description of correlated states of quantum atomic systems that are excited by resonant photon absorption and subsequently emit spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used to describe the dynamics of a Josephson junction (JJ) array in a resonant cavity under an applied current. In this work, we have investigated a generalized model described by the DH with a frustrating interaction term: an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson junction, taken in a charge qubit / Cooper pair box (CPB) condition. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved, such as the condensate electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Using Heisenberg's dynamical equations for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. The system can be solved analytically using two time scales. The analytical solutions are expressed in terms of Jacobi's elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles connecting the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we have extended the analysis to the DH with the frustrating interaction term. 
Inclusion of the frustrating term adds complexity to the system of differential equations, making an analytical solution difficult. We have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found when this symmetry is broken. Introducing a spontaneous-symmetry-breaking term into the DH, we have derived solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match existing results in this field.
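Jacobi elliptic functions of the kind used for the ‘bound luminosity’ solutions above are available in standard numerical libraries. The snippet below is an illustrative evaluation (the argument and parameter values are arbitrary), verifying the defining identities that make these functions natural solutions of coupled nonlinear oscillator equations.

```python
from scipy.special import ellipj

u, m = 1.3, 0.7                      # argument and parameter (arbitrary values)
sn, cn, dn, ph = ellipj(u, m)        # Jacobi sn, cn, dn and amplitude phase

# The Jacobi functions satisfy sn^2 + cn^2 = 1 and dn^2 + m*sn^2 = 1,
# the identities behind their appearance as closed-form solutions of
# coupled nonlinear oscillator equations.
identity_1 = sn ** 2 + cn ** 2
identity_2 = dn ** 2 + m * sn ** 2
```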

Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity

Procedia PDF Downloads 134
370 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. 
AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 33
369 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries

Authors: Fang Li, Jiazhao Wang, Jianmin Ma

Abstract:

The practical application of Li-S batteries is hampered by the poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, namely strong chemical adsorption and high conductivity, are highly desirable in an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials interact strongly with soluble polysulfides, which could suppress the shuttle effect and improve the cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of the sulfur materials and restrains the volume expansion, enhancing structural stability during cycling. To further enhance the cycling stability, a PPy-coated separator was also applied, which could confine polysulfides to the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery was designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential application in flexible batteries. A novel vapor-phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite. 
The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that polypyrrole can trap lithium polysulfides through stronger bonding energies. In addition, the deflation of the sulfur/graphene hydrogel during the vapor-phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, corresponding to an ultra-slow decay rate of 0.03% per cycle.
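A quick sanity check of the decay figures quoted above can be done with simple arithmetic. The initial capacity below is back-calculated from the reported 698 mAh g⁻¹ and the ~0.03% per-cycle rate; it is an inferred, illustrative number, not one reported in the abstract.

```python
def per_cycle_decay(c_initial, c_final, cycles):
    """Average linear capacity-decay rate per cycle, as a fraction."""
    return (c_initial - c_final) / (c_initial * cycles)

# Implied initial 0.5 C capacity, back-calculated from 698 mAh/g after
# 500 cycles at 0.03% per cycle (inferred here, not reported):
c_initial = 698.0 / (1.0 - 500 * 0.0003)          # roughly 821 mAh/g
rate = per_cycle_decay(c_initial, 698.0, 500)     # roughly 0.0003 = 0.03%/cycle
```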

Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries

Procedia PDF Downloads 140
368 An Investigation on MgAl₂O₄ Based Mould System in Investment Casting Titanium Alloy

Authors: Chen Yuan, Nick Green, Stuart Blackburn

Abstract:

The investment casting process offers great freedom of design combined with the economic advantage of near-net-shape manufacturing. It is widely used for the production of high-value precision cast parts, particularly in the aerospace sector. Various combinations of materials have been used to produce the ceramic moulds, but most investment foundries use a silica-based binder system in conjunction with fused silica, zircon, and alumino-silicate refractories as both filler and coarse stucco materials. However, in the context of advancing alloy technologies, silica-based systems are struggling to keep pace, especially in the net-shape casting of titanium alloys. Studies have shown that the casting of titanium-based alloys presents considerable problems, including extensive interactions between the metal and the refractory; the majority of the metal-mould interaction is due to the reduction of silica, present as binder and filler phases, by titanium in the molten state. Cleaner, more refractory systems are being devised to accommodate these changes. Although yttria has excellent chemical inertness to titanium alloys, it is impractical in a production environment, as it combines high material cost, a short slurry life, and poor sintering properties. A cost-effective solution to these issues is needed. Given the limited options for using pure oxides, in this work a silica-free magnesia spinel, MgAl₂O₄, was used as the primary coat filler, with alumina as the binder material, to produce the facecoat of the investment casting mould. A comparison system was also studied, with a fraction of the rare-earth oxide Y₂O₃ added to the filler to increase inertness. The stability of the MgAl₂O₄/Al₂O₃ and MgAl₂O₄/Y₂O₃/Al₂O₃ slurries was assessed by tests including pH, viscosity, zeta-potential and plate-weight measurements, and mould properties such as friability were also measured. 
The interaction between the facecoat and the titanium alloy was studied by both a flash re-melting technique and a centrifugal investment casting method. The interaction products between metal and mould were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS). The depth of the oxygen-hardened layer was evaluated by microhardness measurement. Results reveal that introducing a fraction of Y₂O₃ into the magnesia spinel can significantly increase the slurry life and reduce the thickness of the hardened layer during centrifugal casting.

Keywords: titanium alloy, mould, MgAl₂O₄, Y₂O₃, interaction, investment casting

Procedia PDF Downloads 113
367 (Re)Processing of ND-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches

Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm

Abstract:

Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were able first to electrochemically leach an end-of-life NdFeB magnet and then to electrodeposit Nd-Fe using a 1-ethyl-3-methylimidazolium dicyanamide ([EMIM][DCA]) ionic-liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations here: the binding energies of metallic Nd (Nd⁰) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step toward the efficient recycling of rare earths in metallic form at mild temperatures, providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept for recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production. 
In the frame of physical reprocessing, we have successfully synthesized new magnets from hydrogen-recycled (HDDR) stock using the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity of Hc = 1060 kA/m, which was boosted to 1160 kA/m after the post-PECS thermal treatment. The Br and Hc were improved further: increasing the applied pressure to 100-150 MPa resulted in Br = 1.01 T. We showed that by fine-tuning the PECS and post-annealing it is possible to revitalize end-of-life Nd-Fe-B magnets. By applying advanced TEM, i.e. atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.

Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing

Procedia PDF Downloads 156
366 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries

Authors: Ismatilla Mardanov

Abstract:

There are three types of countries. The first is willing to attract foreign direct investment (FDI) in enormous amounts and will do whatever it takes to make this happen; therefore, FDI pours into such countries. In the second cluster of countries, even if the country suffers tremendously from a shortage of investment, the governments are hesitant to attract it because they are in the hands of local oligarchs/cartels; therefore, FDI inflows are moderate to low. The third type comprises countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries, but it has not classified countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investment; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources. 
The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), the endogeneity or exogeneity of which is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high because governments are hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, the local economic factors are unfavorable for domestic investment even if the institutions are acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
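The instrumental-variable logic described above can be sketched as a two-stage least squares (2SLS) estimate on synthetic data. Everything here is illustrative: the variable names, coefficients, and the single-instrument setup are assumptions for demonstration, not the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: x (say, an institutions index) is endogenous because
# the error u drives both x and FDI; z (say, political rights) is the
# instrument. The true causal effect of x on FDI is 2.0.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
fdi = 2.0 * x + u + rng.normal(size=n)

# Stage 1: regress x on z; Stage 2: regress fdi on the fitted x.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
beta_iv = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), fdi, rcond=None)[0][1]

# Naive OLS is biased upward here because x and fdi share the error u:
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), fdi, rcond=None)[0][1]
```

With a strong first stage, the IV estimate lands near the true effect of 2.0, while the naive OLS slope absorbs the endogeneity bias.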

Keywords: foreign direct investment, economy, institutions, instrumental variable estimation

Procedia PDF Downloads 159
365 Selective Conversion of Biodiesel Derived Glycerol to 1,2-Propanediol over Highly Efficient γ-Al2O3 Supported Bimetallic Cu-Ni Catalyst

Authors: Smita Mondal, Dinesh Kumar Pandey, Prakash Biswas

Abstract:

During the past two decades, considerable attention has been given to the value addition of biodiesel-derived glycerol (~10 wt.%) to make the biodiesel industry economically viable. Among the various glycerol value-addition methods, hydrogenolysis of glycerol to 1,2-propanediol (1,2-PDO) is one of the attractive and promising routes. In this study, a highly active and selective γ-Al₂O₃ supported bimetallic Cu-Ni catalyst was developed for selective hydrogenolysis of glycerol to 1,2-propanediol in the liquid phase. The catalytic performance was evaluated in a high-pressure autoclave reactor. Experimental results demonstrated that the bimetallic copper-nickel catalyst was more active and selective to 1,2-PDO than the monometallic catalysts due to its bifunctional behavior. To verify the effect of calcination temperature on the formation of the Cu-Ni mixed oxide phase, the calcination temperature of the 20 wt.% Cu:Ni(1:1)/Al₂O₃ catalyst was varied from 300°C to 550°C. The physicochemical properties of the catalysts were characterized by various techniques such as specific surface area analysis (BET), X-ray diffraction (XRD), temperature programmed reduction (TPR), and temperature programmed desorption (TPD). The BET surface area and pore volume of the catalysts were in the range of 71-78 m²g⁻¹ and 0.12-0.15 cm³g⁻¹, respectively. The peaks in the 2θ range of 43.3°-45.5° and 50.4°-52° corresponded to the copper-nickel mixed oxide phase [JCPDS: 78-1602]. The formation of the mixed oxide indicated the strong interaction of Cu and Ni with the alumina support. The crystallite size decreased with increasing calcination temperature up to 450°C; beyond that, the crystallite size increased due to agglomeration. The smallest crystallite size of 16.5 nm was obtained for the catalyst calcined at 400°C. 
The total acidic sites of the catalysts were determined by NH₃-TPD, and the maximum total acidity of 0.609 mmol NH₃ gcat⁻¹ was obtained over the catalyst calcined at 400°C. TPR data suggested that the catalyst calcined at 400°C had the maximum degree of reduction (75%) among all the catalysts. Further, the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst calcined at 400°C exhibited the highest catalytic activity (> 70%) and 1,2-PDO selectivity (> 85%) under mild reaction conditions, owing to its highest acidity, highest degree of reduction, and smallest crystallite size. Furthermore, a modified power-law kinetic model was developed to understand the true kinetic behaviour of glycerol hydrogenolysis over the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst. The rate equations obtained from the model were solved with the ode23 solver in MATLAB, coupled with a genetic algorithm for parameter estimation. The results demonstrated that the model predictions fitted the experimental data very well. The activation energy of 1,2-PDO formation was found to be 45 kJ mol⁻¹.
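The reported activation energy follows from the Arrhenius relation k = A·exp(−Ea/RT). A minimal sketch of how Ea is extracted from rate constants at several temperatures via the linearised form ln k = ln A − Ea/(RT); the temperatures and pre-exponential factor below are assumed for illustration and are not the study's measurements:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_fit(T, k):
    """Estimate (Ea, A) by linear regression of ln k on 1/T:
    ln k = ln A - Ea/(R*T), so slope = -Ea/R, intercept = ln A."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    return -slope * R, np.exp(intercept)

# Synthetic rate constants generated with Ea = 45 kJ/mol (the value
# reported for 1,2-PDO formation) and an arbitrary, assumed A = 1e4.
Ea_true, A_true = 45_000.0, 1.0e4
T = np.array([453.0, 473.0, 493.0, 513.0])  # hypothetical temperatures, K
k = A_true * np.exp(-Ea_true / (R * T))

Ea_est, A_est = arrhenius_fit(T, k)  # recovers Ea close to 45 kJ/mol
```

With real data the rate constants would come from fitting the power-law model to concentration-time profiles at each temperature, as the authors did in MATLAB.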

Keywords: glycerol, 1,2-PDO, calcination, kinetics

Procedia PDF Downloads 144
364 Forest Degradation and Implications for Rural Livelihood in Kaimur Reserve Forest of Bihar, India

Authors: Shashi Bhushan, Sucharita Sen

Abstract:

In India, forests and people are inextricably linked, since millions of people live adjacent to or within protected areas and harvest forest products. Indian forests sustain, through their climatic nature, several social, economic, and cultural activities. People surrounding forest areas depend on this resource not only for their livelihoods but also for religious ceremonies, social customs, and herbal medicines, while the forest in turn shapes resources such as agricultural land, groundwater levels, and soil fertility. The assumption that fuelwood and fodder extraction, which is part of local livelihoods, leads to deforestation has so far been the dominant mainstream view in deforestation discourses. Given the occupational division across social groups in the Kaimur reserve forest, it is important to understand the differential nature of dependence on forest resources. This paper attempts to assess the nature of dependence and the impact of forest degradation on rural households across various social groups. An additional element added to the enquiry is the way degradation of forests, leading to scarcity of forest-based resources, impacts the patterns of dependence across various social groups. Change in forest area was calculated through land use/land cover analysis using remote sensing techniques, and the different forest-based economic activities carried out by households were examined through a primary survey in the Kaimur reserve forest of the state of Bihar in India. The general finding indicates that the Scheduled Tribe and Scheduled Caste communities, the most socially and economically deprived sections of rural society, are involved in a significant way in the collection of fuelwood, fodder, and fruits, both for self-consumption and for sale in the market, while other groups of society use fuelwood, fruit, and fodder for self-consumption only. 
Dependence on local forest resources for fuelwood was the primary need for all social groups, due to easy accessibility and the lack of alternative energy sources. In the last four decades, degradation of the forest has made a direct impact on the rural community, mediated through the socio-economic structure and resulting in a shift from forest-based occupations to cultivation and manual labour in agricultural and non-agricultural activities. Thus, there is a need to review the policies with respect to 'community forest management', since this study clearly throws up the fact that engagement with and dependence on forest resources is socially differentiated. Tying the degree of dependence to forest management thus becomes extremely important from the view of 'sustainable' forest resource management. The statization of forest resources also has to keep in view the intrinsic way in which the forest-dependent population interacts with the forest.

Keywords: forest degradation, livelihood, social groups, tribal community

Procedia PDF Downloads 169
363 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy for healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a vital role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. On the other hand, radiotherapy, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. 
Radiotherapy, in turn, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 69
362 Assessment of Serum Osteopontin, Osteoprotegerin and Bone-Specific Alp as Markers of Bone Turnover in Patients with Disorders of Thyroid Function in Nigeria, Sub-Saharan Africa

Authors: Oluwabori Emmanuel Olukoyejo, Ogra Victor Ogra, Bosede Amodu, Tewogbade Adeoye Adedeji

Abstract:

Background: Disorders of thyroid function are the second most common endocrine disorders worldwide, with a direct relationship with metabolic bone diseases. These metabolic bone complications are often subtle but manifest as bone pains and an increased risk of fractures. The gold standard for diagnosis, Dual Energy X-ray Absorptiometry (DEXA), is limited in this environment due to unavailability, cumbersomeness and cost. However, bone biomarkers have shown prospects in assessing alterations in bone remodeling, which has not been studied in this environment. Aim: This study evaluates serum levels of bone-specific alkaline phosphatase (bone-specific ALP), osteopontin and osteoprotegerin biomarkers of bone turnover in patients with disorders of thyroid function. Methods: This is a cross-sectional study carried out over a period of one and a half years. Forty patients with thyroid dysfunctions, aged 20 to 50 years, and thirty-eight age and sex-matched healthy euthyroid controls were included in this study. Patients were further stratified into hyperthyroid and hypothyroid groups. Bone-specific ALP, osteopontin, and osteoprotegerin, alongside serum total calcium, ionized calcium and inorganic phosphate, were assayed for all patients and controls. A self-administered questionnaire was used to obtain data on sociodemographic and medical history. Then, 5 ml of blood was collected in a plain bottle and serum was harvested following clotting and centrifugation. Serum samples were assayed for B-ALP, osteopontin, and osteoprotegerin using the ELISA technique. Total calcium and ionized calcium were assayed using an ion-selective electrode, while the inorganic phosphate was assayed with automated photometry. Results: The hyperthyroid and hypothyroid patient groups had significantly increased median serum B-ALP (30.40 and 26.50) ng/ml and significantly lower median OPG (0.80 and 0.80) ng/ml than the controls (10.81 and 1.30) ng/ml respectively, p < 0.05. 
However, serum osteopontin in the hyperthyroid group was significantly higher and significantly lower in the hypothyroid group when compared with the controls (11.00 and 2.10 vs 3.70) ng/ml, respectively, p < 0.05. Both hyperthyroid and hypothyroid groups had significantly higher mean serum total calcium, ionized calcium and inorganic phosphate than the controls (2.49 ± 0.28, 1.27 ± 0.14 and 1.33 ± 0.33) mmol/l and (2.41 ± 0.04, 1.20 ± 0.04 and 1.15 ± 0.16) mmol/l vs (2.27 ± 0.11, 1.17 ± 0.06 and 1.08 ± 0.16) mmol/l respectively, p < 0.05. Conclusion: Patients with disorders of thyroid function have metabolic imbalances of all the studied bone markers, suggesting a higher bone turnover. The routine bone markers will be an invaluable tool for monitoring bone health in patients with thyroid dysfunctions, while the less readily available markers can be introduced as supplementary tools. Moreover, bone-specific ALP, osteopontin and osteoprotegerin were found to be the strongest independent predictors of metabolic bone markers’ derangements in patients with thyroid dysfunctions.

Keywords: metabolic bone diseases, biomarker, bone turnover, hyperthyroid, hypothyroid, euthyroid

Procedia PDF Downloads 36
361 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. 
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 76
360 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell

Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman

Abstract:

Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts account for a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst such as carbon-supported platinum (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and has been tested in air-cathode MFCs. However, MnO2 alone has poor electric conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping with Pt nanoparticles. The goal of the work was to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively. A wet impregnation method was used to synthesize the Pt/MnO2 catalyst. The catalysts were further used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively through field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), where its surface morphology, crystallinity, oxidation state, and electrochemical activity were examined, respectively. XPS revealed the Mn(IV) oxidation state and Pt(0) nanoparticle metal, indicating the presence of MnO2 and Pt. The morphology of Pt/MnO2 observed by FESEM shows that the doping of Pt did not change the needle-like shape of MnO2, which provides a large contacting surface area. 
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m²/g with the increase in Pt loading from 0.2 to 0.8 wt%. The CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts could catalyze the ORR with different catalytic activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air cathode catalyst generated a maximum power density of 165 mW/m³, which is higher than that of the MFC with the MnO2 catalyst (95 mW/m³). The open circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout the operation, suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt was successfully demonstrated as an efficient and low-cost electrocatalyst for the ORR in an air-cathode MFC, with higher electrochemical activity and stability and hence enhanced performance.
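The maximum power density quoted above is read off the polarization curve as the peak of P = V·I per unit reactor volume. A small sketch of that calculation, with hypothetical voltage-current pairs and an assumed chamber volume rather than the measured data from the study:

```python
import numpy as np

def peak_power_density(voltage, current, reactor_volume_m3):
    """Peak volumetric power density (mW/m^3) from a polarization curve:
    P = V * I / volume at each operating point; return the maximum."""
    p = np.asarray(voltage) * np.asarray(current) / reactor_volume_m3
    return float(p.max() * 1000.0)  # convert W/m^3 to mW/m^3

# Hypothetical polarization data for an air-cathode MFC (illustrative
# only, not the study's measurements).
voltage = np.array([0.55, 0.48, 0.40, 0.31, 0.20])          # cell voltage, V
current = np.array([0.0, 0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3])   # current, A
volume = 2.5e-4   # m^3, an assumed 250 mL anode chamber

p_max = peak_power_density(voltage, current, volume)
# The peak lies at an intermediate load, as on a real polarization curve.
```

Power density normalised this way (per reactor volume) is what allows the 165 vs 95 mW/m³ comparison between the two cathode catalysts.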

Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve

Procedia PDF Downloads 555
359 Utilizing Fly Ash Cenosphere and Aerogel for Lightweight Thermal Insulating Cement-Based Composites

Authors: Asad Hanif, Pavithra Parthasarathy, Zongjin Li

Abstract:

Thermal insulating composites help to reduce the total power consumption in a building by creating a barrier between the external and internal environments. Such composites can be used in roofing tiles or wall panels for exterior surfaces. This study aims to develop lightweight cement-based composites for thermal insulating applications. Waste materials like silica fume (an industrial by-product) and fly ash cenosphere (FAC) (hollow micro-spherical shells obtained as a waste residue from coal-fired power plants) were used as partial replacement of cement and as lightweight filler, respectively. Moreover, aerogel, a nano-porous material made of silica, was also used in different dosages for improved thermal insulating behavior, while polyvinyl alcohol (PVA) fibers were added for enhanced toughness. The raw materials, including binders and fillers, were characterized by X-ray diffraction (XRD), X-ray fluorescence spectroscopy (XRF), and Brunauer-Emmett-Teller (BET) analysis techniques, through which various physical and chemical properties such as specific surface area, chemical composition (oxide form), and pore size distribution (if any) were evaluated. Ultra-lightweight cementitious composites were developed by varying the amounts of FAC and aerogel, with 28-day unit weights ranging from 1551.28 kg/m³ to 1027.85 kg/m³. Excellent mechanical and thermal insulating properties of the resulting composites were obtained, ranging from 53.62 MPa to 8.66 MPa in compressive strength, 9.77 MPa to 3.98 MPa in flexural strength, and 0.3025 W/m-K to 0.2009 W/m-K in thermal conductivity coefficient (QTM-500). The composites were also tested for the peak temperature difference between outer and inner surfaces when subjected to heating (in a specially designed experimental set-up) by a 275 W infrared lamp. A temperature difference of up to 16.78 °C was achieved, which indicated the outstanding ability of the developed composites to act as a thermal barrier for building envelopes. 
Microstructural studies were carried out by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) for characterizing the inner structure of the composite specimen. Also, the hydration products were quantified using the surface area mapping and line scale technique in EDS. The microstructural analyses indicated excellent bonding of FAC and aerogel in the cementitious system. Also, selective reactivity of FAC was ascertained from the SEM imagery where the partially consumed FAC shells were observed. All in all, the lightweight fillers, FAC, and aerogel helped to produce the lightweight composites due to their physical characteristics, while exceptional mechanical properties, owing to FAC partial reactivity, were achieved.
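The link between the measured thermal conductivity and the surface temperature difference can be illustrated with steady-state one-dimensional conduction (Fourier's law), ΔT = q·L/k. The panel thickness and heat flux below are assumed values for illustration, not the study's experimental set-up:

```python
def temperature_drop(heat_flux_w_m2, thickness_m, k_w_mk):
    """Steady-state 1-D conduction: the temperature difference sustained
    across a slab is dT = q * L / k (Fourier's law rearranged)."""
    return heat_flux_w_m2 * thickness_m / k_w_mk

# Hypothetical panel: 25 mm thick under 150 W/m^2 of absorbed flux,
# evaluated at the two bounding conductivities reported in the study.
dT_low_k = temperature_drop(150.0, 0.025, 0.2009)   # best insulator
dT_high_k = temperature_drop(150.0, 0.025, 0.3025)  # densest mix
# The lower-conductivity mix sustains the larger temperature difference,
# consistent with the trend the infrared-lamp test measures.
```

The real test is transient and radiative, so this is only the steady-state intuition, not a model of the 275 W lamp experiment.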

Keywords: aerogel, cement-based, composite, fly ash cenosphere, lightweight, sustainable development, thermal conductivity

Procedia PDF Downloads 223
358 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves when it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package (Water GEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimization of these parameters in order to obtain the closest results in comparison with actual measured data in a real DWDS would result in both cost reduction as well as reduction in consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of three independent water quality parameters. High and low levels of the water quality parameters were considered, inevitably, as explicit constraints, in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for two water supply scenarios were defined and the experimental levels for each variable were selected based on the actual conditions in studied DWDS. 
It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network would be minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology to predict mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
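The RSM step amounts to fitting a second-order polynomial to the response (here, model RMSE) over the design points and solving for the stationary point of the fitted surface. A sketch with two factors and a synthetic response whose optimum is placed near the reported peak-supply values; Design Expert and the actual WQNM runs are not reproduced, and the response function below is invented for illustration:

```python
import numpy as np
from itertools import product

def fit_quadratic_surface(X, y):
    """Least-squares fit of the full second-order model
    y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(D, y, rcond=None)
    return b

def stationary_point(b):
    """Set the gradient of the fitted quadratic to zero:
    H @ x = -[b1, b2], with Hessian H = [[2*b3, b5], [b5, 2*b4]].
    The point is a minimiser when H is positive definite."""
    H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    return np.linalg.solve(H, -np.array([b[1], b[2]]))

# Synthetic "RMSE" response with a known optimum at pH = 7.75, T = 34.0
# (illustrative only; real responses would come from WQNM simulations).
pts = np.array(list(product(np.linspace(7.0, 8.5, 5),
                            np.linspace(15.0, 40.0, 5))))
rmse = 0.19 + 0.5 * (pts[:, 0] - 7.75)**2 + 0.002 * (pts[:, 1] - 34.0)**2

b = fit_quadratic_surface(pts, rmse)
opt = stationary_point(b)  # recovers the planted optimum
```

Keeping the optimum search inside the ranges of the design points, as the authors do with their explicit constraints, avoids extrapolating the fitted surface.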

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 224
357 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

The urge to apply Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open environment that may be unattended, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations, such as lifetime (i.e., period of operation) and security. The importance of WSN applications, which can be found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by IDSs. In other words, not all requirements are implemented and then traced. 
Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing them. Consequently, this has resulted in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn with regard to applying requirement engineering to systems to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
356 Neighborhood Sustainability Assessment Tools: A Conceptual Framework for Their Use in Building Adaptive Capacity to Climate Change

Authors: Sally Naji, Julie Gwilliam

Abstract:

Climate change remains a challenging matter for humans and the built environment in the 21st century, where the need to consider adaptation to climate change in the development process is paramount. However, there remains a lack of information regarding how we should prepare responses to this issue, such as through developing organized and sophisticated tools enabling the adaptation process. This study aims to build a systematic framework to investigate the potential that Neighborhood Sustainability Assessment (NSA) tools might offer in enabling both the analysis and the promotion of adaptive capacity to climate change. The framework presented in this paper discusses this issue in three main phases. The first part attempts to link sustainability and climate change in the context of adaptive capacity. It is argued that in deciding to promote sustainability in the context of climate change, both the resilience and vulnerability processes become central. However, there is still a gap in the current literature regarding how the sustainable development process can respond to climate change, as well as how the resilience of practical strategies might be evaluated. It is suggested that the integration of sustainability assessment processes with both resilience thinking and vulnerability might provide important components for addressing adaptive capacity to climate change. A critical review of existing literature is presented, illustrating the current lack of work integrating these three concepts in the context of addressing adaptive capacity to climate change. The second part aims to identify the most appropriate scale at which to address the built environment for climate change adaptation. It is suggested that the neighborhood scale can be considered more suitable than either the building or urban scales. 
It then presents the example of NSAs and discusses the need to explore their potential role in promoting adaptive capacity to climate change. The third part of the framework presents a comparison among three example NSAs: BREEAM Communities, LEED-ND, and CASBEE-UD. These three tools have been selected as the most developed and comprehensive assessment tools currently available for the neighborhood scale. This study concludes that NSAs are likely to provide the basis for an organized framework to address the practical process of analyzing and promoting adaptive capacity to climate change. It is further argued that vulnerability (exposure and sensitivity) and resilience (interdependence and recovery) form essential aspects to be addressed in future assessment of NSAs' capability to adapt to both short- and long-term climate change impacts. Finally, it is acknowledged that further work is now required to understand impact assessment in terms of the range of physical sectors (water, energy, transportation, building, land use, and ecosystems) and actor and stakeholder engagement, as well as a detailed evaluation of the NSA indicators, together with a barriers diagnosis process.

Keywords: adaptive capacity, climate change, NSA tools, resilience, sustainability

Procedia PDF Downloads 381
355 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis

Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam

Abstract:

Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantation is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult by limitations in genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high- and low-yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of 6 individual Jatropha plants from 4 accessions described as high- and low-yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, individual plants (n=10) from the high- and low-yielding populations were screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSamples SAMN05827448-65. Each transcriptome was subjected to functional annotation analysis of sequence datasets using the BLAST2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high-yielding Jatropha accessions as those with an average NFPP = 60 and FPI > 10, whereas the low-yielding accessions yielded an average NFPP = 10 and FPI < 5. Next-generation sequencing revealed genes differentially expressed in the high-yielding Jatropha relative to the low-yielding plants. Distinct differences were observed in transcript levels associated with photosynthesis metabolism.
The collection of differentially expressed genes (DEGs) in the low-yielding population indicated comparable CAM photosynthetic metabolism and photorespiration, evident in the following: a phosphoenolpyruvate phosphate translocator, chloroplastic-like isoform, with a 2.5 fold change (FC), and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to the significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing organ of the plant body. A large number of the DEGs in the high-yielding population were attributable to the chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1-4, alpha-glucan branching enzyme chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). These results parallel a significant increase in chlorophyll a content in the high-yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM), at 2.9 FC, which codes for distant stomatal distribution and patterning, may explain the high concentration of CO2 in the high-yielding population. The results are in agreement with the known role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high-yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation. From the physiological descriptions, high chlorophyll a content and even distribution of stomata in the leaf contribute to better photosynthetic efficiency in the high-yielding Jatropha compared to the low-yielding population.

Keywords: chlorophyll, gene expression, genetic variation, stomata

Procedia PDF Downloads 238
354 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibration amplitudes and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and on the amount of energy required by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle.
The current work takes the simplified approach of modelling a structure with sensors at all candidate locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, computed for each mode by dividing each sensor's output voltage by the maximum sensor output voltage for that mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
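The sensor-effectiveness ranking described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the voltage matrix below is made-up data standing in for the modal sensor outputs a finite element model would produce.

```python
import numpy as np

# Hypothetical modal sensor voltages: rows = candidate sensor locations,
# columns = vibration modes (values are illustrative only).
voltages = np.array([
    [0.9, 0.2, 0.5],
    [0.3, 1.1, 0.4],
    [0.6, 0.8, 1.2],
])

# Percentage effectiveness per mode: each sensor's output voltage divided
# by the maximum sensor output for that mode, as a percentage.
effectiveness = 100.0 * np.abs(voltages) / np.abs(voltages).max(axis=0)

# Average percentage effectiveness across the modes of interest; the
# locations with the largest averages are the candidate s/a sites.
avg_effectiveness = effectiveness.mean(axis=1)
best_locations = np.argsort(avg_effectiveness)[::-1]
print(best_locations)  # locations ranked from most to least effective
```

In this toy example, location 2 ranks first because it responds strongly to all three modes, even though it is not the single best sensor for any one of them.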

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 231
353 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. 
Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
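The graph-based representation and probabilistic sampling steps described above can be sketched as follows. This is a minimal sketch under stated assumptions: the transaction tuples, the uniform sampling scheme, and all names are illustrative, not the authors' pipeline (which would stream data from an Ethereum node and use a more deliberate sampling design).

```python
import random
from collections import defaultdict

# Toy transaction records (block_number, sender, receiver, value);
# purely illustrative stand-ins for real Ethereum transactions.
txs = [(1, "a", "b", 5.0), (1, "b", "c", 2.0),
       (2, "a", "c", 1.0), (3, "c", "a", 4.0),
       (3, "b", "a", 0.5), (4, "a", "b", 3.0)]

def build_graph(transactions):
    """Adjacency-list graph: nodes are addresses, edges carry tx values."""
    graph = defaultdict(list)
    for _, sender, receiver, value in transactions:
        graph[sender].append((receiver, value))
    return graph

def sample_transactions(transactions, rate, seed=0):
    """Uniform probabilistic sample retaining roughly `rate` of all txs."""
    rng = random.Random(seed)
    return [t for t in transactions if rng.random() < rate]

subset = sample_transactions(txs, rate=0.5)
graph = build_graph(subset)
print(len(subset), "sampled txs over", len(graph), "sender nodes")
```

The seeded generator makes the sample reproducible, which matters when downstream GCN training and distributed analysis must agree on the same subgraph.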

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 76
352 Investigation of the Usability of Biochars Obtained from Olive Pomace and Smashed Olive Seeds as Additives for Bituminous Binders

Authors: Muhammed Ertugrul Celoglu, Beyza Furtana, Mehmet Yilmaz, Baha Vural Kok

Abstract:

Biomass, which is considered one of the largest renewable energy sources in the world, has the potential to be utilized as a bitumen additive after it is processed by any of a wide variety of thermochemical methods. Furthermore, biomasses are renewable within short periods of time, and they possess a hydrocarbon structure. These characteristics promote their usability as additives. One of the most common ways to create materials with significant economic value from biomass is pyrolysis. Pyrolysis is defined as the thermochemical degradation (carbonization) of organic matter at high temperature in an anaerobic environment. The liquid substance resulting from pyrolysis is defined as bio-oil, whereas the resultant solid substance is defined as biochar. Olive pomace is the mildly oily pulp, with seeds, that remains after olives are pressed and their oil is extracted. As the waste of olive oil factories, it is a significant source of biomass. Because olive pomace is a waste material, it can create problems, just as other wastes do, unless there are appropriate and acceptable areas of utilization. The waste material, which is generated in large amounts, is generally used as fuel and fertilizer. Additive materials are generally used to improve the properties of bituminous binders, and these are usually expensive, chemically produced materials. The aim of this study is to investigate the usability of biochars, obtained by subjecting olive pomace and smashed olive seeds (both considered waste materials) to pyrolysis, as additives in bitumen modification. In this way, various uses will be provided for waste material, yielding both economic and environmental benefits. In this study, olive pomace and smashed olive seeds were used as sources of biomass. Initially, both materials were ground and passed through a No. 50 sieve.
Both of the sieved materials were subjected to pyrolysis (carbonization) at 400 ℃. Following the pyrolysis process, bio-oil and biochar were obtained. The obtained biochars were added to B160/220-grade pure bitumen at rates of 10% and 15%, and modified bitumens were obtained by blending in a high-shear mixer at 180 ℃ for 1 hour at 2000 rpm. Pure bitumen and the four different binders obtained as a result of the modifications were tested by penetration, softening point, rotational viscometer, and dynamic shear rheometer tests to evaluate the effects of the additives and additive ratios. According to the test results, both biochar modifications at both ratios improved the performance of the pure bitumen. A comparison of the test results of the binders modified with the biochars of olive pomace and smashed olive seed revealed no notable difference in their performance.

Keywords: bituminous binders, biochar, biomass, olive pomace, pomace, pyrolysis

Procedia PDF Downloads 132
351 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model.
By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
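The step of combining the symptom-based classifier's output with the CNN's image-based output can be illustrated with a simple late-fusion sketch. The function names, weights, and threshold below are hypothetical illustrations, not the authors' published configuration.

```python
# Hypothetical late-fusion step: combine the symptom classifier's probability
# with the CNN's image-based probability into one risk score for a disease.

def fuse_predictions(p_symptoms, p_image, w_symptoms=0.4, w_image=0.6):
    """Weighted average of the two model probabilities for one disease."""
    return w_symptoms * p_symptoms + w_image * p_image

def diagnose(p_symptoms, p_image, threshold=0.5):
    """Map the fused score to a coarse label for the application's UI."""
    score = fuse_predictions(p_symptoms, p_image)
    return ("likely", score) if score >= threshold else ("unlikely", score)

label, score = diagnose(p_symptoms=0.7, p_image=0.8)
print(label, round(score, 2))  # fused score = 0.4*0.7 + 0.6*0.8 = 0.76
```

Weighting the image model more heavily reflects the common design choice of trusting the X-ray evidence over self-reported symptoms; in practice, such weights would be tuned on validation data.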

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 73
350 The Coaching on Lifestyle Intervention (CooL): Preliminary Results and Implementation Process

Authors: Celeste E. van Rinsum, Sanne M. P. L. Gerards, Geert M. Rutten, Ien A. M. van de Goor, Stef P. J. Kremers

Abstract:

Combined lifestyle interventions have been shown to be effective in changing and maintaining behavioral lifestyle changes and reducing overweight and obesity. A lifestyle coach is expected to promote lifestyle changes in adults related to physical activity and diet. The present Coaching on Lifestyle (CooL) study examined participants' physical activity levels, dietary behaviors, and motivational changes immediately after the intervention and at 1.5 years after baseline. In the CooL intervention, a lifestyle coach coaches individuals aged eighteen years and older with (a high risk of) obesity in group and individual sessions. In addition, a process evaluation was conducted in order to examine the implementation process and to be able to interpret the changes within the participants. This action-oriented research has a pre-post design. Participants in the CooL intervention (N = 200) completed three questionnaires: at baseline, immediately after the intervention (on average after 44 weeks), and at 1.5 years after baseline. T-tests and linear regressions were conducted to test self-reported changes in physical activity (IPAQ), dietary behaviors, quality of motivation for physical activity (BREQ-3) and for diet (REBS), body mass index (BMI), and quality of life (EQ-5D-3L). For the process evaluation, we used individual and group interviews, observations, and document analyses to gain insight into the implementation process (e.g., recruitment) and how the intervention was valued by the participants, lifestyle coaches, and referrers. The study is currently ongoing, and the results presented here are therefore preliminary. On average, the participants who finished the intervention and those who completed the long-term measurement improved their level of vigorous-intensity physical activity, sedentary behavior, sugar-sweetened beverage consumption, and BMI. Mixed results were observed for motivational regulation for physical activity and nutrition.
Moreover, an improvement in the quality-of-life dimension anxiety/depression was found, also in the long term. All the other constructs did not show significant change over time. The results of the process evaluation showed that recruitment of clients was difficult. Participants evaluated the intervention positively, and the lifestyle coaches continuously adapted the structure and contents of the intervention throughout the study period, based on their experiences and on feedback from the research. Preliminary results indicate that the CooL intervention may have beneficial effects on overweight and obese participants in terms of energy balance-related behaviors, weight reduction, and quality of life. Recruiting participants and embedding the position of the lifestyle coach in traditional care structures remain challenging.

Keywords: combined lifestyle intervention, effect evaluation, lifestyle coaching, process evaluation, overweight, the Netherlands

Procedia PDF Downloads 229
349 Governance Challenges for the Management of Water Resources in Agriculture: The Italian Way

Authors: Silvia Baralla, Raffaella Zucaro, Romina Lorenzetti

Abstract:

Water management needs to cope with economic, societal, and environmental changes. This can be achieved through 'shifting from government to governance'. In recent decades, this shift was applied in Europe through important legislative pillars (the Water Framework Directive and the Common Agricultural Policy) and their measures focused on resilience and adaptation to climate change, with particular attention to the creation of synergies among policies and all the actors involved at different levels. Within the climate change context, the agricultural sector can, through sustainable water management, play a leading role in climate-resilient growth and environmental integrity. A recent analysis of the water management governance of different countries identified some common gaps concerning administration, policy, information, capacity building, funding, objectives, and accountability. The ability of a country to fill these gaps is an essential requirement for making some of the changes requested by Europe, in particular improving the resilience of agro-ecosystems to the effects of climate change, supporting green and digital transitions, and ensuring sustainable water use. This research aims to contribute by sharing examples of water governance and related advantages useful for filling the highlighted gaps. Italy has developed a strong and exhaustive model of water governance that allows it to react with strategic and synergic actions, since it is one of the European countries most threatened by climate change and its extreme events (droughts, floods). In particular, the Italian water governance model was able to overcome several gaps, specifically concerning water use in agriculture, by adopting strategies such as a systemic/integrated approach, stakeholder engagement, capacity building, the improvement of planning and monitoring ability, and an adaptive/resilient strategy for funding activities.
These strategies were carried out by putting in place regulatory, structural, and management actions. Regulatory actions include both the institution of technical committees grouping together water decision-makers and the elaboration of operative manuals and guidelines by means of a participative and cross-cutting approach. Structural actions deal with the funding of interventions within European and national funds according to the principles of coherence and complementarity. Finally, management actions concern the introduction of operational tools to support decision-makers in order to improve planning and monitoring ability. In particular, two cross-functional and interoperable web databases were introduced: SIGRIAN (National Information System for Water Resources Management in Agriculture) and DANIA (National Database of Investments for Irrigation and the Environment). Their interconnection makes it possible to support sustainable investments while taking into account compliance with the irrigation volumes quantified in SIGRIAN, ensuring a high level of attention to water saving, and monitoring the efficiency of funding. The main positive results of the Italian water governance model include synergic and coordinated work at the national, regional, and local levels among institutions; transparency on water use in agriculture; a deeper understanding on the stakeholders' side of the importance of their roles and of their own potential benefits; and the capacity to guarantee continuity to this model through a sensitization process and the combined use of management operational tools.

Keywords: agricultural sustainability, governance model, water management, water policies

Procedia PDF Downloads 117
348 Implications of Agricultural Subsidies Since Green Revolution: A Case Study of Indian Punjab

Authors: Kriti Jain, Sucha Singh Gill

Abstract:

Subsidies have been a major part of agricultural policies around the world, and more extensively so since the green revolution in developing countries, for the sake of attaining higher agricultural productivity and achieving food security. But entrenched subsidies lead to distorted incentives and promote inefficiencies in the agricultural sector, threatening the viability of these very subsidies and the sustainability of the agricultural production systems, posing a threat to the livelihood of the farmers and laborers dependent on them. This paper analyzes the economic and ecological sustainability implications of prolonged input and output subsidies in agriculture by studying the case of Indian Punjab, an agriculturally developed state responsible for ensuring food security in the country when it was facing a major food crisis. The paper focuses specifically on the environmentally unsustainable cropping pattern changes that have resulted from the Minimum Support Price (MSP) and assured procurement, and on the resource use efficiency and cost implications of the power subsidy for irrigation in Punjab. The study is based on an analysis of both secondary and primary data sources. Using secondary data, a time series analysis was done to capture the changes in Punjab's cropping pattern, water table depth, fertilizer consumption, and electrification of agriculture. This was done to examine the role of the price and output support adopted to encourage the adoption of green revolution technology in changing the cropping structure of the state, resulting in increased input use intensities (especially of groundwater and fertilizers), which harm the ecological balance and decrease factor productivity. An assessment of the electrification of Punjab agriculture helped evaluate the trend in the electricity productivity of agriculture and how free power imposed further pressure on the extant agricultural ecosystem.
Using data collected from a primary survey of 320 farmers in Punjab, the extent of wasteful application of groundwater irrigation, the water productivity of output, electricity usage, and the cost to the exchequer of the irrigation-driven electricity subsidy were estimated for the cropping pattern dominant amongst farmers. The main findings of the study reveal how, because of a subsidy-driven agricultural framework, Punjab has lost area under agro-climatically suitable and staple crops and moved towards a paddy-wheat cropping system that is gnawing away at the state's natural resources: the water table has been declining at a significant rate of 25 cm per year since 1975-76, and excessive and imbalanced fertilizer usage has led to declining soil fertility in the state. With electricity-driven tubewells as the major source of irrigation, within a regime of free electricity and water-intensive crop cultivation, there is wasteful application of both irrigation water and electricity in the cultivation of the paddy crop, burning an unproductive hole in the exchequer's pocket. There is limited access to both agricultural extension services and water-conserving technology, along with policy imbalance, keeping farmers in an intensive and unsustainable production system. Punjab agriculture is witnessing diminishing returns to factors, which, under the business-as-usual scenario, will soon enter the phase of negative returns.

Keywords: cropping pattern, electrification, subsidy, sustainability

Procedia PDF Downloads 185
347 Damage-Based Seismic Design and Evaluation of Reinforced Concrete Bridges

Authors: Ping-Hsiung Wang, Kuo-Chun Chang

Abstract:

There has been a common trend worldwide in the seismic design and evaluation of bridges towards performance-based methods, where the lateral displacement or the displacement ductility of the bridge column is regarded as an important indicator for performance assessment. However, the seismic response of a bridge to an earthquake is a combined result of cyclic displacements and accumulated energy dissipation, both of which cause damage to the bridge, and hence the lateral displacement (ductility) alone is insufficient to tell its actual seismic performance. This study aims to propose a damage-based seismic design and evaluation method for reinforced concrete bridges on the basis of newly developed capacity-based inelastic displacement spectra. The capacity-based inelastic displacement spectra, which comprise an inelastic displacement ratio spectrum and a corresponding damage state spectrum, were constructed by using a series of nonlinear time history analyses and a versatile, smooth hysteresis model. The smooth model can take into account the effects of various design parameters of RC bridge columns and correlates the column's strength deterioration with the Park and Ang damage index. It was shown that the damage index not only can be used to accurately predict the onset of strength deterioration, but also can be a good indicator for assessing the actual visible damage condition of a column regardless of its loading history (i.e., similar damage indices correspond to similar actual damage conditions for identically designed columns subjected to very different cyclic loading protocols as well as earthquake loading), providing better insight into the seismic performance of bridges.
Besides, the computed spectra show that the inelastic displacement ratio for far-field ground motions approximately conforms to the equal displacement rule when the structural period is longer than around 0.8 s, but that for near-fault ground motions it departs from the rule over the whole considered spectral region. Furthermore, near-fault ground motions lead to significantly greater inelastic displacement ratios and damage indices than far-field ground motions, and most practical design scenarios cannot survive the considered near-fault ground motions when the strength reduction factor of the bridge is not less than 5.0. Finally, a spectrum formula is presented as a function of the structural period, the strength reduction factor, and various column design parameters for far-field and near-fault ground motions by means of regression analysis of the computed spectra. Based on the developed spectrum formula, a design example of a bridge is presented to illustrate the proposed damage-based seismic design and evaluation method, where the damage state of the bridge is used as the performance objective.
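For context, the Park and Ang damage index referenced above is conventionally written as a combination of peak displacement demand and hysteretic energy dissipation. The form below is the commonly cited one (stated here with its usual symbols; the abstract itself does not give the formula):

```latex
DI = \frac{\delta_m}{\delta_u} + \frac{\beta}{Q_y \, \delta_u} \int dE
```

where $\delta_m$ is the maximum displacement under the earthquake, $\delta_u$ the ultimate displacement under monotonic loading, $Q_y$ the yield strength, $\int dE$ the absorbed hysteretic energy, and $\beta$ a strength-deterioration parameter. The first term captures the cyclic displacement demand and the second the accumulated energy dissipation, which is precisely why the index tracks both damage mechanisms the abstract identifies.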

Keywords: damage index, far-field, near-fault, reinforced concrete bridge, seismic design and evaluation

Procedia PDF Downloads 125
346 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive

Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu

Abstract:

Diamond-reinforced Ni matrix composites have been widely applied in engineering for coating large-area structural parts owing to their high hardness and good wear and corrosion resistance compared with pure nickel. The mechanical properties of a Ni-diamond composite coating can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase the coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed, combined with energy-dispersive X-ray spectrometry, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the coating's wear resistance and cutting performance. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings.
At an optimized glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine. The hardness of the composite coatings increased as the glycine concentration increased. The friction and wear properties were evaluated as the glycine concentration was varied, showing a decrease in wear volume. The wear resistance of the composite coatings increased as the glycine content was increased to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to nickel grain refinement and improved the diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of the embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and hence increasing the wear resistance of the composite coatings. Therefore, glycine can be used as an additive during the co-deposition process to improve the mechanical properties of protective coatings.

Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings

Procedia PDF Downloads 125