Search results for: cooling and heating capacities
236 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel, S. P. Dave, M. V. Shah
Abstract:
Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To achieve this, construction companies and contractors need to conduct soil testing periodically. This study focuses on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical geo-database involves three essential steps. First, borehole data is collected from reputable sources. Second, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS. This integration allows users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (t/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This approach highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers: the information generated by this study can be used to make informed decisions during construction activities, for instance to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activities, necessitating soil testing. This study focused on creating a comprehensive geotechnical database integrated with GIS, developed by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The resulting GIS map offers an efficient solution with greater accuracy, generates valuable information for correlation analysis, and serves as a decision support tool that allows geotechnical engineers to make informed decisions during construction activities.
Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity
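The study itself uses a Topographic to Raster process in ArcGIS; as a minimal stand-in for the general idea of assigning estimated values to all grid locations from sampled borehole values, the sketch below implements simple inverse-distance weighting (IDW). The borehole coordinates and SPT N-values are hypothetical, not the study's data.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Estimate values on a regular grid from scattered samples by
    inverse-distance weighting (a simple stand-in for GIS interpolation)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    grid = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j])
            if d.min() < 1e-9:                  # grid node coincides with a borehole
                grid[i, j] = values[d.argmin()]
            else:
                w = 1.0 / d**power              # closer boreholes weigh more
                grid[i, j] = np.sum(w * values) / np.sum(w)
    return grid

# Hypothetical borehole locations (m) and SPT N-values
boreholes = np.array([[120.0, 340.0], [480.0, 150.0], [300.0, 600.0], [700.0, 420.0]])
spt_n = np.array([18.0, 32.0, 11.0, 25.0])

surface = idw_grid(boreholes, spt_n, np.linspace(0, 800, 9), np.linspace(0, 800, 9))
print(np.round(surface, 1))  # contour-ready grid of estimated SPT N-values
```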
Procedia PDF Downloads 73
235 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel
Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani
Abstract:
Automobile manufacturers are constantly seeking means to reduce the weight of car bodies. The usage of several steel grades in auto body assembling has been found to be a good technique to lighten vehicle weight. In recent years, the usage of dual phase (DP) steels, transformation induced plasticity (TRIP) steels and boron steels in some parts of the auto body has become a necessity because of the weight savings they allow. However, when these steels undergo a fast heat treatment, the resultant microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the most used techniques in assembling auto bodies, becomes problematic in the case of these steels. RSW being indeed a process where steel is heated and cooled in a very short period of time, the resulting weld nugget is mostly fully martensitic, especially in the case of DP, TRIP and boron steels, but that also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite, in its turn, must be avoided as much as possible when welding steel because it is the principal source of brittleness and it weakens the weld nugget. Thus, this work aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembling. The phase transformation kinetics during RSW have been predicted. That prediction has been made possible through the modelling of the whole welding process, and a technique called post weld heat treatment (PWHT) has been applied in order to reduce the martensite fraction in the weld nugget. Simulation has been performed for the AISI 1010 grade, and results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite and pearlite during the cooling of the weld nugget. Welding experiments have been done in parallel, and micrographic analyses show the presence of several phases in the weld nugget. Experimental weld geometry and phase proportions are in good agreement with simulation results, showing the validity of the model.
Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics
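The abstract does not spell out which kinetic laws the model uses; a common choice, shown below as a hedged sketch, is the Koistinen-Marburger equation for the athermal martensite fraction and a JMAK (Avrami) law for the diffusional products (ferrite, pearlite, bainite). The Ms temperature, coefficients and hold times here are illustrative, not the paper's fitted values.

```python
import numpy as np

def martensite_fraction(T, Ms=420.0, alpha=0.011):
    """Koistinen-Marburger: athermal martensite fraction at temperature T (deg C)."""
    return np.where(T < Ms, 1.0 - np.exp(-alpha * (Ms - T)), 0.0)

def jmak_fraction(t, k=0.05, n=2.5):
    """JMAK (Avrami) law for an isothermal diffusional transformation."""
    return 1.0 - np.exp(-k * t**n)

# Illustrative cooling: weld nugget quenched from 800 deg C to 20 deg C
T = np.linspace(800.0, 20.0, 5)
print("T (deg C):", T)
print("martensite fraction:", np.round(martensite_fraction(T), 3))

# With a PWHT hold at, say, 600 deg C, part of the austenite transforms
# diffusionally first, lowering the fraction left to become martensite:
t_hold = np.array([1.0, 5.0, 10.0])  # hold times in s, illustrative
print("diffusional fraction after hold:", np.round(jmak_fraction(t_hold), 3))
```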
Procedia PDF Downloads 120
234 Plasma Technology for Hazardous Biomedical Waste Treatment
Authors: V. E. Messerle, A. L. Mosse, O. A. Lavrichshev, A. N. Nikonchuk, A. B. Ustimenko
Abstract:
One of the most serious environmental problems today is pollution by biomedical waste (BMW), which in most cases has undesirable properties such as toxicity, carcinogenicity, mutagenicity and fire hazard. Sanitary and hygienic surveys of typical solid BMW, made in Belarus, Kazakhstan, Russia and other countries, show that its risk to the environment is significantly higher than that of most chemical wastes. Utilization of toxic BMW requires the most universal methods to ensure disinfection and disposal of any of its components. Such a technology is plasma processing of BMW. To implement this technology, a thermodynamic analysis of the plasma processing of BMW was carried out and a plasma-box furnace was developed. The studies were conducted on the example of the processing of bone. To perform the thermodynamic calculations, the software package Terra was used. Calculations were carried out in the temperature range 300-3000 K at a pressure of 0.1 MPa. It is shown that the final products do not contain toxic substances. From the organic mass of BMW, synthesis gas containing 77.4-84.6% combustible components was produced, and the mineral part consists mainly of calcium oxide and contains no carbon. The degree of gasification of carbon reaches 100% at 1250 K. Specific power consumption for BMW processing increases with temperature throughout its range and reaches 1 kWh/kg. To realize plasma processing of BMW, an experimental installation with a DC plasma torch of 30 kW power was developed. The experiments allowed verifying the thermodynamic calculations. Wastes are packed in boxes weighing 5-7 kg, which are placed in the box furnace. Under the influence of the air plasma flame, the average temperature in the box reaches 1800 °C; the organic part of the waste is gasified and the inorganic part is melted. The resulting synthesis gas is continuously withdrawn from the unit through the cooling and cleaning system. The molten mineral part of the waste is removed from the furnace after it has been stopped. The experimental studies allowed determining the operating modes of the plasma box furnace; the exhaust gases were analyzed, samples of condensed products were collected and their chemical composition was determined. Gas at the outlet of the plasma box furnace has the following composition (vol.%): CO - 63.4, H2 - 6.2, N2 - 29.6, S - 0.8. The total concentration of synthesis gas (CO + H2) is 69.6%, which agrees well with the thermodynamic calculation. The experiments confirmed the absence of toxic substances in the final products.
Keywords: biomedical waste, box furnace, plasma torch, processing, synthesis gas
Procedia PDF Downloads 526
233 Biosorption as an Efficient Technology for the Removal of Phosphate, Nitrate and Sulphate Anions in Industrial Wastewater
Authors: Angel Villabona-Ortíz, Candelaria Tejada-Tovar, Andrea Viera-Devoz
Abstract:
Wastewater treatment is an issue of vital importance in these times, when the impacts of human activities are most evident; it has become an essential task for the normal functioning of society, because untreated effluents put entire ecosystems at risk and, over time, destroy the possibility of sustainable development. Various conventional technologies are used to remove pollutants from water. Agro-industrial waste is a product with the potential to be used as a renewable raw material for the production of energy and chemical products, and its use is beneficial since products with added value are generated from materials that were not used before. Considering the benefits that the use of residual biomass brings, this project proposes the use of agro-industrial residues from corn crops for the production of natural adsorbents aimed at the remediation of water bodies contaminated with large loads of nutrients. The adsorption capacity of two biomaterials obtained from the processing of corn stalks was evaluated by batch tests. A biochar impregnated with sulfuric acid and thermally activated was synthesized. In parallel, cellulose was extracted from the corn stalks and chemically modified with cetyltrimethylammonium chloride in order to quaternize the surface of the adsorbent. The adsorbents obtained were characterized by thermogravimetric analysis (TGA), scanning electron microscopy (SEM), Fourier-transform infrared spectrometry (FTIR), Brunauer-Emmett-Teller (BET) analysis and X-ray diffraction (XRD), which showed favorable characteristics for the cellulose extraction process. Higher nutrient adsorption capacities were obtained with the biochar, with phosphate being the anion with the best removal percentages. The effect of the initial adsorbate concentration was evaluated, showing that the Freundlich isotherm better describes the adsorption process in most systems. The adsorbent-phosphate/nitrate systems fit better to the pseudo-first-order kinetic model, while the adsorbent-sulfate systems showed a better fit to the pseudo-second-order model, which indicates that there are both physical and chemical interactions in the process. Multicomponent adsorption tests revealed that phosphate anions have a higher affinity for both adsorbents. On the other hand, the negative values of the thermodynamic parameters standard enthalpy (ΔH°) and standard entropy (ΔS°), together with the standard Gibbs free energy (ΔG°) values, indicate that the adsorption of anions with biochar and modified cellulose is spontaneous and exothermic. The use of the evaluated biomaterials is recommended for the treatment of industrial effluents contaminated with sulfate, nitrate and phosphate anions.
Keywords: adsorption, biochar, modified cellulose, corn stalks
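As an illustration of how the pseudo-first-order and pseudo-second-order fits reported above are typically obtained, the sketch below fits both models to batch uptake data with SciPy; the contact times and uptake values are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1 t))."""
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    """Pseudo-second-order uptake: q(t) = qe^2 k2 t / (1 + qe k2 t)."""
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

# Hypothetical batch data: contact time (min) vs phosphate uptake (mg/g)
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q = np.array([3.1, 5.2, 7.6, 9.4, 10.1, 10.5, 10.6])

for name, model in [("PFO", pfo), ("PSO", pso)]:
    popt, _ = curve_fit(model, t, q, p0=[q.max(), 0.05])
    resid = q - model(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((q - q.mean())**2)
    print(f"{name}: qe = {popt[0]:.2f} mg/g, k = {popt[1]:.4f}, R^2 = {r2:.4f}")
```

Comparing the two R² values (or residuals) for each adsorbent-anion pair is what underlies statements like "phosphate/nitrate follow pseudo-first-order, sulfate follows pseudo-second-order".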
Procedia PDF Downloads 184
232 Hot Carrier Photocurrent as a Candidate for an Intrinsic Loss in a Single Junction Solar Cell
Authors: Jonas Gradauskas, Oleksandr Masalskyi, Ihor Zharchenko
Abstract:
The advancement in improving the efficiency of conventional solar cells toward the Shockley-Queisser limit seems to be slowing down or reaching a point of saturation. The challenges hindering the reduction of this efficiency gap can be categorized into extrinsic and intrinsic losses, with the former being theoretically avoidable. Among the five intrinsic losses, two of them, the below-Eg loss (resulting from non-absorption of photons with energy below the semiconductor bandgap) and the thermalization loss, contribute approximately 55% of the overall lost fraction of solar radiation at energy bandgap values corresponding to silicon and gallium arsenide. Efforts to minimize the disparity between theoretically predicted and experimentally achieved efficiencies in solar cells necessitate the integration of innovative physical concepts. Hot carriers (HC) present a contemporary approach to addressing this challenge. The significance of hot carriers in photovoltaics is not fully understood. Although their excess energy is thought to affect a cell's performance only indirectly through thermalization loss, where the excess energy heats the lattice and efficiency is lost, evidence suggests the presence of hot carriers in solar cells. Despite their exceptionally brief lifespan, tangible benefits arise from their existence. The study highlights direct experimental evidence of the hot carrier effect induced by both below- and above-bandgap radiation in a single-junction solar cell. The photocurrent flowing across silicon and GaAs p-n junctions is analyzed. The photoresponse consists, on the whole, of three components caused by electron-hole pair generation, hot carriers, and lattice heating. The last two components counteract the conventional electron-hole generation-caused current required for successful solar cell operation. In addition, a model of the temperature coefficient of the voltage change of the current-voltage characteristic is used to obtain the hot carrier temperature. The distribution of cold and hot carriers is analyzed with regard to the potential barrier height of the p-n junction. These discoveries contribute to a better understanding of hot carrier phenomena in photovoltaic devices and are likely to prompt a reevaluation of intrinsic losses in solar cells.
Keywords: solar cell, hot carriers, intrinsic losses, efficiency, photocurrent
Procedia PDF Downloads 72
231 Collaborative Management Approach for Logistics Flow Management of Cuban Medicine Supply Chain
Authors: Ana Julia Acevedo Urquiaga, Jose A. Acevedo Suarez, Ana Julia Urquiaga Rodriguez, Neyfe Sablon Cossio
Abstract:
Despite the progress made in the logistics and supply chain fields, the development of business models that use information efficiently to facilitate integrated management of logistics flows between partners is unavoidable. Collaborative management is an important tool for materializing cooperation between companies, as a way to achieve supply chain efficiency and effectiveness. The first phase of this research was a comprehensive analysis of collaborative planning in Cuban companies. It is evident that they have difficulties in supply chain planning, where production, supply and replenishment planning are independent tasks, as are logistics and distribution operations. Large inventories generate serious financial and organizational problems for entities, demanding increasing levels of working capital that cannot be financed. Problems were also found in the efficient application of information and communication technology to business management. The general objective of this work is to develop a methodology that allows the deployment of a planning and control system, in a coordinated way, over the medicine logistics system in Cuba. To achieve these objectives, several supply chain coordination mechanisms, mathematical programming models, and other management techniques were analyzed to meet the requirements of collaborative logistics management in Cuba. One of the findings is the practical and theoretical inadequacy of the studied models to solve the current situation of Cuban logistics systems management. To contribute to the tactical-operative management of logistics, the Collaborative Logistics Flow Management Model (CLFMM) is proposed as a tool for balancing cycles, capacities, and inventories, always meeting the final customers' demands in correspondence with the service level they expect. At the center of the CLFMM is the supply chain planning and control system, a single information system that acts on the process network. The development of the model is based on the empirical methods of analysis-synthesis and on case studies. Another finding is the demonstration that using a single information system to support supply chain logistics management allows determining the deadlines and quantities required in each process. This ensures that medications are always available to patients and that there are no faults that put the population's health at risk. The simulation of planning and control with the CLFMM for medicines such as dipyrone and chlordiazepoxide, during 5 months of 2017, made it possible to take measures to adjust the logistics flow, eliminate delayed processes and avoid shortages of the medicines studied. As a result, the logistics cycle efficiency can be increased to 91% and inventory rotation would increase, which results in a release of financial resources.
Keywords: collaborative management, medicine logistic system, supply chain planning, tactical-operative planning
Procedia PDF Downloads 180
230 Kinetic Study of Municipal Plastic Waste
Authors: Laura Salvia Diaz Silvarrey, Anh Phan
Abstract:
Municipal Plastic Waste (MPW) comprises a mixture of thermoplastics such as high and low density polyethylene (HDPE and LDPE), polypropylene (PP), polystyrene (PS) and polyethylene terephthalate (PET). The recycling rate of these plastics is low, e.g. only 27% in 2013; the remainder was incinerated or disposed of in landfills. As MPW generation increases by approximately 5% per annum, MPW management technologies have to be developed to comply with legislation. Pyrolysis, thermochemical decomposition, provides an excellent alternative to convert MPW into valuable resources like fuels and chemicals. Most studies on waste plastic kinetics have focused only on HDPE and LDPE, with the simple assumption of first-order decomposition, which is not the real reaction mechanism. The aim of this study was to develop a kinetic study for each of the polymers in the MPW mixture using thermogravimetric analysis (TGA) over a range of heating rates (5, 10, 20 and 40°C/min) in an N2 atmosphere with a sample size of 1-4 mm. A model-free kinetic method was applied to quantify the activation energy at each level of conversion. The Kissinger-Akahira-Sunose (KAS) and Flynn-Wall-Ozawa (FWO) equations, jointly with master plots, confirmed that the activation energy was not constant over the whole reaction for all five plastics studied, showing that MPW decomposes through a complex mechanism and not by first-order kinetics. The master plots confirmed that MPW decomposes following a random scission mechanism at conversions above 40%. According to the random scission mechanism, different radicals are formed along the backbone, producing the cleavage of bonds by chain scission into molecules of different lengths. The cleavage of bonds during random scission follows first-order kinetics and is related to the conversion. When a bond is broken, one part of the initial molecule becomes an unsaturated molecule and the other a terminal free radical. The latter can react with hydrogen from an adjacent carbon, releasing another free radical and a saturated molecule, or react with another free radical to form an alkane. Not every broken bond releases a molecule that evaporates: at early stages of the reaction (conversion below 40% and temperature below 300°C), most products are not short enough to evaporate. Only at higher degrees of conversion does most of the cleavage of bonds release molecules small enough to evaporate.
Keywords: kinetic, municipal plastic waste, pyrolysis, random scission
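As an illustration of the model-free (isoconversional) approach named above, the sketch below estimates the apparent activation energy at one fixed conversion from the KAS equation, ln(β/T²) = const − Eₐ/(R·T), using the study's four heating rates; the temperatures at which that conversion is reached are hypothetical.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Heating rates from the study; beta's units shift only the intercept,
# not the slope that gives Ea
beta = np.array([5.0, 10.0, 20.0, 40.0])          # deg C/min
# Hypothetical temperatures (K) at which conversion alpha = 0.5 is reached
T_alpha = np.array([680.0, 693.0, 707.0, 722.0])

# KAS: ln(beta / T^2) is linear in 1/T with slope -Ea/R
y = np.log(beta / T_alpha**2)
x = 1.0 / T_alpha
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R / 1000.0                          # kJ/mol
print(f"Apparent activation energy at alpha = 0.5: {Ea:.1f} kJ/mol")
# Repeating this at each conversion level yields the Ea(alpha) profile whose
# variation signals a complex, non-first-order mechanism.
```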
Procedia PDF Downloads 356
229 Model Organic Rankine Cycle Power Plant for Waste Heat Recovery in Olkaria-I Geothermal Power Plant
Authors: Haile Araya Nigusse, Hiram M. Ndiritu, Robert Kiplimo
Abstract:
Energy consumption is an indispensable component of the continued development of the human population. The global energy demand increases with development and population rise. The increase in energy demand, the high cost of fossil fuels and the link between energy utilization and environmental impacts have resulted in the need for a sustainable approach to the utilization of low-grade energy resources. The Organic Rankine Cycle (ORC) power plant is an advantageous technology that can be applied to the generation of power from the low-temperature brine of geothermal reservoirs. The power plant utilizes a low-boiling organic working fluid such as a refrigerant or a hydrocarbon. Research has indicated that the performance of an ORC power plant is highly dependent upon factors such as proper organic working fluid selection and the types of heat exchangers (condenser and evaporator) and turbine used. Despite a high pressure drop, shell-and-tube heat exchangers have satisfactory performance for ORC power plants. This study involved the design, fabrication and performance assessment of the components of a model Organic Rankine Cycle power plant to utilize low-grade geothermal brine. Two shell-and-tube heat exchangers (evaporator and condenser) and a single-stage impulse turbine were designed and fabricated, and the performance assessment of each component was conducted. Pentane was used as the working fluid, with hot water simulating the geothermal brine. The results of the experiment indicated that an increase in the mass flow rate of hot water by 0.08 kg/s raised the overall heat transfer coefficient of the evaporator by 17.33% and increased the heat transferred by 6.74%. In the condenser, increasing the cooling water flow rate from 0.15 kg/s to 0.35 kg/s increased the overall heat transfer coefficient by 1.21% and the heat transferred by 4.26%. The shaft speed varied from 1585 to 4590 rpm as the inlet pressure was varied from 0.5 to 5.0 bar, and the power generated varied from 4.34 to 14.46 W. The results of the experiments indicated that the performance of each component of the model Organic Rankine Cycle power plant operating with low-temperature heat resources was satisfactory.
Keywords: brine, heat exchanger, ORC, turbine
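A hedged sketch of the kind of calculation behind the overall heat transfer coefficient figures above: U is backed out from the measured duty and the log-mean temperature difference via Q = U·A·LMTD. All temperatures, flows and the exchanger area below are illustrative placeholders, not the rig's data.

```python
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference for a heat exchanger."""
    if abs(dt_in - dt_out) < 1e-9:
        return dt_in
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

# Illustrative evaporator measurements (assumed values)
m_dot_hot = 0.40                     # hot-water flow, kg/s
cp_water = 4186.0                    # J/(kg K)
t_hot_in, t_hot_out = 95.0, 78.0     # hot water in/out, deg C
t_pentane = 36.0                     # boiling pentane side, roughly isothermal, deg C
area = 0.85                          # heat transfer area, m^2 (assumed)

q = m_dot_hot * cp_water * (t_hot_in - t_hot_out)        # duty, W
dT = lmtd(t_hot_in - t_pentane, t_hot_out - t_pentane)   # K
U = q / (area * dT)                                      # W/(m^2 K)
print(f"Duty Q = {q/1000:.1f} kW, LMTD = {dT:.1f} K, U = {U:.0f} W/m^2K")
```

Repeating this at two hot-water flow rates is how percentage changes in U and Q, like the 17.33% and 6.74% quoted above, are obtained.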
Procedia PDF Downloads 653
228 Energy Efficient Refrigerator
Authors: Jagannath Koravadi, Archith Gupta
Abstract:
In a world with constantly growing energy prices and growing concerns about global climate change caused by increased energy consumption, it is becoming more and more essential to save energy wherever possible. Refrigeration systems are among the major bulk energy-consuming systems nowadays in the industrial sector, the residential sector and the household environment. Refrigeration systems with considerable cooling requirements consume a large amount of electricity and thereby contribute greatly to running costs. Therefore, a great deal of attention is being paid throughout the world to improving the performance of refrigeration systems. The Coefficient of Performance (COP) of a refrigeration system is used to determine the system's overall efficiency, and the operating cost to the consumer and the overall environmental impact of the system in turn depend on this COP. The COP of a refrigeration system should therefore be as high as possible. Slight modifications to the technical elements of modern refrigeration systems have the potential to reduce energy consumption, and improvements in simple operational practices with minimal expense can have a beneficial impact on the COP of the system. Thus, the challenge is to determine the changes that can be made in a refrigeration system in order to improve its performance, reduce operating costs and power requirements, improve environmental outcomes, and achieve a higher COP. The opportunity here, and a better solution to this challenge, is to incorporate modifications into conventional refrigeration systems for saving energy. Energy efficiency, in addition to improving the COP, can deliver a range of savings such as reduced operation and maintenance costs, improved system reliability, improved safety, increased productivity, better matching of refrigeration load and equipment capacity, reduced resource consumption and greenhouse gas emissions, a better working environment, and reduced energy costs. The present work aims at fabricating a working model of a refrigerator that provides effective heat recovery from the superheated refrigerant with the help of an efficient de-superheater. The temperatures of the refrigerant and of the water in the de-superheater are measured at different intervals of time to determine the quantity of waste heat recovered. It is found that the COP of the system improves by about 6% with the de-superheater, the power input to the compressor decreases by 4%, and the refrigeration capacity increases by 4%.
Keywords: coefficient of performance, de-superheater, refrigerant, refrigeration capacity, heat recovery
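A minimal sketch of the COP arithmetic behind those figures: COP = refrigeration capacity / compressor power input, so a 4% capacity gain combined with a 4% power drop gives 1.04/0.96 ≈ 1.083 in this idealized arithmetic, of the same order as the measured ~6% improvement. The baseline capacity and power below are assumed values for illustration.

```python
# Illustrative baseline figures (assumed, not from the paper)
q_evap = 250.0      # refrigeration capacity, W
w_comp = 100.0      # compressor power input, W

cop_base = q_evap / w_comp
# Reported changes with the de-superheater: +4% capacity, -4% compressor power
cop_mod = (q_evap * 1.04) / (w_comp * 0.96)

print(f"Baseline COP : {cop_base:.2f}")
print(f"Modified COP : {cop_mod:.2f}")
print(f"Improvement  : {100 * (cop_mod / cop_base - 1):.1f} %")
```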
Procedia PDF Downloads 322
227 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments were conducted on a benchmark network with 11 nodes and 21 arcs; five travel time limitations and five demand requirements were set to compute the QoS value. For comparison, we tested the exhaustive complete enumeration method. The computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In summary, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently; computational experiments on a prototype network show that it is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
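A hedged sketch of the node-splitting transform described above, using NetworkX on a toy graph: each intermediate node is replaced by an internal arc carrying the node's travel time weight, after which a standard min-cost max-flow routine applies. The toy topology, capacities and time weights are invented for illustration; the paper's decomposition over random states is not reproduced here.

```python
import networkx as nx

# Toy extended model: arcs with capacities, intermediate nodes with time weights
arcs = [("s", "a", 5), ("s", "b", 4), ("a", "t", 4), ("b", "a", 2), ("b", "t", 3)]
node_time = {"a": 2, "b": 1}        # travel time per unit of flow through the node

def split(n):
    """Intermediate node n becomes n_in -> n_out; source/sink stay as they are."""
    return (n + "_in", n + "_out") if n in node_time else (n, n)

G = nx.DiGraph()
for n, t in node_time.items():
    G.add_edge(n + "_in", n + "_out", weight=t)          # internal arc: node's time
for u, v, cap in arcs:
    G.add_edge(split(u)[1], split(v)[0], capacity=cap, weight=0)  # arcs cost 0

# Min-cost max-flow on the transformed network
flow = nx.max_flow_min_cost(G, "s", "t")
value = sum(flow["s"][v] for v in flow["s"])
cost = nx.cost_of_flow(G, flow)
print(f"max flow = {value}, minimum total travel time = {cost}")
```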
Procedia PDF Downloads 223
226 Results of Three-Year Operation of 220 kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid
Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov
Abstract:
Modern city electrical grids are forced to increase their density due to the increasing number of customers and requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand qualities of disconnectors, cables, and transformers. The superconducting fault current limiter (SFCL) is a modern solution designed to deal with the increasing fault current levels in power grids. The key feature of this device is its instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats that contain the superconductor part and are filled with liquid nitrogen (three phases), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been continuously operating on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (like a cryogenic system power outage). All these faults were eliminated without an SFCL shutdown, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator utilities and the SuperOx crew. The paper describes in detail the results of SFCL operation and cryogenic system maintenance, and the measures taken to solve and prevent similar faults in the future.
Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics
Procedia PDF Downloads 86
225 Chemical, Structural and Mechanical Optimization of Zr-Based Bulk Metallic Glass for Biomedical Applications
Authors: Eliott Guérin, Remi Daudin, Georges Kalepsi, Alexis Lenain, Sebastien Gravier, Benoit Ter-Ovanessian, Damien Fabregue, Jean-Jacques Blandin
Abstract:
Due to an interesting compromise between mechanical and corrosion properties, Zr-based BMGs are attractive for biomedical applications. However, the enhancement of their glass forming ability (GFA) is often achieved by the addition of toxic elements like Ni or Be, which is of course a problem for such applications. Consequently, the development of Ni-free, Be-free Zr-based BMGs is of great interest. We have developed a Zr-based (Ni- and Be-free) amorphous metallic alloy with an elastic limit twice that of Ti-6Al-4V. The Zr56Co28Al16 composition exhibits a yield strength close to 2 GPa and a low Young's modulus (close to 90 GPa) [1-2]. In this work, we investigated niobium (Nb) addition through substitution of Zr up to 8 at%. Cobalt substitution has already been reported [3], but we chose Zr substitution to preserve the glass forming ability. In this case, we show that the glass forming ability for 5 mm diameter rods is maintained up to 3 at% of Nb substitution using suction casting in copper moulds. Concerning thermal stability, we measure a strong compositional dependence of the glass transition temperature (Tg). Using DSC analysis (heating rate 20 K/min), we show that Tg rises from 752 K for 0 at% of Nb to 759 K for 3 at% of Nb. Yet the thermal range between Tg and the crystallisation temperature (Tx) remains almost unchanged, from 33 K to 35 K. Uniaxial compression tests on 2 mm diameter pillars and three-point bending (3PB) tests on 1 mm thick plates were performed to study the effect of Nb addition on the mechanical properties and the plastic behaviour. With these tests, an optimal Nb concentration was found, improving both plasticity and fatigue resistance. Through interpretation of DSC measurements, an attempt is made to correlate the modifications of the mechanical properties with the structural changes. The optimized chemical, structural and mechanical properties achieved through Nb addition are encouraging for developing the potential of this BMG alloy for biomedical applications. For this purpose, we performed polarisation, immersion and cytotoxicity tests. The figure illustrates the polarisation response of Zr56Co28Al16, Zr54Co28Al16Nb2 and TA6V as a reference after 2 h at open circuit potential. The results show that the substitution of Zr by a small amount of Nb significantly improves the corrosion resistance of the alloy.
Keywords: metallic glasses, amorphous metal, medical, mechanical resistance, biocompatibility
Procedia PDF Downloads 154
224 Glycerol-Based Bio-Solvents for Organic Synthesis
Authors: Dorith Tavor, Adi Wolfson
Abstract:
In the past two decades, a variety of green solvents have been proposed, including water, ionic liquids, fluorous solvents, and supercritical fluids. However, their implementation in industrial processes is still limited due to their tedious and non-sustainable synthesis, a lack of experimental data and familiarity, as well as operational restrictions and high cost. Several years ago we presented, for the first time, the use of glycerol-based solvents as alternative sustainable reaction media in both catalytic and non-catalytic organic synthesis. Glycerol is the main by-product of the conversion of oils and fats in oleochemical production. Moreover, in the past decade, its price has decreased substantially due to an increase in supply from the production and use of fatty acid derivatives in the food, cosmetics, and drug industries and in biofuel synthesis, i.e., biodiesel. The renewable origin, beneficial physicochemical properties and reusability of glycerol-based solvents enabled improved product yield and selectivity as well as easy product separation and catalyst recycling. Furthermore, their high boiling point and polarity make them perfect candidates for non-conventional heating and mixing techniques such as ultrasound- and microwave-assisted reactions. Finally, in some reactions, such as catalytic transfer hydrogenation or transesterification, they can also be used simultaneously as both solvent and reactant. In our ongoing efforts to design a viable protocol that will facilitate the acceptance of glycerol and its derivatives as sustainable solvents, pure glycerol and glycerol triacetate (triacetin), as well as various glycerol-triacetin mixtures, were tested as sustainable solvents in several representative organic reactions, such as nucleophilic substitution of benzyl chloride to benzyl acetate, Suzuki-Miyaura cross-coupling of iodobenzene and phenylboronic acid, baker's yeast reduction of ketones, and transfer hydrogenation of olefins. It was found that reaction performance was affected by the glycerol to triacetin ratio, as the solubility of the substrates in the solvent determined product yield. Thereby, employing the optimal glycerol to triacetin ratio resulted in maximum product yield. In addition, using glycerol-based solvents enabled easy and successful separation of the products and recycling of the catalysts.
Keywords: glycerol, green chemistry, sustainability, catalysis
Procedia PDF Downloads 626
223 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector
Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar
Abstract:
Alberta's mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industrial sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining sector activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed and a framework has been developed for the market penetration of new efficient technologies in this sector. The oil sands production level is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The results of the statistical analysis developed and implemented in this research show that the key factors affecting oil sands production in Alberta are ranked as: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil export (80% consistency). A framework for modeling oil sands energy technologies' market penetration (OSETMP) has been developed to cover related technical, economic and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada. The market shares of novel in situ mining technologies with low energy and water use assessed and calculated in the framework include: 1) partial upgrading; 2) liquid addition to steam to enhance recovery (LASER); 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD); 4) cyclic solvent; 5) heated solvent; 6) wedge well; 7) enhanced modified steam and gas push (EMSAGP); 8) electro-thermal dynamic stripping process (ET-DSP); 9) Harris electro-magnetic heating applications (EMHA); 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
Keywords: appliances efficiency improvement, diffusion models, market penetration, residential sector
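The keywords point to diffusion models; a common hedged sketch for technology market penetration is the logistic (S-curve) share model below. The saturation share, take-off year and growth rate are illustrative assumptions, not OSETMP parameters.

```python
import numpy as np

def logistic_share(year, saturation=0.30, midpoint=2030.0, rate=0.25):
    """Logistic diffusion: a technology's market share as a function of time."""
    return saturation / (1.0 + np.exp(-rate * (year - midpoint)))

years = np.arange(2015, 2051, 5)
for y, s in zip(years, logistic_share(years)):
    print(f"{y}: {100 * s:.1f} % of in situ production")
```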
Procedia PDF Downloads 335
222 Experimental Study of Nucleate Pool Boiling Heat Transfer Characteristics on Laser-Processed Copper Surfaces of Different Patterns
Authors: Luvindran Sugumaran, Mohd Nashrul Mohd Zubir, Kazi Md Salim Newaz, Tuan Zaharinie Tuan Zahari, Suazlan Mt Aznam, Aiman Mohd Halil
Abstract:
With the fast growth of integrated circuits and the trend towards making electronic devices smaller, the heat dissipation load of electronic devices has continued to exceed limits. A high heat flux element would not only harm the operation and lifetime of the equipment but would also impede the performance upgrades brought about by successive technology updates, with a direct negative impact on the economic and production cost benefits of rising industries. Hence, in high-tech industries like radar, information and communication, electromagnetic power, and aerospace, the development and implementation of effective heat dissipation technologies are urgently required. Pool boiling is favored over other cooling methods because of its capacity to dissipate a high heat flux at a low wall superheat without the use of mechanical components. Enhancing pool boiling performance by increasing the heat transfer coefficient via surface modification techniques has received a lot of attention. Several surface modification methods are feasible today, but the stability and durability of the modified surface are the greatest priority. Thus, laser machining is an interesting choice for surface modification due to its low production cost, high scalability, and repeatability. In this study, different patterns of laser-processed copper surfaces were fabricated to investigate the nucleate pool boiling heat transfer performance of distilled water. The investigation showed a significant enhancement in the pool boiling heat transfer performance of the laser-processed surface compared to the reference surface, due to notable increases in nucleation frequency and nucleation site density. It was discovered that the heat transfer coefficients increased when both the surface area ratio and the peak-to-valley height ratio of the microstructure were raised. It is believed that the development of microstructures on the surface as a result of laser processing is the primary factor in the enhancement of heat transfer performance.
Keywords: heat transfer coefficient, laser processing, micro structured surface, pool boiling
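A minimal sketch of the figure of merit used in such pool boiling studies: the heat transfer coefficient h = q″/(T_wall − T_sat), compared here for a hypothetical reference surface and a laser-processed one at the same imposed heat flux. The numbers are invented for illustration; a lower wall superheat at the same flux means a higher h.

```python
# Heat transfer coefficient in nucleate pool boiling: h = q'' / (T_wall - T_sat)
T_SAT = 100.0  # saturation temperature of water at 1 atm, deg C

def htc(heat_flux_w_m2, wall_temp_c):
    """h in W/(m^2 K) from imposed heat flux and wall superheat."""
    return heat_flux_w_m2 / (wall_temp_c - T_SAT)

q = 500_000.0  # imposed heat flux, W/m^2 (illustrative)
surfaces = {"plain reference": 115.0, "laser-processed": 108.0}  # wall T, deg C
for name, t_wall in surfaces.items():
    print(f"{name:16s}: superheat {t_wall - T_SAT:4.1f} K, "
          f"h = {htc(q, t_wall) / 1000:.1f} kW/m^2K")
```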
Procedia PDF Downloads 93
221 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures
Authors: Haytam Kasem
Abstract:
The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade a new emerging field of adhesion science has developed, essentially inspired by animals and insects which, during their natural evolution, have developed fantastic biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts made to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still at an early stage compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms: for example, the surface durability and long-term stability of surfaces with high adhesive capacity, but also the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes of bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of a biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material. The counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining different parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ) and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurements. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load as well.
Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model
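The abstract names the four ingredients of the integrated parameter without giving the combination rule, so the sketch below stops at estimating the four from a sampled 1D profile: mean asperity radius of curvature R from peak curvature, asperity density η, standard deviation of asperity heights σ, and the RMS slope as a proxy for SDQ. The profile is synthetic and the estimators are standard textbook ones, not necessarily those used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
dx = 1.0e-6                               # sampling step, m
# Synthetic smoothed random profile standing in for profilometer data
z = np.convolve(rng.normal(0, 1e-6, 4000), np.ones(25) / 25, mode="same")

# Asperities = local maxima of the profile
peaks = np.flatnonzero((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])) + 1

# Peak curvature by central differences; radius R = 1/|z''| at each peak
z2 = (z[peaks - 1] - 2 * z[peaks] + z[peaks + 1]) / dx**2
R = np.mean(1.0 / np.abs(z2))             # mean asperity radius of curvature
eta = len(peaks) / (len(z) * dx)          # asperity density (per metre)
sigma = np.std(z[peaks])                  # deviation of asperity heights
sdq = np.sqrt(np.mean(np.gradient(z, dx) ** 2))  # RMS slope (proxy for SDQ)

print(f"R = {R * 1e6:.2f} um, eta = {eta:.0f} /m, "
      f"sigma = {sigma * 1e9:.1f} nm, SDQ (rms slope) = {sdq:.4f}")
```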
Procedia PDF Downloads 241
220 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method
Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa
Abstract:
Energy harvesting techniques have been widely developed in recent years, due to the high demand for power supplies for 'Internet of things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among the magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength and a high magnetostrictive constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman-Stockbarger method or the Czochralski method. Using these methods, big bulk crystals up to 2-3 inches in diameter can be grown. However, non-uniformity of the chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals in different shapes such as rods, plates, tubes and thin fibers. The advantages of this method are low segregation, due to the high growth rate and the small diffusion of melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the shaped growth of long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100> oriented iron-based alloy was used as a seed crystal. 5 x 1 x 320 mm³ alloy crystal plates were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties and a prototype vibration energy harvester are reported. Furthermore, continuous crystal growth using a powder supply system will be reported, aimed at minimizing the chemical composition non-uniformity along the growth direction.
Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al
Procedia PDF Downloads 337
219 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized ground to enhance students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA is formed by five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that an academic program decides which field it wants to utilize on the basis of its academic interests, but also its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networked higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems, and every program has developed its own LA techniques to improve academic issues. In this context, an investigation was carried out to determine the state of the application of LA in the academic programs of the different faculties. The premise of the study was that not all faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA; but this does not mean they do not work with LA in a veiled, or less clear, sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows appreciation of the administration's work to improve the quality of teaching, and 2) it shows whether other LA techniques could be improved. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, that is, applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 130
218 Research Project on Learning Rationality in Strategic Behaviors: Interdisciplinary Educational Activities in Italian High Schools
Authors: Giovanna Bimonte, Luigi Senatore, Francesco Saverio Tortoriello, Ilaria Veronesi
Abstract:
The education process considers capabilities not only as a means to a certain end but rather as an effective purpose. Sen's capability approach challenges human capital theory, which sees education as an ordinary investment undertaken by individuals. A complex reality requires complex thinking capable of interpreting the dynamics of society's changes, in order to make decisions that can be rational in private, ethical and social contexts. Education is not something removed from the cultural and social context; it exists and is structured within it. In Italy, the "Mathematical High School Project" is a didactic research project based on additional laboratory courses in extracurricular hours, where mathematics is placed in a dialectical relationship with other disciplines as a cultural bridge between the two cultures, the humanistic and the scientific, with interdisciplinary educational modules on themes with a strong impact on young people's lives. This interdisciplinary mathematics presents topics related to the most advanced technologies and contemporary socio-economic frameworks to demonstrate how mathematics is not only a key to reading but also a key to resolving complex problems. Recent developments in mathematics provide the potential for profound and highly beneficial changes in mathematics education at all levels, as well as in socio-economic decisions. The research project was built to investigate whether repeated interactions can successfully promote cooperation among students as a rational choice, and whether skill, context and school background can influence the choice of strategies and rationality. A laboratory on game theory as a mathematical theory was conducted in the 4th year of the Mathematical High Schools and in an ordinary scientific high school of the scientific degree program. Students played two simultaneous games of repeated Prisoner's Dilemma with an indefinite horizon, with a different competitor in each of the two games, the competitors remaining the same for the duration of the game. The results highlight that most of the students in the two classes played the two games with an immunization strategy against the risk of losing: in one of the games they started by playing Cooperate, and in the other by playing Compete. In the literature, theoretical models and experiments show that in the case of repeated interactions with the same adversary, the optimal cooperation strategy can be achieved by tit-for-tat mechanisms. In higher education, individual capacities cannot be examined independently, as the conceptual framework presupposes a social construction of individuals interacting and competing, making individual and collective choices. The paper will outline all the results of the experimentation and the future development of the research.
Keywords: game theory, interdisciplinarity, mathematics education, mathematical high school
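A hedged sketch of the kind of repeated Prisoner's Dilemma referred to above: tit-for-tat against an always-defect opponent over an indefinite horizon (the game continues each round with some probability). The payoff matrix T=5, R=3, P=1, S=0 is the standard textbook choice, assumed here rather than taken from the classroom experiment.

```python
import random

# Standard PD payoffs (assumed): (my_move, opponent_move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(p1, p2, cont_prob=0.9, rng=random.Random(1)):
    """Indefinite horizon: after each round the game continues with cont_prob."""
    h1, h2, s1, s2 = [], [], 0, 0
    while True:
        m1, m2 = p1(h2), p2(h1)   # each strategy sees the opponent's history
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
        if rng.random() > cont_prob:
            return s1, s2, len(h1)

s1, s2, rounds = play(tit_for_tat, always_defect)
print(f"{rounds} rounds: tit-for-tat scored {s1}, always-defect scored {s2}")
```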
Procedia PDF Downloads 76
217 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control
Authors: Mohamed Elmeshad
Abstract:
During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector and a contraction of government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism led to increased pressure to open the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from financial and security perspectives, as well as operational capacities of distribution and access to means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were power dynamics within media establishments. The changes in the country's political economy itself somehow mirrored these developments. This paper will examine these dynamics and shed light on the political economy of Egypt's newly privatized mass media, especially in the early 2000s. Methodology: This study relies on semi-structured interviews with individuals involved with these changes from the perspective of the media organizations. It also maps out the process of media privatization by looking at the administrative, operative and legislative institutions and contexts, in order to draw conclusions on methods of control and the role of the state during the process of privatization. Finally, a brief discourse analysis is used to convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egyptian private, "independent" media mirrored the trajectory of transitions in the country's political economy. Liberalization of the economy meant that a growing class of business owners would explore the opportunities that such new markets would offer. However, the regime's attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation, much of the regime's manoeuvring with regard to media privatization was haphazardly used to indirectly expand the regime's and its ruling party's ability to retain influence, while creating a believable façade of openness. In this paper, we will attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.
Keywords: business, mass media, political economy, power, privatisation
Procedia PDF Downloads 230216 Knowledge, Attitudes, and Practices of Army Soldiers on Prehospital Trauma Care in Matara District
Authors: Hatharasinghe Liyanage Saneetha Chathaurika, Shreenika De Silva Weliange
Abstract:
Background and Significance of the Study: Natural and human-induced disasters have become more common due to rapid development and climate change, and hospitalization due to injuries has increased despite advances in medicine. Prehospital trauma care is critical in reducing morbidity and mortality following injury. Army soldiers are one of the first-responder categories after a major disaster causing injury. Since basic life support measures taken by trained lay first responders are life-saving, it is important to build up soldiers' capacities by updating their knowledge and practices while cultivating positive attitudes. Objective: To describe knowledge, attitudes, and practices on prehospital trauma care among army soldiers in Matara District. Methodology: A descriptive cross-sectional study was carried out among army soldiers in Matara District; the whole population belonging to this group during the study period was studied. A self-administered questionnaire was used as the study instrument. Cross-tabulations were done to identify possible associations using chi-square statistics. Knowledge and practices were categorized into two groups, "Poor" and "Good", taking 50% as the cutoff. Results: The study population consisted of 266 participants (response rate 97.79%). The overall level of knowledge on prehospital trauma care was poor (78.6%), and knowledge of the golden hour of trauma (77.1%), the triage system (74.4%), cardiopulmonary resuscitation (92.5%), and transportation of patients with spinal cord injury (69.2%) was markedly poor. Good knowledge was significantly associated with advanced age, higher income, and higher level of education, whereas it had no significant association with work duration. More than 80% of respondents held positive attitudes on most aspects of prehospital trauma care; the majority thought it good to have knowledge on this topic and felt they would have performed better in disaster situations had they been trained in prehospital trauma care. With regard to practice, the majority (62.8%) fell into the poor-practice group, lacking practice in first aid, cardiopulmonary resuscitation, and safe transportation of patients, and having had little opportunity to participate in drills or simulation programs on disaster events. Good practice was significantly associated with advanced age and higher level of education, but not with income level or working duration. A highly significant association was observed between the level of knowledge and the level of practice: the higher the knowledge, the better the practice. Conclusion: A high proportion of army soldiers had poor knowledge and practice of prehospital trauma care, while the majority held positive attitudes towards it. Most lacked knowledge and practice in first aid and cardiopulmonary resuscitation. Given the significant association observed between knowledge and practice, it is recommended that a training session on prehospital trauma care be included in the basic military curriculum, enhancing soldiers' ability to act effectively as first responders. Further research is needed in this area of prehospital trauma care to strengthen the qualitative outcome. Keywords: disaster, prehospital trauma care, first responders, army soldiers
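The reported knowledge-practice association rests on a chi-square test over cross-tabulated categories. A minimal sketch of that computation follows; the marginal totals are chosen to mirror the reported proportions (209/266 poor knowledge, 167/266 poor practice), but the cell split is a hypothetical placeholder, not the study's actual contingency table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation: knowledge level vs practice level.
#                    practice: Poor  Good
table = np.array([[150, 59],    # knowledge: Poor
                  [ 17, 40]])   # knowledge: Good

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p:.4f}")
# A p-value below 0.05 indicates a significant knowledge-practice
# association, consistent with the study's finding.
```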
Procedia PDF Downloads 235215 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework
Authors: S. Jimenez, R. Duarte, J. Sanchez-Choliz, I. Villanua
Abstract:
There is a certain consensus in the economic literature about the factors that have influenced historical differences in growth rates between developed and developing countries. It is less clear, however, which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress, and hence of productivity growth. Following recent literature, innovation pushes the technological frontier forward and also encourages future innovation through the creation of externalities: the productivity benefits of innovation are not fully appropriated by innovators but spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture the structural and technological characteristics of countries; and second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation within a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, as part of the literature claims, correspondingly different capabilities. To capture these differences, we propose weights based on specialization structure indexes: one related to the specialization of countries in high-tech sectors, the other based on a dispersion index. We propose these two measures because country capabilities can be captured in different ways: through specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. Results suggest that country capabilities grow in importance as trade openness increases. Moreover, in the country rankings based on high-tech-weighted embodied R&D, countries such as China, Taiwan, and Germany rise into the top five despite not having the highest R&D expenditure intensities, underlining the importance of country capabilities. Additionally, using a fixed-effects panel data model, we show that embodied R&D is important in explaining labor productivity increases, even more so than direct R&D investment, reflecting that globalization matters more than has been acknowledged until now. Almost all analyses in this area consider the effect of direct R&D intensity at t-1 on economic growth; in our view, however, R&D evolves as a delayed flow, and some time must pass before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, finding a lag of 4-5 years. Keywords: economic growth, embodied, input-output, technology
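To make the embodied-R&D calculation concrete, the sketch below propagates capability-weighted direct R&D intensities through the Leontief inverse of a toy two-country, two-sector MRIO table. All coefficients, intensities, and weights are made-up illustrative numbers, and the simple multiplicative weighting is only a stand-in for the specialization indexes proposed in the paper.

```python
import numpy as np

# Toy MRIO system: 2 countries x 2 sectors = 4 producing units.
A = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.04, 0.15, 0.01, 0.03],
              [0.03, 0.02, 0.12, 0.06],
              [0.01, 0.04, 0.05, 0.10]])   # technical coefficients (assumed)
y = np.array([100.0, 80.0, 120.0, 90.0])   # final demand (assumed)
r = np.array([0.05, 0.01, 0.03, 0.02])     # direct R&D per unit output (assumed)
w = np.array([1.2, 0.8, 1.1, 0.9])         # capability weights (assumed)

L = np.linalg.inv(np.eye(4) - A)           # Leontief inverse
embodied_intensity = (w * r) @ L           # weighted R&D embodied per unit of final output
embodied_flows = embodied_intensity * y    # R&D embodied in each unit's final demand
print(embodied_intensity.round(4))
print(embodied_flows.round(2))
```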
Procedia PDF Downloads 126214 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film
Authors: Nalla Somaiah, Praveen Kumar
Abstract:
Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot-spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated by continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. It is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration- and temperature-gradient-induced mass transport was studied using the standard Blech structure. In this sample structure, the majority of the current is forcefully directed into the low-resistivity metallic film from a high-resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the Blech structure configuration. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150, and 200 μm, was fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250ºC. Along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, shorter samples consistently showed enhanced migration at the cathode end, indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model showing the interplay between electromigration and thermomigration driving forces has been developed to explain this observation. Keywords: Blech structure, electromigration, temperature gradient, thin films
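For orientation, the classical (isothermal) Blech criterion can be sketched numerically: a strip is expected to be immortal when the product j·L stays below a critical value set by the mechanical back-stress limit, i.e. L_c = Δσ·Ω / (Z*·e·ρ·j). In the sketch below, every material parameter is an order-of-magnitude assumption for Cu, not a value from this study; only the current density comes from the abstract.

```python
# Back-of-envelope Blech critical length: L_c = dsigma * omega / (Z* e rho j).
e      = 1.602e-19   # elementary charge, C
Z_eff  = 5.0         # effective charge number for Cu (assumed)
rho    = 3.0e-8      # Cu resistivity at elevated temperature, ohm*m (assumed)
omega  = 1.18e-29    # atomic volume of Cu, m^3
dsigma = 4.0e8       # maximum sustainable back stress, Pa (assumed)
j      = 4.0e10      # applied current density from the study, A/m^2

L_c = dsigma * omega / (Z_eff * e * rho * j)
print(f"critical (Blech) length ~ {L_c * 1e6:.1f} um")
# Strips shorter than L_c should resist electromigration damage in the
# classical picture; a temperature gradient can upset this expectation,
# which is the inverse effect the study reports.
```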
Procedia PDF Downloads 260213 Measuring Organizational Resiliency for Flood Response in Thailand
Authors: Sudha Arlikatti, Laura Siebeneck, Simon A. Andrew
Abstract:
The objective of this research is to measure organizational resiliency through four attributes, namely rapidity, redundancy, resourcefulness, and robustness, and to provide recommendations for resiliency building in flood-risk communities. The research was conducted in Thailand following the severe floods of 2011 triggered by Tropical Storm Nock-ten. The floods lasted over eight months, starting in June 2011, and affected 65 of the country's 76 provinces and over 12 million people. Funding from a US National Science Foundation grant was used to collect ephemeral data in rural (Ayutthaya), suburban (Pathum Thani), and urban (Bangkok) provinces of Thailand. Semi-structured face-to-face interviews were conducted in Thai with 44 contacts from public, private, and non-profit organizations, including universities, schools, automobile companies, vendors, tourist agencies, monks from temples, faith-based organizations, and government agencies. Multiple triangulations were used to analyze the data by identifying selective themes from the qualitative data, validated with quantitative data and news media reports. This helped obtain a more comprehensive view of how organizations in different geographic settings varied in their understanding of what enhanced or hindered their resilience, and consequently in their speed and capacity to respond. The findings suggest that the urban province of Bangkok scored highest in resourcefulness, rapidity of response, robustness, and ability to rebound. This is not surprising, considering that it is the country's capital and the seat of the government, economic, military, and tourism sectors. However, contrary to expectations, all 44 respondents noted that the rural province of Ayutthaya was the fastest of the three to recover. Its organizations scored high on redundancy and rapidity of response due to the strength of social networks, a flood disaster subculture arising from annual flooding, and the help provided by monks from temples and faith-based organizations. Organizations in the suburban community of Pathum Thani scored lowest on rapidity of response and resourcefulness due to limited and ambiguous warnings, lack of prior flood experience, and controversies that government flood protection works like sandbagging favored the capital city of Bangkok over them. Such a micro-level, mixed-methods examination of organizational resilience in the rural, suburban, and urban areas of a country has merit in yielding a nuanced understanding of the importance of disaster subcultures and religious norms for resilience. This can help refocus attention on the strengths of social networks and social capital for flood mitigation. Keywords: disaster subculture, flood response, organizational resilience, Thailand floods, religious beliefs and response, social capital and disasters
Procedia PDF Downloads 161212 FEM and Experimental Modal Analysis of Computer Mount
Authors: Vishwajit Ghatge, David Looper
Abstract:
Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger and heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibration frequency of the engine matches the natural frequency of the system, strong resonance is observed in structural parts and mounts. One such automated control equipment system, which uses wire rope mounts for mounting computers, was designed approximately 12 years ago; it includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced that was 10 lbm heavier, and some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, the two mounting brackets can impact each other under off-road conditions, transmitting a high shock input to the computer parts. This added failure mode requires validating the existing mount design for the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and the experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was then conducted, and the actual frequency responses were observed and recorded. Results clearly revealed that, at the resonant frequency, the brackets collided, potentially damaging computer parts. To solve this issue, spring mounts of different stiffnesses were modeled in ANSYS software, and the resonant frequency was determined for each. Increasing the stiffness of the system shifted the resonant frequency away from the frequency window in which the engine showed heavy vibration, or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and then experimentally validated. Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration
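The stiffness-selection logic can be illustrated with the single-degree-of-freedom estimate f_n = (1/2π)√(k/m): pick a mount stiffness whose natural frequency falls outside the engine's excitation band. In the sketch below, the supported mass, the candidate stiffnesses, and the engine band are all assumed placeholder values, not figures from the study.

```python
import math

def natural_frequency_hz(stiffness, mass):
    """Undamped single-DOF estimate: f_n = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

mass_kg = 18.0                    # computer plus bracket mass (assumed)
engine_band_hz = (20.0, 35.0)     # engine excitation window (assumed)

# Candidate total spring-mount stiffnesses in N/m (assumed values).
for k in (1.0e5, 4.5e5, 1.2e6):
    f_n = natural_frequency_hz(k, mass_kg)
    in_band = engine_band_hz[0] <= f_n <= engine_band_hz[1]
    verdict = "inside engine band: avoid" if in_band else "clear of engine band"
    print(f"k = {k:9.0f} N/m -> f_n = {f_n:5.1f} Hz ({verdict})")
```

Run with these numbers, the middle stiffness lands inside the excitation band and would be rejected, mirroring the iterative tuning the paper performs in ANSYS before experimental validation.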
Procedia PDF Downloads 324211 Spectral Mapping of Hydrothermal Alteration Minerals for Geothermal Exploration Using Advanced Spaceborne Thermal Emission and Reflection Radiometer Short Wave Infrared Data
Authors: Aliyu J. Abubakar, Mazlan Hashim, Amin B. Pour
Abstract:
Exploiting geothermal resources, whether for power, home heating, spas, greenhouses, industry, or tourism, requires an initial identification of suitable areas. This can be done cost-effectively using remote sensing satellite imagery, which offers synoptic coverage of large areas in near real time and can identify possible areas of hydrothermal alteration and minerals related to geothermal systems. Earth features and minerals are known to have unique diagnostic spectral reflectance characteristics that can be used to discriminate them. The focus of this paper is to investigate the applicability of mapping hydrothermal alteration related to geothermal systems (thermal springs) at Yankari Park, northeastern Nigeria, using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data for resource exploration. The ASTER Short Wave Infrared (SWIR) bands are used to highlight and discriminate alteration areas by employing sophisticated digital image processing techniques, including image transformations and spectral mapping methods. Field verifications were conducted at Yankari Park using a handheld Global Positioning System (GPS) Monterra unit to identify locations of hydrothermal alteration, and rock samples were obtained in the vicinity and surrounding areas of the 'Mawulgo' and 'Wikki' thermal springs. X-ray diffraction (XRD) results of rock samples obtained from the field validated the hydrothermal alteration through the presence of indicator minerals including dickite, kaolinite, hematite, and quartz. The study indicated the applicability of mapping geothermal anomalies for resource exploration in an unmapped, sparsely vegetated savanna environment characterized by subtle surface manifestations such as thermal springs. The results have implications for geothermal resource exploration, especially at the pre-feasibility stage, by narrowing targets for comprehensive surveys, and in unexplored savanna regions where expensive airborne surveys are unaffordable. Keywords: geothermal exploration, image enhancement, minerals, spectral mapping
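A simple example of the band-arithmetic behind such SWIR alteration mapping is a relative absorption band depth ratio, where reflectance on the shoulders of a mineral absorption feature is divided by reflectance inside it. The sketch below is a generic illustration, not the paper's method: the band combination is one common formulation for AlOH-bearing clays, the random arrays stand in for real atmospherically corrected ASTER grids, and the threshold is an assumption.

```python
import numpy as np

# b4, b5, b6 stand for ASTER SWIR band-4, band-5 and band-6 reflectance
# grids; random arrays substitute for real, corrected data here.
rng = np.random.default_rng(0)
b4, b5, b6 = (rng.uniform(0.1, 0.6, (100, 100)) for _ in range(3))

# Relative absorption band depth: clays such as kaolinite absorb near
# band 5, so (shoulder + shoulder) / absorption peaks over altered ground.
rbd = (b4 + b6) / (b5 + 1e-6)

# Assumed percentile threshold to shortlist pixels for field checking.
alteration_mask = rbd > np.percentile(rbd, 95)
print(f"{alteration_mask.sum()} candidate alteration pixels flagged")
```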
Procedia PDF Downloads 364210 Preparedness is Overrated: Community Responses to Floods in a Context of (Perceived) Low Probability
Authors: Kim Anema, Matthias Max, Chris Zevenbergen
Abstract:
For any flood risk manager, the 'safety paradox' is a familiar concept: low probability leads to a sense of safety, which leads to more investment in the area, which leads to higher potential consequences, keeping the aggregated risk (probability × consequences) at the same level. It is therefore important to mitigate potential consequences apart from probability. However, when the (perceived) probability is so low that there is no recognizable trend for society to adapt to, addressing the potential consequences will always lag on the agenda. Preparedness programs fail for lack of interest and urgency, policy makers are distracted by their day-to-day business, and there is always a more urgent issue to spend the taxpayer's money on. The leading question in this study was how to address the social consequences of flooding in a context of (perceived) low probability. Disruptions of everyday urban life, large or small, can be caused by a variety of (un)expected events, of which flooding is only one; variability like this is typically addressed with resilience, and we used the concept of community resilience as the framework for this study. Drawing on face-to-face interviews, an extensive questionnaire, and publicly available statistical data, we explored the 'whole society response' to two recent urban flood events: the Brisbane floods (Australia) in 2011 and the Dresden floods (Germany) in 2013. In Brisbane, we studied how the societal impacts of the floods were counteracted by both authorities and the public; in Dresden, we were able to validate our findings. A large part of the reactions, both public and institutional, to these two urban flood events was not fuelled by preparedness or proper planning. Instead, the more important success factors in counteracting social impacts such as demographic changes in neighborhoods and (non-)economic losses were dynamics like community action, flexibility and creativity from authorities, leadership, informal connections, and a shared narrative. These proved to be the determining factors for the quality and speed of recovery in both cities. The resilience of the community in Brisbane was good, owing to (i) the approachability of (local) authorities, (ii) a large group of 'secondary victims', and (iii) clear leadership, all three amplified by the use of social media and web 2.0 by both the communities and the authorities involved. The numerous contacts and social connections made through the web were fast, need-driven, and, in their own way, orderly. Similarly, in Dresden, large groups of 'unprepared', ad hoc organized citizens managed to work together with authorities in a way that was effective and sped up recovery. The concept of community resilience is better suited than 'social adaptation' to dealing with the potential consequences of an (im)probable flood. Community resilience is built on capacities and dynamics that are part of everyday life and that can be invested in pre-event to minimize the social impact of urban flooding. Investing in these might even have beneficial trade-offs in other policy fields. Keywords: community resilience, disaster response, social consequences, preparedness
Procedia PDF Downloads 353209 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus
Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo
Abstract:
The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to monitor consumption patterns and energy leakages; one example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in the power consumption patterns of Air Handling Units (AHUs). There is ample research on the use of GAMs for the prediction of power consumption at the office-building and nationwide levels; however, there is limited illustration of their anomaly detection capabilities, of prescriptive-analytics case studies, and of their integration with the latest developments in digital twin technology. We applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building on an education campus in Singapore, covering January 2018 to August 2019, to train prediction models that in turn yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation beyond the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data and without explicit intervention from domain experts. Domain experts nevertheless contribute through an optional feedback loop in which iterative data cleansing is performed. Once an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of the GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases. Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning
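A minimal sketch of GAM-based anomaly flagging follows, using the pyGAM library. The library choice, the synthetic data, and the feature set (hour of day and cooling load) are assumptions for illustration, not necessarily the study's implementation; the flagging rule, however, mirrors the abstract's idea of marking readings outside the forward-predicted interval.

```python
import numpy as np
from pygam import LinearGAM, s

# Synthetic stand-in for historical AHU data: X = (hour of day, cooling
# load in kW), y = AHU power in kW.
rng = np.random.default_rng(1)
hours = rng.integers(0, 24, 2000)
load = rng.uniform(50, 400, 2000)
y = 5 + 0.08 * load + 3 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 1, 2000)
X = np.column_stack([hours, load])

# One smooth term per feature; pyGAM picks spline smoothness via its defaults.
gam = LinearGAM(s(0) + s(1)).fit(X, y)

# Flag readings outside the 95% prediction interval as candidate anomalies.
intervals = gam.prediction_intervals(X, width=0.95)
anomalous = (y < intervals[:, 0]) | (y > intervals[:, 1])
print(f"{anomalous.sum()} of {len(y)} readings flagged for review")
```

In a deployment like the one described, the flagged readings would then feed the rule-based conditions that decide the facilities manager's next action, with the optional expert feedback loop cleansing confirmed bad data before retraining.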
Procedia PDF Downloads 159208 Development of Ferric Citrate Complex Draw Solute and Its Application for Liquid Product Enrichment through Forward Osmosis
Abstract:
Forward osmosis is an emerging separation technology with great potential for the concentration of liquid products such as proteins, pharmaceuticals, and natural products. In the pharmaceutical industry, one of the toughest tasks is to concentrate the product gently, since some key components may lose bioactivity when exposed to heating or pressurization. Forward osmosis (FO), which exploits inherently existing osmotic pressure instead of externally applied hydraulic pressure, is therefore attractive for pharmaceutical enrichment in an efficient and energy-saving way. Recently, coordination complexes have been explored as a new class of draw solutes in FO processes due to their bulky configuration and excellent performance in terms of high water flux and low reverse solute flux. Among these coordination complexes, the ferric citrate complex, whose many hydrophilic groups and ionic species give it good solubility and high osmotic pressure in aqueous solution, together with its low toxicity, has received much attention. However, the chemistry of ferric complexation by citrate is complicated, and disagreement prevails in the literature, especially over the structure of ferric citrate. In this study, we investigated the chemical reaction at various molar ratios of iron to citrate. It was observed that the ferric citrate complex with a 1:1 molar ratio of iron to citrate formed at the beginning of the reaction and then converted to the 1:2 ferric citrate complex (Fe-CA2) given a proper excess of citrate in the base solution. The structures of the synthesized ferric citrate complexes were systematically characterized by X-ray diffraction (XRD), UV-vis spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FT-IR), and thermogravimetric analysis (TGA). Fe-CA2 solutions exhibit osmotic pressures more than twice those of NaCl solutions at the same concentrations. A higher osmotic pressure means a higher driving force, which is preferable for the FO process. Fe-CA2 and NaCl draw solutions were prepared with the same osmotic pressure and used in an FO process for BSA protein concentration. Within 180 min, the BSA concentration was enriched from 0.20 to 0.27 g/L using the Fe-CA2 draw solution, but only from 0.20 to 0.22 g/L using the NaCl draw solution. A reverse flux of 11 g/m²h was observed for the NaCl draw solute, against only 0.1 g/m²h for the Fe-CA2 draw solute. It is safe to conclude that Fe-CA2 is a much better draw solute than NaCl and is suitable for the enrichment of liquid products. Keywords: draw solutes, ferric citrate complex, forward osmosis, protein enrichment
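The osmotic-pressure comparison above can be sketched with the van't Hoff relation π = i·c·R·T, which is a reasonable first estimate for dilute solutions. In the sketch below, the dissociation number assumed for Fe-CA2 is a rough illustrative guess, not a measured value from the study; only the qualitative conclusion (more ionic species per formula unit means higher osmotic pressure at equal molarity) is the point.

```python
# Van't Hoff estimate of osmotic pressure: pi = i * c * R * T.
R = 8.314       # gas constant, J/(mol*K)
T = 298.15      # temperature, K (assumed room temperature)

def osmotic_pressure_bar(i, molarity):
    c = molarity * 1000.0           # mol/L -> mol/m^3
    return i * c * R * T / 1e5      # Pa -> bar

for name, i in (("NaCl", 2.0), ("Fe-CA2 (assumed i)", 5.0)):
    print(f"{name}: ~{osmotic_pressure_bar(i, 0.5):.0f} bar at 0.5 M")
```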
Procedia PDF Downloads 155207 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator
Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya
Abstract:
Bhabha Atomic Research Centre, Trombay, is developing the Low Energy High Intensity Proton Accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for the accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of a Radio Frequency Quadrupole (RFQ) and a Drift Tube Linac (DTL) as the major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the DTL drift tubes was carried out to maximize the shunt impedance, which demands that the diameter of the drift tubes (DTs) be as small as possible. The width of the DT, however, is determined by the particle β and by a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively higher focusing strength of the former. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQs) rather than electromagnetic quadrupoles (EMQs). This provides another advantage: Joule heating is avoided, which would otherwise have added thermal load in this continuous-cycle accelerator. The beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5% around the nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets, and discusses the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and of a full-scale DT developed with embedded PMQs. The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of the higher-order multipoles, and presents a novel but simple method of tuning the integral Gdl. Keywords: DTL, focusing, PMQ, proton, rare earth magnets
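The ±0.5% uniformity specification translates directly into a pass/fail acceptance check on each prototype's measured integral gradient. A minimal sketch of that check follows; the five "measured" values are made-up placeholders, not the qualification data reported in the paper.

```python
# Acceptance check for the +/-0.5% integral-gradient uniformity spec.
NOMINAL_GDL_T = 2.05   # nominal integral gradient, tesla (from the abstract)
TOLERANCE = 0.005      # +/-0.5% allowed deviation

measured_gdl = [2.053, 2.047, 2.061, 2.049, 2.044]  # pre-series PMQs (assumed)

for i, gdl in enumerate(measured_gdl, start=1):
    deviation = (gdl - NOMINAL_GDL_T) / NOMINAL_GDL_T
    verdict = "PASS" if abs(deviation) <= TOLERANCE else "FAIL -> tune Gdl"
    print(f"PMQ #{i}: Gdl = {gdl:.3f} T, deviation = {deviation:+.2%} ({verdict})")
```

In this illustrative data set, one magnet falls just outside the band and would be sent for the integral-Gdl tuning step the paper describes.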
Procedia PDF Downloads 476