Search results for: paper pots
Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record customers' daily purchasing behaviors in an identification point of sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. Customer value is an indicator, based on the ID-POS database, for detecting a customer's loyalty to a store. In general, a city has many supermarkets, and nearby competitor supermarkets significantly affect the customer value of a given supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behavior using only the POS database of a single supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are introduced to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that incorporates all three methods is the best one for evaluating the influence of other nearby supermarkets on customers' purchasing from the supermarket chain, in terms of both valid partial regression coefficients and accuracy.
Keywords: Customer value, Huff's gravity model, POS, retailer.
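As a minimal illustration of the Huff-type gravity weighting described in the abstract above, the sketch below computes patronage probabilities for one customer from store attractiveness and home-to-store distances; the store list, attractiveness values and distance-decay exponent are illustrative assumptions, not the paper's data.

def huff_probabilities(distances, attractiveness, decay=2.0):
    """Huff-model patronage probabilities for one customer (dicts keyed by store id)."""
    utilities = {s: attractiveness[s] / (distances[s] ** decay) for s in distances}
    total = sum(utilities.values())
    return {s: u / total for s, u in utilities.items()}

# One customer, the focal supermarket plus two nearby competitors.
p = huff_probabilities(
    distances={"focal": 0.8, "competitor_A": 1.5, "competitor_B": 2.4},          # km
    attractiveness={"focal": 3000, "competitor_A": 4500, "competitor_B": 2000},  # floor area, m^2
)
print(p)  # probabilities sum to 1; their reciprocals can serve as inverse-attractiveness weights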
Review of Strategies for Hybrid Energy Storage Management System in Electric Vehicle Application
Authors: Kayode A. Olaniyi, Adeola A. Ogunleye, Tola M. Osifeko
Abstract:
Electric Vehicles (EVs) appear to be gaining increasing patronage as a feasible alternative to Internal Combustion Engine Vehicles (ICEVs), owing to their low emissions and high operating efficiency. EV energy storage systems are required to handle high energy and power density while constrained by limited space, operating temperature, weight and cost. The choice of strategies for energy storage evaluation, monitoring and control remains a challenging task. This paper presents a review of various energy storage technologies and recent research on battery evaluation techniques used in EV applications. It also underscores strategies for hybrid energy storage management and control schemes for improving EV stability and reliability. The study reveals that, despite the advances recorded in battery technologies, there is still no cell that possesses both the optimum power and energy densities, among other requirements, for EV applications. However, combining two or more energy storage devices into a hybrid, so that the advantageous attributes of each device can be utilized, is a promising solution. The review also reveals that State of Charge (SoC) is the most crucial quantity in battery state estimation. The conventional method of SoC measurement is, however, questioned in the literature, and adaptive algorithms that include models of all disturbances are being proposed. The review further suggests that a heuristic-based approach is commonly adopted in the development of strategies for hybrid energy storage system management. The alternative, optimization-based approach is found to be more accurate, but it is memory- and computation-intensive and as such is not recommended for most real-time applications.
Keywords: Hybrid electric vehicle, hybrid energy storage, battery state estimation, state of charge, state of health.
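The "conventional method of SoC measurement" referred to above is usually Coulomb counting; the hedged sketch below shows that baseline estimate, with the battery capacity and the synthetic current profile as illustrative assumptions.

import numpy as np

def soc_coulomb_counting(current_a, dt_s, capacity_ah, soc0=1.0):
    """Coulomb-counting SoC estimate; current is discharge-positive, result clipped to [0, 1]."""
    charge_ah = np.cumsum(current_a) * dt_s / 3600.0   # charge drawn so far, in Ah
    return np.clip(soc0 - charge_ah / capacity_ah, 0.0, 1.0)

current = np.full(3600, 20.0)                          # 1 h of constant 20 A discharge, 1 s samples
soc = soc_coulomb_counting(current, dt_s=1.0, capacity_ah=40.0)
print(round(float(soc[-1]), 3))                        # 0.5 after drawing 20 Ah from a 40 Ah pack

Adaptive estimators, for example Kalman-filter based ones, correct this open-loop integral with a voltage model, which is the direction the review points to.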
Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modeling and Solving
Authors: Yasin Tadayonrad, Alassane Ballé Ndiaye
Abstract:
Most supply chain networks have many nodes, from the suppliers' side up to the customers' side, and each node sends/receives raw materials/products to/from the other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network such that all the constraints at the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node in each time slot (e.g., per week/day/hour). Because different products/groups of products have different characteristics, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods (FMCG) industry), different planners/planning teams work separately at different nodes to determine the loading/unloading timeslots for sending/receiving the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, so that the model can be applied to larger data sets from real business cases, with more nodes and shipments.
Keywords: Supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming.
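A minimal sketch of the timeslot-assignment idea in Python-MIP, the solver interface the abstract names, is given below; the shipments, capacities, due slots and delay objective are illustrative assumptions, far simpler than the paper's full model.

from mip import Model, xsum, minimize, BINARY

shipments = {"S1": {"node": "DC1", "due": 2}, "S2": {"node": "DC1", "due": 1},
             "S3": {"node": "DC2", "due": 3}}
slots = range(1, 6)                      # e.g. working days of one week
capacity = {"DC1": 1, "DC2": 2}          # loadings each node can handle per slot

m = Model()
x = {(s, t): m.add_var(var_type=BINARY) for s in shipments for t in slots}

for s in shipments:                      # every shipment gets exactly one loading slot
    m += xsum(x[s, t] for t in slots) == 1
for t in slots:                          # respect the per-slot capacity of each node
    for node in capacity:
        m += xsum(x[s, t] for s in shipments if shipments[s]["node"] == node) <= capacity[node]

# total delay = sum of (assigned slot - due slot) over shipments, counted only when positive
m.objective = minimize(xsum(max(0, t - shipments[s]["due"]) * x[s, t]
                            for s in shipments for t in slots))
m.optimize()
print({s: t for (s, t) in x if x[s, t].x >= 0.99})   # chosen loading slot per shipment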
Improving the Software Homologation Process through Peer Review: An Experience Report on Android Development Environment
Authors: Camila Bernardon, Diana Lemos, Mario Garcia, Thiago Souto, Bruno Bonifacio
Abstract:
In the current technological market environment, ensuring the quality of new products has become a complex challenge. In this scenario, companies have been investing in solutions that aim to reduce the execution time of software testing and lead to cost efficiency. However, companies that have a complex and specialized testing environment usually face barriers related to costly testing processes, especially in distributed settings. Sidia Institute of Technology works on research and development for the Android platform for mobile devices in Latin America. As we work in a global software development (GSD) scope, we have faced barriers caused by failures detected late, which have caused delays in the homologation release process on Android projects. Thus, we adopted an Internal Review process as an alternative to reduce these failures. This paper presents the experience of a homologation team adopting an Internal Review process in order to increase performance by improving test efficiency. Using this approach, it was possible to achieve a substantial improvement in the quality, reliability and timeliness of our deliveries. Quantitative analysis identified a positive growth in homologation efficiency of 6% after adoption of the process. In addition, we performed a qualitative analysis of data collected through an online questionnaire. In particular, the results show that the failure reduction associated with adopting the review process improves quality and has a positive effect on project milestones. We hope this report can be helpful to other companies and the scientific community in improving their processes, thereby increasing their competitive advantage.
Keywords: Android, GSD, quality process improvement, mobile products.
Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality, due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit harmonic violations at a given point of common coupling (PCC). This article suggests that a single tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique minimizes voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. In accordance with IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
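As a hedged sketch of how a single tuned branch might be sized once the optimization has chosen a reactive-power target, the snippet below uses the standard relations between the var target, the tuned harmonic and the L, C, R values; the bus voltage, var target, tuned harmonic and quality factor are illustrative assumptions, not values from the paper.

import math

def single_tuned_filter(v_ll, q_var, h, f1=50.0, q_factor=40.0):
    """Return (C, L, R) of a single-tuned branch supplying q_var at the fundamental."""
    w1 = 2 * math.pi * f1
    x_eff = v_ll ** 2 / q_var                 # net fundamental reactance the branch must present
    xc1 = x_eff * h ** 2 / (h ** 2 - 1)       # capacitor reactance at the fundamental
    C = 1.0 / (w1 * xc1)
    L = 1.0 / ((h * w1) ** 2 * C)             # tunes the branch to the h-th harmonic
    R = (h * w1 * L) / q_factor               # series resistance set by the quality factor
    return C, L, R

C, L, R = single_tuned_filter(v_ll=400.0, q_var=50e3, h=5)   # 50 kvar, 5th-harmonic filter
print(f"C = {C*1e6:.0f} uF, L = {L*1e3:.3f} mH, R = {R*1e3:.1f} mohm")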
Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes
Authors: V. Churkin, M. Lopatin
Abstract:
The purpose of the paper is to estimate the US small wind turbines market potential and to forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used to model replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The model parameters are identified by nonlinear regression analysis based on the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57080 (confidence interval from 49294 to 64866 at P = 0.95) under an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) under an average turbine lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast, for which a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) under an average turbine lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) under an average turbine lifetime of 20 years. In both of these cases the explained variance is 95.3%.
Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.
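A hedged sketch of the parameter identification step, fitting the cumulative Bass adoption curve by nonlinear regression, is shown below; the cumulative sales figures are synthetic placeholders, not the AWEA statistics used in the paper.

import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative Bass adoptions: m = market potential, p = innovation, q = imitation."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

years = np.arange(1, 13)                                    # 2001..2012 -> t = 1..12
cum_sales = np.array([800, 1900, 3300, 5100, 7400, 10200,   # synthetic cumulative units sold
                      13400, 17000, 20800, 24700, 28500, 32100], dtype=float)

(m, p, q), _ = curve_fit(bass_cumulative, years, cum_sales, p0=[60000, 0.01, 0.3], maxfev=10000)
print(f"market potential m ~ {m:.0f}, p ~ {p:.4f}, q ~ {q:.4f}")

In the generalized Bass model used for the price-dependent estimates, the adoption hazard is additionally scaled by a function of the (forecast) price path, which is where the polynomial price regression enters.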
Amelioration of Cardiac Arrhythmia Classification Performance Using Artificial Neural Network, Adaptive Neuro-Fuzzy and Fuzzy Inference Systems Classifiers
Authors: Alexandre Boum, Salomon Madinatou
Abstract:
This paper aims to make a scientific contribution to biomedical diagnosis systems for cardiac arrhythmia; more precisely, to the study of the amelioration of cardiac arrhythmia classification performance using artificial neural network, adaptive neuro-fuzzy and fuzzy inference system classifiers. The purpose of this amelioration is to enable cardiologists to make reliable diagnoses through automatic cardiac arrhythmia analysis and classification based on high-confidence classifiers. In this study, six classes of the most commonly encountered arrhythmias are considered: the Right Bundle Branch Block, the Left Bundle Branch Block, the Ventricular Extrasystole, the Auricular Extrasystole, the Atrial Fibrillation and the normal cardiac beat. From parameters extracted from the electrocardiogram (ECG), we constructed a 360x360 matrix serving as the input data sample for the classifiers based on neural networks, and a 1x6 matrix for the classifier based on fuzzy logic. By varying three parameters (the quality of the neural network learning, the data size and the quality of the input parameters), the automatic classification yielded the following performances in terms of correct classification rate: 83.6% using the fuzzy logic based classifier, 99.7% using the neural network based classifier and 99.8% using the adaptive neuro-fuzzy based classifier. These results are based on signals containing at least 360 cardiac cycles. Based on the comparative analysis of the aforementioned three arrhythmia classifiers, the classifiers based on neural networks exhibit better performance.
Keywords: Adaptive neuro-fuzzy, artificial neural network, cardiac arrhythmias, fuzzy inference systems.
Accuracy of Peak Demand Estimates for Office Buildings Using eQUEST
Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett
Abstract:
The New Jersey Department of Military and Veteran’s Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, US. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency regarding electric grid disruption. These competing goals necessitate adopting the use of on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics. One building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data are available in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, simulation results are compared to metrics recommended by several organizations to validate energy use simulations. In addition to yearly and monthly totals, the simulated peak demands are compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.
Keywords: Building Energy Modeling, eQUEST, peak demand, smart meters.
Efficient Real-time Remote Data Propagation Mechanism for a Component-Based Approach to Distributed Manufacturing
Authors: V. Barot, S. McLeod, R. Harrison, A. A. West
Abstract:
Manufacturing industries face a crucial change, as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, enterprises are also required to be distributed globally, which demands the implementation of efficient communication strategies. A prototype system called the "Broadcaster" has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners through a circular-buffer-based data processing mechanism. The work highlighted in this paper covers the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution or mechanisms. This approach is presented together with a set of evaluation results. The authors' main concern is the reliability and performance of the adopted approach. Performance is evaluated in terms of the response times taken to process the data in this domain and compared with an alternative data processing implementation, the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved by the proposed implementation and highlight further research work to be carried out.
Keywords: Broadcaster, circular buffer, Component-based, distributed manufacturing, remote data propagation.
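A hedged, self-contained sketch of the circular-buffer idea behind such a broadcaster is shown below: producers append machine updates while multiple remote clients catch up at their own pace. The class and the update strings are illustrative assumptions, not the prototype's actual interfaces.

class CircularBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.head = 0                              # absolute index of the next write

    def append(self, item):
        self.data[self.head % self.capacity] = item
        self.head += 1                             # oldest items are overwritten once the buffer wraps

    def read_from(self, cursor):
        """Return (new_items, new_cursor) for a client that last read up to `cursor`."""
        start = max(cursor, self.head - self.capacity)   # skip anything already overwritten
        items = [self.data[i % self.capacity] for i in range(start, self.head)]
        return items, self.head

buf = CircularBuffer(capacity=4)
for update in ["spindle=ON", "temp=41C", "state=IDLE", "temp=43C", "state=RUN"]:
    buf.append(update)
events, cursor = buf.read_from(0)                  # a slow HMI client reads whatever is still buffered
print(events)                                      # ['temp=41C', 'state=IDLE', 'temp=43C', 'state=RUN']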
Application of Pulse Doubling in Star-Connected Autotransformer Based 12-Pulse AC-DC Converter for Power Quality Improvement
Authors: Rohollah Abdollahi, Alireza Jalilian
Abstract:
This paper presents a pulse doubling technique for a 12-pulse ac-dc converter that supplies direct torque controlled induction motor drives (DTCIMDs) in order to obtain better power quality conditions at the point of common coupling. The proposed technique increases the number of rectification pulses without significant changes to the installation and yields harmonic reduction on both the ac and dc sides. The 12-pulse rectified output voltage is accomplished via two paralleled six-pulse ac-dc converters, each consisting of a three-phase diode bridge rectifier. An autotransformer is designed to supply the rectifiers. The magnetics are designed in such a way as to make the converter suitable for retrofit applications where a six-pulse diode bridge rectifier is already utilized. Independent operation of the paralleled diode-bridge rectifiers, i.e., the dc-ripple re-injection methodology, requires a Zero Sequence Blocking Transformer (ZSBT). Finally, a tapped interphase reactor is connected at the output of the ZSBT to double the number of output voltage pulses to 24. The aforementioned structure improves the power quality criteria at the ac mains and makes them consistent with the IEEE-519 standard requirements for varying loads. Furthermore, near-unity power factor is obtained for a wide range of DTCIMD operation. A comparison is made between the 6-pulse, 12-pulse, and proposed converters from the viewpoint of power quality indices. Results show that the input current total harmonic distortion (THD) is less than 5% for the proposed topology at various loads.
Keywords: AC-DC converter, star-connected autotransformer, power quality, 24-pulse rectifier, pulse doubling, direct torque controlled induction motor drive (DTCIMD).
Comparing Field Displacement History with Numerical Results to Estimate Geotechnical Parameters: Case Study of Arash-Esfandiar-Niayesh under Passing Tunnel, 2.5 Traffic Lane Tunnel, Tehran, Iran
Authors: A. Golshani, M. Gharizade Varnusefaderani, S. Majidian
Abstract:
Underground structures are among those structures whose design procedures involve uncertainty, owing to the complexity of the surrounding soil conditions. Under-passing tunnels are among such affected structures. Despite geotechnical site investigations, many uncertainties remain in the soil properties due to unknown events. As a result, settlements predicted by numerical analysis may conflict with the values recorded in the project. This paper reports a case study of a specific under-passing tunnel constructed by the New Austrian Tunnelling Method in Iran. The tunnel has an overburden of about 11.3 m, a height of 12.2 m and a width of 14.4 m, accommodating 2.5 traffic lanes. The numerical model was developed with a 2D finite element program (PLAXIS Version 8). Comparing displacement histories at the ground surface during the entire installation of the initial lining, the estimated surface settlement was about four times the field-recorded one, which indicates that some local unknown events affected that value. The displacement ratios also differed considerably between the numerical and field data. Consequently, by running several numerical back-analyses using laboratory and field test data, the geotechnical parameters were revised to match the monitoring data. Finally, it was found that the values of soil parameters are usually conservatively underestimated, by up to 40 percent, through typical engineering judgment. The discrepancy could additionally be attributed to inappropriate constitutive models applied for the specific soil condition.
Keywords: NATM, surface displacement history, soil tests, monitoring data, numerical back-analysis, geotechnical parameters.
The Influence of the Intellectual Capital on the Firms’ Market Value: A Study of Listed Firms in the Tehran Stock Exchange (TSE)
Authors: Bita Mashayekhi, Seyed Meisam Tabatabaie Nasab
Abstract:
Intellectual capital is one of the most valuable and important parts of the intangible assets of enterprises, especially knowledge-based enterprises. Given the increasing gap between the market value and the book value of companies, intellectual capital is one of the components that can be placed in this gap. This paper uses the value added efficiency of three components, capital employed, human capital and structural capital, to measure the intellectual capital efficiency of Iranian industry groups listed in the Tehran Stock Exchange (TSE), using an eight-year data set from 2005 to 2012. In order to analyze the effect of intellectual capital on the market-to-book value ratio of the companies, the data set was divided into 10 industries, Banking, Pharmaceutical, Metals & Mineral Nonmetallic, Food, Computer, Building, Investments, Chemical, Cement and Automotive, and the panel data method was applied to estimate a pooled OLS model. The results show that the value added of capital employed has a significant positive relation with increasing market value in the Banking, Metals & Mineral Nonmetallic, Food, Computer, Chemical and Cement industries, and that the value added efficiency of structural capital has a significant positive relation with increasing market value in the Banking, Pharmaceutical and Computer industry groups. The value added results also showed a negative relation for the Banking and Pharmaceutical industry groups and a positive relation for the Computer and Automotive industry groups. Among the studied industries, the computer industry exhibits the widest gap between market value and book value with respect to its intellectual capital.
Keywords: Capital employed, human capital, intellectual capital, market-to-book value, structural capital, value added efficiency.
Travel Time Evaluation of an Innovative U-Turn Facility on Urban Arterial Roadways
Authors: Ali Pirdavani, Tom Brijs, Tom Bellemans, Geert Wets, Koen Vanhoof
Abstract:
Signalized intersections on high-volume arterials are often congested during peak hours, causing a decrease in through-movement efficiency on the arterial. Much of the vehicle delay incurred at conventional intersections is caused by high left-turn demand. Unconventional intersection designs attempt to reduce intersection delay and travel time by rerouting left turns away from the main intersection and replacing them with a right turn followed by a U-turn. The proposed new type of U-turn intersection is geometrically designed with a raised island that provides a protected U-turn movement. In this study, several scenarios based on different distances between the U-turn and the main intersection, traffic volumes on the major/minor approaches, and percentages of left-turn volume were simulated using AIMSUN, a traffic microsimulation package. Subsequently, models are proposed to compute the travel time of each movement. By correlating these equations with field data collected at implemented U-turn facilities, the reliability of the proposed models is confirmed. With these models it is possible to calculate the travel time of each movement under any geometric and traffic condition. By comparing the travel time of a conventional signalized intersection with the U-turn intersection travel time, it would be possible to decide whether or not to convert signalized intersections into this new kind of U-turn facility. However, such a comparison of travel times is not within the scope of this research; in this paper, only the travel time of the innovative U-turn facility is predicted. According to before-and-after studies of the traffic performance of implemented U-turn facilities, this new type of U-turn facility commonly produces lower travel times. Thus, the evaluation of this type of unconventional intersection should be seriously considered.
Keywords: Innovative U-turn facility, microsimulation, travel time, unconventional intersection design.
Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products) for Higher Education
Authors: J. Miranda, D. Chavarría-Barrientos, M. Ramírez-Cadena, M. E. Macías, P. Ponce, J. Noguez, R. Pérez-Rodríguez, P. K. Wright, A. Molina
Abstract:
Higher education methods need to evolve because new generations of students are learning in different ways, one of which is by adopting emergent technologies, new learning methods and the maker movement. As a result, Tecnologico de Monterrey is developing Open Innovation Laboratories as an immediate response to the educational challenges of the world. This paper presents an Open Innovation Laboratory for Rapid Realization of Sensing, Smart and Sustainable Products (S3 Products). The Open Innovation Laboratory is composed of a set of specific resources that students and teachers use to provide solutions to current problems in priority sectors through the development of a new generation of products. This new generation of products embodies the concepts Sensing, Smart, and Sustainable. The Open Innovation Laboratory has been implemented in different courses in the context of New Product Development (NPD) and Integrated Manufacturing Systems (IMS) at Tecnologico de Monterrey. The implementation consists of adapting the Open Innovation Laboratory to each course's syllabus, in combination with the implementation of specific methodologies for product development, learning methods (active learning and blended learning using Massive Open Online Courses, MOOCs) and rapid product realization platforms. Using the proposed concepts, it is possible to demonstrate that students can propose innovative and sustainable products, and to show how the learning process can be improved using technological resources in the higher education sector. Finally, examples of innovative S3 products developed at Tecnologico de Monterrey are presented.
Keywords: Active learning, blended learning, maker movement, new product development, open innovation laboratory.
Methane versus Carbon Dioxide: Mitigation Prospects
Authors: Alexander J. Severinsky, Allen L. Sessoms
Abstract:
Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This reflects the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m2 and ~3.1 W/m2, respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, as well as for leakages of natural gas (which is primarily CH4) from industrial production through consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, assuming an effective immediate reduction of 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum expected effect, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the remaining 50% reduction of anthropogenic CO2 emissions would have to come either from emission sources or directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of the radiative forcing of CO2 of approximately 2.7 - 2.15 = 0.55 W/m2 would be needed. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources, to reach "negative emissions". This amounts to over 900 GT of CO2.
Keywords: Methane Leakages, Methane Radiative Forcing, Methane Mitigation, Methane Net Zero.
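The headline arithmetic of the abstract can be reproduced directly; in the short sketch below every number is taken from the abstract, and only the variable names are ours.

rf_ch4_max_leak_fix = 2.15        # W/m^2, methane in 2050 with maximum leakage reduction
rf_co2_net_zero_2050 = 2.70       # W/m^2, CO2 in 2050 with 50% cuts plus "net zero" removals

extra_co2_cut = rf_co2_net_zero_2050 - rf_ch4_max_leak_fix
print(f"additional CO2 forcing reduction needed: {extra_co2_cut:.2f} W/m^2")      # 0.55

co2_from_atmosphere_gt = 660      # GT of CO2 removed from the atmosphere (abstract)
co2_from_sources_gt = 270         # GT of CO2 avoided at emitting sources (abstract)
print(f"CO2 to match the methane case: ~{co2_from_atmosphere_gt + co2_from_sources_gt} GT")  # over 900 GT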
Scientific Methods in Educational Management: The Metasystems Perspective
Authors: Elena A. Railean
Abstract:
Although scientific methods have been the subject of a large number of papers, the term 'scientific methods in educational management' is still not well defined. In this paper, the metasystems perspective is adopted to define this term and to distinguish such methods from those used in the era of the scientific management and knowledge management paradigms. In our opinion, scientific methods in educational management rely on global phenomena, events, and processes and their influence on the educational organization. Currently, scientific methods in educational management are integrated with phenomena such as the globalization, cognitivisation, and openness of educational systems, and with global events like the COVID-19 pandemic. Concrete scientific methods are nested in a hierarchy of increasingly abstract models of educational management, which form the context of the global impact on education in general and learning outcomes in particular. However, scientific methods can be assigned to a specific mission, strategy, or tactic of educational management in a concrete organization, whether through global management, local development of the school organization, and/or development of the lifelong successful learner. By accepting this assignment, the scientific method becomes a personal goal of each individual within the educational organization, or an option for developing the educational organization to global standards. In our opinion, scientific methods in educational management need to confine their scope to the deep analysis of concrete tasks of the educational system (i.e., teaching, learning, assessment, development), which results in concrete strategies of organizational development. More important is seeking ways toward a dynamic equilibrium between the strategy and tactics of planetary tasks in the field of global education, which results in a need for ecological methods of learning and communication. In sum, the distinction between local and global scientific methods depends on the subjective conception of task assignment, measurement, and appraisal. Finally, we conclude that scientific methods are not holistic in themselves, but rather the strategy and tactics implemented in the global context by an effective educational/academic manager.
Keywords: Educational management, scientific management, educational leadership, scientific method in educational management.
Orthogonal Array Application and Response Surface Method Approach for Optimal Product Values: An Application for Oil Blending Process
Authors: Christopher C. Ihueze, Constance C. Obiuto, Christian E. Okafor, Charles C. Okpala
Abstract:
This paper presents a methodical approach for designing and optimizing process parameters in oil blending industries. Twenty-seven replicated experiments were conducted for the production of A-Z crown super oil (SAE20W/50), employing an L9 orthogonal array to establish the process response parameters. A power-law model was fitted to the experimental data, and the obtained model was optimized by applying the central composite design (CCD) of response surface methodology (RSM). The quadratic model was found to be significant for the production of A-Z crown super oil. In the course of analyzing the batch productions of A-Z crown super oil, the study identified and specified four new lubricant formulations that conform to the ISO oil standard: L1: KV = 21.8293 cSt, BS200 = 9430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L2: KV = 22.513 cSt, BS200 = 12430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L3: KV = 22.1671 cSt, BS200 = 9430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres; L4: KV = 22.8605 cSt, BS200 = 12430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres. The analysis of variance showed that the quadratic model is significant for kinematic viscosity, while the R-sq statistic of 0.99936 showed that the variation in kinematic viscosity is explained by its relationship with the control factors. This study therefore established appropriate blending proportions of lubricant base oil and additives, and recommends an optimal kinematic viscosity of 22.86 cSt for A-Z crown super oil (SAE20W/50).
Keywords: Additives, control factors, kinematic viscosity, lubricant, orthogonal array, process parameter.
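A hedged sketch of the quadratic response-surface step is shown below, fitting a second-order model on a small face-centred central composite design in two coded blending factors; the design points and viscosity responses are synthetic placeholders, not the batch data of the paper.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],     # factorial points (coded factor levels)
              [-1, 0], [1, 0], [0, -1], [0, 1],       # face-centred axial points
              [0, 0]], dtype=float)                   # centre point
y = np.array([21.8, 22.9, 22.2, 23.1, 22.0, 22.8, 22.3, 22.6, 22.4])   # kinematic viscosity, cSt

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())  # full quadratic model
rsm.fit(X, y)
print("R-sq on the design points:", round(rsm.score(X, y), 4))
print("predicted KV at coded point (0.5, 0.2):", round(float(rsm.predict([[0.5, 0.2]])[0]), 3))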
Assets Integrity Management in Oil and Gas Production Facilities Through Corrosion Mitigation and Inspection Strategy: A Case Study of Sarir Oilfield
Authors: Iftikhar Ahmad, Youssef Elkezza
Abstract:
The Sarir oilfield is located in North Africa and has oil and gas production facilities. The assets of the Sarir oilfield can be divided into the following five categories: (i) wellbores and wellheads; (ii) vessels such as separators, desalters, and gas processing facilities; (iii) pipelines, including all flow lines, trunk lines, and shipping lines; (iv) storage tanks; (v) other assets such as turbines and compressors. The petroleum industry recognizes the potential human, environmental and financial consequences that can result from failing to maintain the integrity of wellheads, vessels, tanks, pipelines, and other assets. The importance of effective asset integrity management increases as the industry infrastructure continues to age. The primary objective of asset integrity management (AIM) is to maintain assets in a fit-for-service condition while extending their remaining life in the most reliable, safe, and cost-effective manner. Corrosion management is one of the important aspects of successful asset integrity management; it covers corrosion mitigation, monitoring, inspection, and risk evaluation. External corrosion on pipelines, wellbores, buried assets, and tank bottoms is controlled with a combination of coatings and cathodic protection, while external corrosion on surface equipment, wellheads, and storage tanks is controlled by coatings. Periodic cleaning of the pipelines by pigging helps prevent internal corrosion; internal corrosion of pipelines is further prevented by chemical treatment and controlled operations. This paper describes the integrity management system used in the Sarir oilfield for its oil and gas production facilities, based on standard practices of corrosion mitigation and inspection.
Keywords: Assets integrity management, corrosion prevention in oilfield assets, corrosion management in oilfield, corrosion prevention and inspection activities.
Optimization of Solar Rankine Cycle by Exergy Analysis and Genetic Algorithm
Authors: R. Akbari, M. A. Ehyaei, R. Shahi Shavvon
Abstract:
Nowadays, solar energy is used for energy purposes such as the supply of thermal energy for domestic, industrial and power applications, as well as the conversion of sunlight into electricity by photovoltaic cells. In this study, a thermodynamic simulation of the solar Rankine cycle with a phase change material (paraffin) was first carried out, and then energy and exergy analyses were performed. For optimization, single- and multi-objective genetic algorithms were used to maximize thermal and exergy efficiency. The parameters discussed in this paper include the effects of the turbine inlet pressures, the turbine inlet mass flows, the surface area of the converters and the collector angle on thermal and exergy efficiency. In the organic Rankine cycle, where solar energy is used as the input energy, fluid selection is considered a necessary factor for achieving reliable and efficient operation. Therefore, silicone oil is selected as the operating fluid for the high-temperature cycle and water for the low-temperature cycle. The results showed that increasing the mass flow to turbines 1 and 2 increases thermal efficiency, while it reduces and increases the exergy efficiency of turbines 1 and 2, respectively. Increasing the inlet pressure to turbine 1 decreases the thermal and exergy efficiency, while increasing the inlet pressure to turbine 2 increases both. Also, increasing the collector angle increased both thermal and exergy efficiency. The thermal efficiency of the system was 22.3%, which improved to 33.2% and 27.2% under single-objective and multi-objective optimization, respectively. The exergy efficiency of the system was 1.33%, which improved to 1.719% and 1.529% under single-objective and multi-objective optimization, respectively. These results show that the thermal and exergy efficiency obtained with single-objective optimization are greater than with multi-objective optimization.
Keywords: Exergy analysis, Genetic algorithm, Rankine cycle, Single and Multi-objective function.
Khilafat from Khilafat-e-Rashida: The Only Form of Governance to Unite Muslim Countries
Authors: Zoaib Mirza
Abstract:
Half of the Muslim countries in the world have declared Islam the state religion in their constitutions. Yet, none of these countries have implemented authentic Islamic laws in line with the Quran (Holy Book), practices of Prophet Mohammad (P.B.U.H) called the Sunnah, and his four successors known as the Rightly Guided - Khalifa. Since their independence, these countries have adopted different government systems like Democracy, Dictatorship, Republic, Communism, and Monarchy. Instead of benefiting the people, these government systems have put these countries into political, social, and economic crises. These Islamic countries do not have equal representation and membership in worldwide political forums. Western countries lead these forums. Therefore, it is now imperative for the Muslim leaders of all these countries to collaborate, reset, and implement the original Islamic form of government, which led to the prosperity and success of people, including non-Muslims, 1400 years ago. They should unite as one nation under Khalifat, which means establishing the authority of Allah (SWT) and following the divine commandments related to the social, political, and economic systems. As they have declared Islam in their constitution, they should work together to apply the divine framework of the governance revealed by Allah (SWT) and implemented by Prophet Mohammad (P.B.U.H) and his four successors called Khalifas. This paper provides an overview of the downfall and the end of the Khalifat system by 1924, the ways in which the West caused political, social, and economic crises in the Muslim countries, and finally, a summary of the social, political, and economic systems implemented by the Prophet Mohammad (P.B.U.H) and his successors, Khalifas, called the Rightly Guided – Hazrat Abu Bakr (RA), Hazrat Omar (RA), Hazrat Usman (RA), and Hazrat Ali (RA).
Keywords: Khalifat, Khilafat-e-Rashida, The Rightly Guided, colonization, capitalism, neocolonization, government systems.
Genetic Algorithm Application in a Dynamic PCB Assembly with Carryover Sequence-Dependent Setups
Authors: M. T. Yazdani Sabouni, Rasaratnam Logendran
Abstract:
We consider a typical problem in the assembly of printed circuit boards (PCBs) in a two-machine flow shop system, to simultaneously minimize the weighted sum of weighted tardiness and weighted flow time. The investigated problem is a group scheduling problem in which PCBs are assembled in groups, and the interest is in finding the best sequence of groups, as well as of the boards within each group, to minimize the objective function value. The type of setup operation between any two board groups is characterized as a carryover sequence-dependent setup time, which exactly matches the real application of this problem. As a technical constraint, all of the boards must be kitted by kitting staff before the assembly operation starts (the kitting operation). The main idea developed in this paper is to completely eliminate the role of the kitting staff by assigning the kitting task to the machine operator during the time he is idle, which is referred to as the integration of internal (machine) and external (kitting) setup times. Performing the kitting operation, which is a preparation process for the next set of boards while the other boards are being assembled, results in boards continuously entering the system, i.e., having dynamic arrival times. Consequently, a dynamic PCB assembly system is introduced for the first time in the assembly of PCBs, which also has characteristics similar to those of just-in-time manufacturing. The problem investigated is computationally very complex, meaning that finding the optimal solutions, especially when the problem size gets larger, is impractical. Thus, a heuristic based on a Genetic Algorithm (GA) is employed. An example problem on the application of the developed GA is demonstrated, and numerical results of applying the GA to solving several instances are also provided.
Keywords: Genetic algorithm, dynamic PCB assembly, carryover sequence-dependent setup times, multi-objective.
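A heavily simplified, hedged sketch of the GA idea is given below: it evolves a sequence of board groups on a single machine with sequence-dependent setup times, minimizing weighted tardiness plus weighted flow time. The group data, setup times and GA settings are illustrative assumptions; the paper's model (two-machine flow shop, carryover setups, kitting integration) is considerably richer.

import random

GROUPS = {"G1": {"proc": 8, "due": 15, "w": 2}, "G2": {"proc": 5, "due": 9, "w": 1},
          "G3": {"proc": 7, "due": 30, "w": 1}, "G4": {"proc": 4, "due": 12, "w": 3}}
SETUP = {(a, b): (2 if a != b else 0) for a in list(GROUPS) + ["-"] for b in GROUPS}

def cost(seq):
    t, total, prev = 0, 0.0, "-"
    for g in seq:
        t += SETUP[(prev, g)] + GROUPS[g]["proc"]
        total += GROUPS[g]["w"] * (max(0, t - GROUPS[g]["due"]) + t)   # tardiness + flow time
        prev = g
    return total

def crossover(p1, p2):                        # order crossover that keeps a valid permutation
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + [g for g in p2 if g not in p1[:cut]]

def ga(pop_size=30, generations=100, mut_rate=0.2):
    pop = [random.sample(list(GROUPS), len(GROUPS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = [crossover(*random.sample(elite, 2)) for _ in range(pop_size - len(elite))]
        for c in children:                    # swap mutation
            if random.random() < mut_rate:
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))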
A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism over the Bi-LSTM outputs, to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.
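The sketch below is a hedged PyTorch rendering of the encoding backbone described above (a convolutional layer producing phrase features, a Bi-LSTM over those features, and relation vectors formed from both levels). The dimensions, vocabulary size and the simple element-wise way the two relation vectors are formed and combined are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=100, conv_ch=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)      # phrase-level features
        self.bilstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)

    def forward(self, tokens):                        # tokens: (batch, seq_len) integer ids
        x = self.embed(tokens).transpose(1, 2)        # (batch, emb, seq_len) for Conv1d
        phrases = torch.relu(self.conv(x))            # (batch, conv_ch, seq_len)
        out, _ = self.bilstm(phrases.transpose(1, 2)) # (batch, seq_len, 2*hidden)
        return phrases.mean(dim=2), out.mean(dim=1)   # pooled phrase-level and sentence-level vectors

class RTEModel(nn.Module):
    def __init__(self, classes=3):
        super().__init__()
        self.encoder = SentenceEncoder()
        self.classifier = nn.Linear(128 + 2 * 128, classes)   # phrase relation (128) + sentence relation (256)

    def forward(self, premise, hypothesis):
        p_phr, p_sent = self.encoder(premise)
        h_phr, h_sent = self.encoder(hypothesis)
        rel1 = p_phr * h_phr                          # relation vector from phrase representations
        rel2 = p_sent * h_sent                        # relation vector from Bi-LSTM sentence encodings
        return self.classifier(torch.cat([rel1, rel2], dim=1))

logits = RTEModel()(torch.randint(0, 10000, (4, 20)), torch.randint(0, 10000, (4, 20)))
print(logits.shape)                                   # torch.Size([4, 3]): entailment / contradiction / neutral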
Social Movements and the Diffusion of Tactics and Repertoires: Activists' Network in Anti-globalism Movement
Authors: Kyoko Tominaga
Abstract:
Non-Governmental Organizations (NGOs), Non-Profit Organizations (NPOs), social enterprises and other actors play an important role in governments' political decisions at the international level. In particular, the networks of such organizations and activists in civil society are quite important for influencing global politics. To solve complex social problems in the global era, diverse actors should cooperate with each other. Moreover, networks of protesters also contribute to the diffusion of tactics, information and other resources of social movements. Based on findings from the study of International Trade Fairs (ITFs), the author analyzes the network of activists in the anti-globalism movement. This research focuses on the transition of the whole network of 54 activists in the "protest event" against the 2008 G8 summit in Japan. Their network is examined over three periods: before the protest event, during the protest event, and after the event. A mixed method is used in this study: the author derives hypotheses from social network analysis and evaluates them with an analysis of interview data. The analysis gives two results. Firstly, the more protesters participate in the various events during the protest, the more they build their network; afterwards, active protesters maintain their network as well. From the interview data, we can understand that active protesters can build their network and diffuse information because they communicate with other participants and understand that diverse issues are related. This paper reaches the same conclusion as previous research: protest events activate the networks among political activists. However, some participants succeed in building their networks while others do not. "Networked" activists participate in various events within a short period of time and encourage the diffusion of information and tactics of social movements.
Keywords: Social Movement, Global Justice Movement, Tactics, Diffusion.
A Conceptual Framework and a Mathematical Equation for Managing Construction-Material Waste and Cost Overruns
Authors: Saidu Ibrahim, Winston M. W. Shakantu
Abstract:
The problem of construction material waste remains unresolved, as a significant percentage of the materials delivered to some project sites end up as waste, which may result in additional project cost. Cost overrun is a problem that affects 90% of completed projects in the world. The argument over how to eliminate it has been ongoing for the past 70 years, but there is neither substantial improvement nor a significant solution for mitigating its detrimental effects. Previous research has proposed various approaches for managing construction cost overruns and material waste; nonetheless, these studies fail to give a clear indication of a framework and an equation for managing construction material waste and cost overruns. Hence, this research aims to develop a conceptual framework and a mathematical equation for managing material waste and cost overrun in the construction industry. The paper adopts a desktop methodological approach, which involves comparing the causes of material waste with those of cost overruns from the literature to determine the possible relationship. The review revealed a relationship between material waste and cost overrun: an increase in material waste results in a corresponding increase in the amount of cost overrun at both the pre-contract and post-contract stages of a project. It was found from the equation that effective construction material waste management must ensure "good quality of planning, estimating, and design management" and "good quality of construction, procurement and site management", together with a decrease in "design complexity", which would reduce material waste and subsequently reduce the amount of cost overrun by 86.74%. The conceptual framework and the mathematical equation developed in this study are recommended to professionals in the construction industry.
Keywords: Conceptual framework, cost overrun, material waste, project stages.
Power Factor Correction Based on High Switching Frequency Resonant Power Converter
Authors: B. Sathyanandhi, P. M. Balasubramaniam
Abstract:
This paper presents a buck-boost converter topology that maintains the input power factor by using power factor stage control and regulation stage control. With an RL load, the power factor is reduced due to the presence of total harmonic distortion in the current waveform. To improve the power factor, the current waveform should follow the fundamental component of the voltage waveform, which can be achieved by using a high-frequency power converter. Based on the resonant circuit, the converter is able to perform the functions of a buck, boost, or buck-boost converter. Here, the buck-boost converter is used because it has more advantages than the boost converter. The switching action of the power converter is driven by the external zero-comparator PFC stage control. The power converter contains a resonant circuit that is used to control the output voltage gain of the converter. The power converter is operated at a very high switching frequency, in the range of 400 kHz, in order to overcome the switching losses of the power converter. Due to the high switching frequency, the power factor improves, and therefore the total harmonic distortion present in the current waveform is also reduced. These results were generated through simulation using MATLAB/SIMULINK software. As with the buck and boost converters, the operation of the buck-boost converter is best understood in terms of the inductor's "reluctance" to allow rapid changes in current, which also reduces the total harmonic distortion (THD) in the input current waveform and can thereby improve the input power factor, depending on the type of load used.
Keywords: Buck-boost converter, high switching frequency, power factor correction, power factor correction stage, regulation stage, total harmonic distortion (THD).
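Because the argument above rests on the link between current THD and power factor, a hedged sketch of how both quantities can be computed from a sampled current waveform is given below; the synthetic waveform (fundamental plus 5th and 7th harmonics) stands in for a measured RL-load current.

import numpy as np

f1, fs, cycles = 50.0, 20000.0, 10
t = np.arange(int(fs * cycles / f1)) / fs
i = (10.0 * np.sin(2 * np.pi * f1 * t)             # fundamental
     + 2.0 * np.sin(2 * np.pi * 5 * f1 * t)        # 5th harmonic
     + 1.0 * np.sin(2 * np.pi * 7 * f1 * t))       # 7th harmonic

spectrum = np.abs(np.fft.rfft(i)) / len(i) * 2      # single-sided amplitude spectrum
k1 = int(round(f1 * len(i) / fs))                   # FFT bin of the fundamental
fundamental = spectrum[k1]
harmonics = np.sqrt(np.sum(spectrum[2 * k1 :: k1] ** 2))   # 2nd, 3rd, ... harmonic bins
thd = harmonics / fundamental
print(f"ITHD = {100 * thd:.1f} %")                           # ~22.4 % for this waveform
print(f"distortion power factor = {1 / np.sqrt(1 + thd**2):.3f}")   # assumes unity displacement factor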
Non-Destructive Testing of Carbon Fiber Reinforced Plastic by Infrared Thermography Methods
Authors: W. Swiderski
Abstract:
Composite materials are one answer to the growing demand for materials with better structural and service properties. Composite materials also permit the deliberate shaping of desirable properties beyond the extent attainable with metals, ceramics or polymers. In recent years, composite materials have been used widely in aerospace, energy, transportation, medicine, etc. Fiber-reinforced composites, including carbon fiber, glass fiber and aramid fiber composites, have become major structural materials. The typical defect arising during manufacture and operation is delamination damage of layered composites. When delamination damage spreads, it may lead to fracture of the composite. One of the many methods used in the non-destructive testing of composites is active infrared thermography. In active thermography, it is necessary to deliver energy to the examined sample in order to obtain significant temperature differences indicating the presence of subsurface anomalies. To detect possible defects in composite materials, different methods of thermal stimulation can be applied to the tested material; these include heating lamps, lasers, eddy currents, microwaves and ultrasound. The use of a suitable source of thermal stimulation on the test material can have a decisive influence on whether defects are detected or missed. Samples of multilayer carbon composites were prepared with deliberately introduced defects for comparative purposes: very thin defects of different sizes and shapes, made of Teflon or copper with a thickness of 0.1 mm, were inserted. Non-destructive testing was carried out using the following sources of thermal stimulation: heating lamp, flash lamp, ultrasound and eddy currents. The results are reported in the paper.
Keywords: Non-destructive testing, IR thermography, composite material, thermal stimulation.
The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it poses a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we conduct subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered both a transmission error and a hardware error, which can result in the loss of video frames on the reception side of a transmission system. In our subjective tests, we performed tests on videos that contain a single freezing event as well as on videos that contain multiple freezing events. We recorded our subjective test results for all the videos in order to provide a comparison of the available No Reference (NR) objective algorithms. Finally, we show the performance of the no-reference algorithms used for the objective evaluation of the videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its relation to human perception, and the subjective evaluation results can serve the purpose of validating objective algorithms.
Keywords: Objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA).
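A hedged sketch of the kind of no-reference temporal check involved is shown below: frames are flagged as frozen when consecutive frames are nearly identical for longer than a short run. The synthetic "video", thresholds and run length are illustrative assumptions, not any of the evaluated NR algorithms.

import numpy as np

def detect_freezes(frames, diff_thresh=0.5, min_len=3):
    """frames: list of 2-D grayscale arrays; returns (start, end) index pairs of frozen runs."""
    freezes, run_start, prev = [], None, None
    for idx, frame in enumerate(frames):
        frozen = prev is not None and np.mean(np.abs(frame.astype(float) - prev)) < diff_thresh
        if frozen and run_start is None:
            run_start = idx - 1                      # the run starts at the first repeated frame
        if not frozen and run_start is not None:
            if idx - run_start >= min_len:
                freezes.append((run_start, idx - 1))
            run_start = None
        prev = frame
    if run_start is not None and len(frames) - run_start >= min_len:
        freezes.append((run_start, len(frames) - 1))
    return freezes

rng = np.random.default_rng(0)
video = [rng.integers(0, 255, (72, 128)) for _ in range(20)]
video[8:13] = [video[8]] * 5                         # inject a single freezing event of 5 frames
print(detect_freezes(video))                         # [(8, 12)]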
565 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called "MAGNI Dynamics", that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user's performance to different stiffness factors. The vision module uses the Kinect's skeletal tracking to monitor the user's effort in an unobtrusive and safe way by estimating the torque that acts on the user's arm. The system's torque estimates are validated against electromyographic data captured from primitive arm motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional gain, which corresponds to different stiffness factors of the haptic path, can potentially help the user improve his/her motor skills. Finally, potential areas for future research are discussed that address how a rehabilitation robotic framework may incorporate multi-sensing data to improve the user's recovery process.
Keywords: Human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence.
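As an illustrative sketch only (not the authors' controller), the snippet below shows an impedance-style force field whose stiffness factor is set from a performance measure, standing in for the fuzzy inference step described above; the gain ranges, path definition, and linear performance-to-stiffness mapping are all assumptions for illustration.

```python
import numpy as np

def force_field(position, path_point, stiffness, damping, velocity):
    """Impedance-style force field pulling the hand toward a reference path point.

    position, velocity : current end-effector position / velocity (3-vectors, m and m/s).
    path_point         : nearest point on the reference (haptic) path, 3-vector in m.
    stiffness          : proportional gain in N/m (larger = more supportive/resistive).
    damping            : derivative gain in N*s/m to keep the interaction stable.
    Returns the commanded force as a 3-vector in newtons.
    """
    return stiffness * (path_point - position) - damping * velocity

def stiffness_from_performance(tracking_error, k_min=20.0, k_max=200.0, e_max=0.10):
    """Map a performance measure (mean tracking error in m) to a stiffness factor.

    A crude linear stand-in for a fuzzy inference system: poor tracking (large
    error) yields high supportive stiffness, good tracking yields low stiffness
    so the user works harder.
    """
    error = np.clip(tracking_error / e_max, 0.0, 1.0)
    return k_min + error * (k_max - k_min)

if __name__ == "__main__":
    pos = np.array([0.30, 0.05, 0.20])
    vel = np.array([0.01, 0.00, -0.02])
    target = np.array([0.32, 0.00, 0.22])
    k = stiffness_from_performance(tracking_error=0.04)
    f = force_field(pos, target, stiffness=k, damping=5.0, velocity=vel)
    print("stiffness [N/m]:", round(k, 1), " force [N]:", np.round(f, 2))
```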
564 Geochemistry of Cenozoic Basaltic Rocks around Liuhe National Geopark, Jiangsu Province, Eastern China: Petrogenesis and Mantle Source
Authors: Yung-Tan Lee, Ren-Yi Huang, Ju-Chin Chen, Jyh-Yi Shih, Meng-Lung Lin, Hsiao-Ling Yu, Yen-Tsui Hu, Chih-Cheng Chen
Abstract:
Cenozoic basalts found in Jiangsu Province of eastern China include tholeiites and alkali basalts. The present paper analyzes the major elements, trace elements, and rare earth elements of these Cenozoic basalts and, combined with the Sr-Nd isotopic compositions reported by Chen et al. (1990) [1], discusses the petrogenesis of the basalts and the geochemical characteristics of their mantle source. Based on the major and trace elements and the fractional crystallization model established by Brooks and Nielsen (1982) [2], we suggest that the basaltic magma experienced olivine + clinopyroxene fractionation during its evolution. The chemical compositions of the basaltic rocks from Jiangsu Province indicate that these basalts may belong to the same magmatic system. Spidergrams reveal that the Cenozoic basalts from Jiangsu Province have geochemical characteristics similar to those of ocean island basalts (OIB). The slight positive Nb and Ti anomalies found in the basaltic rocks of this study suggest the presence of Ti-bearing minerals in the mantle source; these Ti-bearing minerals contributed to the basaltic magma during partial melting, indicating that a metasomatic event might have occurred before the partial melting. Based on the Sr vs. Nd isotopic ratio plots, we suggest that the Jiangsu basalts may be derived from partial melting of a mantle source representing mixing of two end-members, DMM and EM-I. Some Jiangsu basaltic magma may be derived from partial melting of EM-I heated by the upwelling asthenospheric mantle or by asthenospheric diapirism.
Keywords: Geochemistry, Jiangsu Province, Cenozoic basalts, fractional crystallization.
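The two-end-member interpretation rests on standard binary mixing relations for isotope ratios. The sketch below computes a concentration-weighted 87Sr/86Sr mixing curve between a depleted-mantle-like (DMM) and an enriched-mantle-like (EM-I) component; the end-member concentrations and ratios are placeholder values chosen only to illustrate the calculation, not data from this study.

```python
import numpy as np

def binary_mixing_ratio(f, c_a, r_a, c_b, r_b):
    """Isotope ratio of a mixture of components A and B.

    f        : mass fraction of component A in the mixture (0..1).
    c_a, c_b : element concentrations (e.g. Sr in ppm) of A and B.
    r_a, r_b : isotope ratios (e.g. 87Sr/86Sr) of A and B.
    The mixture ratio is weighted by each component's contribution of the
    element, not by mass fraction alone.
    """
    num = f * c_a * r_a + (1.0 - f) * c_b * r_b
    den = f * c_a + (1.0 - f) * c_b
    return num / den

if __name__ == "__main__":
    # Placeholder end-member values, chosen only to illustrate the calculation.
    dmm = {"c_sr": 10.0, "sr87_86": 0.7025}   # depleted-mantle-like component
    em1 = {"c_sr": 30.0, "sr87_86": 0.7055}   # enriched-mantle-like component
    for f_dmm in np.linspace(0.0, 1.0, 6):
        r = binary_mixing_ratio(f_dmm, dmm["c_sr"], dmm["sr87_86"],
                                em1["c_sr"], em1["sr87_86"])
        print(f"f(DMM) = {f_dmm:.1f}  ->  87Sr/86Sr = {r:.5f}")
```

Plotting such curves for Sr and Nd together yields the hyperbolic mixing trends that measured Sr-Nd data are typically compared against.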
563 Assessing the Suitability of South African Waste Foundry Sand as an Additive in Clay Masonry Products
Authors: Nthabiseng Portia Mahumapelo, Andre van Niekerk, Ndabenhle Sosibo, Nirdesh Singh
Abstract:
The foundry industry generates large quantities of solid waste in the form of waste foundry sand. The ever-increasing quantities of this type of industrial waste put pressure on landfill space, and its proper management has become a global concern. The South African foundry industry is no different when it comes to generating this solid waste. Utilizing the foundry waste sand in other applications has become an attractive avenue for dealing with this waste stream. In the present paper, the suitability of foundry waste sand as an additive in clay masonry products was evaluated. Purchased clay was added to the foundry waste sand sample in a 50/50 ratio; the mixture was named the FC sample. The FC sample was mixed with water in a pan mixer until the mixture was consistent and suitable for extrusion, then extruded and cut into briquettes. Water absorption, shrinkage, and modulus of rupture tests were conducted on the resultant briquettes. The foundry waste sand and FC samples were characterized mineralogically using X-ray diffraction, and their major and trace elements were determined using inductively coupled plasma optical emission spectroscopy. Adding purchased clay to the foundry waste sand positively influenced the workability of the test sample. Another positive characteristic was the low linear shrinkage, which indicated that products manufactured from the FC sample would not be susceptible to cracking. The water absorption values were acceptable, and the unfired and fired strength values of the briquette samples were acceptable. In conclusion, the tests showed that foundry waste sand can be used as an additive in masonry clay bricks, provided it is blended with good-quality clay.
Keywords: Foundry waste sand, masonry clay bricks, modulus of rupture, shrinkage.
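For readers unfamiliar with the quoted tests, the sketch below shows how modulus of rupture (three-point bend), linear shrinkage, and water absorption are conventionally computed; the specimen dimensions, loads, and masses in the example are hypothetical and are not results from this study.

```python
def modulus_of_rupture(peak_load_n, span_mm, width_mm, depth_mm):
    """Three-point-bend modulus of rupture, MOR = 3FL / (2bd^2), in MPa."""
    return 3.0 * peak_load_n * span_mm / (2.0 * width_mm * depth_mm ** 2)

def linear_shrinkage_pct(wet_length_mm, fired_length_mm):
    """Linear shrinkage from the as-extruded (wet) length to the fired length, in percent."""
    return 100.0 * (wet_length_mm - fired_length_mm) / wet_length_mm

def water_absorption_pct(dry_mass_g, soaked_mass_g):
    """Water absorption of a fired briquette after soaking, in percent of dry mass."""
    return 100.0 * (soaked_mass_g - dry_mass_g) / dry_mass_g

if __name__ == "__main__":
    # Hypothetical briquette measurements, for illustration only.
    print("MOR [MPa]:", round(modulus_of_rupture(peak_load_n=850.0, span_mm=100.0,
                                                 width_mm=30.0, depth_mm=20.0), 2))
    print("Shrinkage [%]:", round(linear_shrinkage_pct(110.0, 104.5), 2))
    print("Absorption [%]:", round(water_absorption_pct(305.0, 352.0), 2))
```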