Search results for: initial cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8776

8146 Seismic Fragility of Weir Structure Considering Aging Degradation of Concrete Material

Authors: HoYoung Son, DongHoon Shin, WooYoung Jung

Abstract:

This study presents a seismic fragility framework for a concrete weir structure subjected to strong seismic ground motions, with particular attention to the aging condition of the concrete. To understand the influence of concrete aging on the weir, the analytical seismic fragility of the structure was derived for the pre- and post-deterioration states of the concrete using probabilistic risk assessment. Deterioration was represented by the assumed condition of the concrete after five years of service, for which the elastic modulus was simply reduced by about one-tenth compared with the initial condition of the weir. A 2D nonlinear finite element analysis accounting for the concrete deterioration was performed on the ABAQUS platform, a commercial structural analysis program. The seismic fragility analysis showed that this simplified concrete degradation increased the probability of failure at Limit State 3 by almost 45% compared with the initial construction stage.
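
Seismic fragility results of this kind are commonly summarized as lognormal fragility curves, P(failure | intensity) = Φ(ln(IM/θ)/β). A minimal sketch of evaluating and comparing such curves for the intact and aged states is given below; the median capacities and dispersion are illustrative assumptions, not the study's fitted parameters.

```python
from math import log
from scipy.stats import norm

def fragility(im, median, beta):
    """Lognormal fragility curve: P(failure | intensity measure im)."""
    return norm.cdf(log(im / median) / beta)

# Illustrative parameters only (NOT the study's fitted values):
# aging is represented as a lower median capacity at Limit State 3.
median_initial, median_aged, beta = 0.60, 0.45, 0.5   # PGA in g, lognormal dispersion

pga = 0.4
p_initial = fragility(pga, median_initial, beta)
p_aged = fragility(pga, median_aged, beta)
print(f"P_f initial: {p_initial:.3f}, aged: {p_aged:.3f}, "
      f"relative increase: {(p_aged / p_initial - 1) * 100:.0f}%")
```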

Keywords: weir, FEM, concrete, fragility, aging

Procedia PDF Downloads 478
8145 Policy Recommendations for Reducing CO2 Emissions in Kenya's Electricity Generation, 2015-2030

Authors: Paul Kipchumba

Abstract:

Kenya is an East African country lying on the Equator. It had a population of 46 million in 2015 with an annual growth rate of 2.7%, implying a population of at least 65 million by 2030. Kenya’s GDP in 2015 was about 63 billion USD, with a per capita GDP of about 1400 USD. The rural population is 74%, whereas the urban population is 26%. Kenya grapples not only with access to energy but also with energy security. There is a direct correlation between economic growth, population growth, and energy consumption. Kenya’s electricity generation is at least 74.5% renewable, with hydropower and geothermal forming the bulk of it; its overall energy mix is 68% wood fuel, 22% petroleum, 9% electricity, and 1% coal and other sources. Wood fuel is used by the majority of the rural and poor urban population, while electricity is mostly used for lighting. As of March 2015, Kenya had an installed electricity capacity of 2295 MW, giving a per capita installed capacity of about 0.0499 kW. The overall retail cost of electricity in 2015 was 0.009915 USD/kWh (KES 19.85/kWh) for installed capacity over 10 MW. The actual demand for electricity in 2015 was 3400 MW, and the projected demand in 2030 is 18000 MW. Kenya is working on Vision 2030, which aims to make it a prosperous middle-income economy and targets 23 GW of generated electricity. However, both cost and non-cost factors affect the generation and consumption of electricity in Kenya. Kenya currently gives less weight to CO2 emissions than to economic growth; this disregard of international law on CO2 emissions and climate change is likely to be paid for later through the future costs of carbon emissions and the penalties imposed on local generating companies. The study methodology was a simulated application of a carbon tax on all carbon-emitting sources of electricity generation. The simulation indicates that a carbon tax of only USD 30/tCO2 on all emitting sources would make solar the sole source of electricity generation in Kenya. The country has the best evenly distributed global horizontal irradiation. The solar potential, after accounting for technology efficiencies of 14-16% for solar PV and 15-22% for solar thermal, is 143.94 GW. Therefore, the paper recommends the adoption of solar power for generating all electricity in Kenya in order to attain zero-carbon electricity generation in the country.
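
As a quick check on the per capita figure quoted above, the installed capacity per person follows directly from the abstract's own numbers:

```python
installed_capacity_mw = 2295      # installed capacity, March 2015 (from the abstract)
population = 46_000_000           # 2015 population (from the abstract)

per_capita_kw = installed_capacity_mw * 1_000 / population   # MW -> kW, divided per person
print(f"Per capita installed capacity: {per_capita_kw:.4f} kW")   # ~0.0499 kW
```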

Keywords: co2 emissions, cost factors, electricity generation, non-cost factors

Procedia PDF Downloads 356
8144 Energy Efficient Resource Allocation and Scheduling in Cloud Computing Platform

Authors: Shuen-Tai Wang, Ying-Chuan Chen, Yu-Ching Lin

Abstract:

There has been a renewal of interest in the relation between Green IT and cloud computing in recent years. Cloud computing has to be a highly elastic environment that provides stable services to users. The growing use of cloud computing facilities has markedly increased energy consumption, putting upward pressure on the electricity costs of computing and data centers. Each year more network devices, storage systems, and computers are purchased and put to use, but it is not just the number of computers that is driving energy consumption upward. We can foresee that the power consumption of cloud computing facilities will double, triple, or grow even more in the next decade. This paper addresses resource allocation and scheduling technologies, which are still lacking or underdeveloped, as a means of reducing energy use in a cloud computing platform. In particular, our approach relies on dynamically consolidating services onto an appropriate number of machines according to user requirements and temporarily shutting machines down once they finish, in order to conserve energy. We present initial work on an integrated resource and power management system that focuses on reducing power consumption while still meeting the minimum quality of service required by the cloud computing platform.
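
A minimal sketch of the consolidation idea described above: pack the requested services onto as few machines as possible (here with a simple first-fit heuristic) and mark the remaining machines as candidates for temporary shutdown. The capacity units and the first-fit rule are illustrative assumptions, not the authors' scheduler.

```python
def consolidate(requests, machine_capacity, num_machines):
    """First-fit packing of service demands onto machines; unused machines can be powered down."""
    loads = [0.0] * num_machines
    placement = {}
    for service, demand in requests.items():
        for m in range(num_machines):
            if loads[m] + demand <= machine_capacity:
                loads[m] += demand
                placement[service] = m
                break
        else:
            raise RuntimeError(f"no capacity left for {service}")
    idle = [m for m, load in enumerate(loads) if load == 0.0]
    return placement, idle   # idle machines are candidates for temporary shutdown

placement, idle = consolidate({"svc-a": 0.5, "svc-b": 0.3, "svc-c": 0.4},
                              machine_capacity=1.0, num_machines=4)
print(placement, "power down:", idle)   # {'svc-a': 0, 'svc-b': 0, 'svc-c': 1} power down: [2, 3]
```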

Keywords: cloud computing, energy utilization, power consumption, resource allocation

Procedia PDF Downloads 331
8143 Mass Production of Endemic Diatoms in Polk County, Florida Concomitant with Biofuel Extraction

Authors: Melba D. Horton

Abstract:

Algae are identified as an alternative source of biofuel because of their ubiquitous distribution in aquatic environments. Diatoms are unique forms of algae characterized by silicified cell walls, which have gained prominence in various technological applications. Polk County is home to a multitude of ponds and lakes but has not been explored for the presence of diatoms. Considering the condition of the waters brought about by the predominant phosphate mining activities in the area, this research was conducted to determine whether endemic diatoms are present and to explore their potential for low-cost mass production. Using custom-built photobioreactors, water samples from various lakes provided by Polk County Parks and Recreation and from nearby ponds were used as the source of diatoms, together with other algae obtained during collection. The initial culture cycles were successful, but an overgrowth of other algae later crashed the diatom population. Laboratory experiments were conducted to tease out the factors possibly contributing to the die-off. In general, the total biomass declined after two culture cycles, and the causative factors need further investigation. The lipid yield was minimal; however, the high frustule production after the die-off adds value to the overall benefit of the harvest.

Keywords: diatoms, algae, biofuel, lipid, photobioreactor, frustule

Procedia PDF Downloads 185
8142 Chemical Reaction Algorithm for Expectation Maximization Clustering

Authors: Li Ni, Pen ManMan, Li KenLi

Abstract:

Clustering has been an intensive research area for some years because of its multifaceted applications in fields such as biology, information retrieval, medicine, and business. Expectation maximization (EM) is an algorithmic framework used in clustering methods and is often counted among the top ten algorithms of machine learning. Traditionally, optimization of an objective function has been the standard approach in EM, and research has therefore investigated the utility of evolutionary computing and related techniques in this regard. Chemical Reaction Optimization (CRO) is a recently established metaheuristic, and the properties embedded in CRO are used here to solve optimization problems. This paper presents an algorithm framework (EM-CRO) with CRO operators modified for EM clustering problems. The hybrid algorithm mainly addresses the sensitivity of objective-function-based clustering algorithms to their initial values. Our experiments take the classic EM-style algorithms k-means and fuzzy k-means as examples and use the CRO algorithm to optimize their initial values, yielding the K-means-CRO and FKM-CRO algorithms. The experimental results show improved efficiency in solving objective function optimization clustering problems.
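
The core idea — EM-style clustering is sensitive to its initial values, so a metaheuristic is used to choose better starting centers before the main algorithm runs — can be sketched as follows. The random-restart search below merely stands in for the CRO operators, which are not reproduced here; it only illustrates "optimize the initialization, then run k-means".

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ([0, 0], [5, 5], [0, 5])])

def run_seeded_kmeans(centers):
    """One k-means run seeded with candidate centers; inertia is the objective to minimize."""
    km = KMeans(n_clusters=3, init=centers, n_init=1, max_iter=50).fit(X)
    return km.inertia_, km

# Stand-in for the metaheuristic search over initial centers (CRO in the paper):
best_inertia, best_model = np.inf, None
for _ in range(20):
    candidate = X[rng.choice(len(X), size=3, replace=False)]
    inertia, model = run_seeded_kmeans(candidate)
    if inertia < best_inertia:
        best_inertia, best_model = inertia, model

print("best objective value (inertia):", round(best_inertia, 2))
```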

Keywords: chemical reaction optimization, expectation maximization, initial value, objective function clustering

Procedia PDF Downloads 703
8141 Hard Carbon Derived From Dextrose as High-Performance Anode Material for Sodium-Ion Batteries

Authors: Rupan Das Chakraborty, Surendra K. Martha

Abstract:

Hard carbons (HCs) are extensively used as anode materials for sodium-ion batteries due to their availability, low cost, and ease of synthesis. They can store Na ions between stacked sp2 carbon layers and within micropores. In this work, hard carbons are synthesized from dextrose solutions of different concentrations (0.5 M to 5 M) by hydrothermal synthesis followed by high-temperature calcination at 1100 °C in an inert atmosphere. Dextrose was chosen as the precursor material because it is an eco-friendly and renewable source. Among the hard carbons derived from the different dextrose concentrations, the one obtained from the 3 M solution delivers the best electrochemical performance. This material (dextrose-derived hard carbon, 3 M) provides an initial reversible capacity of 257 mAh g-1 with a capacity retention of 83% at the end of 100 cycles at 30 mA g-1. The carbons obtained from the different dextrose concentrations show very similar cyclic voltammetry and charge-discharge behavior. At a scan rate of 0.05 mV s-1, the cyclic voltammetry curves indicate that solvent reduction and solid electrolyte interphase (SEI) formation start at E < 1.2 V (vs. Na/Na+). Overall, the 3 M dextrose-derived electrode appears to be a promising anode material for sodium-ion batteries (SIBs).

Keywords: dextrose derived hard carbon, anode, sodium-ion battery, electrochemical performance

Procedia PDF Downloads 105
8140 Institutional Design for Managing Irrigation Problems: A Case Study of Farmers'- and Agency-Managed Irrigation Systems of Nepal

Authors: Tirtha Raj Dhakal, Brian Davidson, Bob Farquharson

Abstract:

Institutional design is an important aspect of efficient water resource management. In Nepal, the water supply in both farmers’- and agency-managed irrigation systems has become sub-standard because of the weak institutional framework. This study characterizes both forms of scheme and links their existing institutions and governance to their performance with respect to cost recovery, maintenance, and water distribution throughout the schemes. Two types of surveys were conducted. A management survey covered ten farmers’-managed and five agency-managed schemes in the Chitwan valley and its periphery. In addition, a farm survey was conducted as a case study, comprising 25 farmers from each of the head, middle, and tail regions of two schemes in the Chitwan valley: the Narayani Lift Irrigation Project (agency-managed) and the Khageri Irrigation System (farmers’-managed). The results showed that cost recovery in the agency-managed schemes in 2015 was less than two percent, whereas the service fee collection rate in the farmers’-managed schemes was nearly two-thirds; this shortfall triggered poor maintenance of the schemes and unequal distribution of water throughout them. Moreover, the institutions in practice are unable to create incentives for farmers to use water economically or to be willing to pay for its use. This compels the need for a refined institutional framework, which is suggested in this paper with the aim of improving cost recovery and water distribution throughout the irrigation schemes.

Keywords: cost recovery, governance, institution, schemes' performance

Procedia PDF Downloads 253
8139 Influence of Transportation Mode to the Deterioration Rate: Case Study of Food Transport by Ship

Authors: Danijela Tuljak-Suban, Valter Suban

Abstract:

Food, as a perishable good, represents a specific and sensitive part of supply chain theory, since changes in its physical or chemical characteristics considerably influence the approach to stock management. The most delicate phase of this process is transportation, where it becomes difficult to ensure the stable conditions that limit deterioration, since the value of the deterioration rate can easily be influenced by the transportation mode. Defining the variables as fuzzy quantities allows these variations to be taken into account, and an appropriate choice of defuzzification method allows the results to be adapted, as far as possible, to real conditions. This article applies these methods to the relationship between the deterioration rate of perishable goods and transport by ship, with the aims of (a) minimizing the total cost function, defined as the sum of the ordering cost, holding cost, disposal cost, and transportation costs, and (b) improving supply chain sustainability by reducing the environmental impact and waste disposal costs.
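
A minimal sketch of the total cost function named above, with the deterioration rate entering through the disposal term so that a transport mode with a higher deterioration rate raises the total cost; the functional forms and numbers are illustrative assumptions, not the article's fuzzy model.

```python
def total_cost(demand, order_qty, unit_cost, order_cost, holding_rate,
               deterioration_rate, disposal_cost, transport_cost_per_order):
    """Total cost = ordering + holding + disposal (driven by deterioration) + transport."""
    orders_per_period = demand / order_qty
    ordering = order_cost * orders_per_period
    holding = holding_rate * unit_cost * order_qty / 2             # cost of the average inventory
    disposal = disposal_cost * deterioration_rate * order_qty / 2  # spoiled share of average inventory
    transport = transport_cost_per_order * orders_per_period
    return ordering + holding + disposal + transport

# Compare two transport modes: slower or warmer shipping is assumed to raise the deterioration rate.
for mode, rate, freight in [("refrigerated ship", 0.02, 300.0), ("conventional ship", 0.08, 180.0)]:
    cost = total_cost(demand=10_000, order_qty=1_000, unit_cost=2.0, order_cost=150.0,
                      holding_rate=0.2, deterioration_rate=rate, disposal_cost=2.5,
                      transport_cost_per_order=freight)
    print(f"{mode}: total cost per period = {cost:.0f}")
```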

Keywords: perishable goods, fuzzy reasoning, transport by ship, supply chain sustainability

Procedia PDF Downloads 541
8138 Development, Testing, and Application of a Low-Cost Technology Sulphur Dioxide Monitor as a Tool for use in a Volcanic Emissions Monitoring Network

Authors: Viveka Jackson, Erouscilla Joseph, Denise Beckles, Thomas Christopher

Abstract:

Sulphur dioxide (SO2) is a non-flammable, non-explosive, colourless gas with a pungent, irritating odour, and is one of the main gases emitted by volcanoes. It has been recorded in concentrations hazardous to humans (0.25–0.5 ppm, ~650–1300 μg/m³) downwind of many volcanoes, and hence warrants constant air-quality monitoring around these sites. It has been linked to an increase in chronic respiratory disease attributed to long-term exposure and to alterations in lung and other physiological functions attributed to short-term exposure. Sulphur Springs in Saint Lucia is a highly active geothermal area located within the Soufrière Volcanic Centre and is a park widely visited by tourists and locals. It is also a source of continuous volcanic emissions from its many fumaroles and bubbling pools, prompting concern among residents and visitors to the park about the effects of exposure to these gases. In this study, we introduce a novel SO2 measurement system for the monitoring and quantification of ambient airborne volcanic SO2 using low-cost technology. This work involves the extensive production of low-cost SO2 monitors/samplers, as well as field evaluation in tandem with standard commercial samplers (SO2 diffusion tubes). It also incorporates community involvement in the volcanic monitoring process through non-professional users of the instrument. We intend to present the preliminary monitoring results obtained from the low-cost samplers, to identify the areas in the park exposed to high concentrations of ambient SO2, and to assess the feasibility of the instrument for non-professional use and application in volcanic settings.
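
The ppm-to-μg/m³ figures quoted above follow from the standard gas conversion at roughly 25 °C and 1 atm (molar volume ≈ 24.45 L/mol); a quick check:

```python
MW_SO2 = 64.07          # molar mass of SO2 in g/mol
MOLAR_VOLUME = 24.45    # L/mol for an ideal gas at ~25 degC and 1 atm

def ppm_to_ug_per_m3(ppm, molar_mass=MW_SO2):
    """Convert a gas concentration in ppm (by volume) to micrograms per cubic metre."""
    return ppm * molar_mass * 1000.0 / MOLAR_VOLUME

for c in (0.25, 0.5):
    print(f"{c} ppm SO2 = {ppm_to_ug_per_m3(c):.0f} ug/m3")   # ~655 and ~1310 ug/m3
```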

Keywords: ambient SO2, community-based monitoring, risk-reduction, sulphur springs, low-cost

Procedia PDF Downloads 461
8137 Pavement Maintenance and Rehabilitation Scheduling Using Genetic Algorithm Based Multi Objective Optimization Technique

Authors: Ashwini Gowda K. S, Archana M. R, Anjaneyappa V

Abstract:

This paper presents a pavement maintenance and management system (PMMS) that uses a multi-objective genetic algorithm (MOGA) to obtain optimum pavement maintenance and rehabilitation strategies and maintenance schedules for a road network. The optimal maintenance and rehabilitation strategy maximizes the pavement condition index of the road sections in the network while minimizing the maintenance and rehabilitation cost over the planning period. In this paper, NSGA-II is applied to perform the maintenance optimization; this maintenance approach is expected to preserve and improve the existing condition of the highway network in a cost-effective way. The proposed PMMS is applied to a network in which pavements are assessed using the pavement condition index (PCI). The maximum and minimum maintenance costs for a planning period of 20 years obtained from the non-dominated solutions were found to be 5.190x10¹⁰ ₹ and 4.81x10¹⁰ ₹, respectively.
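
NSGA-II ranks candidate maintenance plans by Pareto dominance between the two objectives (maximize network PCI, minimize cost). A minimal sketch of the dominance test and of extracting the non-dominated front is given below; the candidate plans are synthetic, not the paper's network data.

```python
def dominates(a, b):
    """a dominates b when it is no worse in both objectives and strictly better in at least one.
    Each solution is (network_pci, total_cost); PCI is maximized, cost is minimized."""
    pci_a, cost_a = a
    pci_b, cost_b = b
    return (pci_a >= pci_b and cost_a <= cost_b) and (pci_a > pci_b or cost_a < cost_b)

def non_dominated_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Synthetic candidate plans: (average network PCI, 20-year cost in arbitrary units)
candidates = [(78, 5.2), (81, 5.6), (75, 4.8), (81, 6.1), (70, 4.8)]
print(non_dominated_front(candidates))   # [(78, 5.2), (81, 5.6), (75, 4.8)]
```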

Keywords: genetic algorithm, maintenance and rehabilitation, optimization technique, pavement condition index

Procedia PDF Downloads 144
8136 New Methodology for Monitoring Alcoholic Fermentation Processes Using Refractometry

Authors: Boukhiar Aissa, Iguergaziz Nadia, Halladj Fatima, Lamrani Yasmina, Benamara Salem

Abstract:

Determining the alcohol content of an alcoholic fermentation bioprocess is of great importance; in fact, it is a key indicator for monitoring the fermentation. Several methodologies (chemical, spectrophotometric, chromatographic, etc.) are used to determine this parameter. However, these techniques are time-consuming and require rigorous preparation, sometimes hazardous chemical reagents, and/or expensive equipment. In the present study, date juice is used as the substrate for alcoholic fermentation by Saccharomyces cerevisiae. A study of the possible use of refractometry as the sole means for in situ control of this process revealed a good correlation (R² = 0.98) between the initial and final °Brix: °Brix_f = 0.377 × °Brix_i. In addition, we verified the relationship between the difference between initial and final °Brix (Δ°Brix) and the alcohol content produced (A_exp): Δ°Brix / A_exp = 1.1. This allows iso-response charts to be traced that permit the alcohol and residual sugar contents to be determined with a mean relative error (MRE) of 5.35%.
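
A minimal sketch of how the two empirical relations quoted above can be combined to estimate the alcohol content from refractometer readings; the coefficients come from the abstract, while the helper function and the sample reading are illustrative.

```python
def estimate_alcohol(brix_initial, brix_final=None):
    """Estimate alcohol produced from the abstract's empirical relations:
    Brix_f ~ 0.377 * Brix_i  and  (Brix_i - Brix_f) / A_exp ~ 1.1."""
    if brix_final is None:
        brix_final = 0.377 * brix_initial     # predicted final Brix at the end of fermentation
    delta_brix = brix_initial - brix_final
    return delta_brix / 1.1                   # estimated alcohol content

print(f"Estimated alcohol content: {estimate_alcohol(brix_initial=24.0):.1f}")
```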

Keywords: refractometry, alcohol, residual sugar, fermentation, brix, date, juice

Procedia PDF Downloads 473
8135 The Lean Manufacturing Practices in an Automotive Company Using Value Stream Mapping Technique

Authors: Seher Arslankaya, Merve Si̇mge Usuk

Abstract:

Lean manufacturing, which is based on the Toyota Production System, focuses on increasing performance in various fields by eliminating waste. By eliminating waste, the lead time is reduced significantly, which gives companies an important advantage under today's competitive conditions. The starting point of lean thinking is value: a specific product with specific properties for which the customer is ready to pay and which satisfies the customer's needs within a specific time frame and at a specific price. The final customer determines the value, but the manufacturer creates it. The value stream is the whole set of activities required for each product, and these activities may or may not be essential for that value. Through value stream mapping, all employees can see the sources of waste and develop future-state cases to eliminate them. This study focused on eliminating waste in manufacturing that created cost but did not create any value. The study was carried out in the Assembly/Logistics Department at Toyota Motor Manufacturing Turkey, an automotive plant with a high product mix and variable demand. As a result of the value stream analysis, improvements were planned for the future state, and the process was improved by applying these suggestions.

Keywords: lead time, lean manufacturing, performance improvement, value stream mapping

Procedia PDF Downloads 308
8134 Clinical Outcomes of Mild Traumatic Brain Injury with Acute Traumatic Intracranial Hemorrhage on Initial Emergency Ward Neuroimaging

Authors: S. Shafiee Ardestani, A. Najafi, N. Valizadeh, E. Payani, H. Karimian

Abstract:

Objectives: The management of mild traumatic brain injury in emergency ward patients with any type of traumatic intracranial hemorrhage varies. The aim of this study is to assess the clinical outcomes of mild traumatic brain injury patients who had acute traumatic intracranial hemorrhage on initial emergency ward neuroimaging. Materials and Methods: In a retrospective cohort study from March 2011 to November 2012, we enrolled emergency ward patients with mild traumatic brain injury, Glasgow Coma Scale (GCS) scores of 14 or 15, and stable vital signs. Patients who had any type of intracranial hemorrhage on the first head CT and a repeat head CT within 24 hours were included. Patients with an initial GCS < 14, injury more than 24 hours old, pregnancy, concomitant non-minor injuries, or coagulopathy were excluded. Primary endpoints were neurosurgical procedures and/or death and, for discharged patients, return to the emergency ward within one week. Results: Among 755 patients who were referred to the emergency ward and underwent two head CTs during the first 24 hours, 302 (40%) were included. The median interval between CT scans was 6 hours (range 4 to 8 hours). Of these, 135 (45%) patients had subarachnoid hemorrhage, 124 (41%) had subdural hemorrhage, 15 (5%) had epidural hemorrhage, 28 (9%) had cerebral contusions, and 54 (18%) had intra-parenchymal hemorrhage. Six of the 302 patients died within 15 days of injury. Two hundred patients (66%) were discharged from the emergency ward, 25 (12%) of whom returned to the emergency ward within one week. Conclusion: A repeat head CT and a brief period of observation in the emergency ward allow early discharge of mild traumatic brain injury patients with traumatic ICH without adverse events.

Keywords: clinical outcomes, emergency ward, mild traumatic intracranial hemorrhage, Glasgow Coma Scale (GCS)

Procedia PDF Downloads 332
8133 Distangling Biological Noise in Cellular Images with a Focus on Explainability

Authors: Manik Sharma, Ganapathy Krishnamurthi

Abstract:

The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without, and a classification project could make researchers more efficient. One of the more surprising reasons behind the cost is how long it takes to bring new treatments to market. Despite improvements in technology and science, research and development continue to lag: finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. If successful, this work could dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring that these treatments reach patients faster. This work aims at solving a part of this problem by creating a cellular image classification model that can decipher genetic perturbations in cells (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help demystify the mechanism of action of certain perturbations and paves the way towards explainability of the deep-learning model.
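
A minimal sketch of the kind of cellular image classifier described above, written with PyTorch; the architecture, the image size, and the number of perturbation classes are illustrative assumptions, not the authors' model. Gradient-based saliency or similar attribution methods could then be applied to such a model to probe which image regions drive its decisions.

```python
import torch
import torch.nn as nn

class PerturbationClassifier(nn.Module):
    """Tiny CNN that maps a cell image to one of N genetic-perturbation classes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = PerturbationClassifier(num_classes=10)
dummy_batch = torch.randn(4, 3, 128, 128)   # four 3-channel cell images, 128x128 pixels
logits = model(dummy_batch)
print(logits.shape)                          # torch.Size([4, 10])
```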

Keywords: cellular images, genetic perturbations, deep-learning, explainability

Procedia PDF Downloads 102
8132 School Partners in Initial Teacher Education: An Including or Excluding Approach When Engaging Schools

Authors: Laila Niklasson

Abstract:

The aim of this study is to critically discuss how partner schools are engaged during initial teacher education (ITE). The background is an experiment in Sweden in which the practicum organization was reorganized in order to enhance quality during the practicum. It is a national government initiative, supported by the National Agency of Education, running from 2014 to 2019. The main features are the concentration of students in schools with a certain number of mentors, mentors who have received mentor education, teachers in relevant subject areas, and, at some schools, a mentor team with a leader. One expected outcome, for example, is that the student teachers engage in peer learning. The schools are to be supported by extra lectures from university teachers during the practicum and by additional research projects in which the schools are engaged. A case study of one university-based ITE programme was carried out to explore the consequences for the schools not selected. The result showed that of x schools in the region, x were engaged. The schools are in both urban and rural areas, mainly the latter. There is also a tendency for private schools not to be engaged. At the unit level, recruitment is perceived as harder for schools that are not engaged. In addition, they cannot market themselves as a 'selected school', which can affect parents' choice of school for their children. Also at the unit level, but with consequences for professional development, they are not selected for research projects and are thereby not fully supported during school development. The conclusion is that there has been a change from an earlier inclusive approach to the professions, in which all teachers were perceived as possible mentors, to an exclusive approach in which selected schools and selected teachers are engaged. The change can be perceived as a change in governance mentality, but also in how the professions are perceived and in how development work is pursued.

Keywords: initial teacher education, practicum schools, profession, quality development

Procedia PDF Downloads 139
8131 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms

Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri

Abstract:

Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer multiple node failures when they are exposed to harsh environments such as military zones or disaster locations, losing connectivity by becoming partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power, and a longer transmission range, which makes it necessary to use as few of them as possible. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points at which RNs must be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs while simultaneously determining their locations. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
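
A minimal sketch of how such a chromosome can be encoded and scored: each gene is either an (x, y) relay position or inactive, and the fitness penalizes disconnected segments heavily and then counts the active relays. The connectivity test, penalty weight, and random initial population are illustrative assumptions, not the paper's operators.

```python
import random

COMM_RANGE = 10.0   # assumed communication range shared by all nodes

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def connected(points):
    """True if all points form a single connected graph under the communication range."""
    if not points:
        return True
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(len(points)):
            if j not in seen and dist(points[i], points[j]) <= COMM_RANGE:
                seen.add(j)
                stack.append(j)
    return len(seen) == len(points)

def fitness(chromosome, segment_reps):
    """Lower is better: a heavy penalty if the segments stay disconnected, plus one unit per relay."""
    relays = [gene for gene in chromosome if gene is not None]
    penalty = 0 if connected(segment_reps + relays) else 1000
    return penalty + len(relays)

# Illustrative disjoint segments (one representative node each) and a random initial population.
segments = [(0.0, 0.0), (25.0, 0.0), (0.0, 25.0)]
population = [[(random.uniform(0, 30), random.uniform(0, 30)) if random.random() < 0.7 else None
               for _ in range(6)]                     # upper bound of six relays per chromosome
              for _ in range(20)]
best = min(population, key=lambda c: fitness(c, segments))
print("best fitness in the initial population:", fitness(best, segments))
```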

Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks

Procedia PDF Downloads 236
8130 Conceptual Perimeter Model for Estimating Building Envelope Quantities

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Building girth is important in building economics and is mostly used in the quantity take-off of various cost items. The literature suggests that the use of conceptual quantities can improve the accuracy of cost models, and the girth or perimeter of a building can be used to estimate such conceptual quantities. Hence, the current paper aims to model the perimeter-area function of building shapes for use at the conceptual design stage. A detailed literature review of existing building shape indexes was carried out. An empirical approach was used to study the relationship between the area and the shortest side length of a four-sided orthogonal polygon, and a mathematical approach was then used to establish the observed relationships. The empirical results obtained were in agreement with the mathematical model developed. A new equation, termed the "conceptual perimeter equation", is proposed. The equation can be used to estimate building envelope quantities such as external wall area, external finishing area, and scaffolding area before sketch or detailed drawings are prepared.
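
For the simplest four-sided orthogonal shape, a rectangle, the perimeter follows directly from the floor area and the shortest side, which is the kind of relationship the abstract models. A small sketch of that relation and one use of the resulting girth is shown below; the storey height and the external-wall-area formula are illustrative assumptions, not the paper's fitted "conceptual perimeter equation".

```python
def rectangle_perimeter(area, shortest_side):
    """Perimeter of a rectangle expressed through its area and shortest side: P = 2 * (s + A / s)."""
    return 2 * (shortest_side + area / shortest_side)

def external_wall_area(area, shortest_side, storey_height, storeys):
    """Conceptual external wall area = girth x total height (an illustrative use of the perimeter)."""
    return rectangle_perimeter(area, shortest_side) * storey_height * storeys

perimeter = rectangle_perimeter(area=600.0, shortest_side=20.0)   # 600 m2 floor plate, 20 m short side
walls = external_wall_area(600.0, 20.0, storey_height=3.5, storeys=4)
print(f"girth = {perimeter:.0f} m, conceptual wall area = {walls:.0f} m2")   # 100 m and 1400 m2
```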

Keywords: building envelope, building shape index, conceptual quantities, cost modelling, girth

Procedia PDF Downloads 336
8129 Two-Warehouse Inventory Model for Deteriorating Items with Inventory-Level-Dependent Demand under Two Dispatching Policies

Authors: Lei Zhao, Zhe Yuan, Wenyue Kuang

Abstract:

This paper studies two-warehouse inventory models for a deteriorating item whose demand is influenced by the inventory level. The problem focuses on the optimal order policy and the optimal order cycle with inventory-level-dependent demand in a two-warehouse system for retailers. It considers different deterioration rates and inventory holding costs in the owned warehouse (OW) and the rented warehouse (RW), together with transportation costs, allowed shortages, and partial backlogging. Two inventory models are formulated, a last-in-first-out (LIFO) model and a first-in-first-out (FIFO) model, corresponding to the two dispatching policies, and a comparative analysis of the two models is made. The study finds that the FIFO policy is more in line with realistic operating conditions; in particular, when the inventory holding cost of the OW is high and there is either no difference or a large difference between the deterioration rates of the OW and the RW, the FIFO policy is more applicable. The paper also considers the differing effects of warehouse and shelf inventory levels on demand, builds a retailer's inventory decision model, and studies the factors affecting the optimal order quantity, the optimal order cycle, and the average inventory cost per unit time. To minimize the average total cost, optimal dispatching policies are provided to support retailers' decisions.

Keywords: FIFO model, inventory-level-dependent, LIFO model, two-warehouse inventory

Procedia PDF Downloads 277
8128 Reservoir Properties Effect on Estimating Initial Gas in Place Using Flowing Material Balance Method

Authors: Yousef S. Kh. S. Hashem

Abstract:

Accurate estimation of the initial gas in place (IGIP) is an important factor in the decision to develop a gas field. One of the methods available in the industry to estimate the IGIP is material balance. This method requires the well to be shut in while pressure is measured as it builds up to the average reservoir pressure. Since gas demand is high and shut-in well surveys are very expensive, flowing gas material balance (FGMB) is sometimes used instead of conventional material balance. This work investigated the effect of reservoir properties (pressure, permeability, and reservoir size) on the estimation of IGIP when using FGMB. A gas reservoir simulator that accounts for friction loss, wellbore storage, and the non-Darcy effect was used to simulate 165 different possible cases (3 pressures, 5 reservoir sizes, and 11 permeabilities). Both tubing pressure and bottom-hole pressure were analyzed using FGMB. The results showed that the FGMB method is very sensitive for tight reservoirs (k < 10). They also showed which method is best to use for different reservoir properties. This study can be used as a guideline for the application of the FGMB method.
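
The material balance idea behind the method is that, for a volumetric gas reservoir, p/z declines linearly with cumulative production Gp, so extrapolating the straight line to p/z = 0 gives the IGIP; the flowing variant uses flowing pressures shifted by a roughly constant offset instead of shut-in pressures. A minimal sketch of the straight-line extrapolation on synthetic data:

```python
import numpy as np

# Synthetic survey points (NOT field data): cumulative production Gp in Bscf and p/z in psia.
gp = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
p_over_z = np.array([5000.0, 4500.0, 4000.0, 3500.0, 3000.0])

# Straight-line material balance: p/z = (p/z)_initial * (1 - Gp / G)
slope, intercept = np.polyfit(gp, p_over_z, 1)
igip = -intercept / slope          # the Gp at which p/z extrapolates to zero
print(f"Estimated IGIP = {igip:.0f} Bscf")   # 500 Bscf for this synthetic line
```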

Keywords: flowing material balance, gas reservoir, reserves, gas simulator

Procedia PDF Downloads 147
8127 DC/DC Boost Converter Applied to Photovoltaic Pumping System Application

Authors: S. Abdourraziq, M. A. Abdourraziq

Abstract:

One of the most famous and important applications of solar energy systems is water pumping, often used for irrigation or to supply water in the countryside or on private farms. However, cost and efficiency are still a concern, especially with the continual variation of solar radiation and temperature throughout the day. Improving the efficiency of the system components is therefore one of the possible ways of reducing the cost. In this paper, we present a detailed description of each element of a PV pumping system and review the different MPPT algorithms used in the literature. Our system consists of a PV panel, a boost converter, a motor-pump set, and a storage tank.
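
One of the most common MPPT algorithms reviewed in such work is perturb and observe (P&O): perturb the boost converter duty cycle, observe the change in PV power, and keep perturbing in the direction that increases power. A minimal sketch is given below; the step size, limits, and sample measurements are illustrative assumptions.

```python
def perturb_and_observe(duty, direction, prev_power, voltage, current, step=0.005):
    """One perturb-and-observe iteration for a boost-converter MPPT controller.

    duty       : current duty cycle of the boost converter
    direction  : +1 or -1, the sign of the previous perturbation
    prev_power : PV power measured before the previous perturbation
    """
    power = voltage * current
    if power < prev_power:
        direction = -direction                              # power fell, so reverse the perturbation
    duty = min(max(duty + direction * step, 0.05), 0.95)    # keep the duty cycle in a safe range
    return duty, direction, power

# Illustrative control loop (real measurements would come from voltage/current sensors):
duty, direction, prev_power = 0.50, +1, 0.0
for voltage, current in [(17.8, 5.1), (17.5, 5.3), (17.1, 5.6)]:
    duty, direction, prev_power = perturb_and_observe(duty, direction, prev_power, voltage, current)
    print(f"duty -> {duty:.3f}")
```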

Keywords: PV cell, converter, MPPT, MPP, PV pumping system

Procedia PDF Downloads 152
8126 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) to predict the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained within a lumped parameter model (LPM) of the SRM, which leads MOEAs toward local optima. In addition, MOEAs require a large number of calculations due to their population strategy. Although simulating an LPM once usually takes less time than a CFD simulation, the number of function evaluations (NFEs) in MOEAs is usually large, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model because of its assumptions, so CFD simulations or experiments are required to compare and verify the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with 3D numerical simulations, and in the design process the original LPM is replaced by the surrogate model. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction of miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model provides faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical LPM. This means that designers can save a great deal of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
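
A minimal sketch of the surrogate idea: train a small neural network on precomputed (design variables → performance) samples and query it inside the optimizer instead of the expensive model. The data here are synthetic and the network size is an illustrative assumption, not the authors' trained surrogate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic training set standing in for precomputed simulation samples:
# inputs = two design variables, output = one performance metric.
X = rng.uniform(0.0, 1.0, size=(500, 2))
y = 100.0 * X[:, 0] - 40.0 * X[:, 1] ** 2 + rng.normal(0.0, 1.0, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print(f"surrogate R^2 on held-out samples: {surrogate.score(X_test, y_test):.3f}")

# Inside the optimizer, the cheap surrogate replaces the expensive model for each evaluation:
candidate_designs = rng.uniform(0.0, 1.0, size=(5, 2))
print(surrogate.predict(candidate_designs))
```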

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 86
8125 Evaluation of Arsenic Removal in Soils Contaminated by the Phytoremediation Technique

Authors: V. Ibujes, A. Guevara, P. Barreto

Abstract:

The concentration of arsenic represents a serious threat to human health; it is a bioaccumulative toxic element and is transferred through the food chain. In Ecuador, values of 0.0423 mg/kg As have been recorded in potatoes grown on the slopes of the Tungurahua volcano. The increase in arsenic contamination in Ecuador is mainly due to mining activity, since the gold extraction process generates toxic tailings containing mercury. In the province of Azuay, mining activity brings soil concentrations to 2,500 to 6,420 mg/kg As, whereas in the province of Tungurahua arsenic concentrations of 6.9 to 198.7 mg/kg are found as a result of volcanic eruptions. Given this contamination, the present investigation addresses the remediation of soils in the provinces of Azuay and Tungurahua by the phytoremediation technique and the definition of an extraction methodology based on the analysis of arsenic in the soil-plant system. The methodology consists of selecting the two types of plants that show the best arsenic removal capacity in 60 μM As synthetic solutions, the lowest mortality, and resistance to hydroponic conditions. The arsenic concentration in each plant was obtained by taking 10 ml aliquots and analyzing them with ICP-OES (inductively coupled plasma optical emission spectrometry) equipment. Soils were contaminated with synthetic arsenic solutions using the capillarity method to reach arsenic concentrations of 13 and 15 mg/kg. The two types of plants were then evaluated for their ability to reduce the arsenic concentration in the soils over 7 weeks. The global variance for the soil types was obtained with the InfoStat program. To measure the changes in arsenic concentration in the soil-plant system, the Rhizo and Wenzel arsenic extraction methodologies were used, with subsequent analysis by ICP-OES (Optima 8000, PerkinElmer). As a result, the selected plants were bluegrass and llanten, due to their high arsenic removal percentages of 55% and 67% and low mortality rates of 9% and 8%, respectively. In conclusion, the Azuay soil with an initial concentration of 13 mg/kg As reached concentrations of 11.49 and 11.04 mg/kg As for bluegrass and llanten, respectively, and for the initial concentration of 15 mg/kg As it reached 11.79 and 11.10 mg/kg As for bluegrass and llanten after 7 weeks. For the Tungurahua soil with an initial concentration of 13 mg/kg As, the soil reached concentrations of 11.56 and 12.16 mg/kg As for bluegrass and llanten, respectively, and for the initial concentration of 15 mg/kg As it reached 11.97 and 12.27 mg/kg As for bluegrass and llanten after 7 weeks. The best arsenic extraction methodology for the soil-plant system is the Wenzel method.

Keywords: blue grass, llanten, phytoremediation, soil of Azuay, soil of Tungurahua, synthetic arsenic solution

Procedia PDF Downloads 98
8124 Transaction Cost Analysis, Execution Quality, and Best Execution under MiFID II

Authors: Rodrigo Zepeda

Abstract:

Transaction cost analysis (TCA) is a way of analyzing the relative performance of different intermediaries and different trading strategies for trades undertaken in financial instruments. It is a way for an investor to determine the overall quality of execution of a particular trade, and there are many different approaches to undertaking TCA. Under the updated Markets in Financial Instruments Directive (2014/65/EU) (MiFID II), investment firms are required, when executing orders, to take all sufficient steps to obtain the best possible result for their clients. This requirement for 'Best Execution' must take into account price, costs, speed, likelihood of execution and settlement, size, nature, and any other consideration relevant to the execution of the order. The new regulatory compliance framework under MiFID II will also apply across a very broad range of financial instruments. This article will provide a comprehensive technical analysis of how TCA and Best Execution will change significantly under MiFID II. It will also explain why the harmonization of post-trade reporting requirements under MiFID II could support the development of peer group analysis, which in turn could provide a new and highly advanced framework for TCA that more effectively supports Best Execution requirements under MiFID II. The study is significant because no existing studies in the literature have dealt with TCA and Best Execution under MiFID II.

Keywords: transaction cost analysis, execution quality, best execution, MiFID II, financial instruments

Procedia PDF Downloads 286
8123 Reliability Improvement of Power System Networks Using Adaptive Genetic Algorithm

Authors: Alireza Alesaadi

Abstract:

Reliability analysis is a powerful method for determining the weak points of electrical networks. In designing an electrical network, the aim is to obtain the most reliable network with minimal system outages, but higher reliability is usually associated with increased cost. In this paper, a method based on an adaptive genetic algorithm is presented that provides the most reliable system for a given economic cost. Finally, the proposed method is applied to a sample network and the results are analyzed.

Keywords: reliability, adaptive genetic algorithm, electrical network, communication engineering

Procedia PDF Downloads 497
8122 Recovering Copper From Tailing and E-Waste to Create Copper Nanoparticles with Antimicrobial Properties

Authors: Erico R. Carmona, Lucas Hernandez-Saravia, Aliro Villacorta, Felipe Carevic

Abstract:

Tailings and electronic waste (e-waste) are an important source of global contamination. Chile is one of the Organisation for Economic Co-operation and Development (OECD) member countries that recycles the least of this kind of industrial waste, reaching only 3% of the total. Recycling tailings and e-waste offers a valuable way to minimize the increasing accumulation of waste, offset the scarcity of some raw materials, and obtain economic benefits through their commercialization. It should be noted that this type of industrial waste is an important source of valuable metals, such as copper, which allow new business and added value to be generated through their transformation into new materials with advanced physical and biological properties. In this sense, the development of nanotechnology has led to the creation of nanomaterials with multiple applications, given their unique physicochemical properties. Among others, copper nanoparticles (CuNPs) have gained great interest due to their optical, catalytic, and conductive properties, and particularly because of their broad-spectrum antimicrobial activity. There are different methods for synthesizing copper nanoparticles; however, green synthesis is one of the most promising, since it is simple, low-cost, and ecological and generates stable nanoparticles, which makes it suitable for scaling up. Currently, there are few initiatives involving the development of methods for the recovery and transformation of copper from waste to produce nanoparticles with new properties and better technological benefits. Thus, the objective of this work is to present preliminary data on the development of a sustainable transformation process for tailings and e-waste that yields a copper-based nanotechnological product with potential antimicrobial applications. For this, samples of tailings and e-waste collected from the Tarapacá and Antofagasta regions of northern Chile were used to recover copper through efficient, ecological, and low-cost alkaline hydrometallurgical treatments that allow copper to be obtained with a high degree of purity. The transformation of the recycled copper into a nanomaterial was carried out through a green synthesis approach using extracts of plant organic residues to obtain CuNPs, following methodologies previously reported by the authors. Initial physical characterization of the synthesized CuNPs by UV-Vis, FTIR, AFM, and TEM will be reported.

Keywords: nanomaterials, industrial waste, chile, recycling

Procedia PDF Downloads 91
8121 Design Modification in CNC Milling Machine to Reduce the Weight of Structure

Authors: Harshkumar K. Desai, Anuj K. Desai, Jay P. Patel, Snehal V. Trivedi, Yogendrasinh Parmar

Abstract:

The need for continuous improvement of products and processes in this era of global competition leads to the application of value engineering for functional and aesthetic improvement while also considering the economic aspect. Solar Industries, located at G.I.D.C., Makarpura, Vadodara, Gujarat, India, a manufacturer of a variety of CNC machines, faced the challenge of analyzing the structural design of the column, base, carriage, and table of a CNC milling machine with a view to reducing the overall weight of the machine without affecting its rigidity and accuracy during operation. The identified task is a first attempt to statically validate and optimize the proposed ribbed structure design in a systematic way using advanced modeling and analysis tools. The stress and deformation results obtained using the analysis software were validated against theoretical analysis and found to be quite satisfactory. The optimized design offers a weight reduction of the final assembly, which manufacturers desire because it ultimately reduces material, processing, and handling costs.

Keywords: CNC milling machine, optimization, finite element analysis (FEA), weight reduction

Procedia PDF Downloads 272
8120 Supply Chain Coordination under Carbon Trading Mechanism in Case of Conflict

Authors: Fuqiang Wang, Jun Liu, Liyan Cai

Abstract:

This paper investigates the coordination of a conflicting two-stage low-carbon supply chain consisting of an upstream and a downstream manufacturer. The conflict arises because the upstream manufacturer takes action to reduce carbon emissions under the carbon trading mechanism while the downstream manufacturer's production cost rises. In the Stackelberg game, the upstream manufacturer is assumed to act as the leader and the downstream manufacturer as the follower. Four situations are considered: decentralized decision-making, centralized decision-making, and, under decentralized decision-making, a production cost sharing contract and a carbon emissions reduction revenue sharing contract. The backward induction approach is adopted to solve the game. The results show that the more intense the conflict is, the lower the efficiency of carbon emissions reduction and the higher the retail price. The optimal investment of the decentralized supply chain under the two contracts is unchanged and still lower than that of the centralized supply chain. Neither the production cost sharing contract nor the carbon emissions reduction revenue sharing contract can coordinate the supply chain, because the shared cost or shared carbon emissions reduction revenue is transferred through the wholesale price mechanism. As a result, more complicated contract forms are required to coordinate such a supply chain.

Keywords: cap-and-trade mechanism, carbon emissions reduction, conflict, supply chain coordination

Procedia PDF Downloads 336
8119 Predictive Models of Ruin Probability in Retirement Withdrawal Strategies

Authors: Yuanjin Liu

Abstract:

Retirement withdrawal strategies are very important for minimizing the probability of ruin in retirement. The ruin probability is modeled as a function of initial withdrawal age, gender, asset allocation, inflation rate, and initial withdrawal rate. It is obtained by simulation based on the 2019 Social Security period life table, the IRS Required Minimum Distribution (RMD) worksheets, US historical bond and equity returns, and historical inflation rates. Several popular machine learning models are built: a generalized additive model, random forest, support vector machine, extreme gradient boosting, and an artificial neural network. Model validation and selection are based on test errors, using hyperparameter tuning and a train-test split. The optimal model is recommended for retirees to monitor their ruin probability, and the optimal withdrawal strategy can be obtained from the optimal predictive model.
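
A minimal sketch of the modelling setup described above: fit one of the candidate learners (a random forest here) to simulated ruin probabilities and judge it by held-out test error. The dataset below is a synthetic stand-in, not the paper's life-table and market-return simulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 2000

# Synthetic features: withdrawal age, gender, equity allocation, inflation rate, withdrawal rate.
X = np.column_stack([
    rng.integers(60, 75, n),        # initial withdrawal age
    rng.integers(0, 2, n),          # gender indicator
    rng.uniform(0.2, 0.8, n),       # equity allocation
    rng.uniform(0.01, 0.05, n),     # inflation rate
    rng.uniform(0.03, 0.07, n),     # initial withdrawal rate
])
# Synthetic target: ruin probability rising with the withdrawal rate and inflation.
ruin = np.clip(0.1 + 5 * (X[:, 4] - 0.04) + 3 * (X[:, 3] - 0.03) + rng.normal(0, 0.02, n), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, ruin, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"held-out RMSE: {rmse:.4f}")
```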

Keywords: ruin probability, retirement withdrawal strategies, predictive models, optimal model

Procedia PDF Downloads 71
8118 Synthesis, Characterization of Organic and Inorganic Zn-Al Layered Double Hydroxides and Application for the Uptake of Methyl Orange from Aqueous Solution

Authors: Fatima Zahra Mahjoubi, Abderrahim Khalidi, Mohammed Abdennouri, Noureddine Barka

Abstract:

Zn-Al layered double hydroxides containing carbonate, nitrate, and dodecylsulfate as the interlamellar anions were prepared by a coprecipitation method. The resulting compounds were characterized using XRD, ICP, FTIR, TGA/DTA, TEM/EDX, and pHPZC analysis. The XRD patterns revealed that carbonate and nitrate could be intercalated into the interlayer structure with basal spacings of 22.74 and 26.56 Å, respectively. Bilayer intercalation of dodecylsulfate molecules was achieved in the Zn-Al LDH with a basal spacing of 37.86 Å. The TEM observations indicated that the materials synthesized via coprecipitation present nanoscale LDH particles. The average particle size of Zn-AlCO3 is 150 to 200 nm. Irregular circular to hexagonal particles 30 to 40 nm in diameter were observed in the Zn-AlNO3 morphology. TEM images of Zn-AlDs display nanostructured sheet-like particles with sizes between 5 and 10 nm. The sorption characteristics and mechanisms of methyl orange (MO) dye on the organic LDH were investigated and subsequently compared with those on the inorganic Zn-Al layered double hydroxides. Adsorption experiments for MO were carried out as a function of solution pH, contact time, and initial dye concentration. Adsorption onto the inorganic LDHs was clearly influenced by the initial pH, whereas the adsorption capacity of the organic LDH was largely insensitive to the initial pH and the MO removal percentage was practically constant over the range of pH values tested. As the MO concentration increased, the adsorption capacity curves on the LDHs became L-type. The adsorption behavior of Zn-AlDs is proposed to occur by dissolution of the dye in the hydrophobic interlayer region (i.e., adsolubilization). The results suggest that Zn-AlDs could be applied as a potential adsorbent for MO removal over a wide range of pH.

Keywords: adsorption, dodecylsulfate, kinetics, layered double hydroxides, methyl orange removal

Procedia PDF Downloads 287
8117 The Batch Method Approach for Adsorption Mechanism Processes of Some Selected Heavy Metal Ions and Methylene Blue by Using Chemically Modified Luffa Cylindrica

Authors: Akanimo Emene, Mark D. Ogden, Robert Edyvean

Abstract:

Adsorption is a low-cost, efficient, and economically viable wastewater treatment process. However, this treatment process has not been fully exploited because the adsorption system is complex and not fully understood. Optimizing the process requires choosing a suitable adsorbent and further studying the experimental parameters that influence the design of the adsorption system. A chemically modified adsorbent, Luffa cylindrica, was used to adsorb heavy metal ions and an organic pollutant, methylene blue, from aqueous solution under varying experimental conditions. The experimental factors studied were adsorption time, initial metal ion or organic pollutant concentration, ionic strength, and solution pH. The experimental data were analyzed with kinetic and isotherm models. The antagonistic effect between methylene blue and some heavy metal ions was recorded. An understanding of the use of this treated Luffa cylindrica for the removal of these toxic substances will help establish and improve the commercial application of the adsorption process in the treatment of contaminated waters.

Keywords: adsorption, heavy metal ions, Luffa cylindrica, wastewater treatment

Procedia PDF Downloads 190