Search results for: foundation optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4397

707 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

Residual welding deformations are known to degrade the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Selecting an optimal technology that ensures minimum welding deformations is therefore one of the main goals in developing a manufacturing process for welded structures. Over the years, JSC SSTC has developed a theory for estimating welding deformations, together with practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. It allowed the volumetric changes of metal caused by welding heating and subsequent cooling to be determined. However, the dependences describing structural deformations arising from these volumetric changes in the weld area permitted calculations only for simple structures, such as units, flat sections, and sections with small curvature. For complex 3D structures, estimates based on analytic dependences showed significant errors. To eliminate this shortcoming, it was proposed to solve the deformation problem with the finite element method. First, the longitudinal and transverse shortenings of the welded joints are calculated by the method of analytic dependences; from these shortenings, forces are computed whose action is equivalent to that of the active welding stresses. A finite element model of the structure is then built, and the equivalent forces are applied to it. Based on the calculation results, an optimal assembly and welding sequence is selected, and special measures to reduce and compensate welding deformations are developed and implemented.
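The conversion from analytically estimated shortenings to equivalent forces can be sketched as follows; the material constants and joint dimensions below are hypothetical placeholders, not values from the paper.

```python
# Sketch: convert an analytically estimated weld shortening into an
# equivalent axial force for a finite-element model. All material values
# and joint dimensions here are illustrative assumptions.

def equivalent_force(delta_l, length, area, youngs_modulus):
    """Axial force whose action reproduces a shortening delta_l of a
    member with the given length, cross-section area, and modulus E
    (F = E * A * delta_l / L)."""
    return youngs_modulus * area * delta_l / length

# Example: 1.2 mm longitudinal shortening over a 2 m weld joint,
# steel E = 210 GPa, effective cross-section 400 mm^2.
F = equivalent_force(delta_l=1.2e-3, length=2.0, area=400e-6,
                     youngs_modulus=210e9)
print(f"equivalent longitudinal force: {F/1e3:.1f} kN")  # 50.4 kN
```

In a full analysis, forces of this kind would be applied at the weld lines of the FE model before solving for the deformed shape.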

Keywords: residual welding deformations, longitudinal and transverse shortenings of welding joints, method of analytic dependences, finite elements method

Procedia PDF Downloads 390
706 Growth and Differentiation of Mesenchymal Stem Cells on Titanium Alloy Ti6Al4V and Novel Beta Titanium Alloy Ti36Nb6Ta

Authors: Eva Filová, Jana Daňková, Věra Sovková, Matej Daniel

Abstract:

Titanium alloys are biocompatible metals that are widely used in clinical practice as load-bearing implants. Chemical modification may influence cell adhesion, proliferation, and differentiation, as well as the stiffness of the material. The aim of the study was to evaluate the adhesion, growth, and differentiation of pig mesenchymal stem cells on the novel beta titanium alloy Ti36Nb6Ta compared to the standard medical titanium alloy Ti6Al4V. Discs of Ti36Nb6Ta and Ti6Al4V alloy were sterilized with ethanol, placed in 48-well plates, seeded with pig mesenchymal stem cells at a density of 60×10³/cm², and cultured in Minimum Essential Medium (Sigma) supplemented with 10% fetal bovine serum and penicillin/streptomycin. Cell viability was evaluated using an MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay; Promega) and cell proliferation using the Quant-iT™ dsDNA Assay Kit (Life Technologies). Cells were stained immunohistochemically using a monoclonal antibody against beta-actin and a secondary antibody conjugated with AlexaFluor®488, and the spread area of the cells was subsequently measured. Cell differentiation was evaluated by an alkaline phosphatase assay using p-nitrophenyl phosphate (pNPP) as a substrate; the reaction was stopped with NaOH, and the absorbance was measured at 405 nm. Osteocalcin, a specific bone marker, was stained immunohistochemically and visualized using confocal microscopy; the fluorescence intensity was analyzed and quantified. Moreover, gene expression of the osteogenic markers osteocalcin and type I collagen was evaluated by real-time reverse transcription PCR (qRT-PCR). For statistical evaluation, one-way ANOVA followed by the Student-Newman-Keuls method was used; for qRT-PCR, the nonparametric Kruskal-Wallis test and Dunn's multiple comparison test were used. The absorbance in the MTS assay was significantly higher on titanium alloy Ti6Al4V compared to beta titanium alloy Ti36Nb6Ta on days 7 and 14.
Mesenchymal stem cells were well spread on both alloys, but no difference in spread area was found. No differences were observed in the alkaline phosphatase assay, in the fluorescence intensity of osteocalcin, or in the expression of the type I collagen and osteocalcin genes. Higher expression of type I collagen compared to osteocalcin was observed for cells on both alloys. Both the beta titanium alloy Ti36Nb6Ta and the titanium alloy Ti6Al4V supported mesenchymal stem cells' adhesion, proliferation, and osteogenic differentiation. The novel beta titanium alloy Ti36Nb6Ta is a promising material for bone implantation. The project was supported by the Czech Science Foundation (grant No. 16-14758S), the Grant Agency of the Charles University (grant No. 1246314), and the Ministry of Education, Youth and Sports NPU I: LO1309.

Keywords: beta titanium, cell growth, mesenchymal stem cells, titanium alloy, implant

Procedia PDF Downloads 302
705 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and the cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effects of rake angle variations and tool flank wear on the cutting forces. The starting point is a set of cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined width, using triangular coated inserts to ensure orthogonal conditions. The cutting forces were measured by a dynamometer for three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method combines the measured cutting forces with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability in predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results suggest that the nonlinear effect of flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
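The Kienzle relation that the extended model builds on can be sketched as follows; the coefficient values and the linear form of the rake-angle and wear corrections are illustrative assumptions, not the fitted model from the study.

```python
def cutting_force(b, h, kc11=1500.0, mc=0.25, rake_deg=0.0,
                  vb=0.0, c_rake=0.015, c_wear=2.0):
    """Kienzle-type cutting force, extended with simple linear correction
    factors for rake angle (degrees) and flank wear land width VB (mm).
    Base Kienzle law: Fc = kc11 * b * h**(1 - mc), with b = width of cut
    (mm), h = uncut chip thickness (mm), and kc11 = specific cutting
    force at b = h = 1 mm (N/mm^2). Coefficients here are made up."""
    base = kc11 * b * h ** (1.0 - mc)
    rake_factor = 1.0 - c_rake * rake_deg  # positive rake lowers the force
    wear_factor = 1.0 + c_wear * vb        # flank wear raises the force
    return base * rake_factor * wear_factor

# Fresh tool, 6 deg rake, b = 2 mm, h = 0.1 mm:
f_new = cutting_force(b=2.0, h=0.1, rake_deg=6.0)
# Same cut with VB = 0.2 mm flank wear:
f_worn = cutting_force(b=2.0, h=0.1, rake_deg=6.0, vb=0.2)
print(f_new, f_worn)
```

In the paper's approach, the wear-dependent factor would be calibrated against the dynamometer measurements rather than assumed linear.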

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 94
704 Techno-Economic Optimization and Evaluation of an Integrated Industrial Scale NMC811 Cathode Active Material Manufacturing Process

Authors: Usama Mohamed, Sam Booth, Aliysn J. Nedoma

Abstract:

As part of the transition to electric vehicles, there has been a recent increase in demand for battery manufacturing. Cathodes typically account for approximately 50% of the total lithium-ion battery cell cost and are a pivotal factor in determining the viability of new industrial infrastructure. Cathodes which offer lower costs whilst maintaining or increasing performance, such as nickel-rich layered cathodes, have a significant competitive advantage when scaling up the manufacturing process. This project evaluates the techno-economic value proposition of an integrated industrial scale cathode active material (CAM) production process, closing the mass and energy balances and optimizing the operating conditions through a sensitivity analysis. A process model of a co-precipitation synthesis route was developed in Aspen Plus and validated against experimental data; the reaction mechanisms and equilibrium conditions were established from previous literature and HSC Chemistry software. This was followed by integrating the energy streams, adding waste recovery and treatment processes, and testing the effect of key parameters (temperature, pH, reaction time, etc.) on CAM production yield and emissions. Finally, an economic analysis estimates the fixed and variable costs (including capital expenditure, labor costs, raw materials, etc.) to calculate the cost of CAM ($/kg and $/kWh), the total plant cost ($), and the net present value (NPV). This work sets the foundational blueprint for future research into sustainable industrial scale processes for CAM manufacturing.
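The NPV step of the economic analysis follows the standard discounted-cash-flow formula; the figures below are hypothetical, not results from this study.

```python
def npv(capex, cash_flows, rate):
    """Net present value of a plant: upfront capital expenditure
    followed by yearly net cash flows discounted at `rate`."""
    return -capex + sum(cf / (1.0 + rate) ** t
                        for t, cf in enumerate(cash_flows, start=1))

# Hypothetical numbers: $120 M capex, $25 M/year net cash flow
# for 10 years, 8% discount rate.
flows = [25e6] * 10
print(f"NPV: {npv(120e6, flows, 0.08)/1e6:.1f} M$")
```

A positive NPV under the chosen discount rate is the usual go/no-go signal for the plant investment.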

Keywords: cathodes, industrial production, nickel-rich layered cathodes, process modelling, techno-economic analysis

Procedia PDF Downloads 86
703 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies

Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo

Abstract:

Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operating costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimates. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes, to support decisions on feasibility and performance in retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for estimating energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using load demand data logged over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The methodologies compared are: first, an energy-based model; second, a model considering load variations in the time domain with a rule-based Power Management System (PMS); and third, a load-variation model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies for this case are conducted. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy. However, models that consider time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, while keeping computational time low.
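A minimal sketch of a rule-based PMS of the kind named in the second approach, with assumed ratings and state-of-charge thresholds rather than the vessel's actual values:

```python
def rule_based_pms(load_kw, soc, fc_rated_kw=300.0,
                   soc_low=0.3, soc_high=0.8):
    """Toy rule-based power management: the fuel cell covers base load,
    the battery supplies peaks while SOC allows, and the diesel genset
    covers any remaining deficit. Ratings and thresholds are illustrative
    assumptions. Returns (fuel_cell_kw, battery_kw, genset_kw); a
    positive battery value means discharge, negative means charging."""
    fc = min(load_kw, fc_rated_kw)
    residual = load_kw - fc
    if residual > 0 and soc > soc_low:
        batt, gen = residual, 0.0     # battery covers the peak
    elif residual > 0:
        batt, gen = 0.0, residual     # battery depleted: start the genset
    else:
        batt, gen = 0.0, 0.0
    if load_kw < fc_rated_kw and soc < soc_high:
        batt = -(fc_rated_kw - load_kw)  # charge battery with spare FC power
        fc = fc_rated_kw
    return fc, batt, gen

fc, batt, gen = rule_based_pms(load_kw=500.0, soc=0.55)
print(fc, batt, gen)  # 300.0 200.0 0.0
```

Stepping such rules through the logged load time series, and integrating fuel-cell power over time, gives the hydrogen consumption estimate that the energy-based method approximates in one shot.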

Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system

Procedia PDF Downloads 12
702 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community to conduct effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these organizations, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to enable volunteers to digitize roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM, and there is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other disaster-prone countries, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation, and slope.
Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and the World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised classification with a random forest classifier predicted 89% of the variation in OSM building footprint area within a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, the methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.
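The supervised step can be sketched with scikit-learn's random forest on synthetic stand-in features; the real predictors are the VIIRS/Sentinel products listed above, and the relationship below is invented for illustration.

```python
# Sketch: predict OSM building-footprint area per grid cell from
# remote-sensing covariates with a random forest. The synthetic data only
# stands in for real VIIRS/Sentinel-derived features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
night_light = rng.uniform(0, 60, n)   # VIIRS radiance (made up)
ndbi = rng.uniform(-0.5, 0.5, n)      # built-up index (made up)
slope = rng.uniform(0, 30, n)         # terrain slope, degrees (made up)
# Hypothetical ground truth: footprint area grows with lights and NDBI,
# shrinks with slope, plus noise.
footprint = (0.02 * night_light + 0.5 * np.clip(ndbi, 0, None)
             - 0.005 * slope + rng.normal(0, 0.05, n))

X = np.column_stack([night_light, ndbi, slope])
X_tr, X_te, y_tr, y_te = train_test_split(X, footprint, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(f"R^2 on held-out cells: {r2:.2f}")
# Cells whose predicted coverage greatly exceeds their mapped coverage
# are candidates for incomplete OSM mapping.
```

The held-out R² plays the role of the 89%-of-variation figure reported in the abstract.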

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 110
701 A Comprehensive Overview of Solar and Vertical Axis Wind Turbine Integration Micro-Grid

Authors: Adnan Kedir Jarso, Mesfin Megra Rorisa, Haftom Gebreslassie Gebregwergis, Frie Ayalew Yimam, Seada Hussen Adem

Abstract:

A microgrid is a small-scale power grid that can operate independently or in conjunction with the main power grid, and it is a promising solution for providing reliable and sustainable energy to remote areas. Integrating solar generation with vertical axis wind turbines (VAWTs) in a microgrid can provide a stable and efficient source of renewable energy. This paper provides a comprehensive overview of such integration: it discusses the design, operation, and control of a microgrid that combines solar and VAWTs; examines its performance in terms of efficiency, reliability, and cost-effectiveness; and highlights the advantages and disadvantages of using solar and VAWTs together. The paper concludes that the integration of solar and VAWTs in a microgrid is a promising route to reliable and sustainable energy for remote areas, and it recommends both further research to optimize the design and operation of such microgrids and the development of policies and regulations that promote their use.

Keywords: hybrid generation, intermittent power, optimization, photovoltaic, vertical axis wind turbine

Procedia PDF Downloads 65
700 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point.
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
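The entropy measure described above can be illustrated with a minimal Shannon-entropy calculation over ant positions, a simplified stand-in for the model's state-based entropy:

```python
import math
from collections import Counter

def shannon_entropy(positions):
    """Shannon entropy (in bits) of the distribution of agents over grid
    cells; maximal when ants are spread uniformly, decreasing as they
    concentrate onto a path."""
    counts = Counter(positions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# 16 ants spread over 16 distinct cells (disorder) vs. the same ants
# concentrated on 2 cells of a formed path (order):
spread = [(i, 0) for i in range(16)]
path = [(0, 0)] * 8 + [(1, 0)] * 8
print(shannon_entropy(spread), shannon_entropy(path))  # 4.0 1.0
```

Tracking this quantity over simulation ticks yields the decreasing entropy curve whose inflection point marks the disorder-to-order transition.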

Keywords: complexity, self-organization, agent-based modelling, efficiency

Procedia PDF Downloads 48
699 Optimization of Samarium Extraction via Nanofluid-Based Emulsion Liquid Membrane Using Cyanex 272 as Mobile Carrier

Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari

Abstract:

Samarium, a rare-earth element, plays an increasingly important role in high technology. Traditional methods for the extraction of rare earth metals, such as ion exchange and solvent extraction, have the disadvantages of high investment and high energy consumption. The emulsion liquid membrane (ELM), an improved solvent extraction technique, is an effective transport method for separating various compounds from aqueous solutions. In this work, the extraction of samarium from aqueous solutions by ELM was investigated using response surface methodology (RSM). The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNTs), Span 80 as surfactant, Cyanex 272 as mobile carrier, and kerosene as base fluid. A 1 M nitric acid solution was used as the internal aqueous phase. The effects of the important process parameters on samarium extraction were investigated, and their values were optimized using the Central Composite Design (CCD) of RSM. These parameters were the concentration of MWCNTs in the nanofluid, the carrier concentration, and the volume ratio of organic membrane phase to internal phase (Roi). Three-dimensional (3D) response surfaces of samarium extraction efficiency were obtained to visualize the individual and interactive effects of the process variables. A regression model for % extraction was developed, and its adequacy was evaluated. The results show that % extraction improves when MWCNT nanofluid is used in the organic membrane phase, and an extraction efficiency of 98.92% can be achieved under the optimum conditions. In addition, demulsification was performed successfully, and the recycled membrane phase proved to be effective under the optimum conditions.
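The RSM regression step amounts to fitting a second-order polynomial to CCD runs. A sketch on synthetic data (the coefficients and responses below are invented, not the samarium measurements):

```python
import numpy as np

# Fit a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# to coded factor settings from a two-factor central composite design.

def design_matrix(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 * x2, x1 ** 2, x2 ** 2])

rng = np.random.default_rng(1)
# CCD-style coded points: factorial, axial (alpha = sqrt(2)), centre runs.
a = np.sqrt(2.0)
pts = np.array([(-1, -1), (1, -1), (-1, 1), (1, 1),
                (-a, 0), (a, 0), (0, -a), (0, a),
                (0, 0), (0, 0), (0, 0)])
x1, x2 = pts[:, 0], pts[:, 1]
true = np.array([90.0, 2.0, -1.5, 0.8, -3.0, -2.0])  # assumed coefficients
y = design_matrix(x1, x2) @ true + rng.normal(0, 0.1, len(pts))

beta, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
print(np.round(beta, 2))  # recovered coefficients, close to `true`
```

In the study, the fitted surface would be maximized over the coded factor space to locate the optimum extraction conditions.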

Keywords: Cyanex 272, emulsion liquid membrane, MWCNT nanofluid, response surface methodology, samarium

Procedia PDF Downloads 407
698 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness

Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach

Abstract:

The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O'Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, laws can provide a foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote the understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance.
The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Labs see a unique opportunity to collect and analyze this kind of non-traditional data to inform policy using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, to be better prepared for potential future public health events.

Keywords: public health law, surveillance, policy, legal, data

Procedia PDF Downloads 128
697 Comparison of Growth Medium Efficiency for Stevia (Stevia rebaudiana Bertoni) Shoot Biomass and Stevioside Content in Thin-Layer System, TIS RITA® Bioreactor, and Bubble Column Bioreactor

Authors: Nurhayati Br Tarigan, Rizkita Rachmi Esyanti

Abstract:

Stevia (Stevia rebaudiana Bertoni) has great potential as a natural sweetener because it contains steviol glycosides, which are approximately 100-300 times sweeter than sucrose yet low in calories. Vegetative and generative propagation of S. rebaudiana is inefficient for producing stevia biomass and stevioside. One alternative for stevia propagation is in vitro shoot culture. This research was conducted to optimize the best medium for shoot growth and to compare the bioconversion efficiency and stevioside production of S. rebaudiana shoot cultures cultivated in thin-layer culture (TLC), a recipient for automated temporary immersion system (TIS RITA®) bioreactor, and a bubble column bioreactor. The results showed that 1 ppm kinetin produced healthy shoots and the highest number of leaves compared to BAP. Shoots were then cultivated in TLC, the TIS RITA® bioreactor, and the bubble column bioreactor. Growth medium efficiency was determined by yield and productivity. TLC produced the highest growth medium efficiency of S. rebaudiana: the yield was 0.471 ± 0.117 g biomass/g substrate and the productivity was 0.599 ± 0.122 g biomass/(L medium·day). The TIS RITA® bioreactor produced the lowest yield and productivity, 0.182 ± 0.024 g biomass/g substrate and 0.041 ± 0.0002 g biomass/(L medium·day), respectively. The yield of the bubble column bioreactor was 0.354 ± 0.204 g biomass/g substrate, and its productivity was 0.099 ± 0.009 g biomass/(L medium·day). The stevioside content from highest to lowest was obtained from stevia shoots cultivated in TLC, the TIS RITA® bioreactor, and the bubble column bioreactor: 93.44 μg/g, 42.57 μg/g, and 23.03 μg/g, respectively. All three systems can be used to produce stevia shoot biomass, but optimization of nutrient supply and oxygen intake is required in each system.
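The two efficiency measures reduce to simple ratios; the run below uses made-up numbers, not the reported data:

```python
def yield_and_productivity(biomass_g, substrate_g, volume_l, days):
    """Growth-medium efficiency measures used in the study:
    yield = biomass produced per gram of substrate consumed,
    productivity = biomass produced per litre of medium per day."""
    return biomass_g / substrate_g, biomass_g / (volume_l * days)

# Hypothetical run: 1.2 g biomass from 2.5 g substrate in 0.1 L over 14 days.
y, p = yield_and_productivity(1.2, 2.5, 0.1, 14)
print(round(y, 3), round(p, 3))  # 0.48 0.857
```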

Keywords: bubble column, growth medium efficiency, Stevia rebaudiana, stevioside, TIS RITA®, TLC

Procedia PDF Downloads 253
696 A Wearable Device to Overcome Post-Stroke Learned Non-Use: The Rehabilitation Gaming System for Wearables: Methodology, Design and Usability

Authors: Javier De La Torre Costa, Belen Rubio Ballester, Martina Maier, Paul F. M. J. Verschure

Abstract:

After a stroke, a great number of patients experience persistent motor impairments such as hemiparesis or weakness in one entire side of the body. As a result, the lack of use of the paretic limb might be one of the main contributors to functional loss after clinical discharge. We aim to reverse this cycle by promoting the use of the paretic limb during activities of daily living (ADLs). To do so, we describe the key components of a system composed of a wearable bracelet (i.e., a smartwatch) and a mobile phone, designed to bring a set of neurorehabilitation principles that promote acquisition, retention and generalization of skills to the home of the patient. A fundamental question is whether the loss in motor function derived from learned non-use may emerge as a consequence of decision-making processes for motor optimization. Our system is based on well-established rehabilitation strategies that aim to reverse this behaviour by increasing the reward associated with action execution as well as implicitly reducing the expected cost associated with the use of the paretic limb, following the notion of reinforcement-induced movement therapy (RIMT). Here we validate an accelerometer-based measure of arm use and its capacity to discriminate different activities that require increasing movement of the arm. We also show how the system can act as a personalized assistant by providing specific goals and adjusting them depending on the performance of the patients. The usability and acceptance of the device as a rehabilitation tool are tested using a battery of self-reported and objective measurements obtained from acute/subacute patients and healthy controls. We believe that an extension of these technologies will allow for the deployment of unsupervised rehabilitation paradigms during and beyond the hospitalization time.
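A toy version of an accelerometer-based arm-use measure, counting samples whose acceleration magnitude deviates from gravity; the threshold and sampling scheme are illustrative assumptions, not the system's actual algorithm:

```python
import math

def activity_counts(samples, gravity=9.81, threshold=0.5):
    """Count accelerometer samples (ax, ay, az in m/s^2) whose magnitude
    deviates from gravity by more than `threshold`. A still wrist yields
    ~0 counts; an actively moving one yields many."""
    return sum(1 for (ax, ay, az) in samples
               if abs(math.sqrt(ax**2 + ay**2 + az**2) - gravity) > threshold)

# A still wrist (pure gravity) vs. an active one:
still = [(0.0, 0.0, 9.81)] * 50
active = [(4.0, 0.0, 9.81)] * 50
print(activity_counts(still), activity_counts(active))  # 0 50
```

Comparing counts from the paretic and non-paretic wrists over a day would give a simple laterality index of arm use.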

Keywords: stroke, wearables, learned non use, hemiparesis, ADLs

Procedia PDF Downloads 195
695 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches

Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli

Abstract:

Insulin and glucagon are the two hormones essential for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. Growing research interest has focused on the subcellular structures involved in hormone secretion, namely the insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while the dynamic properties of insulin granules have been described in several successful studies, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started with spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamic and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the sensitivity of iMSD to average granule size allowed us to confirm that glucagon granules are smaller than insulin granules (by ~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamics, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify glucagon granules, based on their dynamic properties, as 'blocked' (trajectories corresponding to immobile granules), 'confined/diffusive' (trajectories corresponding to granules moving slowly within a defined region of the cell), or 'drifted' (trajectories corresponding to fast-moving granules). In control cell-culture conditions, the average distribution is: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted.
This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7% while immobile granules increase to 56.0 ± 10.7% (remaining 40.4 ± 10.2% of conf/diff). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
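The trajectory classification rests on how the MSD scales with lag time. A minimal Python sketch (the authors used a MATLAB script), with the anomalous exponent α distinguishing blocked (α ≈ 0), confined (α < 1), diffusive (α ≈ 1), and drifted (α ≈ 2) motion; the classification thresholds themselves are not stated in the abstract:

```python
import math

def msd(track):
    """Mean square displacement of a 2D trajectory for each lag time."""
    n = len(track)
    out = []
    for lag in range(1, n):
        d = [(track[i + lag][0] - track[i][0]) ** 2 +
             (track[i + lag][1] - track[i][1]) ** 2
             for i in range(n - lag)]
        out.append(sum(d) / len(d))
    return out

def anomalous_exponent(msd_curve, dt=1.0):
    """Least-squares slope of log(MSD) vs log(lag) over the first few
    lags; this is the exponent alpha in MSD ~ t**alpha."""
    k = min(5, len(msd_curve))
    xs = [math.log((i + 1) * dt) for i in range(k)]
    ys = [math.log(msd_curve[i]) for i in range(k)]
    mx, my = sum(xs) / k, sum(ys) / k
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))

# A granule drifting at constant velocity has MSD proportional to t^2:
drifted = [(0.1 * t, 0.0) for t in range(30)]
alpha = anomalous_exponent(msd(drifted))
print(round(alpha, 2))  # 2.0
```

Applying the same fit to each SPT trajectory and binning by α reproduces the blocked/confined-diffusive/drifted fractions reported above.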

Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR

Procedia PDF Downloads 80
694 Rapid Degradation of High-Concentration Methylene Blue in the Combined System of Plasma-Enhanced Photocatalysis Using TiO₂-Carbon

Authors: Teguh Endah Saraswati, Kusumandari Kusumandari, Candra Purnawan, Annisa Dinan Ghaisani, Aufara Mahayum

Abstract:

The present study aims to investigate the degradation of methylene blue (MB) using a TiO₂-carbon (TiO₂-C) photocatalyst combined with dielectric barrier discharge (DBD) plasma. The carbon materials used in the photocatalyst were activated carbon and graphite. The thin layer of TiO₂-C photocatalyst was prepared by a ball-milling method and then deposited on a plastic sheet. The characteristics of the TiO₂-C thin layer were analyzed using X-ray diffraction (XRD), scanning electron microscopy (SEM) with energy dispersive X-ray (EDX) spectroscopy, and UV-Vis diffuse reflectance spectrophotometry. The XRD diffractogram patterns of the TiO₂-graphite (TiO₂-G) thin layers in weight compositions of 50:1, 50:3, and 50:5 show 2θ peaks around 25° and 27°, the main characteristics of TiO₂ and carbon. SEM analysis shows a spherical and regular morphology of the photocatalyst. Analysis using UV-Vis diffuse reflectance shows that TiO₂-C has a narrower band gap energy. The DBD plasma reactor was generated using two electrodes of Cu tape connected with stainless steel mesh and Fe wire, separated by a glass dielectric insulator, supplied with a high voltage of 5 kV at an air flow rate of 1 L/min. The weight composition of the TiO₂-C thin layer was optimized based on the highest reduction of MB concentration achieved, examined by UV-Vis spectrophotometry. Changes in the pH value and color of the MB solution indicated successful MB degradation. Moreover, the degradation efficiency of MB was also studied at higher concentrations of 50, 100, 200, and 300 ppm, treated for 0, 2, 4, 6, 8, and 10 min. The degradation efficiency of MB treated in the combined system of photocatalysis and DBD plasma reached more than 99% within 6 min; the greater the concentration of methylene blue dye, the lower the degradation rate achieved.
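The degradation efficiency reported above, and the apparent pseudo-first-order rate constant commonly quoted in such studies, reduce to two short formulas. A minimal sketch (function names are ours; it assumes concentration is read from absorbance via Beer-Lambert proportionality):

```python
import math

def degradation_efficiency(c0, ct):
    """Percent removal of dye: initial concentration c0, residual ct."""
    return (c0 - ct) / c0 * 100.0

def first_order_rate(c0, ct, t_min):
    """Apparent pseudo-first-order rate constant k (1/min): ln(C0/Ct) = k*t."""
    return math.log(c0 / ct) / t_min
```

For example, 100 ppm reduced to 1 ppm in 6 min corresponds to 99% removal.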

Keywords: activated carbon, DBD plasma, graphite, methylene blue, photocatalysis

Procedia PDF Downloads 105
693 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We present the mathematical basis of a new class of algorithms that treats a historical cause of persistent discrimination in the world, together with its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator-filter bank in which the matrix operator analysis-filter bank H and the matrix operator sampling-filter bank S are specified, we first introduce a detailed algorithm to derive the optimum matrix operator synthesis-filter bank Z that simultaneously minimizes all worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Feedback is then introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge in signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and the category-based approximation theory is applied to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, we argue that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 139
692 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring

Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus

Abstract:

A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of shear moduli of tissues is a challenging problem, mainly owing to a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissue materials. An immediate application, overcoming these drawbacks, is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials and generated waves. The technique is based on the measurement of the time of flight between emitter and receiver, from which the shear wave velocity is inferred. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to annihilate the compressional wave emission, generating a quite pure torsional shear wave. Currently, the mechanical and electromagnetic coupling between emitter and receiver signals is the focus of research. Conclusions: the design overcomes the main problems described. The almost pure torsional shear wave, along with the short time of flight, avoids the possibility of multiple wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, ensuring good biological safety for human use.
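The time-of-flight principle above reduces to two short formulas: shear wave speed from the emitter-receiver distance, and shear modulus μ = ρv² under a locally linear elastic assumption. A minimal sketch (the numbers in the usage note are illustrative for soft tissue, not measurements from the study):

```python
def shear_wave_velocity(distance_m, tof_s):
    """Shear wave speed from emitter-receiver distance and time of flight."""
    return distance_m / tof_s

def shear_modulus(density_kg_m3, velocity_m_s):
    """Shear modulus mu = rho * v_s**2 for a locally linear elastic medium."""
    return density_kg_m3 * velocity_m_s ** 2
```

For instance, a 10 mm path traversed in 2 ms gives v ≈ 5 m/s, and with ρ ≈ 1000 kg/m³ a shear modulus of about 25 kPa, a plausible soft-tissue order of magnitude.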

Keywords: cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave

Procedia PDF Downloads 334
691 Project Production Control (PPC) Implementation for an Offshore Facilities Construction Project

Authors: Muhammad Hakim Bin Mat Tasir, Erwan Shahfizad Hasidan, Hamidah Makmor Bakry, M. Hafiz B. Izhar

Abstract:

Every key performance indicator used to monitor a project’s construction progress emphasizes trade productivity or specific commodity run-down curves. Examples include welding productivity measured by the number of joints completed per day, the quantity of NDT (Non-Destructive Testing) inspections per day, etc. This perspective is based on progress and productivity; however, it does not enable a system perspective of how we produce. This paper adopts a project production system perspective, by which projects are a collection of production systems comprising the interconnected network of processes and operations representing all the work activities to execute a project from start to finish. It also uses the 5 Levels of production system optimization as a framework. The goal of the paper is to describe the application of Project Production Control (PPC) to control and improve the performance of several production processes associated with the fabrication and assembly of a Central Processing Platform (CPP) jacket, part of an offshore mega project; more specifically, the fabrication and assembly of buoyancy tanks, which were identified as part of the critical path and carried the highest demand for capacity. In total, seven buoyancy tanks were built, with a total estimated weight of 2,200 metric tons. These huge buoyancy tanks were designed to enable reverse launching and self-upending of the jacket, and to be easily retractable and reusable for the next project, ensuring sustainability. Results showed that an effective application of PPC not only positively impacted construction progress and productivity but also exposed sources of detrimental variability as the focus of continuous improvement practices. This approach augmented conventional project management practices, and the results had a high impact on construction scheduling, planning, and control.

Keywords: offshore, construction, project management, sustainability

Procedia PDF Downloads 41
690 Digital Twins in the Built Environment: A Systematic Literature Review

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting abilities and managing operations, DT do not fall under a singular scope, and without a shared definition their potential uses risk being missed. Despite their recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to operational-stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of the DT. Such consolidation is indispensable to ensure that future research is undertaken with a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis was conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and their content was translated into a thorough classification of DT.
In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonisation goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancements in the implementation of DT within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.

Keywords: built environment, design, digital twins, literature review

Procedia PDF Downloads 59
689 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production

Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara

Abstract:

Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. They are useful in finding optimal solutions to water resources problems, considering the complexities involved in the analysis. River basin management is an essential area that involves the management of upstream conditions, river inflow and outflow, and the downstream aspects of a reservoir. Water, as a scarce resource, is needed by humans and the environment for survival, and its management involves many complexities. Management of this scarce resource is necessary for proper distribution to competing users in a river basin, which presents many constraints and conflicting objectives. Evolutionary algorithms, which are population-based search algorithms, are well suited to solving such complex problems: they are easy to use, fast and robust, among other advantages. Many applications of evolutionary algorithms are discussed, and the different methodologies involved in the modeling and simulation of water management problems in river basins are explained. It was found that different evolutionary algorithms are suitable for different problems; therefore, appropriate algorithms are suggested for different methodologies and applications based on the results of previous studies reviewed. It is concluded that evolutionary algorithms, with wide applications in water resources management, are viable and easy-to-use algorithms for most of the applications. The results suggest that evolutionary algorithms, applied in the right application areas, can deliver superior solutions for river basin management, especially in reservoir operations, irrigation planning and management, streamflow forecasting and real-time applications. Future directions for this work are suggested.
This study will assist decision makers and stakeholders on the best evolutionary algorithm to use in varied optimization issues in water resources management.
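As a minimal illustration of how an evolutionary algorithm can be applied to reservoir operation, the sketch below evolves a monthly release schedule that tracks demand subject to storage bounds; the penalty weights, mutation scale and population settings are illustrative assumptions, not drawn from the reviewed studies.

```python
import random

def evolve_releases(inflow, capacity, demand, pop=40, gens=200, seed=1):
    """Minimal elitist evolutionary search for a release schedule that meets
    demand without violating storage bounds (all weights illustrative)."""
    rng = random.Random(seed)
    n = len(inflow)

    def cost(releases):
        storage, penalty, deficit = capacity / 2, 0.0, 0.0
        for q_in, r, d in zip(inflow, releases, demand):
            storage += q_in - r
            clamped = max(0.0, min(storage, capacity))
            penalty += abs(storage - clamped) * 10.0  # storage-bound violation
            storage = clamped
            deficit += max(0.0, d - r)                # unmet demand
        return deficit + penalty

    population = [[rng.uniform(0, max(demand)) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        parents = population[: pop // 2]              # elitist selection
        children = [[max(0.0, g + rng.gauss(0, 0.5)) for g in p]
                    for p in parents]                 # Gaussian mutation
        population = parents + children
    best = min(population, key=cost)
    return best, cost(best)
```

With a constant inflow slightly above demand, the search quickly drives the demand deficit toward zero while keeping storage within bounds.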

Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management

Procedia PDF Downloads 473
688 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
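As a baseline for the clustering techniques discussed above, the sketch below implements plain K-means on a genes-by-samples expression matrix; the deterministic initialization is a simplification for illustration, not the approach evaluated in the paper.

```python
import numpy as np

def kmeans(x, k=2, iters=50):
    """Plain k-means on rows of x (genes x samples). Initialization simply
    spreads seeds across the row index range (illustrative simplification)."""
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].astype(float)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # assign each gene profile to its nearest center
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old center if a cluster empties
        new = np.array([x[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

On two well-separated groups of expression profiles, the two clusters recover the groups exactly.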

Keywords: microarray technology, gene expression data, clustering, gene Selection

Procedia PDF Downloads 307
687 Localized Detection of ᴅ-Serine by Using an Enzymatic Amperometric Biosensor and Scanning Electrochemical Microscopy

Authors: David Polcari, Samuel C. Perry, Loredano Pollegioni, Matthias Geissler, Janine Mauzeroll

Abstract:

ᴅ-serine acts as an endogenous co-agonist for N-methyl-ᴅ-aspartate receptors in neuronal synapses. This makes it a key component in the development and function of a healthy brain, especially given its role in several neurodegenerative diseases such as Alzheimer’s disease and dementia. Despite such clear research motivations, the primary site and mechanism of ᴅ-serine release are still unclear. For this reason, we are developing a biosensor for the detection of ᴅ-serine utilizing a microelectrode in combination with a ᴅ-amino acid oxidase enzyme, which produces stoichiometric quantities of hydrogen peroxide in response to ᴅ-serine. For the fabrication of a biosensor with good selectivity, we use a permselective poly(meta-phenylenediamine) film to ensure that only the target molecule reacts, according to the size-exclusion principle. In this work, we investigated the effect of the electrodeposition conditions on the biosensor’s response time and selectivity. Careful optimization of the fabrication process enhanced the biosensor response time, allowing real-time sensing of ᴅ-serine in bulk solution and also providing a means to map the efflux of ᴅ-serine in real time. This was done using scanning electrochemical microscopy (SECM) with the optimized biosensor to measure localized release of ᴅ-serine from an agar-filled glass capillary sealed in an epoxy puck, which acted as a model system. The SECM area scan simultaneously provided information on the rate of ᴅ-serine flux from the model substrate and on the size of the substrate itself. This SECM methodology, which provides high spatial and temporal resolution, could be useful for investigating the primary site and mechanism of ᴅ-serine release in other biological samples.
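Amperometric biosensors of this kind are typically read out through a linear calibration between measured current and analyte concentration. A hedged sketch of that readout step (the function names, units and the linearity assumption are ours, not from the study):

```python
import numpy as np

def calibrate(conc, current):
    """Least-squares fit of current = s * conc + b from standards of
    known concentration (assumes a linear sensor response)."""
    s, b = np.polyfit(conc, current, 1)
    return s, b

def concentration(i_meas, s, b):
    """Invert the calibration line to read concentration from a current."""
    return (i_meas - b) / s
```

With standards at 0-40 μM, an unknown sample's current is mapped straight back to a ᴅ-serine concentration estimate.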

Keywords: ᴅ-serine, enzymatic biosensor, microelectrode, scanning electrochemical microscopy

Procedia PDF Downloads 213
686 Design and Development of an 'Optimisation Controller' and a SCADA Based Monitoring System for Renewable Energy Management in Telecom Towers

Authors: M. Sundaram, H. R. Sanath Kumar, A. Ramprakash

Abstract:

Energy saving is a key sustainability focus area for the Indian telecom industry today. This is especially true in rural India, where energy consumption contributes 70% of the total network operating cost; in urban areas, the energy cost of network operation ranges between 15-30%. This expenditure on energy, a result of the lack of grid power availability, highlights a potential barrier to telecom industry growth. As a consequence, telecom tower companies switch to diesel generators, making them the second largest consumer of diesel in India, consuming over 2.5 billion litres per annum. The growing cost of energy due to increasing diesel prices and concerns over rising greenhouse emissions have caused these companies to look at renewable energy options. The TRAI (Telecom Regulatory Authority of India) has issued a number of guidelines to implement Renewable Energy Technologies (RETs) in telecom towers as part of its ‘Implementation of Green Technologies in Telecom Sector’ initiative. Our proposal suggests the implementation of a Programmable Logic Controller (PLC) based ‘optimisation controller’ that can not only efficiently utilize the energy from RETs but also help to conserve the power used in the telecom towers. When multiple RETs are available to supply energy, this controller picks the optimum amount of energy from each RET based on availability and feasibility at that point in time, reducing the dependence on diesel generators. For effective maintenance of the towers, we are planning to implement a SCADA-based monitoring system along with the ‘optimisation controller’.
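The source-selection logic of such an ‘optimisation controller’ can be sketched as a greedy merit-order dispatch: draw from the cheapest available RET first and fall back to diesel only for the remainder. The source names and costs below are illustrative assumptions, not values from the proposal.

```python
def dispatch(load_kw, sources):
    """Greedy energy-mix selection: serve the load from the cheapest
    available source first; diesel covers any shortfall. `sources` is a
    list of (name, available_kw, cost_per_kwh) tuples (illustrative)."""
    plan, remaining = [], load_kw
    for name, avail, cost in sorted(sources, key=lambda s: s[2]):
        take = min(avail, remaining)
        if take > 0:
            plan.append((name, take))
            remaining -= take
    if remaining > 0:
        plan.append(("diesel", remaining))
    return plan
```

A real PLC implementation would re-run this decision each control cycle as RET availability changes.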

Keywords: operation costs, consumption of fuel and carbon footprint, implementation of a programmable logic controller (PLC) based ‘optimisation controller’, efficient SCADA based monitoring system

Procedia PDF Downloads 405
685 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm

Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz

Abstract:

Controlling wind-induced vibrations, as well as aerodynamic forces, is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, were searched according to the optimization criterion of minimizing the root mean square (RMS) response of displacements at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. The methodology was applied to a case study derived from a 37-story prestressed concrete building of 144 m height, in which the wind action governs over the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS response of displacements by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
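The Monte Carlo wind simulation described above, a sum of harmonics with random phases whose amplitudes follow the Davenport spectrum, can be sketched as follows; the mean wind speed, drag-coefficient value and frequency grid are illustrative assumptions, not the case-study parameters.

```python
import math, random

def davenport(f, v10=30.0, kappa=0.005):
    """Davenport along-wind gust spectrum S_u(f); v10 is the mean speed at
    10 m and kappa a surface drag coefficient (illustrative values)."""
    x = 1200.0 * f / v10
    return 4.0 * kappa * v10 ** 2 * x ** 2 / (f * (1.0 + x ** 2) ** (4.0 / 3.0))

def wind_series(times, spectrum, n_harmonics=11, seed=0):
    """Fluctuating wind record as a sum of harmonics with random phases,
    amplitudes set so each harmonic carries the spectral power in its band."""
    rng = random.Random(seed)
    freqs = [0.01 * (k + 1) for k in range(n_harmonics)]   # Hz, evenly spaced
    df = freqs[1] - freqs[0]
    amps = [math.sqrt(2.0 * spectrum(f) * df) for f in freqs]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    return [sum(a * math.cos(2.0 * math.pi * f * t + p)
                for a, f, p in zip(amps, freqs, phases)) for t in times]
```

The resulting record is the zero-mean gust component; it would be superposed on the mean wind speed before computing loads.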

Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations

Procedia PDF Downloads 122
684 IOT Based Automated Production and Control System for Clean Water Filtration Through Solar Energy Operated by Submersible Water Pump

Authors: Musse Mohamud Ahmed, Tina Linda Achilles, Mohammad Kamrul Hasan

Abstract:

Deterioration of nature is evident these days, with a clear danger of human catastrophe emanating from greenhouse gases (GHG) as CO2 emissions to the environment increase. PV technology can help to reduce the dependency on fossil fuel, decreasing air pollution and slowing down the rate of global warming. The objective of this paper is to propose, design and develop the production of a clean water supply to rural communities using an appropriate technology, such as the Internet of Things (IoT), that does not create any CO2 emissions. Additional goals are to maximize solar power output and, reciprocally, to minimize the impact of the natural intermittency of solar sources during periods of low sunlight. The paper presents the development of a critical automated control system for solar energy power output optimization using several new techniques. A water pumping system is developed to supply clean water through the application of IoT and renewable energy: the system can provide a clean water supply to remote and off-grid areas using photovoltaic (PV) technology that collects energy generated from sunlight. The focus of this work is to design and develop a submersible solar water pumping system with an IoT implementation. The system has been executed and programmed using the Arduino Software (IDE), Proteus, MATLAB and the C++ programming language. The mechanism of the system is to pump water from a reservoir, powered by solar energy, with clean water production incorporated through a filtration system attached to the submersible solar water pumping system. The filtering system is an additional application platform intended to provide a clean water supply to households in Sarawak State, Malaysia.
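The pump-control decision at the heart of such a system can be sketched as a simple rule: run the submersible pump only when the tank needs water and either the PV array or the battery can carry the pump load. The thresholds and signal names below are illustrative assumptions, not from the paper.

```python
def pump_command(tank_level, tank_full, pv_power_w, pump_rating_w, battery_ok):
    """Return True if the pump should run: the tank is below a refill
    threshold (90% here, illustrative) and either instantaneous PV power
    covers the pump rating or the battery is healthy enough to do so."""
    needs_water = tank_level < 0.9 * tank_full
    has_power = pv_power_w >= pump_rating_w or battery_ok
    return needs_water and has_power
```

On the actual hardware this rule would live in the Arduino loop, with the tank level and PV power read from sensors over the IoT link.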

Keywords: IOT, automated production and control system, water filtration, automated submersible water pump, solar energy

Procedia PDF Downloads 70
683 Study of Potato Cyst Nematodes (Globodera Rostochiensis, Globodera pallida) in Georgia

Authors: Ekatereine Abashidze, Nino Nazarashvili, Dali Gaganidze, Oleg Gorgadze, Mariam Aznarashvili, Eter Gvritishvili

Abstract:

Potato is one of the leading agricultural crops in Georgia. Georgia produces early and late potato varieties in almost all regions. Potato production amounts to 25,000 ha, with an average yield of 20-25 t/ha. Among the plant pests that limit potato production and quality, the potato cyst nematodes (Globodera pallida (Stone) Behrens and Globodera rostochiensis (Wollenweber) Behrens) are harmful around the world. PCN are among the most difficult plant pests to control: cysts protected by a durable wall can survive for over 30 years. Control of PCN (G. pallida and G. rostochiensis) is regulated by Council Directive 2007/33/EC. There was no legislative regulation of these pests in Georgia before 2016. By Resolution #302 of July 1, 2016, developed within the action plan of the DCFTA (Deep and Comprehensive Free Trade Area), the Government of Georgia established control over potato cyst nematodes. The agreement on the approximation of legal acts to EU legislation concerns the approval of rules for PCN control and research on these pests. Taking the above into consideration, it is necessary to study PCN (G. pallida and G. rostochiensis) in the potato-growing areas of Georgia. The aim of this research is to survey potato cyst nematodes (G. rostochiensis and G. pallida) in two geographically distinct regions of Georgia, Samtskhe-Javakheti and Svaneti, and to identify the species G. rostochiensis and G. pallida by morphological-morphometric and molecular methods. Soil samples were taken in each village, in a zig-zag pattern on the potato fields of the private sector, using the Metlitsky method. Samples were also taken from infested potato plant roots. To extract nematode cysts from the soil samples, a Fenwick can was used according to standard EPPO methods. Cysts were measured under a stereoscopic microscope (Leica M50).
Identification of the nematode species was carried out according to the morphological and morphometric characteristics of the cysts and larvae, using the appropriate EPPO protocols. For molecular identification, a multiplex PCR test was performed using the universal ITS5 primer and primers specific to the cyst nematodes (G. pallida, G. rostochiensis). To identify the species of potato cyst nematodes (PCN) in the two regions (Samtskhe-Javakheti and Svaneti), 200 samples were taken: 80 in the Samtskhe-Javakheti region and 120 in the Svaneti region. Cysts of Globodera spp. were revealed in 50 samples from Samtskhe-Javakheti and 80 samples from Svaneti. Morphological, morphometric and molecular analysis of the two forms of PCN found in the investigated regions of Georgia shows that one form belongs to G. rostochiensis; the second form is a different Globodera species and is the subject of future research. Despite the different geographic locations, larvae and cysts of G. rostochiensis were found in both regions, whereas cysts and larvae of G. pallida were not detected. Acknowledgement: The research has been supported by the Shota Rustaveli National Scientific Foundation of Georgia: Project # FR17_235.

Keywords: cyst nematode, globodera rostochiensis, globodera pallida, morphologic-morphometric measurement

Procedia PDF Downloads 186
682 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices

Authors: Neda Nabid

Abstract:

Friction-based supplemental devices have been extensively used for seismic protection and strengthening of structures; however, the conventional use of these dampers does not necessarily lead to an efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip-load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle structural damage localization in RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of uniform damage distribution. According to this concept, the slip-load values of the friction dampers are redistributed and shifted from stories with lower relative displacement demand to stories with higher inter-story drifts, to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead to, on average, up to 40% reduction in maximum inter-story drift and a considerably more uniform height-wise distribution of relative displacement demands under the design earthquakes.
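The slip-load redistribution at the core of the methodology can be sketched as shifting capacity toward stories with larger inter-story drift while preserving the total slip load; the proportional rule below is an illustrative simplification of the uniform-damage concept, not the paper's exact update scheme.

```python
def redistribute_slip_loads(slip_loads, drifts):
    """Reassign damper slip loads in proportion to each story's inter-story
    drift demand, keeping the total installed slip load constant."""
    total = sum(slip_loads)
    weight = sum(drifts)
    return [total * d / weight for d in drifts]
```

Starting from a uniform distribution, stories with peak drift receive the largest share, which is the direction of the adjustment described above; in practice this step would be iterated with re-analysis until drifts even out.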

Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage

Procedia PDF Downloads 210
681 Hybrid Energy System for the German Mining Industry: An Optimized Model

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, and the negative environmental impact of fossil energy have stimulated the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, integration of RE may give a mine economic benefits. The literature review reveals a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany is included in the list of the 15 major mining countries in the world; accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to establish the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connections, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed to demonstrate environmental and economic benefits. The HES for the GMI combines wind turbine, solar PV, battery and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, the CO2 emission reduction, is estimated in order to make mining processes more environmentally friendly.
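A hybrid energy system of the kind modeled in HOMER can be sketched as an hour-by-hour dispatch: renewables serve the load first, surplus charges the battery, and diesel covers any residual demand. The sketch below, including the diesel emission factor, uses illustrative values rather than results from the study.

```python
def simulate_hes(load, solar, wind, batt_cap, diesel_ef=0.7):
    """Hour-by-hour dispatch of a PV+wind+battery+diesel hybrid.
    load/solar/wind are kWh-per-hour series; batt_cap in kWh; diesel_ef is
    an illustrative emission factor in kg CO2 per kWh of diesel generation.
    Returns (diesel_kwh_used, co2_kg)."""
    soc, diesel_kwh = batt_cap / 2, 0.0   # start half-charged
    for l, s, w in zip(load, solar, wind):
        net = l - s - w
        if net <= 0:
            soc = min(batt_cap, soc - net)    # charge battery with surplus
        else:
            from_batt = min(soc, net)         # discharge battery first
            soc -= from_batt
            diesel_kwh += net - from_batt     # diesel covers the residual
    return diesel_kwh, diesel_kwh * diesel_ef
```

Comparing the returned diesel energy against an all-diesel baseline gives the CO2 emission reduction figure the model targets.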

Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy

Procedia PDF Downloads 329
680 Analysing the Interactive Effects of Factors Influencing Sand Production on Drawdown Time in High Viscosity Reservoirs

Authors: Gerald Gwamba, Bo Zhou, Yajun Song, Dong Changyin

Abstract:

The challenges that sand production presents to the oil and gas industry, particularly when working in poorly consolidated reservoirs, cannot be overstated. From restricting production to blocking production tubing, sand production increases production costs, as it raises the cost of servicing production equipment over time. Production in reservoirs with high viscosity, flow rate, cementation, clay content, and fine sand content is even more complex and challenging. As opposed to one-factor-at-a-time testing, investigating the interactive effects arising from a combination of several factors offers greater reliability of results as well as a better representation of actual field conditions. It is thus paramount to investigate the conditions leading to the onset of sanding during production, to ensure the future sustainability of hydrocarbon production operations under viscous conditions. We adopt the Design of Experiments (DOE) approach, using Taguchi factorial designs, to analyse the most significant interactive effects on sanding. We propose an optimized regression model to predict the drawdown time at sand production. The results obtained underscore that reservoirs characterized by varying (high and low) levels of viscosity, flow rate, cementation, clay, and fine sand content differ in their resulting sand production. The only significant interactive effect recorded arises from the BD interaction (fine sand content and flow rate), while the main effects include fluid viscosity and cementation, with percentage significances of 31.3%, 37.76%, and 30.94%, respectively. The drawdown time model presented could be useful for predicting the time to reach the maximum drawdown pressure at the onset of sand production under viscous conditions.
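The shape of such a model can be sketched as an ordinary least-squares fit over coded (+1/−1) factor levels with one two-factor interaction column. The design, run count, and response values below are hypothetical, not the authors' data:

```python
# Minimal sketch of the kind of regression described above: drawdown time
# as a function of coded (+1/-1) factor levels with one two-factor
# interaction term (here labelled B*D: flow rate x fine sand content).
# The fractional-factorial design and responses are hypothetical.
import numpy as np

# Coded levels for 8 hypothetical runs: viscosity (A), flow rate (B),
# cementation (C), fine sand content (D); D is aliased as D = ABC.
X_factors = np.array([
    [-1, -1, -1, -1], [ 1, -1, -1,  1], [-1,  1, -1,  1], [ 1,  1, -1, -1],
    [-1, -1,  1,  1], [ 1, -1,  1, -1], [-1,  1,  1, -1], [ 1,  1,  1,  1],
])
y = np.array([12.0, 15.0, 9.0, 14.0, 16.0, 18.0, 11.0, 13.0])  # drawdown times

# Design matrix: intercept, four main effects, and the B*D interaction
bd = X_factors[:, 1] * X_factors[:, 3]
X = np.column_stack([np.ones(len(y)), X_factors, bd])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "A", "B", "C", "D", "BD"], coef.round(3))))
```

Because the coded design is orthogonal, the intercept equals the mean response and each coefficient is half the classical factor effect, which is what a Taguchi-style analysis ranks for significance.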

Keywords: factorial designs, DOE optimization, sand production prediction, drawdown time, regression model

Procedia PDF Downloads 133
679 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to the FAA report, 142,603 bird strikes were reported over a 24-year period between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and endanger lives. The statistics show that most bird strikes occur on the nose and the leading edge of the wings. In addition, a substantial number of bird strikes are absorbed by the jet engines, causing damage to blades and the engine body. Crash-proof designs are required to eliminate the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase is of considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since full-scale tests are often required for bird strike. Therefore, validated numerical models are needed that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options were studied for an impact case against a primitive structure. Then, a representative bird model was generated with the verified parameters and collided against the leading edge of a training aircraft wing, with each structural member of the wing explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations. The smoothed particle hydrodynamics (SPH) methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.
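The core of the SPH idea used for the bird is that field quantities are smoothed over neighbouring particles with a kernel W(r, h). A minimal sketch of the standard cubic-spline kernel and the density summation it supports (this illustrates the general method, not LS-DYNA's internal implementation):

```python
# Hedged sketch of the SPH building blocks: the 3D cubic-spline kernel
# (Monaghan form) and the particle density summation rho_i = sum_j m_j W.
import math

def cubic_spline_w(r, h):
    """Cubic-spline smoothing kernel in 3D; support radius is 2h."""
    sigma = 1.0 / (math.pi * h ** 3)   # 3D normalisation constant
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(i, positions, masses, h):
    """SPH density estimate at particle i (positions are 3-tuples)."""
    xi = positions[i]
    return sum(m * cubic_spline_w(math.dist(xi, xj), h)
               for xj, m in zip(positions, masses))
```

In a bird-strike model the bird is discretized into such particles with a soft-body (fluid-like) equation of state, so it can deform and spread over the leading edge without the mesh distortion a Lagrangian finite element bird would suffer.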

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 306
678 Metabolic Manipulation as a Strategy for Optimization of Biomass Productivity and Oil Content in the Microalgae Desmodesmus Sp.

Authors: Ivan A. Sandoval Salazar, Silvia F. Valderrama

Abstract:

Microalgal oil is emerging as a promising source of raw material for many industrial applications. This study therefore focused on laboratory-scale cultivation of the microalga Desmodesmus sp. with a view to maximizing biomass production and the triglyceride content of the lipid fraction. Initially, culture conditions were selected to optimize biomass production; the culture was subsequently subjected to nutritional stress, by varying nitrate and phosphate concentrations, in order to increase fatty acid content and productivity. The BOLD 3N culture medium, nitrate and phosphate concentrations, light intensities of 250, 500, and 1000 μmol photons·m⁻²·s⁻¹, and a 12:12 photoperiod were evaluated. Under the best test conditions, a maximum cell division rate of 1.13 div·day⁻¹ was obtained on the sixth day of culture, at the beginning of the exponential phase, and a maximum concentration of 8.42×10⁷ cells·mL⁻¹ and a dry biomass of 3.49 g·L⁻¹ on the 20th day, in the stationary phase. The lipid content in the first stage of culture was approximately 8% after 12 days, and at the end of the culture, in the stationary phase (20 days), it ranged from 12% to 16%. In microalgae grown at 250 μmol photons·m⁻²·s⁻¹, the fatty acid profile was mostly polyunsaturated (52%). The total unsaturated fatty acids identified in this microalgal species reached values between 70% and 75%, qualifying the oil for use in the food and pharmaceutical industries. In addition, this study showed that the cultivation conditions mainly influenced the production of polyunsaturated fatty acids, with a predominance of γ-linolenic acid. However, in cultures subjected to the highest light intensity (1000 μmol photons·m⁻²·s⁻¹) and low concentrations of nitrate and phosphate, mainly (60 to 70%) saturated and monounsaturated fatty acids, which have greater oxidative stability, were identified, qualifying the oil for biodiesel production and for oleochemistry.
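A division rate like the 1.13 div·day⁻¹ reported above is conventionally derived from successive cell counts as the base-2 log of the concentration ratio per unit time. A small sketch of that calculation (the cell counts below are hypothetical, chosen only to reproduce a rate of that magnitude):

```python
# Quick sketch of how a cell division rate (div/day) is typically
# computed from two cell-count measurements; the counts are hypothetical.
import math

def divisions_per_day(n0, n1, days):
    """Divisions per day: k = log2(N1 / N0) / delta_t."""
    return math.log2(n1 / n0) / days

# e.g. a culture growing from 1.00e6 to 2.19e6 cells/mL in one day
print(round(divisions_per_day(1e6, 2.19e6, 1.0), 2))
```

The same quantity is often reported via the specific growth rate μ = ln(N1/N0)/Δt, with k = μ/ln 2.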

Keywords: microalgae, Desmodesmus sp, fatty acids, biodiesel

Procedia PDF Downloads 130