Search results for: geochemical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3979

2689 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation

Authors: Zeynep Yazicioglu

Abstract:

Project development is a complex process in which many stakeholders work together. Employers and main contractors are the core stakeholders, while designers, engineers, sub-contractors, suppliers, supervisors, and consultants are the others. The combination of a complex building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software for Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that every stakeholder should consider for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determining the final construction cost. Its adoption is a process spreading over several years, during which all stakeholders associated with the project and its construction gradually come to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study summarizing the author's practical experience with design offices and construction firms working with BIM. Three challenges of the transition period to BIM are examined: 1. the compatibility of supplier companies with BIM, 2. the continuing need for two-dimensional drawings, 3. contractual issues related to BIM. The paper reviews the literature on BIM usage and the challenges of the transition stage. Even on an international scale, suppliers that can work in harmony with BIM are not yet common, which means that the transition to BIM is still ongoing. In parallel, employers, local approval authorities, and material suppliers still require two-dimensional drawings. In the BIM environment, different stakeholders can work on the same project simultaneously, which gives rise to design-ownership issues. Practical applications and the problems encountered are also discussed, and a number of suggestions for the future are provided.

Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project

Procedia PDF Downloads 97
2688 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing a 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within the individual departments' facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) have a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as appliances turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
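The abstract does not spell out the anomaly-detection rule itself; the fragment below is only a minimal sketch of the kind of occupancy-versus-consumption check described above (equipment drawing power while a room is unoccupied, or actual use far above the simulated standard). All field names, thresholds, and sample records are illustrative assumptions, not part of the BIM-EMSS.

```python
# Minimal sketch of the occupancy-vs-consumption anomaly check described above.
# All field names, thresholds, and sample records are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RoomReading:
    room_id: str
    occupied: bool          # from the occupancy sensor
    socket_kw: float        # logged socket electricity load (kW)
    hvac_kw: float          # logged HVAC load (kW)
    standard_kw: float      # simulated "standard" consumption (e.g., from eQuest)

def detect_anomalies(readings, idle_threshold_kw=0.1, overuse_ratio=1.5):
    """Flag rooms where equipment runs without occupancy, or where actual
    consumption exceeds the simulated standard by a large margin."""
    warnings = []
    for r in readings:
        actual_kw = r.socket_kw + r.hvac_kw
        if not r.occupied and actual_kw > idle_threshold_kw:
            warnings.append((r.room_id, "equipment on without occupancy"))
        if r.standard_kw > 0 and actual_kw > overuse_ratio * r.standard_kw:
            warnings.append((r.room_id, "actual consumption well above standard"))
    return warnings

if __name__ == "__main__":
    sample = [
        RoomReading("Lab-301", occupied=False, socket_kw=0.8, hvac_kw=1.2, standard_kw=0.2),
        RoomReading("Office-102", occupied=True, socket_kw=0.3, hvac_kw=0.9, standard_kw=1.5),
    ]
    for room, msg in detect_anomalies(sample):
        print(f"[warning] {room}: {msg}")
```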

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 291
2687 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications which make non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival time for action potentials traversing the demyelinated and normally myelinated nerve fibres even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it thus paving the way for wearable neuro-rehabilitative technologies.
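The abstract reports that action potentials crossing demyelinated internodes arrive measurably later than those on a healthy fibre, and that the pulse timing is designed from this delay. The toy calculation below sketches only that arrival-time comparison; the conduction velocities, segment lengths, and the velocity-per-radius scaling are rough textbook-style assumptions, not values from the paper, which uses a modified Hodgkin-Huxley cable model.

```python
# Toy comparison of action-potential arrival times on a healthy vs. partially
# demyelinated fibre. Velocities and lengths are illustrative assumptions; the
# paper itself uses a modified Hodgkin-Huxley model of the fibre.

import random

def conduction_velocity(radius_um, demyelinated):
    # Rough saltatory-conduction scaling: ~6 m/s per micrometre of radius when
    # myelinated, and much slower continuous conduction when demyelinated.
    return 0.5 if demyelinated else 6.0 * radius_um  # m/s

def arrival_time(radius_um, segments):
    """segments: list of (length_mm, demyelinated) between two fixed points."""
    t = 0.0
    for length_mm, demyel in segments:
        t += (length_mm * 1e-3) / conduction_velocity(radius_um, demyel)
    return t  # seconds

def random_demyelination(n_segments=20, seg_len_mm=1.0, n_lesions=3, seed=0):
    lesions = set(random.Random(seed).sample(range(n_segments), n_lesions))
    return [(seg_len_mm, i in lesions) for i in range(n_segments)]

if __name__ == "__main__":
    healthy = [(1.0, False)] * 20
    lesioned = random_demyelination()
    for radius in (5, 8, 12):                      # um, the range used in the study
        t_h = arrival_time(radius, healthy)
        t_d = arrival_time(radius, lesioned)
        print(f"radius {radius:2d} um: healthy {t_h*1e3:.3f} ms, "
              f"demyelinated {t_d*1e3:.3f} ms, delay {(t_d - t_h)*1e3:.3f} ms")
```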

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 114
2686 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges

Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt

Abstract:

Vegetative filter strips (VFS) are used to control runoff volume and decrease contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard. Several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and to assess the efficiency of the model in simulating filter behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges, and rainfall events. The sediment delivery ratio (SDR) was defined as the filter efficiency, and the corresponding experimental and model values were compared to each other. VFSMOD generally agreed with the experimental results and, as a result, the model was used to predict filter efficiencies when runoff data are not available. Narrow grass hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs for different soil types and filter lengths. There is an optimum length for the filter strip that produces the minimum runoff volume: based on the model results, increasing the length of the filter by 1 meter leads to higher efficiency, but widening it beyond that decreases the efficiency. VFSMOD, which has been shown to work well in estimating VFS trapping efficiency, produced consistent results for ARG removal.
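The comparison above hinges on the sediment delivery ratio, the ratio of sediment mass leaving the hedge to sediment mass entering it. A short sketch of that model-versus-experiment comparison is given below; the event data are placeholders, not the Rogers Memorial Farm measurements.

```python
# Sketch of the model-vs-experiment comparison on the sediment delivery ratio
# (SDR = sediment mass leaving the hedge / sediment mass entering it).
# The event values below are placeholders, not the field data from the study.

def sediment_delivery_ratio(mass_in_kg, mass_out_kg):
    return mass_out_kg / mass_in_kg

def relative_error(modeled, measured):
    return abs(modeled - measured) / measured

if __name__ == "__main__":
    events = [
        # (event, measured inflow kg, measured outflow kg, VFSMOD-predicted SDR)
        ("rain 1", 12.0, 3.1, 0.28),
        ("rain 2", 18.5, 5.9, 0.30),
        ("rain 3", 9.4, 2.2, 0.21),
    ]
    for name, m_in, m_out, sdr_model in events:
        sdr_obs = sediment_delivery_ratio(m_in, m_out)
        err = relative_error(sdr_model, sdr_obs)
        print(f"{name}: observed SDR {sdr_obs:.2f}, modeled {sdr_model:.2f}, "
              f"relative error {err:.1%}")
```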

Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system

Procedia PDF Downloads 124
2685 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only executing mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data and should consider the use of the computer. The activity was designed following the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the individual and team reports of the students. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making). From the results obtained with the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population from the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of making decisions with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows both the interventions and the MEA to be modified so that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had little participation and low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, for example in plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
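For readers who want to see the kind of binomial decision calculation the MEA asks for, a comparable sketch in Python (scipy.stats.binom) is shown below. The acceptance-sampling scenario, the claimed defect rate, and the 5% decision rule are invented for illustration and are not taken from the MEA itself; in the course the students worked with RStudio rather than Python.

```python
# Illustrative binomial decision calculation based on sample data.
# The scenario (acceptance sampling of concrete specimens) and all numbers are
# invented; the MEA in the study used RStudio rather than Python.

from scipy.stats import binom

n = 30          # sample size (specimens tested)
p_claim = 0.10  # defect proportion claimed by the supplier
observed = 7    # defective specimens found in the sample

# Probability of seeing 'observed' or more defectives if the claim were true.
p_value = binom.sf(observed - 1, n, p_claim)
print(f"P(X >= {observed} | n={n}, p={p_claim}) = {p_value:.4f}")

# A simple decision rule: reject the supplier's claim at the 5% level.
if p_value < 0.05:
    print("Decision: the sample is inconsistent with the claimed defect rate.")
else:
    print("Decision: the sample does not contradict the claimed defect rate.")

# Exploring sample size: expected number and spread of defectives.
for n_try in (10, 30, 100):
    mean, var = binom.stats(n_try, p_claim, moments="mv")
    print(f"n={n_try:3d}: expected defectives {mean:.1f}, std {var**0.5:.2f}")
```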

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 110
2684 Modeling of Void Formation in 3D Woven Fabric During Resin Transfer Moulding

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Jan Kočí, Andy Long

Abstract:

Resin transfer molding (RTM) is increasingly used for manufacturing high-quality composite structures because of its advantage over prepregs of low-cost, out-of-autoclave processing. However, to retain this advantage, it is critical to reduce the void content during injection. Reinforcements commonly used in RTM, such as woven fabrics, have dual-scale porosity, with meso-scale pores between the yarns and micro-scale pores within the yarns. Due to the fabric geometry and the nature of the dual-scale flow, the flow front during injection develops a complicated fingering pattern that leads to void formation. Analytical modeling of void formation in woven fabrics has been widely studied elsewhere. However, there is scope for improvement in reducing void formation in 3D fabrics, in which the in-plane yarn layers are confined by additional through-thickness binder yarns. In the present study, the structural morphology of the tortuous pore spaces in the 3D fabric has been studied and implemented using the open-source software TexGen. An analytical model for void and fingering formation has been implemented based on an idealized unit-cell model of the 3D fabric. Since the pore spaces between the yarns are free domains, this region is treated as flow through connected channels, whereas intra-yarn flow has been modeled using Darcy's law with an additional term to account for capillary pressure. The void fraction has then been characterized using a void-formation criterion that compares the fill times for inter- and intra-yarn flow. Moreover, the dual-scale two-phase flow of resin and air has been simulated in the CFD solvers OpenFOAM/ANSYS to predict the probable locations of voids and to validate the analytical model. The idealized unit-cell model gives the insight needed to optimize the meso-scale geometry of the reinforcement and the injection parameters so as to minimize the void content during the LCM process.
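The void-formation criterion described above compares the time to fill the inter-yarn channels with the time to saturate the yarns (Darcy flow driven by the injection pressure plus a capillary term). A much-simplified version of that comparison is sketched below; the permeabilities, pressures, and cell dimensions are placeholders, not the TexGen unit-cell data of the study.

```python
# Simplified inter-/intra-yarn fill-time comparison used as a void-formation
# criterion. All material and geometric values below are placeholders; the
# study derives them from a TexGen unit cell of the 3D woven reinforcement.

def fill_time(porosity, viscosity, length, permeability, delta_p):
    """1D Darcy slug-flow fill time: t = phi * mu * L^2 / (2 * K * dP)."""
    return porosity * viscosity * length**2 / (2.0 * permeability * delta_p)

mu     = 0.1        # resin viscosity, Pa.s
dp     = 1.0e5      # injection pressure drop over the unit cell, Pa
p_cap  = 2.0e4      # capillary pressure assisting intra-yarn flow, Pa
L_cell = 5.0e-3     # unit-cell length, m

# Inter-yarn channels: high effective permeability, porosity ~1.
t_channel = fill_time(1.0, mu, L_cell, 1.0e-9, dp)

# Intra-yarn (tow) flow: low permeability, capillary pressure added to dP.
t_yarn = fill_time(0.3, mu, L_cell, 1.0e-13, dp + p_cap)

print(f"channel fill time: {t_channel:.4f} s, yarn fill time: {t_yarn:.2f} s")
if t_channel < t_yarn:
    print("Channels fill first -> air may be trapped inside yarns (micro-voids).")
else:
    print("Yarns fill first -> air may be trapped between yarns (meso-voids).")
```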

Keywords: 3D fiber, void formation, RTM, process modelling

Procedia PDF Downloads 83
2683 Surface Water Flow of Urban Areas and Sustainable Urban Planning

Authors: Sheetal Sharma

Abstract:

Urban planning is associated with land transformation from natural areas to modified and developed ones, which leads to modification of the natural environment. Basic knowledge of the relationship between the two should be established before the development of natural areas proceeds. Changes to the land surface due to built-up pavements, roads, and similar land cover affect surface water flow. There is a gap between urban planning and the basic knowledge of hydrological processes that planners should have. The paper aims to identify the variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and to correlate these findings with the urban planning guidelines of the study area, together with the geological background, in order to find suitable combinations of land cover, soil, and guidelines. To identify the changes in surface flow, 19 catchments were selected with different geology and different growth over 40 years, facing different groundwater level fluctuations. The increasing built-up area and the varying surface runoff were studied using ArcGIS and SWMM modeling, together with regression analysis of the runoff. The resulting runoff for various land covers and soil groups under varying built-up conditions was observed. The modeling procedure also included observations for varying precipitation with constant built-up area in all catchments. All these observations were combined for each catchment, and a single regression curve was obtained for runoff. It was observed that alluvium with a suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation. These observations therefore provide a keystone for planners when planning various land uses on different soils. The paper thus contributes a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.
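The single regression curve mentioned above relates runoff to catchment characteristics such as the built-up fraction. A small ordinary-least-squares sketch of that kind of regression is shown below; the catchment records are invented and do not come from the 19 study catchments or their SWMM output.

```python
# Sketch of a runoff regression of the kind described above: runoff coefficient
# regressed on built-up fraction with ordinary least squares (numpy).
# The records are invented; the study used 19 real catchments with SWMM output.

import numpy as np

# (built-up fraction, observed runoff coefficient) for hypothetical catchments
built_up = np.array([0.15, 0.25, 0.35, 0.50, 0.62, 0.75, 0.85])
runoff_c = np.array([0.18, 0.24, 0.33, 0.45, 0.55, 0.66, 0.74])

# Fit runoff_c = a * built_up + b
A = np.vstack([built_up, np.ones_like(built_up)]).T
(a, b), *_ = np.linalg.lstsq(A, runoff_c, rcond=None)

pred = a * built_up + b
r2 = 1.0 - np.sum((runoff_c - pred) ** 2) / np.sum((runoff_c - runoff_c.mean()) ** 2)
print(f"runoff coefficient ~ {a:.2f} * built-up fraction + {b:.2f}  (R^2 = {r2:.3f})")

# Such a curve lets a planner estimate the runoff expected for a proposed
# level of built-up cover on a given soil group.
print(f"expected runoff coefficient at 40% built-up: {a*0.40 + b:.2f}")
```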

Keywords: runoff, built up, roughness, recharge, temporal changes

Procedia PDF Downloads 266
2682 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
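As a rough illustration of the calibration scheme described above (gas-phase compositions as inputs, bottomhole temperature as target, RMSE and R as performance metrics), the sketch below trains a small feed-forward network with scikit-learn. Note that scikit-learn's MLPRegressor does not provide the Levenberg-Marquardt training used in the paper, and the synthetic data bear no relation to the authors' geothermal fluid database.

```python
# Rough sketch of calibrating an ANN gas geothermometer: inputs are ln(CO2),
# ln(H2S) and the ln(H2S/H2), ln(CO2/H2) ratios, target is bottomhole
# temperature (BHT). Synthetic data and scikit-learn training are used purely
# for illustration; the study used a measured fluid database and
# Levenberg-Marquardt optimization.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
ln_co2 = rng.uniform(3.0, 7.0, n)
ln_h2s = rng.uniform(0.0, 4.0, n)
ln_h2 = rng.uniform(-2.0, 2.0, n)
X = np.column_stack([ln_co2, ln_h2s, ln_h2s - ln_h2, ln_co2 - ln_h2])
X = (X - X.mean(axis=0)) / X.std(axis=0)            # simple standardization

# Invented relation + noise standing in for real BHT measurements (deg C).
bht = 150 + 20 * ln_co2 + 10 * ln_h2s - 5 * ln_h2 + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, bht, test_size=0.25, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
r = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE = {rmse:.1f} degC, linear correlation R = {r:.3f}")
```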

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 331
2681 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity

Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois

Abstract:

With increasing urbanization worldwide, it is crucial to manage sediment flows sustainably in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to reduce flood peak flows as much as possible while settling particles. It is therefore necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with the geometry, the inlet flow rate, and the water depth. As the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; this effect is usually ignored in investigations, but it deserves much more attention, since the influence of particle accumulation on sedimentation may be important. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then presented on the basis of modeling with a particle tracking method. It is shown that two recirculation zones seem to influence sediment deposition significantly. Due to the possible overestimation of particle trapping efficiency with standard wall functions and stick conditions, further investigation of basal boundary conditions based on turbulent kinetic energy and shear stress seems required. These observations are confirmed by experimental investigations carried out in the laboratory.

Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation

Procedia PDF Downloads 306
2680 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, modified atmosphere packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere through the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted estimation of food quality or of the shelf life gain reached by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food loss reduction. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. For this purpose, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on prediction of the visually deteriorated surface, encompassing colour, texture and spoilage development) with literature models for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the limit of acceptability for the consumers, permitting the products' shelf life to be defined. The 'Virtual MAP modeling tool' was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as functions of time, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as investigated numerically. Such a shelf life gain makes it possible to anticipate a significant reduction of food losses at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain has been quantified using a dedicated mathematical equation developed for this purpose.
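The core of the virtual tool is a headspace mass balance in which product respiration and film permeation set the O₂ and CO₂ profiles over time. A greatly simplified balance of that kind is sketched below; the respiration, permeability, and package parameters are generic placeholders rather than the calibrated strawberry/film values of the study, and total pressure is assumed constant.

```python
# Greatly simplified MAP headspace balance: headspace O2 and CO2 evolve under
# product respiration (Michaelis-Menten in O2) and permeation through the film.
# All parameter values are generic placeholders, not the calibrated
# strawberry/film data of the study; total pressure is assumed constant.

def simulate_map(hours=120.0, dt_s=60.0):
    R, T, p_atm = 8.314, 293.15, 101325.0
    v_head = 0.5e-3                  # headspace volume, m^3
    area, thickness = 0.03, 30e-6    # film area (m^2) and thickness (m)
    perm_o2, perm_co2 = 1.0e-15, 4.0e-15   # permeabilities, mol.m/(m^2.s.Pa)
    r_max, k_m = 2.0e-7, 5.0e3       # max O2 uptake (mol/s), half-saturation (Pa)
    rq = 1.0                         # respiratory quotient (CO2 produced / O2 used)

    n_tot = p_atm * v_head / (R * T)
    n_o2, n_co2 = 0.21 * n_tot, 0.0004 * n_tot
    t = 0.0
    while t < hours * 3600.0:
        p_o2 = n_o2 / n_tot * p_atm
        p_co2 = n_co2 / n_tot * p_atm
        resp = r_max * p_o2 / (k_m + p_o2)                        # O2 uptake
        j_o2 = perm_o2 * area / thickness * (0.21 * p_atm - p_o2)
        j_co2 = perm_co2 * area / thickness * (0.0004 * p_atm - p_co2)
        n_o2 = max(n_o2 + (j_o2 - resp) * dt_s, 0.0)
        n_co2 = max(n_co2 + (j_co2 + rq * resp) * dt_s, 0.0)
        t += dt_s
    return 100.0 * n_o2 / n_tot, 100.0 * n_co2 / n_tot

if __name__ == "__main__":
    o2_pct, co2_pct = simulate_map()
    print(f"headspace after 120 h: O2 = {o2_pct:.1f} %, CO2 = {co2_pct:.1f} %")
```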

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 172
2679 Geomechanical Numerical Modeling of Well Wall in Drilling with Finite Difference Method

Authors: Marzieh Zarei

Abstract:

Well instability is one of the most fundamental challenges faced by the oil and gas industry, and well wall stability analysis is a gap that remains to be filled. The collection of static data, such as well logs, leads to the construction of a geomechanical numerical model, which helps in assessing the probable risks in future drilling. In this paper, a geomechanical model was designed and the mechanical properties of the rock were determined at all points of the model. The safe mud window was determined, with the minimum and maximum mud pressures in the ranges of 60-70 MPa and 100-110 MPa, respectively.
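A safe mud window is simply the band of mud pressures bounded below by the collapse/pore pressure and above by the fracture pressure along the well. A hedged sketch of how such a window could be screened at discrete depths is given below; the profile only echoes the 60-70 MPa and 100-110 MPa ranges quoted above and is not the model's point-by-point output.

```python
# Sketch of a safe-mud-window check: at each depth the planned mud pressure
# must exceed the collapse/pore pressure and stay below the fracture pressure.
# The depth profile is illustrative, loosely consistent with the 60-70 MPa
# lower and 100-110 MPa upper bounds quoted in the abstract.

def check_mud_window(depth_points, mud_pressure):
    """depth_points: list of (depth_m, collapse_MPa, fracture_MPa)."""
    report = []
    for depth, p_collapse, p_fracture in depth_points:
        if mud_pressure < p_collapse:
            status = "UNSAFE: borehole collapse / kick risk"
        elif mud_pressure > p_fracture:
            status = "UNSAFE: formation fracture / mud loss risk"
        else:
            status = "safe"
        report.append((depth, status))
    return report

if __name__ == "__main__":
    profile = [(3000, 60.0, 110.0), (3200, 65.0, 105.0), (3400, 70.0, 100.0)]
    for p_mud in (55.0, 85.0, 115.0):
        print(f"mud pressure {p_mud} MPa:")
        for depth, status in check_mud_window(profile, p_mud):
            print(f"  {depth} m: {status}")
```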

Keywords: geomechanics, numerical model, well stability, in-situ stress, underbalanced drilling

Procedia PDF Downloads 109
2678 Factors Affecting Expectations and Intentions of University Students’ Mobile Phone Use in Educational Contexts

Authors: Davut Disci

Abstract:

Objective: To measure the factors affecting university students' expectations and intentions of using mobile phones in educational contexts, using advanced equation and modeling techniques. Design and Methodology: According to the literature, Mobile Addiction, Parental Surveillance-Safety/Security, Social Relations, and Mobile Behavior are the terms most used to define people's mobile use. Therefore, these variables were measured to find and estimate their effects on expectations and intentions of using the mobile phone in an educational context. 421 university students participated in this study, 229 female and 192 male. For the purpose of examining mobile behavior and educational expectations and intentions, a questionnaire was prepared and administered to the participants, who answered all the questions online. Furthermore, responses to the closed-ended questions were analyzed using the Statistical Package for the Social Sciences (SPSS) software, reliabilities were measured by Cronbach's alpha analysis, hypotheses were examined using multiple regression and linear regression analysis, and the model was tested with the Structural Equation Modeling (SEM) technique, which is important for testing the model scientifically. Besides these responses, the open-ended questions were taken into consideration. Results: When analyzing data gathered from the closed-ended questions, it was found that Mobile Addiction, Parental Surveillance, Social Relations, and Frequency of Using Mobile Phone Applications affect the mobile behavior of the participants at different levels, helping them to use the mobile phone in an educational context. Moreover, in the open-ended questions, participants stated that they use many mobile applications in their learning environment, for example for contacting friends, watching educational videos, and finding course material via the internet. They also agreed that the mobile phone brings greater flexibility to their lives. According to the SEM results, the model was not confirmed, and it may need to be improved so that it can be demonstrated in SEM as well as in multiple regression. Conclusion: This study shows that the specified model can be used by educationalists and school authorities to improve their learning environment.
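The reliability analysis mentioned above follows the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A small sketch of that computation with made-up item scores is given below; the study itself computed the reliabilities in SPSS.

```python
# Cronbach's alpha for a questionnaire scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The response matrix below is made up; the study computed alpha in SPSS.

import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    # five respondents answering a four-item "mobile addiction" scale (1-5 Likert)
    responses = [
        [4, 5, 4, 5],
        [2, 2, 3, 2],
        [5, 4, 5, 4],
        [3, 3, 2, 3],
        [1, 2, 1, 2],
    ]
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```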

Keywords: education, mobile behavior, mobile learning, technology, Turkey

Procedia PDF Downloads 410
2677 Deasphalting of Crude Oil by Extraction Method

Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov

Abstract:

Asphaltenes are the heavy fraction of crude oil. In the oilfield, asphaltenes are known for their ability to plug wells, surface equipment, and the pores of geological formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analyzed by ICP-MS, and the spectral features of deasphalting were characterized by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high content of heteroatoms (e.g., S, N) in asphaltenes also causes problems: environmental pollution, corrosion, and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and increase the efficiency of refining processes. Solvent extraction experiments with organic solvents were carried out on crude oil from JSC 'Pavlodar Oil Chemistry Refinery'. The experimental results show that the deasphalting process also decreases the Ni and V content of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide, and mercaptans is absorption with chemical reagents directly in the oil residue during production, because asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of selective refining of oils. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-rich asphaltene part of the crude. The oil is therefore pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e., removal of V/Ni and organic compounds with heteroatoms together with the asphaltenes. Intramolecular complexes are relatively well researched, for example the porphyrin complexes of vanadyl (VO²⁺) and nickel (Ni). As a result of ICP-MS studies of V/Ni, the effect of the different deasphalting solvents on metal extraction at the deasphalting stage was determined and the best organic solvent was selected. Cyclohexane (C₆H₁₂) proved to be the best deasphalting solvent, removing 51.2% of V and 66.4% of Ni according to ICP-MS. This paper also presents the results of a study of the physical and chemical properties and the FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics. It can be useful in considering questions of the origin and geochemical conditions of accumulation of the oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.

Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy

Procedia PDF Downloads 229
2676 Factors Affecting Expectations and Intentions of University Students in Educational Context

Authors: Davut Disci

Abstract:

Objective: To measure the factors affecting university students' expectations and intentions of using mobile phones in educational contexts, using advanced equation and modeling techniques. Design and Methodology: According to the literature, Mobile Addiction, Parental Surveillance-Safety/Security, Social Relations, and Mobile Behavior are the terms most used to define people's mobile use. Therefore, these variables were measured to find and estimate their effects on expectations and intentions of using the mobile phone in an educational context. 421 university students participated in this study, 229 female and 192 male. For the purpose of examining mobile behavior and educational expectations and intentions, a questionnaire was prepared and administered to the participants, who answered all the questions online. Furthermore, responses to the closed-ended questions were analyzed using the Statistical Package for the Social Sciences (SPSS) software, reliabilities were measured by Cronbach's alpha analysis, hypotheses were examined using multiple regression and linear regression analysis, and the model was tested with the Structural Equation Modeling (SEM) technique, which is important for testing the model scientifically. Besides these responses, the open-ended questions were taken into consideration. Results: When analyzing data gathered from the closed-ended questions, it was found that Mobile Addiction, Parental Surveillance, Social Relations, and Frequency of Using Mobile Phone Applications affect the mobile behavior of the participants at different levels, helping them to use the mobile phone in an educational context. Moreover, in the open-ended questions, participants stated that they use many mobile applications in their learning environment, for example for contacting friends, watching educational videos, and finding course material via the internet. They also agreed that the mobile phone brings greater flexibility to their lives. According to the SEM results, the model was not confirmed, and it may need to be improved so that it can be demonstrated in SEM as well as in multiple regression. Conclusion: This study shows that the specified model can be used by educationalists and school authorities to improve their learning environment.

Keywords: learning technology, instructional technology, mobile learning, technology

Procedia PDF Downloads 442
2675 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
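As a much-reduced illustration of the damage variables described above, the sketch below updates a single scalar tensile damage variable with an exponential softening law over a 1D strain-driven loading-unloading history. It is a didactic toy under invented material parameters, not the paper's 3D damage-plasticity formulation with confinement-dependent yield surface.

```python
# Toy 1D strain-driven damage update with exponential tensile softening:
#   sigma = (1 - d) * E * eps,  d = 1 - (eps0/kappa) * exp(-(kappa - eps0)/eps_f)
# kappa is the largest strain ever reached (irreversibility). This is only a
# didactic reduction, not the paper's 3D damage-plasticity model.

import math

E = 30e3        # Young's modulus, MPa (illustrative)
eps0 = 1.0e-4   # damage threshold strain
eps_f = 5.0e-4  # softening parameter

def damage(kappa):
    if kappa <= eps0:
        return 0.0
    return 1.0 - (eps0 / kappa) * math.exp(-(kappa - eps0) / eps_f)

def run_history(strain_history):
    kappa, out = 0.0, []
    for eps in strain_history:
        kappa = max(kappa, abs(eps))          # history variable never decreases
        d = damage(kappa)
        sigma = (1.0 - d) * E * eps           # degraded secant stiffness
        out.append((eps, d, sigma))
    return out

if __name__ == "__main__":
    history = [0.0, 1e-4, 3e-4, 1e-4, 4e-4, 0.0, 6e-4]   # load, unload, reload
    for eps, d, sigma in run_history(history):
        print(f"eps = {eps:.1e}  damage = {d:.3f}  stress = {sigma:6.2f} MPa")
```

The key feature reproduced here is that unloading and reloading follow the degraded stiffness set by the largest strain ever reached, which is how strength and stiffness deterioration under cyclic loading is tracked.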

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 155
2674 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, can be observed recently. Various demographic or technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure-related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competitions but computer games challenge – e-sport. The literature in this area so far focuses on determining the number of e-sport fans with elements of a simple statistical description (mainly concerning demographic characteristics such as age, gender, place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns upon selecting digitalized services by customers, which, in the absence of available large data sets, can be achieved by using econometric simulations – multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of behavioral patterns of customers taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of customers themselves and their environment) was developed, which allowed identifying a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities regarding the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and sensitivity analysis reveals crucial factors that have driven a rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, they are the level of familiarization with the rules of games as well as sports disciplines, active and passive participation history and individual perception of challenging activities. Environmental factors include general reception of games, number and level of recognition of community builders and the level of technological development of streaming as well as community building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. While in the initial interest phase, the entry barriers for new customers are high. They decrease during the phase of standardization and increase again in the phase of full professionalization when new customers perceive participation history inaccessible. In this case, they are prone to switch to new methods of service application – in the case of e-sport vs. sports to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding methods of their application from “traditional” to digitalized.

Keywords: agent-based modeling, digitalized services, e-sport, spectators motives

Procedia PDF Downloads 160
2673 Modeling and Control of an Acrobot Using MATLAB and Simulink

Authors: Dong Sang Yoo

Abstract:

The problem of finding control laws for underactuated systems has attracted growing attention since these systems are characterized by the fact that they have fewer actuators than the degrees of freedom to be controlled. The acrobot, which is a planar two-link robotic arm in the vertical plane with an actuator at the elbow but no actuator at the shoulder, is a representative of underactuated systems. In this paper, the dynamic model of the acrobot is implemented using Mathworks’ Simscape. And the sliding mode control is constructed using MATLAB and Simulink.

Keywords: acrobot, MATLAB and simulink, sliding mode control, underactuated system

Procedia PDF Downloads 780
2672 Mechanical Behavior of 16NC6 Steel Hardened by Burnishing

Authors: Litim Tarek, Taamallah Ouahiba

Abstract:

This work concerns the physico-geometrical state of the surface layers of 16NC6 steel treated by burnishing with a hard steel ball. The results show that the optimal effects of burnishing are closely linked to the shape and material of the active part of the device, as well as to the surface plastic deformation capacity of the material being treated. The roughness is thus improved by more than 70%, and the consolidation (work-hardening) rate is increased by 30%. In addition, modeling of the true stress-strain (tensile) curves gives a work-hardening coefficient of up to 0.3 in the presence of burnishing.
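The work-hardening coefficient quoted above is the exponent n of the usual Hollomon hardening law, sigma = K * eps^n, fitted to the true stress-strain curve. A small fitting sketch is given below; the data points are invented, not the 16NC6 measurements.

```python
# Fitting the Hollomon hardening law sigma = K * eps^n to a true stress-strain
# curve; n is the work-hardening coefficient (reported as up to ~0.3 after
# burnishing in the abstract). The data points below are invented.

import numpy as np

# invented true plastic strain / true stress (MPa) pairs
eps = np.array([0.01, 0.02, 0.05, 0.08, 0.12, 0.18])
sigma = np.array([355., 430., 575., 650., 745., 835.])

# ln(sigma) = ln(K) + n * ln(eps): a straight-line fit in log-log space
n, ln_k = np.polyfit(np.log(eps), np.log(sigma), 1)
K = np.exp(ln_k)
print(f"K = {K:.0f} MPa, work-hardening coefficient n = {n:.3f}")
```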

Keywords: 16NC6 steel, burnishing, hardening, roughness

Procedia PDF Downloads 148
2671 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user requirements in terms of latency, computing power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance a synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow allowing an automatic transformation from the Simulink model to the SDF graph and providing an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
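The scheduling step described above maps SDF actors to a bounded number of cores while keeping latency low. The sketch below shows one very common way to do that kind of mapping (topological ordering plus greedy earliest-finish core assignment); it is a generic illustration, not the Preesm-based scheduler the authors extended, and the task graph is invented.

```python
# Illustrative greedy list scheduler for a small task graph (nodes = actors,
# edges = data dependencies, weights = execution times). Each task is assigned
# to the core on which it can finish earliest. Generic sketch only; not the
# Preesm-based scheduler described in the paper.

from collections import defaultdict

def topological_order(tasks, deps):
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    succ = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    order, ready = [], [t for t, d in indeg.items() if d == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

def list_schedule(tasks, deps, n_cores):
    """tasks: {name: exec_time}; deps: {name: [predecessors]}."""
    core_free = [0.0] * n_cores
    finish, assignment = {}, {}
    for t in topological_order(tasks, deps):
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        core = min(range(n_cores), key=lambda c: max(core_free[c], ready))
        start = max(core_free[core], ready)
        finish[t] = start + tasks[t]
        core_free[core] = finish[t]
        assignment[t] = (core, start, finish[t])
    return assignment, max(finish.values())

if __name__ == "__main__":
    tasks = {"src": 1.0, "filt1": 3.0, "filt2": 3.0, "eq": 2.0, "sink": 1.0}
    deps = {"filt1": ["src"], "filt2": ["src"], "eq": ["filt1", "filt2"], "sink": ["eq"]}
    for cores in (1, 2):
        sched, latency = list_schedule(tasks, deps, cores)
        print(f"{cores} core(s): latency = {latency}")
        for t, (c, s, f) in sorted(sched.items(), key=lambda kv: kv[1][1]):
            print(f"  {t:5s} on core {c}: {s:.1f} -> {f:.1f}")
```

Running the example shows the latency dropping as a second core is added, which is the kind of cores-versus-latency trade-off the workflow's scheduler explores automatically.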

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 255
2670 Empirical Modeling and Optimization of Laser Welding of AISI 304 Stainless Steel

Authors: Nikhil Kumar, Asish Bandyopadhyay

Abstract:

The laser welding process is a capable technology for forming automobile, microelectronics, marine, and aerospace parts, among others. In the present work, a mathematical and statistical approach is adopted to study the laser welding of AISI 304 stainless steel. A robot-controlled 500 W pulsed Nd:YAG laser source with 1064 nm wavelength has been used for welding, and butt joints are made. The effects of the welding parameters, namely laser power, scanning speed, and pulse width, on the seam width and depth of penetration have been investigated using empirical models developed by response surface methodology (RSM). Weld quality is directly correlated with the weld geometry. Twenty sets of experiments have been conducted as per the central composite design (CCD) matrix. A second-order mathematical model has been developed for predicting the desired responses. The results of ANOVA indicate that the laser power has the most significant effect on the responses. Microstructural analysis as well as hardness testing of selected weld specimens has been carried out to understand the metallurgical and mechanical behaviour of the weld. The average micro-hardness of the weld is observed to be higher than that of the base metal. The higher hardness of the weld results from grain refinement and δ-ferrite formation in the weld structure. The results suggest that lower line energy generally produces a finer grain structure and improved mechanical properties compared with high line energy. The combined effects of the input parameters on the responses have been analyzed with the help of the developed 3-D response surface and contour plots. Finally, multi-objective optimization has been conducted to produce a weld joint with complete penetration, minimum seam width, and an acceptable welding profile. Confirmatory tests have been conducted at the optimum parametric conditions to validate the applied optimization technique.
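Response surface methodology fits a second-order polynomial in the process parameters (here laser power, scanning speed, and pulse width) to each response. A compact least-squares sketch of such a quadratic fit is shown below; the design points and seam-width values are invented, not the CCD data of the study.

```python
# Sketch of fitting a second-order (quadratic) response surface
#   y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
# to a response such as seam width. The design points and responses below are
# invented, not the central composite design data from the study.

import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]                   # linear terms
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]              # square terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
# coded factors: laser power, scanning speed, pulse width in [-1, 1]
X = rng.uniform(-1, 1, size=(20, 3))
# invented "true" seam-width surface (mm) plus noise
y = (1.2 + 0.30*X[:, 0] - 0.15*X[:, 1] + 0.10*X[:, 2]
     + 0.08*X[:, 0]**2 + 0.05*X[:, 0]*X[:, 1] + rng.normal(0, 0.02, 20))

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print(f"fitted {len(coef)} coefficients, R^2 = {r2:.3f}")
print("intercept and linear terms:", np.round(coef[:4], 3))
```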

Keywords: ANOVA, laser welding, modeling and optimization, response surface methodology

Procedia PDF Downloads 284
2669 Wear Measurement of Thermomechanical Parameters of the Metal Carbide

Authors: Riad Harouz, Brahim Mahfoud

Abstract:

The ribs and rings on bars for reinforced concrete are obtained by a hot rolling process using metal-carbide finishing rollers, which carry a rolling groove around their outside diameter. Our observation is that this groove exhibits geometrical wear by the end of its service cycle, which is specified in tonnage. In our study, we first determined experimental measurements of the wear in terms of the thermo-mechanical parameters (speed, load, and temperature) and the influence of these parameters on the wear. In the second stage, we developed a mathematical lifetime model useful for the prognosis of wear and its evolution.

Keywords: lifetime, metal carbides, modeling, thermo-mechanical, wear

Procedia PDF Downloads 292
2668 Using Structural Equation Modeling to Measure the Impact of Young Adult-Dog Personality Characteristics on Dog Walking Behaviours during the COVID-19 Pandemic

Authors: Renata Roma, Christine Tardif-Williams

Abstract:

Engaging in daily walks with a dog (f.e. Canis lupus familiaris) during the COVID-19 pandemic may be linked to feelings of greater social-connectedness and global self-worth, and lower stress after controlling for mental health issues, lack of physical contact with others, and other stressors associated with the current pandemic. Therefore, maintaining a routine of dog walking might mitigate the effects of stressors experienced during the pandemic and promote well-being. However, many dog owners do not walk their dogs for many reasons, which are related to the owner’s and the dog’s personalities. Note that the consistency of certain personality characteristics among dogs demonstrates that it is possible to accurately measure different dimensions of personality in both dogs and their human counterparts. In addition, behavioural ratings (e.g., the dog personality questionnaire - DPQ) are reliable tools to assess the dog’s personality. Clarifying the relevance of personality factors in the context of young adult-dog relationships can shed light on interactional aspects that can potentially foster protective behaviours and promote well-being among young adults during the pandemic. This study examines if and how nine combinations of dog- and young adult-related personality characteristics (e.g., neuroticism-fearfulness) can amplify the influence of personality factors in the context of dog walking during the COVID-19 pandemic. Responses to an online large-scale survey among 440 (389 females; 47 males; 4 nonbinaries, Mage=20.7, SD= 2.13 range=17-25) young adults living with a dog in Canada were analyzed using structural equation modeling (SEM). As extraversion, conscientiousness, and neuroticism, measured through the five-factor model (FFM) inventory, are related to maintaining a routine of physical activities, these dimensions were selected for this analysis. Following an approach successfully adopted in the field of dog-human interactions, the FFM was used as the organizing framework to measure and compare the human’s and the dog’s personality in the context of dog walking. The dog-related personality dimensions activity/excitability, responsiveness to training, and fearful were correlated dimensions captured through DPQ and were added to the analysis. Two questions were used to assess dog walking. The actor-partner interdependence model (APIM) was used to check if the young adult’s responses about the dog were biased; no significant bias was observed. Activity/excitability and responsiveness to training in dogs were greatly associated with dog walking. For young adults, high scores in conscientiousness and extraversion predicted more walks with the dog. Conversely, higher scores in neuroticism predicted less engagement in dog walking. For participants high in conscientiousness, the dog’s responsiveness to training (standardized=0.14, p=0.02) and the dog’s activity/excitability (standardized=0.15, p=0.00) levels moderated dog walking behaviours by promoting more daily walks. These results suggest that some combinations in young adult and dog personality characteristics are associated with greater synergy in the young adult-dog dyad that might amplify the impact of personality factors on young adults’ dog-walking routines. These results can inform programs designed to promote the mental and physical health of young adults during the Covid-19 pandemic by highlighting the impact of synergy and reciprocity in personality characteristics between young adults and dogs.

Keywords: Covid-19 pandemic, dog walking, personality, structural equation modeling, well-being

Procedia PDF Downloads 102
2667 Comparison of Fundamental Frequency Model and PWM Based Model for UPFC

Authors: S. A. Al-Qallaf, S. A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered to be the most versatile. This is due to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in UPFC, attention to developing mathematical models has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a comparison study between two dynamic models of the UPFC together with their proposed control strategies.

Keywords: FACTS, UPFC, dynamic modeling, PWM, fundamental frequency

Procedia PDF Downloads 335
2666 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that the patient can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, WSS, and OSI, which are used to predict potential type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context, using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
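To make the "deep residual network" ingredient concrete, the sketch below defines a small residual MLP in PyTorch that maps displacement features to a haemodynamic quantity and adds a placeholder physics-style penalty to the data loss. The architecture, the synthetic data, and the penalty are all illustrative assumptions; this is not the authors' trained model or their 4D MRI pipeline.

```python
# Minimal residual-network regression with an added (placeholder) physics-style
# penalty, in the spirit of the physics-informed model described above.
# Architecture, data, and loss weights are illustrative assumptions only.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return x + self.net(x)          # skip connection = residual learning

class ResNetRegressor(nn.Module):
    def __init__(self, in_dim=3, width=32, depth=4):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 1)   # e.g., a predicted wall pressure value
    def forward(self, x):
        return self.out(self.blocks(torch.tanh(self.inp(x))))

# synthetic "displacement" features and target pressures (placeholders)
torch.manual_seed(0)
x = torch.rand(256, 3)
y = (80 + 40 * x[:, :1] - 10 * x[:, 1:2] ** 2) + 0.5 * torch.randn(256, 1)

model = ResNetRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    pred = model(x)
    data_loss = nn.functional.mse_loss(pred, y)
    # placeholder "physics" penalty: discourage non-physical negative pressures
    physics_loss = torch.relu(-pred).mean()
    loss = data_loss + 0.1 * physics_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final data loss: {data_loss.item():.3f}")
```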

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence.

Procedia PDF Downloads 76
2665 Modeling and Analyzing the WAP Class 2 Wireless Transaction Protocol Using Event-B

Authors: Rajaa Filali, Mohamed Bouhdadi

Abstract:

This paper presents an incremental formal development of the Wireless Transaction Protocol (WTP) in Event-B. WTP is part of the Wireless Application Protocol (WAP) architecture and provides a reliable request-response service. To model and verify the protocol, we use the formal technique Event-B, which provides an accessible and rigorous development method. The interaction between modelling and proving reduces complexity and helps to eliminate misunderstandings, inconsistencies, and specification gaps. As a result, the verification of WTP allows us to find some deficiencies in the current specification.

Keywords: event-B, wireless transaction protocol, proof obligation, refinement, Rodin, ProB

Procedia PDF Downloads 303
2664 Fracture And Fatigue Crack Growth Analysis and Modeling

Authors: Volkmar Nolting

Abstract:

Fatigue crack growth prediction has become an important topic in both engineering and non-destructive evaluation. Crack propagation is influenced by the mechanical properties of the material and is conveniently modelled by the Paris-Erdogan equation, from which the critical crack size and the total number of load cycles are calculated. From a Larson-Miller plot, the maximum operational temperature at a given stress level can be determined so that failure does not occur within a given time interval t. The study is used to determine a reasonable inspection cycle and thus enhances operational safety and reduces costs.
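
As a worked illustration of the approach described above (with purely hypothetical material and loading values, not taken from the study), the sketch below computes the critical crack size from the fracture toughness and numerically integrates the Paris-Erdogan equation da/dN = C(ΔK)^m to estimate the number of load cycles from an initial flaw to the critical size.

```python
import numpy as np

# Hypothetical material / loading values, for illustration only.
C, m = 1e-11, 3.0          # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m))
Y = 1.12                   # geometry factor
d_sigma = 100.0            # stress range, MPa
sigma_max = 120.0          # peak stress, MPa
K_ic = 60.0                # fracture toughness, MPa*sqrt(m)
a0 = 1e-3                  # initial crack size, m

# Critical crack size from K_Ic = Y * sigma_max * sqrt(pi * a_c)
a_c = (K_ic / (Y * sigma_max)) ** 2 / np.pi

# Integrate N = ∫ da / (C * (Y * d_sigma * sqrt(pi * a))**m) from a0 to a_c
a = np.linspace(a0, a_c, 10_000)
dN_da = 1.0 / (C * (Y * d_sigma * np.sqrt(np.pi * a)) ** m)
N = np.trapz(dN_da, a)

print(f"critical crack size a_c = {a_c * 1e3:.1f} mm")
print(f"cycles to failure N ≈ {N:,.0f}")
```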

Keywords: fracture mechanics, crack growth prediction, lifetime of a component, structural health monitoring

Procedia PDF Downloads 25
2663 Literature Review and Approach for the Use of Digital Factory Models in an Augmented Reality Application for Decision Making in Restructuring Processes

Authors: Rene Hellmuth, Jorg Frohnmayer

Abstract:

The requirements of factory planning and of the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Even today, the methods and process models used in factory planning are predominantly based on the classical planning principles of Schmigalla, Aggteleky, and Kettner, which, however, are not specifically designed for reorganization. In addition, they are designed for a largely static environment and manageable planning complexity, as well as for medium- to long-term planning cycles with low factory variability. Existing approaches already regard factory planning as a continuous process that makes it possible to react quickly to adaptation requirements. However, digital factory models are not yet used as a source of information for building data. Approaches that consider building information modeling (BIM) or digital factory models in general either do not refer to factory conversions or do not yet go beyond a conceptual stage. This deficit can be further substantiated: a method for factory conversion planning that uses an up-to-date digital building model is lacking. A corresponding approach must take into account both the existing approaches to factory planning and the use of digital factory models in practice. First, a literature review is conducted, examining approaches to classical factory planning and to conversion planning and investigating which of them already incorporate digital factory models. In the second step, an approach is presented showing how digital factory models based on building information modeling can serve as the basis for an augmented reality tablet application. This application is suitable for construction sites and provides information on the costs and time required for conversion variants, thus supporting fast decision-making. In summary, the paper provides an overview of existing factory planning approaches and critically examines the use of digital tools. Based on this preliminary work, an approach is presented that suggests the sensible use of digital factory models for decision support in the case of conversion variants of the factory building. The augmented reality application is designed to summarize the most important information for decision-makers during a reconstruction process.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 127
2662 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations obtained from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with GMPE values, and we obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulation, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas; (2) ruptures have a tendency to initiate in small Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
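
For readers unfamiliar with the quantities compared in the scaling relations, the short sketch below (with illustrative values, not those of the study) relates rupture area S and average slip Dave to seismic moment via M0 = μ·S·Dave and converts M0 to moment magnitude with the standard Hanks-Kanamori relation.

```python
import numpy as np

def moment_magnitude(m0_newton_m):
    """Hanks & Kanamori (1979) moment magnitude, with M0 in N*m."""
    return (2.0 / 3.0) * (np.log10(m0_newton_m) - 9.1)

# Illustrative values only (not from the study): rupture area S,
# average slip Dave, and crustal rigidity mu.
mu = 3.0e10          # rigidity, Pa
S = 2000e6           # rupture area, m^2 (2000 km^2)
d_ave = 2.5          # average slip, m

m0 = mu * S * d_ave  # seismic moment M0 = mu * S * Dave
print(f"M0 = {m0:.2e} N*m, Mw = {moment_magnitude(m0):.2f}")
```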

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 134
2661 Petrology of the Post-Collisional Dolerites, Basalts from the Javakheti Highland, South Georgia

Authors: Bezhan Tutberidze

Abstract:

The Neogene-Quaternary volcanic rocks of the Javakheti Highland are products of post-collisional continental magmatism and are related to the divergent and convergent margins of the Eurasian and Afro-Arabian lithospheric plates. The studied area constitutes an integral part of the volcanic province of central South Georgia. Three cycles of volcanic activity are identified here: 1. Late Miocene-Early Pliocene, 2. Late Pliocene-Early (Middle) Pleistocene, and 3. Late Pleistocene. Intense basic dolerite magmatic activity occurred within the time span of the Late Pliocene and lasted until at least the Late (Middle) Pleistocene. The age of the volcanogenic and volcanogenic-sedimentary formation was determined by geomorphological, paleomagnetic, paleontological, and geochronological methods at 1.7-1.9 Ma. The volcanic area of the Javakheti Highland contains multiple dolerite plateaus: Akhalkalaki, Gomarethi, Dmanisi, and Tsalka. Petrographic observation of these doleritic rocks reveals a fairly constant mineralogical composition: olivine (Fo₈₇.₆₋₈₂.₇) and plagioclase (Ab₂₂.₈An₇₅.₉Or₁.₃; Ab₄₅.₀₋₃₂.₃An₅₂.₉₋₆₂.₃Or₂.₁₋₅.₄). The pyroxene is an augite and may exhibit visible zoning (Wo₃₉.₇₋₄₃.₁En₄₃.₅₋₄₅.₂Fs₁₆.₈₋₁₁.₇). Opaque minerals (magnetite, titanomagnetite) are abundant as inclusions within olivine and pyroxene crystals. The dolerites exhibit intergranular, holocrystalline, and ophitic to sub-ophitic granular textures and are commonly vesicular; vesicles range in shape from spherical to elongated and in size from 0.5 mm to 1.5-2 cm, and make up about 20-50% of the rock volume. The dolerites have been subjected to considerable alteration. The secondary minerals in the geothermal field are zeolite, calcite, chlorite, aragonite, clay-like minerals (dominated by smectites), and iddingsite-like minerals; rare quartz and pumpellyite are also present. The vesicles are filled by these secondary minerals. Chemically, the dolerites are calc-alkaline, transitional to sub-alkaline, with a predominance of Na₂O over K₂O. Chemical analyses indicate that the dolerites of all plateaus of the Javakheti Highland have similar geochemical compositions, signifying that they were formed from the same magmatic source by crystallization of a weakly differentiated olivine basalt magma (⁸⁷Sr/⁸⁶Sr = 0.703920-0.704195). A less convincing argument holds that the dolerites/basalts of the Javakheti Highland record the activity of a mantle plume; unfortunately, reliable evidence to prove this does not exist. The petrochemical peculiarities and eruption style of the Javakheti dolerites argue against a plume origin. Nevertheless, it is not excluded that a plume influenced the formation of the dolerite-producing primary basaltic magma.

Keywords: calc-alkalic, dolerite, Georgia, Javakheti Highland

Procedia PDF Downloads 251
2660 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimation. Usually, streamflow is measured directly at ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (bias, R², NS, and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs with no bias adjustment tend to overestimate (high bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during the wet months (June-September), and to underestimate streamflow over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the streamflow simulated using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
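
The four evaluation statistics mentioned above are standard; a minimal sketch of how they might be computed for paired observed and simulated daily flows is given below (the flow values are invented for illustration and are not data from the Gilgel Ghibe catchment).

```python
import numpy as np

def bias(obs, sim):
    """Mean error of simulated minus observed streamflow."""
    return np.mean(sim - obs)

def rmse(obs, sim):
    """Root mean square error between simulated and observed flows."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated series."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical daily flows (m^3/s), purely for illustration.
obs = np.array([12.0, 15.5, 30.2, 55.0, 40.1, 22.3, 18.0])
sim = np.array([11.0, 17.0, 28.5, 60.3, 37.8, 24.0, 16.5])
print(bias(obs, sim), rmse(obs, sim), nash_sutcliffe(obs, sim), r_squared(obs, sim))
```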

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 265