Search results for: modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3784

454 Effects of Polymer Adsorption and Desorption on Polymer Flooding in Waterflooded Reservoir

Authors: Sukruthai Sapniwat, Falan Srisuriyachai

Abstract:

Polymer flooding is one of the most well-known methods in Enhanced Oil Recovery (EOR) technology. It can be implemented after either primary or secondary recovery, creating favorable conditions for the displacement mechanism and lowering the residual oil in the reservoir. Polymer substances lower the mobility ratio of the whole process by increasing the viscosity of the injected water. Therefore, polymer flooding can increase volumetric sweep efficiency, which leads to a better recovery factor. Moreover, polymer adsorption onto the rock surface can help decrease the permeability contrast in reservoirs with high heterogeneity. Due to the reduction of the absolute permeability, the effective permeability to water, which represents the flow ability of the injected fluid, is also reduced. Once polymer is adsorbed onto the rock surface, polymer molecules can be desorbed when different fluids are injected. This study evaluates the effects of the adsorption and desorption of polymer solutions on the oil recovery mechanism. A reservoir model is constructed with the reservoir simulation program STAR®, commercialized by the Computer Modeling Group (CMG). Various polymer concentrations, starting times of the polymer flooding process, and polymer injection rates were evaluated with selected degrees of polymer desorption: 0, 25, 50, 75 and 100%. The higher the value, the more adsorbed polymer molecules return to the flowing fluid. According to the results, polymer desorption lowers polymer consumption, especially at low concentrations. Furthermore, the starting time of polymer flooding and the injection rate affect oil production. The results show that waterflooding followed by earlier polymer flooding increases the oil recovery factor, while a higher injection rate also enhances recovery. Polymer concentration is related to polymer consumption through the two main benefits of polymer flooding described above; therefore, the polymer slug size should be optimized based on polymer concentration. Polymer desorption allows the re-use of polymer previously adsorbed onto the rock surface, resulting in an increase of sweep efficiency in the later period of the polymer flooding process. Even though waterflooding supports polymer injectivity, water cut at the producer can prematurely terminate oil production. A higher injection rate decreases polymer adsorption because of the shorter retention time of the polymer flooding process.
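As a rough illustration of the mobility-control benefit described above, the sketch below compares the water/oil mobility ratio before and after a polymer-induced increase in water viscosity. The relative permeabilities and viscosities are hypothetical placeholders, not values from the study.

```python
# Illustrative mobility-ratio calculation for polymer flooding (hypothetical values).
def mobility_ratio(krw, muw, kro, muo):
    """M = (krw / muw) / (kro / muo); M <= 1 indicates favorable displacement."""
    return (krw / muw) / (kro / muo)

krw, kro = 0.3, 0.8          # end-point relative permeabilities (assumed)
mu_oil = 5.0                 # oil viscosity, cP (assumed)
mu_water = 0.5               # plain injected water viscosity, cP
mu_polymer_water = 5.0       # viscosified water after adding polymer, cP (assumed)

print("M waterflood:    %.2f" % mobility_ratio(krw, mu_water, kro, mu_oil))
print("M polymer flood: %.2f" % mobility_ratio(krw, mu_polymer_water, kro, mu_oil))
```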

Keywords: enhanced oil recovery technology, polymer adsorption and desorption, polymer flooding, reservoir simulation

Procedia PDF Downloads 290
453 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials including steel, wood, rock and soil, each with its own level of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variability remains largely unexplored. The authors developed and validated a 3-dimensional FE-based numerical trackbed model and, in this study, investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their Coefficients of Variation (COV) while keeping their means constant. These variations are formulated using the Gaussian normal distribution. Two failure mechanisms in the subgrade, namely progressive shear failure and excessive plastic deformation, are examined. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that variation in ballast depth is the most influential factor for the vertical stress at the top of the subgrade surface. In the case of excessive plastic deformation in the subgrade layer, the variations in its own depth and Young's modulus proved to be most important, while the ballast properties remained almost indifferent. For both failure modes, it is also observed that the reliability index for subgrade failure increases with the increase in COV of ballast depth and subgrade Young's modulus. The findings of this work are of particular significance in studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbed and in evaluating the associated risk. In addition, the work provides an additional tool to supplement deterministic analysis procedures and decision making for railroad maintenance.
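The reliability index mentioned above can be illustrated with a simple Monte Carlo sketch: layer properties are sampled from Gaussian distributions with fixed means and chosen COVs, a limit-state function compares capacity with demand, and β is recovered from the failure probability. The limit-state function and all numbers below are hypothetical stand-ins for the FE response, not the authors' model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical means and coefficients of variation (COV) for layer properties.
ballast_depth = rng.normal(0.30, 0.10 * 0.30, n)   # m, COV = 10%
subgrade_E    = rng.normal(60.0, 0.15 * 60.0, n)   # MPa, COV = 15%

# Placeholder limit state: capacity minus demand (stands in for the FE response).
capacity = 0.8 * subgrade_E                        # notional shear capacity, kPa
demand = 40.0 / (1.0 + 2.0 * ballast_depth)        # notional stress at subgrade top, kPa
g = capacity - demand                              # g < 0 means failure

pf = np.mean(g < 0)                                # probability of failure
beta = norm.ppf(1.0 - pf) if pf > 0 else np.inf    # reliability index
print(f"P_f = {pf:.4f}, reliability index beta = {beta:.2f}")
```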

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 103
452 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for high temporal resolution SAR data, geosynchronous orbit (GEO) SAR is getting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR remain uncertain. This paper assesses the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since this simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the normal 'stop and go' model is not valid for GEO SAR, the range model has to be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
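A toy geometric sketch of step (3) is given below: it computes the slant-range history between a GEO satellite and a moving ship, illustrating why the 'stop and go' model breaks down over the long GEO integration time. The geometry is heavily simplified (fixed satellite, straight-line ship motion) and all parameters are illustrative, not the simulator's actual orbit or platform settings.

```python
import numpy as np

# Toy slant-range history between a GEO satellite and a moving ship.
R_geo, R_earth = 42_164e3, 6_371e3            # orbital and Earth radii, m
sat = np.array([R_geo, 0.0, 0.0])             # GEO satellite above longitude 0 (simplified)
lon = np.deg2rad(40.0)                        # ship 40 deg away from the sub-satellite point
ship0 = R_earth * np.array([np.cos(lon), np.sin(lon), 0.0])
v_ship = np.array([0.0, 8.0, 0.0])            # ~8 m/s (about 15 knots), heading +y (assumed)

t = np.linspace(0.0, 300.0, 1001)             # a 300 s slice of the integration time
ranges = np.linalg.norm(sat - (ship0 + np.outer(t, v_ship)), axis=1)
print(f"slant range at t=0:     {ranges[0] / 1e3:,.1f} km")
print(f"range migration, 300 s: {ranges[-1] - ranges[0]:,.1f} m "
      f"(many range cells, so 'stop and go' is not valid)")
```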

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 137
451 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach

Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi

Abstract:

Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual water distribution systems versus singular water distribution systems under various stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that varies between dual and singular distribution infrastructure based on system attributes, such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, was considered and input to the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios. The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) identification of renewal strategies and decision factors that enhance the long-term resilience and robustness of dual and singular water distribution systems under different stressors.
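The Classification and Regression Tree step can be sketched as follows: scenario inputs become features, and a pass/fail label on a performance threshold becomes the target, so the fitted tree partitions the scenario space into robust and non-robust regions. The scenario generator, performance relationship, and threshold below are placeholders, not the study's simulation model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical scenario inputs (uncertain drivers varied across simulation runs).
demand_growth = rng.uniform(-0.01, 0.03, n)    # annual demand change
funding_level = rng.uniform(0.5, 1.5, n)       # renewal funding multiplier
price_growth  = rng.uniform(0.0, 0.05, n)      # water price growth

# Placeholder performance output standing in for the simulation model:
# network condition degrades with demand growth and improves with funding.
condition = 70 + 20 * funding_level - 400 * demand_growth + rng.normal(0, 5, n)
robust = (condition >= 75).astype(int)         # 1 = acceptable long-term performance

tree = DecisionTreeClassifier(max_depth=3).fit(
    np.column_stack([demand_growth, funding_level, price_growth]), robust)
print(export_text(tree, feature_names=["demand_growth", "funding_level", "price_growth"]))
```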

Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems

Procedia PDF Downloads 257
450 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
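The conflation operation described above (the normalized product of the parent probability density functions) has a simple closed form when the inputs are normal, which makes it easy to see how the lower-variance distribution dominates. The sketch below uses two normal recovery-time distributions purely for illustration (the study's resilience inputs are exponential), and the parameter values are assumed placeholders.

```python
import numpy as np

# Conflation of two normal recovery-time distributions: the conflated PDF is the
# normalized product of the parent PDFs; for normals the result is again normal
# with a precision-weighted mean, so the distribution with smaller variance dominates.
mu1, s1 = 30.0, 10.0    # e.g. recovery time after severe flooding (days, assumed)
mu2, s2 = 5.0, 2.0      # e.g. recovery time after nuisance flooding (days, assumed)

w1, w2 = 1 / s1**2, 1 / s2**2               # precisions
mu_c = (w1 * mu1 + w2 * mu2) / (w1 + w2)    # conflated mean
s_c = (1.0 / (w1 + w2)) ** 0.5              # conflated standard deviation
print(f"conflated mean = {mu_c:.2f} days, sd = {s_c:.2f} days")

# Numerical check: normalize the product of the two PDFs on a grid.
x = np.linspace(-20, 80, 20001)
pdf = np.exp(-(x - mu1)**2 / (2 * s1**2)) * np.exp(-(x - mu2)**2 / (2 * s2**2))
pdf /= np.trapz(pdf, x)
print(f"grid check: mean = {np.trapz(x * pdf, x):.2f}")
```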

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 61
449 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor

Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud

Abstract:

Milli-structured heat exchanger/reactors have recently been widely used, especially in the chemical industry, due to their enhanced heat and mass transfer performance compared to conventional apparatuses. In our work, the 'DeanHex' heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square cross-sectioned channel has a hydraulic diameter of 2 mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. The use of the 1D approach to model the milli-channel reduces the calculation time significantly compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between simulation at the small scale and simulation at the reactor scale at a reasonable CPU cost. The heat transfer process between the 1D milli-channel and its 3D surrounding is modeled. The feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originating from two previous works; the temperature profiles along the channel axis obtained by simulation fit the experimental profiles in both cases. The next step is to integrate the liquid-liquid mass transfer model and to validate it with our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the mixture model approach. The mass transfer behavior is represented by an overall volumetric mass transfer coefficient (kLa) correlation obtained from our experimental results in the millimetric meandering channel. The present work is a first step towards the scale-up of the 'DeanHex' in anticipation of the future industrialization of such equipment. Therefore, a generalized scaled-up model of the reactor comprising all the transfer processes will be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.
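The flavor of the 1D channel model can be illustrated with a minimal plug-flow energy balance in which a fixed wall temperature stands in for the coupled 3D solid. The velocity, heat-transfer coefficient, temperatures, and channel length below are assumed values, not the DeanHex operating conditions.

```python
import numpy as np

# Minimal 1D plug-flow energy balance for the milli-channel:
#   rho*cp*u * dT/dx = h * (P/A) * (T_wall - T)
# A fixed wall temperature stands in for the 3D solid model (placeholder coupling).
rho, cp = 1000.0, 4180.0       # water density (kg/m3) and heat capacity (J/kg/K)
u = 0.1                        # mean velocity, m/s (assumed)
d_h = 2e-3                     # hydraulic diameter, m (square 2 mm channel)
A, P = d_h**2, 4 * d_h         # cross-section area and wetted perimeter
h = 2000.0                     # wall heat-transfer coefficient, W/m2/K (assumed)
T_wall, T_in = 60.0, 20.0      # degC (assumed)
L, n = 0.5, 500                # channel length (m) and number of cells

x = np.linspace(0.0, L, n + 1)
dx = x[1] - x[0]
T = np.empty(n + 1)
T[0] = T_in
for i in range(n):             # explicit upwind march along the channel axis
    T[i + 1] = T[i] + dx * h * P * (T_wall - T[i]) / (rho * cp * u * A)

print(f"outlet temperature ~ {T[-1]:.1f} degC")
```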

Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification

Procedia PDF Downloads 93
448 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics

Authors: Mariya Dayana, Budhmal Jain

Abstract:

Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and Runway End Safety Areas (RESA) at both ends of the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill with an average height of 70 m and steep slopes, located in a high-rainfall area, is a unique feature of this project. The design and construction were challenging, the fill being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect against surface failure. Face anchorage was also provided by wrapping the geogrids around the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m high were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was done using ReSSA software and verified with PLAXIS 2D modeling. Both slip-surface failure and wedge failure were considered in static and seismic analyses for local and global failure cases. Site-won excavated laterite soil was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance. This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and RESA.

Keywords: airport, embankment, gabion, high strength uniaxial geogrid, kial, laterite soil, plaxis 2d

Procedia PDF Downloads 139
447 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.
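A minimal sketch of the panel data approach described above is given below: a two-way fixed-effects regression of unemployment on an AI-adoption proxy and a control, estimated with country and year dummies. The data are synthetic and the variable names (ai_adoption, gdp_growth) are hypothetical stand-ins; this is not the study's specification or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries = [f"C{i}" for i in range(10)]
years = list(range(2003, 2023))

# Synthetic panel: unemployment responds to a hypothetical AI-adoption index and
# GDP growth, plus country-specific effects and a common time trend.
rows = []
for c in countries:
    c_eff = rng.normal(0, 2)
    for t in years:
        ai = 0.05 * (t - 2003) + rng.normal(0, 0.2)
        gdp = rng.normal(1.5, 1.0)
        unemp = 8 + c_eff + 0.1 * (t - 2003) - 0.8 * ai - 0.3 * gdp + rng.normal(0, 0.5)
        rows.append(dict(country=c, year=t, ai_adoption=ai,
                         gdp_growth=gdp, unemployment=unemp))
df = pd.DataFrame(rows)

# Two-way fixed effects via country and year dummies.
model = smf.ols("unemployment ~ ai_adoption + gdp_growth + C(country) + C(year)",
                data=df).fit()
print(model.params[["ai_adoption", "gdp_growth"]])
```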

Keywords: artificial intelligence, unemployment, macroeconomic analysis, european labor market

Procedia PDF Downloads 48
446 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults

Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos

Abstract:

Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, in combination with the need to transport hydrocarbons among countries, has made the design, construction and operation of offshore pipelines very significant. Under this perspective, it is evident that many more offshore pipelines are expected to be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. When a geohazard area is encountered, there are three options. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation/protection measures in order to eliminate the geohazard itself. Finally, the last appealing option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas with moderate or high seismicity, the design of an offshore pipeline is more demanding due to earthquake-related geohazards, such as landslides, soil liquefaction phenomena, and active faults. It is worth mentioning that, although there is great worldwide experience in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited, since most pipelines have been constructed in non-seismic regions (e.g. North Sea, West Australia, Gulf of Mexico, etc.). The current study focuses on the seismic design of offshore pipelines against active faults. After an extensive review of the provisions of seismic norms worldwide and of the available analytical methods, the study simulates numerically (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics are examined. After some interesting conclusions regarding the seismic vulnerability of offshore pipelines, potentially cost-effective mitigation measures are proposed, taking constructability issues into account.
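As a purely geometric back-of-the-envelope illustration of why fault offsets impose strain demands that are then checked against strain-based criteria, the sketch below stretches a straight anchored segment by a fault offset resolved along and across the pipe axis. It ignores soil-pipe interaction and bending, and the anchoring length, offset and crossing angle are assumed values, not results from the finite-element study.

```python
import numpy as np

# Geometric estimate of the average axial strain imposed on a pipeline segment
# when a strike-slip fault offsets one anchored end (illustration only).
L_anchor = 300.0          # effective anchored length on each side of the fault, m (assumed)
offset = 1.5              # fault offset (PGD), m (assumed)
beta = np.deg2rad(70.0)   # angle between pipeline axis and fault trace (assumed)

dx = offset * np.cos(beta)          # offset component along the pipe (elongation)
dy = offset * np.sin(beta)          # offset component across the pipe
L_new = np.hypot(2 * L_anchor + dx, dy)
strain = (L_new - 2 * L_anchor) / (2 * L_anchor)
print(f"average axial strain ~ {strain:.3%} (to be compared with a strain-based limit)")
```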

Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)

Procedia PDF Downloads 556
445 The Relationship between Body Positioning and Badminton Smash Quality

Authors: Gongbing Shan, Shiming Li, Zhao Zhang, Bingjun Wan

Abstract:

Badminton originated in ancient civilizations in Europe and Asia more than 2000 years ago. Presently, it is played almost everywhere, with an estimated 220 million people playing badminton regularly, ranging from professionals to recreational players; it is the second most played sport in the world after soccer, and in Asia the popularity of badminton and the number of people involved surpass soccer. Unfortunately, scientific research on badminton skills is hardly proportional to badminton's popularity. A search of the literature has shown that the body of biomechanical investigations is relatively small. One of the dominant skills in badminton is the forehand overhead smash, which accounts for about one fifth of attacks during games. Empirical evidence shows that a player has to adjust body position in relation to the incoming shuttlecock to produce a powerful and accurate smash. Therefore, positioning is a fundamental aspect influencing smash quality, yet the literature search also shows a dearth of studies on this fundamental aspect. The goals of this study were to determine the influence of positioning and training experience on smash quality in order to discover information that could help in learning and acquiring the skill. Using a 10-camera 3D motion capture system (VICON MX, 200 frames/s) and a 15-segment full-body biomechanical model, 14 skilled and 15 novice players were measured and analyzed. Results revealed that body positioning has a direct influence on the quality of a smash, especially on the shuttlecock release angle and the clearance height (passing over the net) of offensive players. The results also suggest that, for training proper positioning, one could adopt a self-selected comfortable position facing a statically hung shuttlecock and then step one foot back – a practical reference marker for learning. This perceptual marker could be applied in guiding the learning and training of beginners. As one gains experience through repetitive training, improved limb coordination would further increase smash quality. The researchers hope that the findings will benefit practitioners in developing effective training programs for beginners.

Keywords: 3D motion analysis, biomechanical modeling, shuttlecock release speed, shuttlecock release angle, clearance height

Procedia PDF Downloads 463
444 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options Like Hedge Strategy

Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis

Abstract:

Imperfections in the capital market are used to argue the relevance of the corporate risk management function. With a corporate hedge, the value of the company is increased by reducing the volatility of the expected cash flow, making it possible to face lower bankruptcy costs and financial difficulties without sacrificing the tax advantages of debt financing. With the purpose of avoiding exchange rate impacts on the cash flows of Colombian exporting firms, this dissertation uses financial options on the exchange rate between the Colombian peso and the US dollar to construct a financial hedge. In this study, a hedging strategy is designed for an exporting company in Colombia with the objective of protecting against fluctuations because, if the exchange rate falls, the number of Colombian pesos that the company obtains from exports is less than agreed. The exchange rate of Colombia is measured by the TRM (Representative Market Rate), which represents the number of Colombian pesos per US dollar. First, the TRM is modelled through Geometric Brownian Motion; with this, the price is simulated using Monte Carlo simulation and the mean of the TRM is found for three, six and twelve months. For the financial hedge, currency options were used. The 6-month projection was covered with European-style currency options with a strike price of $2,780.47 for each month; this value corresponds to the last value of the historical TRM. In the settlement of the options in each month, the price paid for the premium, calculated with the Black-Scholes method for currency options, was considered. Finally, with the price modeling and the Monte Carlo simulation, the effect of the exchange hedge with options on the exporting company was determined by comparing the estimated unit price at which dollars were exchanged in the scenario without coverage and in the scenario with coverage. From the scenarios, it is determined that the TRM will have a bullish trend and the exporting firm will be affected positively because it will get more pesos for each dollar. The results show that the financial options manage to reduce the exchange risk: the expected value with coverage is approximately equal to the expected value without coverage, but the 5% percentile with coverage is greater than without coverage. The foregoing indicates that, in the worst scenarios, exporting companies will obtain better prices for the sale of their currency if they hedge.
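A minimal sketch of the two building blocks described above is given below: Monte Carlo simulation of the TRM under geometric Brownian motion, and a Garman-Kohlhagen (Black-Scholes for currency options) premium for a European put that protects an exporter against a falling exchange rate. The strike is the value quoted in the abstract; the drift, volatility and interest rates are assumed placeholders, not the study's calibrated inputs.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# --- Monte Carlo simulation of the TRM under geometric Brownian motion ---
S0 = 2780.47                      # last historical TRM (COP per USD), as in the study
mu, sigma = 0.03, 0.12            # assumed annual drift and volatility (placeholders)
T, n_steps, n_paths = 0.5, 126, 10_000   # 6-month horizon, daily steps
dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))
paths = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
print(f"mean simulated TRM at 6 months: {paths[:, -1].mean():.2f} COP/USD")

# --- Garman-Kohlhagen premium for a European currency put; rd and rf are the
#     domestic and foreign risk-free rates (assumed values).
def gk_put(S, K, T, rd, rf, sigma):
    d1 = (np.log(S / K) + (rd - rf + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-rd * T) * norm.cdf(-d2) - S * np.exp(-rf * T) * norm.cdf(-d1)

premium = gk_put(S0, 2780.47, 0.5, rd=0.06, rf=0.02, sigma=0.12)
print(f"put premium: {premium:.2f} COP per USD of notional")
```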

Keywords: currency hedging, futures, geometric Brownian motion, options

Procedia PDF Downloads 99
443 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta's mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industry sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining-sector energy use in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed and a framework has been developed for the market penetration of new, efficient technologies in this sector. Oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The results of the statistical analysis developed and implemented in this research show that the key factors affecting oil sands production in Alberta are ranked as: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil export (80% consistency). A framework for modeling oil sands energy technologies' market penetration (OSETMP) has been developed to cover the related technical, economic and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model through changes in the global oil price or the crude oil price in Canada. The market share of novel in situ mining technologies with low energy and water use is assessed and calculated in the market penetration framework, including: 1) partial upgrading, 2) liquid addition to steam to enhance recovery (LASER), 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD), 4) cyclic solvent, 5) heated solvent, 6) wedge well, 7) enhanced modified steam and gas push (EMSAGP), 8) electro-thermal dynamic stripping process (ET-DSP), 9) Harris electro-magnetic heating applications (EMHA), 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
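Market-penetration profiles of the kind described above are commonly represented with diffusion (S-curve) models; the sketch below uses a simple logistic curve for one hypothetical technology. This is not the OSETMP formulation, and the ultimate share, speed and midpoint year are illustrative placeholders.

```python
import numpy as np

# Simple logistic diffusion curve often used for technology market penetration:
#   share(t) = share_max / (1 + exp(-k * (t - t_mid)))
share_max = 0.30   # ultimate market share of a new in situ technology (assumed)
k = 0.25           # diffusion speed, 1/year (assumed)
t_mid = 2035       # year at which half of the ultimate share is reached (assumed)

years = np.arange(2020, 2051, 5)
share = share_max / (1.0 + np.exp(-k * (years - t_mid)))
for y, s in zip(years, share):
    print(f"{y}: projected market share = {s:.1%}")
```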

Keywords: appliances efficiency improvement, diffusion models, market penetration, residential sector

Procedia PDF Downloads 295
442 Advancing Hydrogen Production Through Additive Manufacturing: Optimising Structures of High Performance Electrodes

Authors: Fama Jallow, Melody Neaves, Professor Mcgregor

Abstract:

The quest for sustainable energy sources has driven significant interest in hydrogen production as a clean and efficient fuel. Alkaline water electrolysis (AWE) has emerged as a prominent method for generating hydrogen, necessitating the development of advanced electrode designs with improved performance characteristics. Additive manufacturing (AM) by the laser powder bed fusion (LPBF) method presents an opportunity to tailor electrode microstructures and properties, enhancing their performance. This research proposes investigating the AM of electrodes with different lattice structures to optimize hydrogen production. The primary objective is to employ advanced modeling techniques to identify and select two optimal lattice structures for electrode fabrication. LPBF will be used to fabricate electrodes with precise control over lattice geometry, pore size, and pore distribution. The performance evaluation will encompass energy consumption and porosity analysis. AWE will be used to assess energy efficiency, aiming to identify lattice structures with enhanced hydrogen production rates and reduced power requirements. Computed tomography (CT) scanning will be used to analyze porosity and determine material integrity and mass transport characteristics. The research aims to bridge the gap between AM and hydrogen production by investigating the potential of lattice structures in electrode design. By systematically exploring lattice structures and their impact on performance, this study aims to provide valuable insights into the design and fabrication of highly efficient and cost-effective electrodes for AWE. The outcomes hold promise for advancing hydrogen production through AM, and the research will have a significant impact on the development of sustainable energy sources. The findings from this study will help to improve the efficiency of AWE, making it a more viable option for hydrogen production; this could lead to a reduction in our reliance on fossil fuels, which would have a positive impact on the environment. The research is also likely to have a commercial impact: the findings could be used to develop new electrode designs that are more efficient and cost-effective, leading to new hydrogen production technologies with a significant influence on the energy market.

Keywords: hydrogen production, electrode, lattice structure, Africa

Procedia PDF Downloads 43
441 Durability Analysis of a Knuckle Arm Using VPG System

Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee

Abstract:

A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A previous study suggested a lightweight design of a knuckle arm considering the structural performances and using metamodel-based optimization: six shape design variables were defined, the optimum design was calculated by applying the kriging interpolation method, and the finite element method was utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle while satisfying the design requirements. We then investigated its manufacturability by performing forging analysis; the forging was done as a hot process, and the product was made through two-step forging. As the final step of the development process, the durability is investigated using the dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all of its information with the part manufacturer; thus, the part manufacturer is limited in predicting durability performance at the full-car level. eta/VPG provides libraries of suspensions, tires, and roads, which are commonly used parts, and this makes full-car modeling possible. First, the full car is modeled by referencing the following information: Overall Length: 3,595 mm, Overall Width: 1,595 mm, CVW (Curb Vehicle Weight): 910 kg, Front Suspension: MacPherson Strut, Rear Suspension: Torsion Beam Axle, Tire: 235/65R17. Second, the road is selected as cobblestone; the cobblestone road condition is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm. The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability design requirement at the full-car level. The VPG analysis is performed successfully even though it does not give an exact prediction, since the full-car model is a rough one; thus, this approach can be used effectively when the details of the full car are not given.
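The kriging metamodel step mentioned above can be sketched with a Gaussian process regressor fitted to a handful of samples that stand in for finite element evaluations of a response versus one shape design variable. The training data below are synthetic and the response values are placeholders, not the knuckle arm results.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Kriging (Gaussian process) metamodel fitted to a few samples standing in for
# FE evaluations of a structural response versus one normalized shape variable.
rng = np.random.default_rng(4)
x_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)                      # design variable
y_train = 200 + 50 * np.sin(3 * x_train[:, 0]) + rng.normal(0, 1, 8)   # e.g. max stress, MPa

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                              normalize_y=True).fit(x_train, y_train)

x_new = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
y_pred, y_std = gp.predict(x_new, return_std=True)
for xv, m, s in zip(x_new[:, 0], y_pred, y_std):
    print(f"x = {xv:.2f}: predicted response = {m:.1f} +/- {s:.1f} MPa")
```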

Keywords: knuckle arm, structural optimization, Metamodel, forging, durability, VPG (Virtual Proving Ground)

Procedia PDF Downloads 397
440 Identifying Environmental Adaptive Genetic Loci in Calotropis Procera (Estabragh): Population Genetics and Landscape Genetic Analyses

Authors: Masoud Sheidaei, Mohammad-Reza Kordasti, Fahimeh Koohdar

Abstract:

Calotropis procera (Aiton) W.T.Aiton (Apocynaceae) is an economically and medicinally important plant species; it is an evergreen, perennial shrub growing in arid and semi-arid climates that can tolerate very low annual rainfall (150 mm) and a dry season. The plant can also tolerate temperatures ranging from 20 to 30°C but is not frost tolerant. This plant species prefers free-draining sandy soils but can also grow in alkaline and saline soils. It is found at a range of altitudes, from exposed coastal sites to medium elevations up to 1300 m. Due to the morpho-physiological adaptations of C. procera and its ability to tolerate various abiotic stresses, this taxon can compete with desirable pasture species and form dense thickets that interfere with stock management, particularly mustering activities. Calotropis procera grows only in the southern part of Iran, where it comprises a limited number of geographical populations. We used different population genetics and landscape genetics analyses to produce data on geographical populations of C. procera based on a molecular genetic study using SCoT molecular markers. First, we used spatial principal component analysis (sPCA), as it can analyze data in a reduced space and can be used for co-dominant markers as well as presence/absence data, as is the case with SCoT molecular markers. This method also carries out Moran's I and Mantel tests to reveal spatial autocorrelation and to test for the occurrence of isolation by distance (IBD). We also performed Random Forest analysis to identify the importance of spatial and geographical variables for genetic diversity. Moreover, we used both RDA (redundancy analysis) and LFMM (latent factor mixed model) to identify the genetic loci significantly associated with geographical variables. A niche modelling analysis was carried out to predict the present potential distribution area for these plants and also the projected area by the year 2050. The results obtained are discussed in this paper.
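The Random Forest step can be sketched as a variable-importance ranking of spatial and environmental predictors against a genetic diversity index. The data below are synthetic and the predictor names are hypothetical stand-ins chosen to resemble the habitat description, not the study's measured variables.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 200   # hypothetical number of sampled individuals/populations

# Hypothetical spatial and environmental predictors of a genetic diversity index.
X = pd.DataFrame({
    "latitude":  rng.uniform(25, 30, n),
    "longitude": rng.uniform(52, 62, n),
    "altitude":  rng.uniform(0, 1300, n),
    "rainfall":  rng.uniform(100, 300, n),
})
# Synthetic response: in this toy data, diversity is driven mostly by rainfall and altitude.
y = 0.002 * X["rainfall"] + 0.0003 * X["altitude"] + rng.normal(0, 0.05, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:10s} importance = {imp:.2f}")
```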

Keywords: population genetics, landscape genetics, Calotropis procera, niche modeling, SCoT markers

Procedia PDF Downloads 62
439 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel

Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani

Abstract:

Automobile manufacturers are constantly seeking means to reduce the weight of car bodies. The use of several steel grades in auto body assembly has been found to be a good technique to lighten vehicle weight. In recent years, the use of dual phase (DP) steels, transformation induced plasticity (TRIP) steels and boron steels in some parts of the auto body has become a necessity because of the weight savings they allow. However, these steels are martensitic: when they undergo a fast heat treatment, the resulting microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the most used techniques in assembling auto bodies, therefore becomes problematic for these steels. RSW is indeed a process where steel is heated and cooled in a very short period of time, so the resulting weld nugget is mostly fully martensitic, especially in the case of DP, TRIP and boron steels, but this also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite, in turn, must be avoided as much as possible when welding steel because it is the principal source of brittleness and it weakens the weld nugget. Thus, this work aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembly. The phase transformation kinetics during RSW have been predicted through the modelling of the whole welding process, and a technique called post weld heat treatment (PWHT) has been applied in order to reduce the martensite fraction in the weld nugget. Simulation has been performed for the AISI 1010 grade, and the results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite and pearlite during the cooling of the weld nugget. Welding experiments were carried out in parallel, and micrographic analyses show the presence of several phases in the weld nugget. The experimental weld geometry and phase proportions are in good agreement with the simulation results, demonstrating the validity of the model.
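Diffusional transformation fractions (ferrite, pearlite, bainite) formed during PWHT are often described with JMAK (Avrami) type kinetics; the sketch below illustrates that functional form only. The rate constant, exponent and equilibrium fraction are placeholder values, not coefficients fitted for AISI 1010 or used by the authors.

```python
import numpy as np

# JMAK-type (Avrami) isothermal transformation kinetics, a common way to model
# diffusional phase fractions during post-weld heat treatment:
#   X(t) = X_eq * (1 - exp(-k * t**n))
def jmak_fraction(t, k, n, x_eq=1.0):
    return x_eq * (1.0 - np.exp(-k * t**n))

times = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # seconds at the hold temperature
for t in times:
    x = jmak_fraction(t, k=0.15, n=1.8, x_eq=0.75)   # placeholder coefficients
    print(f"t = {t:4.1f} s: transformed (non-martensitic) fraction = {x:.2f}")

# The austenite remaining at the end of the hold transforms to martensite on
# quenching, often estimated with the Koistinen-Marburger relation 1 - exp(-0.011*(Ms - T)).
```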

Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics

Procedia PDF Downloads 91
438 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator

Authors: Ahmed Abdulrahman, Jalal Foroozesh

Abstract:

CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: the contribution of CO₂ to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase CO₂ diffusion from carbonated water (CW) to the oil phase and the CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. A material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Longitudinal and transverse dispersion coefficients were also added to account for the CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on the CO₂ spatial spreading (accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor. A sensitivity analysis was done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients, and that a low injection rate should be implemented during CWI.
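The within-phase spreading described above is governed by an advection-dispersion equation; a minimal explicit 1D finite-difference sketch of that equation is given below. The velocity, dispersivity, grid and injection time are illustrative placeholders, and the scheme (explicit, 1D, single phase) is far simpler than the fully implicit 2D compositional simulator of the study.

```python
import numpy as np

# Explicit finite-difference solution of 1D advection-dispersion,
#   dC/dt = -v dC/dx + D_L d2C/dx2,
# illustrating within-phase CO2 spreading; v and alpha_L are assumed values.
nx, L = 200, 100.0          # grid cells, domain length (m)
dx = L / nx
v = 0.5                     # interstitial velocity, m/day (assumed)
alpha_L = 2.0               # longitudinal dispersivity, m (assumed, scale dependent)
D = alpha_L * v             # mechanical dispersion coefficient, m2/day
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stable time step (advection + dispersion limits)

C = np.zeros(nx)
C[0] = 1.0                  # injected carbonated water, normalized CO2 concentration
t, t_end = 0.0, 60.0        # days
while t < t_end:
    adv = -v * (C[1:-1] - C[:-2]) / dx                 # upwind advection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2  # dispersion
    C[1:-1] += dt * (adv + disp)
    C[0], C[-1] = 1.0, C[-2]                           # inlet / outflow boundaries
    t += dt

print("CO2 front position (C > 0.5): x ~ %.1f m after %.0f days"
      % (dx * np.argmin(C > 0.5), t_end))
```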

Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation

Procedia PDF Downloads 141
437 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitors Side Effects Among Breast Cancer Women in Africa

Authors: Jean Paul M. Milambo

Abstract:

Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone receptor-positive breast cancer in postmenopausal women in various settings. Studies have shown cardiovascular events in some developed countries. To date, the data are sparse for evidence-based recommendations in African clinical settings due to the lack of cancer registries, capacity building and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping adjunctive to standard care for timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess knowledge of point-of-care testing (POCT) in six African countries, with participants reached through an online survey and by telephone. The incremental cost-effectiveness ratio (ICER) was calculated using a diagnostic accuracy study, based on mathematical modeling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of genotyping from HyBeacon® probe technology to Sanger sequencing showed a sensitivity of 99% (95% CI: 94.55% to 99.97%), specificity of 89.44% (95% CI: 87.25% to 91.38%), PPV of 51% (95% CI: 43.77% to 58.26%), and NPV of 99.88% (95% CI: 99.31% to 100.00%). Based on the mathematical model and its assumptions, the ICER was R7,044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical and genetic screenings, may improve cancer survivorship. The feasibility of POCT was demonstrated, but implementation could be achieved by improving the integration of POCT within primary health care and referral cancer hospitals, with capacity-building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
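The reported diagnostic accuracy figures follow from a standard 2x2 confusion matrix, and the ICER is the cost difference divided by the effect difference between strategies. The sketch below shows both calculations; the counts, costs and effect values are illustrative placeholders, not the actual study data.

```python
# Standard 2x2 diagnostic accuracy metrics and an ICER calculation.
# Counts and cost/effect figures are illustrative placeholders, not study data.
tp, fn = 65, 1      # genotype carriers correctly / incorrectly classified (assumed)
fp, tn = 62, 525    # non-carriers incorrectly / correctly classified (assumed)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.2%} spec={specificity:.2%} PPV={ppv:.2%} NPV={npv:.2%}")

# Incremental cost-effectiveness ratio: (cost_new - cost_old) / (effect_new - effect_old)
cost_poct, cost_standard = 1500.0, 1200.0     # Rand per patient (assumed)
eff_poct, eff_standard = 0.80, 0.757          # health outcome per patient (assumed)
icer = (cost_poct - cost_standard) / (eff_poct - eff_standard)
print(f"ICER = R{icer:,.2f} per unit of effect gained")
```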

Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors

Procedia PDF Downloads 52
436 Quantitative Seismic Interpretation in the LP3D Concession, Central of the Sirte Basin, Libya

Authors: Tawfig Alghbaili

Abstract:

The LP3D Field is located near the center of the Sirte Basin, in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation, which is mainly limestone interbedded with shale; the reservoir has an average thickness of 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic data interpretation was carried out and well log data were analyzed. Six reflectors corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations were picked and mapped. Special attention was given to fault interpretation because of the complexity of the faults in the structure area. Different attribute analyses were performed to build a better understanding of the structures' lateral extension and to obtain a clearer image of the fault blocks. Time-to-depth conversion was computed using a velocity model generated from checkshot and sonic data. A simplified stratigraphic cross-section was drawn through wells A1, A2, A3, and A4-LP3D, and the distribution and thickness variations of the Beda reservoir across the study area were demonstrated. Petrophysical analysis of the wireline logs was also done, and cross plots of some petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was designed and run to generate fault, facies, and petrophysical models and to calculate the reservoir volumetrics. This study concluded that the depth structure map of the Beda Formation shows the main structure in the study area, which is a north-south faulted anticline. Based on the Beda reservoir models, the volumetrics for the base case have been calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and build a better understanding of the fault system in the area.
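The base-case volumetrics quoted above come from a standard volumetric calculation of the form sketched below. Apart from the average thickness taken from the abstract, the area, porosity, water saturation, formation volume factor and recovery factor are illustrative placeholders, not the LP3D model inputs.

```python
# Standard volumetric STOIIP estimate:
#   STOIIP (STB) = 7758 * A (acres) * h (ft) * phi * (1 - Sw) / Bo
# All inputs except the thickness are illustrative placeholders.
area_acres = 480.0     # productive area (assumed)
h_ft = 117.5           # average reservoir thickness, from the study
phi = 0.18             # porosity (assumed)
sw = 0.35              # water saturation (assumed)
bo = 1.25              # oil formation volume factor, rb/stb (assumed)
recovery_factor = 0.24 # assumed

stoiip = 7758.0 * area_acres * h_ft * phi * (1.0 - sw) / bo
print(f"STOIIP ~ {stoiip / 1e6:.1f} MMSTB")
print(f"Recoverable oil at RF={recovery_factor:.0%} ~ {stoiip * recovery_factor / 1e6:.1f} MMSTB")
```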

Keywords: LP3D Field, Beda Formation, reservoir models, Seismic attributes

Procedia PDF Downloads 179
435 Braille Code Matrix

Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane

Abstract:

According to the World Health Organization (WHO), there are almost 285 million people with a visual disability, 39 million of whom are blind. Nevertheless, there is a code for these people that makes their lives easier and allows them to access information more easily: the Braille code. There are several commercial devices for Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, we know that 90% of blind people in the world live in low-income countries. The aim of our contribution is to design an original microactuator for Braille reading that is ergonomic and inexpensive and has the lowest possible energy consumption. Nowadays, piezoelectric devices give the best actuation at low actuation voltages. In this study, we focus on a piezoelectric (PZT) material that can bring together all these conditions. Here, we propose to use one matrix composed of six actuators to form the 63 basic combinations of the Braille code, which contain the letters, numbers, and special characters, in compliance with the standards of the Braille code. In this work, we use a finite element model in COMSOL Multiphysics to design and model this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression; this deformation depends on the thin-film thickness and the design of the membrane arm. The actuator composed of four arms gives the highest deflection and always gives a domed deformation at the center of the device, as in the case of the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V). We noticed that the deflection is a linear function of the voltage and depends not only on the voltage but also on the thickness of the film used and the design of the anchoring arm. We were then able to simulate the behavior of the entire matrix and thus display different characters in Braille code. We used these simulation results to build our demonstrator, which is composed of a layer of PDMS on which we place our piezoelectric material, with another layer of PDMS added to isolate our actuator. In this contribution, we compare our results in order to optimize the final demonstrator.
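The mapping from characters to the six-actuator matrix can be sketched as below: each character selects the raised dots of the standard 6-dot Braille cell, and the drive voltage per raised dot follows from the ~10 µm/V deflection reported above. The 40 µm target dot height is an assumption for illustration, not a value from the study.

```python
# Map characters to the standard 6-dot Braille cell (dots 1-2-3 in the left
# column, 4-5-6 in the right column) and estimate the drive voltage per raised
# dot from the simulated ~10 um/V deflection.
BRAILLE_DOTS = {   # a few letters; raised-dot numbers per character
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

DEFLECTION_PER_VOLT_UM = 10.0   # from the simulations (~10 um/V)
TARGET_DOT_HEIGHT_UM = 40.0     # assumed perceptible dot height (not from the study)

def cell_matrix(char):
    """Return the 3x2 on/off matrix of actuators for one Braille character."""
    dots = BRAILLE_DOTS[char]
    layout = [[1, 4], [2, 5], [3, 6]]   # dot numbering within the cell
    return [[1 if d in dots else 0 for d in row] for row in layout]

voltage = TARGET_DOT_HEIGHT_UM / DEFLECTION_PER_VOLT_UM
for ch in "abc":
    print(ch, cell_matrix(ch), f"drive ~ {voltage:.0f} V per raised dot")
```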

Keywords: Braille code, comsol software, microactuators, piezoelectric

Procedia PDF Downloads 337
434 Antecedents and Consequents of Organizational Politics: A Select Study of a Central University

Authors: Poonam Mishra, Shiv Kumar Sharma, Sanjeev Swami

Abstract:

Purpose: The purpose of this paper is to investigate the relationship of perceived organizational politics with three levels of antecedents (i.e., organizational level, work environment level and individual level) and with its consequents simultaneously. The study addresses antecedents and consequents of perceived political behavior in the higher education sector of India, with specific reference to a central university. Design/Methodology/Approach: A conceptual framework and hypotheses were first developed on the basis of a review of previous studies on organizational politics. A questionnaire was then developed carrying 66 items related to 8 constructs and the demographic characteristics of respondents. Judgmental sampling was used to select respondents. Primary data were collected through the structured questionnaire from 45 faculty members of a central university. The sample comprises Professors, Associate Professors and Assistant Professors from various departments of the university. To test the hypotheses, the data were analyzed statistically using partial least squares structural equation modeling (PLS-SEM). Findings: Results indicated strong support for the relationship of perceived organizational politics with three of the four proposed antecedents, that is, workforce diversity, relationship conflict and need for power, with relationship conflict having the strongest impact. No significant relationship was found between role conflict and perception of organizational politics. The three consequences, that is, intention to turnover, job anxiety, and organizational commitment, are significantly impacted by perception of organizational politics. Practical Implications: This study will be helpful in motivating future research for improving the quality of higher education in India by reducing the level of the antecedents that add to the perception of organizational politics, which ultimately results in unfavorable outcomes. Originality/Value: Although a large number of studies on antecedents and consequents of perceived organizational politics have been reported, little attention has been paid to testing all the separate but interdependent relationships simultaneously; in this paper, organizational politics is treated simultaneously as a dependent variable and as an independent variable in subsequent relationships.

Keywords: organizational politics, workforce diversity, relationship conflict, role conflict, need for power, intention to turnover, job anxiety, organizational commitment

Procedia PDF Downloads 465
433 Engaging Employees in Innovation - A Quantitative Study on the Role of Affective Commitment to Change Among Norwegian Employees in Higher Education

Authors: Barbara Rebecca Mutonyi, Chukwuemeka Echebiri, Terje Slåtten, Gudbrand Lien

Abstract:

The concept of affective commitment to change has been scarcely explored among employees in the higher education literature. The present study addresses this knowledge gap by examining how psychological factors, such as psychological empowerment (PsyEmp) and psychological capital (PsyCap), promote affective commitment to change. As affective commitment to change has been identified by previous studies as an important aspect of implementation behavior, the study also examines the relationship of affective commitment to change with employee innovative behavior (EIB) in higher education, and it proposes mediation relationships between PsyEmp, PsyCap, and affective commitment to change. A sample of 250 employees in higher education in Norway was used. The study employed an online survey for data collection and used Stata to perform partial least squares structural equation modeling to test the proposed hypotheses; mediating effects were tested through bootstrapping. The findings show a strong direct relationship between the leadership factor PsyEmp and the individual factor PsyCap (β = 0.453). In addition, both PsyEmp and PsyCap are related to affective commitment to change (β = 0.28 and β = 0.249, respectively); together, PsyEmp and PsyCap explain about 10% of the variance in affective commitment to change. The direct effect of affective commitment to change on EIB is also supported (β = 0.183). The three factors, PsyEmp, PsyCap, and affective commitment to change, explain nearly 40% (R² = 0.39) of the variance found in EIB. The relationship between PsyEmp and affective commitment to change is mediated through the individual factor PsyCap. In order to effectively promote affective commitment to change among higher education employees, higher education managers should therefore focus on both the leadership factor, PsyEmp, and the individual factor, PsyCap. In this regard, higher education managers should strengthen employees' EIB by providing autonomy, creating a safe environment that encourages innovative thinking and action, and giving employees in higher education opportunities to be involved in changes occurring at work. This contributes to strengthening employees' affective commitment to change, which further improves their EIB in their work roles as higher education employees. As such, the results of this study point to the ambidextrous nature of the concepts of affective commitment to change and EIB, which should be considered in future studies of innovation in higher education research.
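The bootstrap test of mediation mentioned above can be sketched with a minimal resampling of an indirect effect a*b on synthetic data. The variable names and effect sizes are placeholders patterned on the constructs described, not the study's estimates, and simple OLS stands in for the PLS measurement model.

```python
import numpy as np

# Minimal bootstrap of an indirect (mediation) effect a*b on synthetic data.
rng = np.random.default_rng(6)
n = 250
psyemp = rng.normal(size=n)                                   # X: psychological empowerment
psycap = 0.45 * psyemp + rng.normal(scale=0.9, size=n)        # M: psychological capital
commit = 0.25 * psycap + 0.20 * psyemp + rng.normal(scale=0.9, size=n)  # Y: commitment to change

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                # slope of M on X
    design = np.column_stack([np.ones_like(x), m, x])         # Y regressed on M and X
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]          # slope of Y on M, controlling X
    return a * b

boot = []
for _ in range(2000):                                         # bootstrap resamples
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(psyemp[idx], psycap[idx], commit[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(psyemp, psycap, commit)
print(f"indirect effect a*b = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```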

Keywords: affective commitment to change, psychological capital, innovative behavior, psychological empowerment, higher education

Procedia PDF Downloads 87
432 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase Photovoltaic Power System Control and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV systems directly convert solar radiation into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations of solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels' maximum power and feeding it into the grid at unity power factor become the most important objectives. Significant contributions toward these objectives have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. If each HBI module is instead built as a two-stage inverter, the many extra dc-dc converters not only increase the complexity of the power circuit and control and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of dc-link voltages, and pulse width modulation (PWM) are necessary. The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; and 2) a grid-connected control for the qZS-CMI based PV system, in which all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current together with a dual-loop dc-link peak voltage control.
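As one concrete illustration of the independent MPPT mentioned above, the Python sketch below shows a generic perturb-and-observe update step. This is a textbook algorithm given for illustration only, not the control law of the cited qZS-CMI system; the function name, state variables, and step size are assumptions.

# Generic perturb-and-observe MPPT step for one PV panel / qZS-HBI module.
def perturb_and_observe(v_meas, i_meas, v_prev, p_prev, v_ref, step=0.5):
    """Return the updated PV voltage reference plus the state for the next call."""
    p = v_meas * i_meas                    # measured panel power
    dp, dv = p - p_prev, v_meas - v_prev
    if dp != 0:
        if (dp > 0) == (dv > 0):
            v_ref += step                  # still climbing toward the MPP
        else:
            v_ref -= step                  # overshot the MPP: reverse direction
    return v_ref, v_meas, p

Each module would run such a routine on its own panel measurements at every control period, and the resulting voltage references would feed the dc-link voltage and grid-current loops described above.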

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 522
431 Mathematical Modeling of the Capture of Magnetic Nanoparticles in an Implant-Assisted Channel for Magnetic Drug Targeting

Authors: Shashi Sharma, V. K. Katiyar, Uaday Singh

Abstract:

The ability to manipulate magnetic particles in fluid flows by means of inhomogeneous magnetic fields is used in a wide range of biomedical applications, including magnetic drug targeting (MDT). In MDT, magnetic drug carrier particles (MDCPs) bound with drug molecules are injected into the vascular system upstream from the malignant tissue and attracted or retained at a specific region in the body with the help of an external magnetic field. Although the concept of MDT has been around for many years, widespread acceptance of the technique is still lacking, despite the fact that it has shown some promise in both in vivo and clinical studies. This is because traditional MDT has some inherent limitations. Typically, the magnetic force is not very strong and it is also very short ranged. Since the magnetic force must overcome rather large hydrodynamic forces in the body, MDT applications have been limited to sites located close to the surface of the skin. Even in this most favorable situation, studies have shown that it is difficult to collect appreciable amounts of the MDCPs at the target site. To overcome these limitations of the traditional MDT approach, Ritter and co-workers proposed implant-assisted magnetic drug targeting (IA-MDT). In IA-MDT, magnetic implants are placed strategically at the target site to greatly and locally increase the magnetic force on the MDCPs and help attract and retain them at the targeted region. In the present work, we develop a mathematical model to study the capture of magnetic nanoparticles flowing in a fluid in an implant-assisted cylindrical channel under a magnetic field. A coil of ferromagnetic SS 430 is implanted inside the cylindrical channel to enhance the capture of magnetic nanoparticles under the magnetic field. The dominant magnetic and drag forces, which significantly affect the capture of nanoparticles, are incorporated in the model. The model results show that the capture efficiency increases from 23% to 51% as the magnetic field is increased from 0.1 to 0.5 T. This is because, as the magnetic field increases, the magnetization force, which is attractive in nature and responsible for capturing the magnetic particles, also increases, resulting in the capture of a larger number of magnetic particles due to the higher strength of the attractive magnetic force.
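The force balance underlying these results can be illustrated with a simple order-of-magnitude calculation: the magnetization force driving particles toward the implanted coil competes with Stokes drag from the carrier fluid. The Python sketch below uses assumed parameter values, not those of the authors' model, so the numbers are purely illustrative.

# Back-of-the-envelope capture estimate: magnetization force versus Stokes drag.
# All numerical values are illustrative assumptions, not the study's parameters.
import numpy as np

mu0    = 4e-7 * np.pi      # vacuum permeability (T*m/A)
eta    = 3.5e-3            # carrier fluid viscosity (Pa*s), assumed
r_p    = 250e-9            # particle radius (m), assumed
chi    = 1.0               # effective volumetric susceptibility, assumed
grad_B = 1000.0            # field gradient near the implant (T/m), assumed; implants create large local gradients
u_flow = 0.05              # axial flow velocity (m/s), assumed
V_p = (4.0 / 3.0) * np.pi * r_p**3

def capture_fraction(B0, channel_radius=1.5e-3, length=0.02):
    """Fraction of uniformly released particles that reach the wall before exiting."""
    f_mag = V_p * chi * B0 * grad_B / mu0         # magnetization force (N)
    v_radial = f_mag / (6.0 * np.pi * eta * r_p)  # terminal radial drift velocity (m/s)
    t_res = length / u_flow                       # residence time in the implant segment
    reach = v_radial * t_res                      # radial distance covered while in the segment
    return min(reach / channel_radius, 1.0)       # uniform entry positions assumed

for B0 in (0.1, 0.3, 0.5):
    print(f"B = {B0:.1f} T -> captured fraction ~ {capture_fraction(B0):.2f}")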

Keywords: capture efficiency, implant-assisted magnetic drug targeting (IA-MDT), magnetic nanoparticles, modelling

Procedia PDF Downloads 439
430 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches

Authors: Shani Brathwaite, Deborah Villarroel-Lamb

Abstract:

Wave run-up may be defined as the time-varying position of the landward extent of the water’s edge, measured vertically from the mean water level position. The hydrodynamics of the swash zone and the accurate prediction of maximum wave run-up play a critical role in the study of coastal engineering. The understanding of these processes is necessary for the modeling of sediment transport, beach recovery and the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, there remains a lack of detailed knowledge in this area. In particular, there has been insufficient consideration of bed porosity, and ultimately infiltration/exfiltration processes, in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It pays particular attention to how well various empirical formulae can predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used, as well as porosity measurements collected using a double ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses. These physical parameters comprise wave and beach characteristics such as wave height, wave direction, period, beach slope, the magnitude of wave setup, and beach porosity. Most parameterizations of maximum wave run-up are described using differing parameters and do not always have good predictive capability. This study therefore seeks to improve run-up prediction by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to the improvement of the prediction of sediment transport, beach recovery and design of coastal engineering structures in Trinidad.
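A minimal sketch of the proposed correlation analysis is shown below; the data file and column names are hypothetical placeholders for the field measurements described above.

# Pearson correlation between observed maximum run-up and each candidate predictor,
# including bed porosity. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

obs = pd.read_csv("runup_observations.csv")   # hypothetical field dataset

predictors = ["wave_height", "wave_period", "wave_direction",
              "beach_slope", "wave_setup", "porosity"]

for var in predictors:
    r, p = pearsonr(obs[var], obs["max_runup"])
    print(f"{var:>15}: r = {r:+.2f} (p = {p:.3f})")

# An inverse (negative) correlation with porosity would support the hypothesis
# that infiltration lowers the maximum run-up.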

Keywords: beach porosity, empirical models, infiltration, swash, wave run-up

Procedia PDF Downloads 317
429 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 63
428 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from a Ferroalloy Plant

Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen

Abstract:

Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of hydrogen and carbon arranged in aromatic rings. PAH are neutral, non-polar molecules that are produced by incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing "SCORE" project, supported by industry and the Norwegian Research Council, is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH itself. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain and there is only a limited number of studies. The paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modelling approach, the oxidation is modelled including advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water. A chemical reactor network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in the evaluation of the oxidation behavior of PAH under various conditions.
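For illustration, a perfectly stirred reactor of this kind can be set up in a few lines with the open-source Cantera toolkit. The sketch below is not the SCORE model: the mechanism file is a hypothetical placeholder that would need to contain PAH chemistry (naphthalene, C10H8, is used as the tracked species), and the inlet composition, temperature, volume, and residence time are assumed values.

# Minimal perfectly stirred reactor (PSR) sketch in Cantera for PAH oxidation.
# The mechanism file and inlet state are hypothetical placeholders.
import cantera as ct

gas = ct.Solution("pah_mechanism.yaml")      # assumed mechanism containing PAH species
gas.TPX = 1100.0, ct.one_atm, "C10H8:0.002, O2:0.20, H2O:0.02, N2:0.778"
x_in = gas["C10H8"].X[0]                     # inlet naphthalene mole fraction

inlet, exhaust = ct.Reservoir(gas), ct.Reservoir(gas)
psr = ct.IdealGasReactor(gas)                # the stirred combustion zone
psr.volume = 1.0e-3                          # m^3, assumed

residence_time = 0.5                         # s; design parameter to vary

def mdot(t):
    # Inlet mass flow chosen so the nominal residence time stays constant.
    return psr.mass / residence_time

ct.MassFlowController(inlet, psr, mdot=mdot)
ct.Valve(psr, exhaust, K=1.0)                # keeps the reactor near 1 atm

sim = ct.ReactorNet([psr])
sim.advance(20.0 * residence_time)           # march toward a (quasi) steady state

conversion = 1.0 - psr.thermo["C10H8"].X[0] / x_in
print(f"T = {psr.T:.0f} K, tau = {residence_time} s, PAH conversion = {conversion:.3f}")

Sweeping residence_time and inlet temperature in such a reactor, and then chaining several reactors together, is one common way to build the chemical reactor network mentioned above.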

Keywords: PAH, PSR, energy recovery, ferroalloy furnace

Procedia PDF Downloads 244
427 Defining the Immersive Need Level for Optimal Adoption of Virtual Worlds with the BIM Methodology

Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin

Abstract:

In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important to improve knowledge circulation, product quality, production sustainability and user satisfaction. However, there is currently a lack of a common definition of immersion in the construction industry, leading to misunderstandings and limiting the use of advanced immersive technologies. Furthermore, the lack of guidelines and a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level." This concept, matured by recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of "Informational Need Level" has been well clarified with the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature only provide some hints about the technology and related equipment, leaving the procedural approach unexplored and open to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement, via different immersion scales, in the design, construction or management process of a building or infrastructure. The need for information constitutes the basis for acquiring relevant knowledge, while the immersive need can manifest itself later, once a solid information base has been established, by engaging the senses and developing immersive awareness. This new approach could solve the problem of inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.

Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive need level

Procedia PDF Downloads 52
426 Relative Importance of Different Mitochondrial Components in Maintaining the Barrier Integrity of Retinal Endothelial Cells: Implications for Vascular-associated Retinal Diseases

Authors: Shaimaa Eltanani, Thangal Yumnamcha, Ahmed S. Ibrahim

Abstract:

Purpose: Mitochondrial dysfunction is central to the breakdown of barrier integrity of retinal endothelial cells (RECs) in various blinding eye diseases such as diabetic retinopathy and retinopathy of prematurity. Therefore, we aimed to dissect the role of different mitochondrial components, specifically those of oxidative phosphorylation (OxPhos), in maintaining the barrier function of RECs. Methods: Electric cell-substrate impedance sensing (ECIS) technology was used to assess in real time the role of different mitochondrial components in the total impedance (Z) of human RECs (HRECs) and its components: the capacitance (C) and the total resistance (R). HRECs were treated with specific mitochondrial inhibitors that target different steps in OxPhos: rotenone for complex I, oligomycin for ATP synthase, and FCCP for uncoupling OxPhos. Furthermore, the data were modeled to investigate the effects of these inhibitors on the three parameters that govern the total resistance of cells: cell-cell interactions (Rb), cell-matrix interactions (α), and cell membrane permeability (Cm). Results: Rotenone (1 µM) produced the greatest reduction in Z, followed by FCCP (1 µM), whereas no reduction in Z was observed after treatment with oligomycin (1 µM). Following this, we deconvoluted the effect of these inhibitors on Rb, α, and Cm. Firstly, rotenone (1 µM) completely abolished the resistance contribution of Rb, as Rb became zero immediately after the treatment. Secondly, FCCP (1 µM) eliminated the resistance contribution of Rb only after 2.5 hours and increased Cm without a considerable effect on α. Lastly, oligomycin had the lowest impact among these inhibitors on Rb, which became similar to the control group by the end of the experiment, without noticeable effects on Cm or α. Conclusion: These results demonstrate differential roles for complex I, complex V, and the coupling of OxPhos in maintaining the barrier functionality of HRECs, with complex I being the most important component in regulating the barrier functionality and the spreading behavior of HRECs. Such differences can be used in investigating gene expression as well as in screening selective agents that improve the functionality of complex I, to be used in therapeutic approaches for treating REC-related retinal diseases.
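As a simple illustration of how a measured impedance is split into its resistive and capacitive parts at a single frequency, the sketch below assumes a series resistance-capacitance representation; the frequency, magnitude, and phase values are illustrative, not data from the study.

# Decompose a single-frequency impedance reading into R and C (series RC assumed).
# Numerical values are illustrative only.
import numpy as np

freq  = 4000.0                     # measurement frequency (Hz), assumed
z_mag = 1850.0                     # measured |Z| (ohm), assumed
phase = np.deg2rad(-35.0)          # measured phase angle (rad), assumed

omega = 2.0 * np.pi * freq
R = z_mag * np.cos(phase)          # resistive component (ohm)
X = z_mag * np.sin(phase)          # reactance (ohm), negative for capacitive loads
C = -1.0 / (omega * X)             # equivalent series capacitance (F)

print(f"R = {R:.0f} ohm, C = {C*1e9:.1f} nF")
# In the ECIS model, R is further decomposed into the cell-cell (Rb) and
# cell-matrix (alpha) contributions, while C relates to the membrane capacitance Cm.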

Keywords: human retinal endothelial cells (HRECs), rotenone, oligomycin, FCCP, oxidative phosphorylation (OxPhos), capacitance, impedance, ECIS modeling, Rb resistance, α resistance, barrier integrity

Procedia PDF Downloads 70
425 The Relationship between Proximity to Sources of Industry-Related Outdoor Air Pollution and Children's Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010

Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk

Abstract:

Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) receives important industrial emissions to the air from the Industrial Heartland Alberta (IHA) to the northeast and from coal-fired power plants (CFPP) to the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data, using descriptive, hypothesis-testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in the risk of children's asthma ED visits as distance increases around the CFPP area; this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in the risk of children's asthma ED visits as distance increases around the IHA area, and this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP but not around the IHA within the CMAE. These results are probably explained by the direction of air pollutant dispersion driven by the predominant and subdominant wind directions at each site. The use of different approaches to detect clusters of disease is valuable for a better understanding of the presence, shape, direction, and size of disease clusters around pollution sources.
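The distance and direction effects described above can be estimated with a Poisson regression of DA-level counts offset by the child population. The sketch below is a generic illustration with hypothetical column names and data file, not the authors' exact model specification.

# Poisson GLM of asthma ED visit counts versus distance and a distance-by-direction
# interaction, offset by the child population. File and columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

da = pd.read_csv("da_asthma_counts.csv")   # hypothetical: one row per dissemination area

X = pd.DataFrame({
    "distance_km": da["distance_km"],      # distance from the DA centroid to the source
    "dist_x_sin":  da["distance_km"] * np.sin(np.deg2rad(da["angle_deg"])),
    "dist_x_cos":  da["distance_km"] * np.cos(np.deg2rad(da["angle_deg"])),
})
X = sm.add_constant(X)

model = sm.GLM(da["asthma_visits"], X,
               family=sm.families.Poisson(),
               offset=np.log(da["child_population"])).fit()
print(model.summary())
# A negative distance coefficient indicates declining risk away from the source;
# the interaction terms capture how that gradient is modified by direction.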

Keywords: air pollution, asthma, disease cluster, industry

Procedia PDF Downloads 254