Search results for: interface modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5175

495 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec

Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed

Abstract:

Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation while demands grow for a better infrastructure system in response to high standards of safety, health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks, and they are subject to high levels of deterioration because of variable traffic loading, extreme weather conditions, cycles of freeze and thaw, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative, especially in large transportation networks, due to the huge gap between the need for maintenance actions and the funds available to perform them. Deterioration models are a very important component of an effective BMS. This paper presents a probabilistic time-based model that is capable of predicting the condition ratings of concrete bridge decks throughout their service life. The deterioration process of the concrete bridge decks is modeled using a semi-Markov process. One of the main challenges of the Markov chain approach is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions using goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as a database to construct the deterioration model. Finally, a comparison is conducted between the Markov chain and the semi-Markov chain to select the most feasible prediction model.
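The core of the approach above (fitting sojourn times to a candidate distribution by maximum likelihood and checking the fit with a goodness-of-fit test) can be sketched in a few lines of Python. This is an illustrative sketch on synthetic data, not the authors' code; the Weibull choice, sample size, and values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical sojourn times (years) spent in one bridge-deck condition
# state before transition -- stand-ins for the MTQ inspection records.
sojourn_years = rng.weibull(1.8, size=200) * 12.0

# Fit a candidate probability density function by maximum likelihood
# (scipy's fit() performs MLE); location fixed at zero for a duration.
shape, loc, scale = stats.weibull_min.fit(sojourn_years, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
ks_stat, ks_p = stats.kstest(sojourn_years, "weibull_min",
                             args=(shape, loc, scale))

print(f"Weibull shape={shape:.2f}, scale={scale:.2f}, KS p-value={ks_p:.3f}")
```

In a full semi-Markov model, one such fitted distribution per condition state replaces the fixed one-step transition probabilities of an ordinary Markov chain.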

Keywords: bridge management system, bridge decks, deterioration model, Semi-Markov chain, sojourn times, maximum likelihood estimation

Procedia PDF Downloads 204
494 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection

Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol

Abstract:

The aim of this study was to follow canopy reflectance patterns in response to soil water deficit and to detect trends of change in biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy, and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. Canopy reflectance was measured on a regular weekly schedule. In addition, Leaf Area Index (LAI), Chlorophyll (a+b) content (Cab), and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted against the hyperspectral measurements by means of an iterative optimization method to retrieve vegetation biophysical and biochemical parameters. The relationships of retrieved LAI, Cab, Cw, and Cs (senescent material) with soil moisture content were established in two separate groups: stressed and non-stressed. To differentiate the water stress condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion using canopy spectral data showed good correlation with soil water content under the water stress condition. These parameters co-varied with soil moisture content under stress (Cab: R2 = 0.91, Cw: R2 = 0.97, Cs: R2 = 0.88, and LAI: R2 = 0.48) at the canopy level. To validate the results, the relationship between vegetation parameters measured in the laboratory and soil moisture content was established. The results were fully consistent with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy.
Since water stress changes all parts of the spectrum, we conclude that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
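The inversion step can be illustrated with a toy forward model standing in for PROSPECT+SAIL: an iterative optimizer adjusts the model parameters until the modelled spectrum matches the measured one. The two-parameter model, the parameter names, and the spectra below are invented for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a canopy reflectance forward model: reflectance as a
# function of a pseudo-wavelength axis x and two free parameters (proxy
# "LAI" and proxy "Cab").  The real study inverts PROSPECT+SAIL instead.
def forward(params, x):
    lai, cab = params
    return 0.5 * np.exp(-0.1 * cab * x) + 0.05 * lai * x / (1.0 + x)

x = np.linspace(0.0, 4.0, 50)            # pseudo-wavelength axis
true_params = np.array([3.0, 0.8])
measured = forward(true_params, x)       # the "hyperspectral measurement"

# Iterative optimization: minimise the residual between measured and
# modelled spectra, starting from a deliberately wrong first guess.
result = least_squares(
    lambda p: forward(p, x) - measured,
    x0=[1.0, 0.3],
    bounds=([0.0, 0.0], [10.0, 5.0]),
)
print("retrieved parameters:", result.x)
```

With noise-free synthetic data the optimizer recovers the true parameters almost exactly; with real spectra, regularisation and sensible bounds (as used here) keep the retrieval stable.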

Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress

Procedia PDF Downloads 220
493 The Effects of Subjective and Objective Indicators of Inequality on Life Satisfaction in a Comparative Perspective Using a Multi-Level Analysis

Authors: Atefeh Bagherianziarat, Dana Hamplova

Abstract:

The inverse social gradient in life satisfaction (LS) is a well-established research finding. To estimate the influence of inequality on LS, most studies have explored the effect of objective aspects of inequality, or individuals’ socioeconomic status (SES). However, relatively fewer studies have recently confirmed a significant effect of the subjective aspect of inequality, or subjective socioeconomic status (SSS), on life satisfaction over and above SES. In other words, some studies confirm that individuals’ perception of their unequal status in society (SSS) can moderate the impact of their absolute unequal status on their life satisfaction. Nevertheless, this newly confirmed moderating link has not been shown to work in the same way in societies with different levels of social inequality, or for people who hold the value of equality to different degrees. In this study, we compared the moderating influence of subjective inequality on the link between objective inequality and life satisfaction. In particular, we focused on differences across welfare state regimes based on Esping-Andersen's theory. We also explored the moderating role of belief in the value of equality on the links between objective and subjective inequality and LS in the given societies. Since the variables under study were measured at both the individual and country levels, we applied multilevel analysis to the European Social Survey data (round 9). The results showed that people in different regimes report statistically significant differences in life satisfaction, which are explained to different extents by their household income and their perception of income inequality. The findings support previous evidence of a moderating influence of perceived inequality on the link between objective inequality and LS; however, this link differs across welfare state regimes.
The results of the multilevel modeling showed that country-level subjective equality is a positive predictor of individuals’ life satisfaction, while the Gini coefficient, taken as the indicator of absolute inequality, has a smaller effect on life satisfaction. Country-level subjective equality also moderates the confirmed link between individuals’ income and their life satisfaction. It can be concluded that both individual- and country-level subjective inequality slightly moderate the effect of individuals’ income on their life satisfaction.
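The moderation idea above (an interaction between individual income and a country-level subjective-equality score) can be sketched with ordinary least squares on synthetic data. This is not the authors' multilevel model, which would add random country intercepts; the data-generating values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins: individual income (z-scored) and a country-level
# subjective-equality score.  The true data-generating process includes
# a positive interaction, i.e. country-level equality moderates the
# income -> life-satisfaction link.
income = rng.normal(size=n)
country_equality = rng.normal(size=n)
life_sat = (0.3 * income + 0.2 * country_equality
            + 0.15 * income * country_equality
            + rng.normal(scale=0.5, size=n))

# OLS with an interaction term (the moderation test); a full analysis
# would use a multilevel model with random country intercepts instead.
X = np.column_stack([np.ones(n), income, country_equality,
                     income * country_equality])
beta, *_ = np.linalg.lstsq(X, life_sat, rcond=None)
print("intercept, income, equality, interaction =", np.round(beta, 3))
```

A positive estimated interaction coefficient means the income effect on life satisfaction grows with country-level subjective equality, which is exactly the moderation pattern the abstract describes.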

Keywords: individual values, life satisfaction, multilevel analysis, objective inequality, subjective inequality, welfare regimes status

Procedia PDF Downloads 92
492 Habitat Suitability, Genetic Diversity and Population Structure of Two Sympatric Fruit Bat Species Reveal the Need for Urgent Conservation Action

Authors: Mohamed Thani Ibouroi, Ali Cheha, Claudine Montgelard, Veronique Arnal, Dawiyat Massoudi, Guillelme Astruc, Said Ali Ousseni Dhurham, Aurelien Besnard

Abstract:

The Livingstone's flying fox (Pteropus livingstonii) and the Comorian fruit bat (P. seychellensis comorensis) are two endemic fruit bat species among the most threatened animals of the Comoros archipelago. Despite their role, like all flying fox species, as important ecosystem service providers through pollination and seed dispersal, little is known about their ecology, population genetics, and population structure, which makes the development of evidence-based conservation strategies difficult. In this study, we assessed the spatial distribution and ecological niche of both species using Species Distribution Modeling (SDM) based on the recent Ensemble of Small Models (ESMs) approach with presence-only data. Population structure and genetic diversity of the two species were assessed using both mitochondrial and microsatellite markers based on non-invasive genetic samples. Our ESMs highlight a clear niche partitioning of the two sympatric species. The Livingstone's flying fox has a very limited distribution, restricted to steep slopes of natural forest at high elevations. In contrast, the Comorian fruit bat has a relatively large geographic range spread over low elevations in farmland and villages. Our genetic analysis shows low genetic diversity for both fruit bat species. It also shows that the Livingstone's flying fox populations on the two islands are genetically isolated, while no evidence of genetic differentiation between islands was detected for the Comorian fruit bat. Our results support the idea that natural habitat loss, especially the loss and fragmentation of natural forest, is an important factor shaping the distribution of the Livingstone's flying fox, limiting its foraging area and reducing its potential roosting sites. In contrast, the Comorian fruit bat seems to be favored by human activities, probably because its diet is less specialized.
We conclude that the Livingstone's flying fox and its habitat should be a high conservation priority at the scale of the Comoros archipelago.
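The ESM idea referenced above, fitting many small (here bivariate) models and averaging their habitat-suitability predictions weighted by a skill score, can be sketched as follows. The predictors, species data, and AUC-based weighting scheme are illustrative assumptions, not the study's actual configuration.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400

# Synthetic stand-ins for environmental predictors at presence/background
# points (think elevation, forest cover, slope, distance to villages).
X = rng.normal(size=(n, 4))
# Presence probability driven mainly by the first two predictors.
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p)

# ESM: fit every bivariate model and weight its prediction by skill
# (AUC above chance), which avoids overfitting scarce presence data.
preds, weights = [], []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    model = LogisticRegression().fit(X[:, [i, j]], y)
    scores = model.predict_proba(X[:, [i, j]])[:, 1]
    preds.append(scores)
    weights.append(max(roc_auc_score(y, scores) - 0.5, 0.0))

ensemble = np.average(preds, axis=0, weights=weights)
print("ensemble AUC:", round(roc_auc_score(y, ensemble), 3))
```

The weighted average lets informative predictor pairs dominate while weak pairs contribute little, which is the main appeal of ESMs for rare species with few presence records.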

Keywords: Comoros islands, ecological niche, habitat loss, population genetics, fruit bats, conservation biology

Procedia PDF Downloads 262
491 Latitudinal Impact on Spatial and Temporal Variability of 7Be Activity Concentrations in Surface Air across Europe

Authors: M. A. Hernández-Ceballos, M. Marín-Ferrer, G. Cinelli, L. De Felice, T. Tollefsen, E. Nweke, P. V. Tognoli, S. Vanzo, M. De Cort

Abstract:

This study analyses the latitudinal impact on the spatial and temporal distribution of the cosmogenic isotope 7Be in surface air across Europe. The long-term databases of six sampling sites (Ivalo, Helsinki, Berlin, Freiburg, Sevilla, and La Laguna) that regularly provide data to the Radioactivity Environmental Monitoring (REM) network, managed by the Joint Research Centre (JRC) in Ispra, were used. The stations were selected according to two factors: 1) heterogeneity in terms of latitude and altitude, and 2) long database coverage. The combination of these two criteria ensures a high degree of representativeness of the results. Temporal coverage varies between stations; the present study uses stations with records running more or less continuously from 1984 to 2011. The mean 7Be activity concentrations ranged from 2.0 ± 0.9 mBq/m3 (Ivalo, north) to 4.8 ± 1.5 mBq/m3 (La Laguna, south), an increasing gradient from north to south of 0.06 mBq/m3. However, there was no correlation with altitude, since all stations are sited within the atmospheric boundary layer. The analyses indicated that the range of 7Be activity varies with the solar cycle and its phase (maximum or minimum), with an impact that differs between stations according to their location. The results indicated significant seasonal behavior, with maximum concentrations occurring in summer and minima in winter, although the values reached and the months in which they occurred differed between stations. Because of the large heterogeneity in the temporal pattern with which the individual radionuclide analyses were performed at each station, a 7Be monthly index was calculated to normalize the measurements and allow direct comparison of the monthly evolution among stations. Different intensities and evolutions of the mean monthly index were observed.
Knowledge of the spatial and temporal distribution of this natural radionuclide in the atmosphere is a key input for modeling studies of atmospheric processes, which are important phenomena to take into account in the case of a nuclear accident.
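A minimal sketch of the monthly-index normalisation described above, on synthetic data: the long-term mean of each calendar month is divided by the station's overall mean, so stations with different absolute 7Be levels become directly comparable. The summer-peaking seasonal shape below is an assumption for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Synthetic monthly 7Be record (mBq/m3) for one station: a summer-peaking
# seasonal cycle plus noise, over the 1984-2011 window used in the study.
idx = pd.date_range("1984-01-01", "2011-12-01", freq="MS")
seasonal = 3.0 + 1.0 * np.sin(2 * np.pi * (idx.month - 4) / 12)
activity = seasonal + rng.normal(scale=0.3, size=len(idx))
series = pd.Series(activity, index=idx)

# Monthly index: per-calendar-month mean divided by the long-term mean.
monthly_index = series.groupby(series.index.month).mean() / series.mean()
print(monthly_index.round(2))
```

By construction the index averages to about 1, so only the relative seasonal shape remains and stations can be overlaid regardless of their absolute activity levels.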

Keywords: Beryllium-7, latitudinal impact in Europe, seasonal and monthly variability, solar cycle

Procedia PDF Downloads 333
490 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band

Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman

Abstract:

In this paper, vessel detection using an artificial neural network is proposed in order to automatically construct a vessel detection model from day/night band (DNB) satellite imagery of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The goal of our research is to establish a vessel detection method using DNB satellite imagery in order to monitor changes in vessel activity over a wide region. Temporal vessel monitoring is very important for detecting events and understanding circumstances within the maritime environment. For vessel location and detection, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been studied. However, each data source lacks information due to unreliable operation or limits on continuous observation. Therefore, the fusion of effective data and methods is important for future monitoring of the maritime environment. DNB is effective for detecting small vessels, such as fishing boats, that are difficult to observe in AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images are of moderate resolution and are affected by cloud, but they can observe the same regions every day. The DNB sensor can observe light produced by artificial sources, such as vehicles and buildings, at night, and can detect small vessels from their fishing lights on open water. However, modeling vessel detection using DNB is very difficult, since complex atmospheric and lunar conditions must be considered due to the strong influence of lunar reflection from cloud on DNB. Therefore, an artificial neural network was applied to learn the vessel detection model.
As an additional feature for vessel detection, the brightness temperature at 3.7 μm (BT3.7) was used, because BT3.7 can serve as a parameter for atmospheric conditions.
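The learning step can be sketched with a small neural network on synthetic stand-ins for the DNB and BT3.7 features. The network architecture, feature construction, and labels below are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 1000

# Synthetic per-pixel features standing in for the real inputs: DNB
# radiance and 3.7-micron brightness temperature (BT3.7, a proxy for
# atmospheric/cloud state), both z-scored.  In this toy labelling, a
# pixel is a vessel only if it is bright in DNB under favourable BT3.7;
# bright pixels with the wrong BT3.7 mimic moonlit cloud.
dnb = rng.normal(size=n)
bt37 = rng.normal(size=n)
is_vessel = ((dnb > 0.5) & (bt37 > 0.0)).astype(int)

X = np.column_stack([dnb, bt37])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, is_vessel)
print("training accuracy:", round(clf.score(X, is_vessel), 3))
```

The point of the second feature mirrors the abstract: radiance alone cannot separate vessels from lunar glint off cloud, but adding BT3.7 makes the classes separable, which a small network learns easily.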

Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite

Procedia PDF Downloads 234
489 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza

Abstract:

Background: Partners In Health – Malawi introduced an operational research initiative, the Primary Health Care (PHC) Surveys, in 2020, which seeks to assess progress in the delivery of care in the district. The study consists of five long surveys, namely Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), conducted primarily in four health facilities in Neno District: Neno District Hospital, Dambe Health Centre, Chifunga, and Matope. These annual surveys are usually conducted from January, with the target of presenting the final report by June. Once data is collected and analyzed, a series of reviews takes place before the final report is reached. Initially, the manual process took over nine months to produce the final report, and initial findings reported that only about 76.9% of the data added up when cross-checked against paper-based sources. Purpose: The aim of this approach is to move away from manually pulling data, redoing analysis, and reporting, which is associated not only with delays and inconsistencies in reporting but also with poor data quality if not done carefully. The automation approach utilizes features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source, CommCare, so that a single click of a ‘refresh’ button populates the updated information in visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic forms using the CommCare mobile application. We then connected the CommCare mobile app directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This made it possible to create visualizations, reports, and dashboards in Power BI.
In contrast to the process of manually collecting data on paper-based questionnaires, entering it into ordinary spreadsheets, and re-running the analysis every time a report is prepared, the team utilized CommCare and Microsoft Power BI technologies. We used validations and logic in CommCare to capture data with fewer errors. We used Power BI features to host the reports online, publishing them through a cloud-computing process. We switched from sharing ordinary report files to sharing a link with recipients, giving them the freedom to dig into further findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced the research timeline from the initial nine months to five. It also improved the consistency of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and opened opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and to improve data quality. On this basis, the concept of data automation should be seriously considered when conducting operational research, for both efficiency and decision-making.
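The API data pipeline can be sketched as building paginated export requests that a client such as Power BI refreshes on demand. The host, project space, form identifier, and query parameters below are hypothetical placeholders; the real endpoint and authentication must be taken from the CommCare API documentation.

```python
from urllib.parse import urlencode, urlunsplit

# Hypothetical endpoint details -- substitute the real CommCare export
# URL, project space, and auth scheme from the CommCare API docs.
BASE_HOST = "www.commcarehq.org"
PROJECT = "example-project"

def export_url(form_id: str, limit: int = 100, offset: int = 0) -> str:
    """Build a paginated JSON-export request URL for one survey form.

    A client (Power BI, a script, ...) walks the pages by advancing
    `offset` until fewer than `limit` records come back.
    """
    path = f"/a/{PROJECT}/api/form/"
    query = urlencode({"xmlns": form_id, "limit": limit,
                       "offset": offset, "format": "json"})
    return urlunsplit(("https", BASE_HOST, path, query, ""))

url = export_url("phc-sick-child-v1", limit=500)
print(url)
```

Pointing the BI tool at such a parameterised URL (instead of exported spreadsheet files) is what makes the single-click "refresh" workflow described above possible.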

Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards

Procedia PDF Downloads 113
488 D-Lysine Assisted 1-Ethyl-3-(3-Dimethylaminopropyl)Carbodiimide / N-Hydroxy Succinimide Initiated Crosslinked Collagen Scaffold with Controlled Structural and Surface Properties

Authors: G. Krishnamoorthy, S. Anandhakumar

Abstract:

The effect of D-lysine (D-Lys) on collagen cross-linked with 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC)/N-hydroxysuccinimide (NHS) is evaluated using experimental and modeling tools. The results for the Coll-D-Lys-EDC/NHS scaffold indicate an increase in tensile strength (TS), percentage elongation (%E), and denaturation temperature (Td), and a decrease in decomposition rate, compared to the L-Lys-EDC/NHS scaffold. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) analyses revealed a well-ordered, properly oriented, and well-aligned scaffold structure. D-Lys stabilizes the scaffold against degradation by collagenase more effectively than L-Lys. Cell assays showed more than 98% fibroblast (NIH3T3) viability and improved cell adhesion and protein adsorption after 72 h of culture compared with the native scaffold. Cell attachment after 74 h was robust, with cytoskeletal analysis showing that the attached cells were aligned along the fibers with a spindle-shaped appearance. Gene expression analyses revealed no apparent alterations in mRNA levels, and cell proliferation was not adversely affected. D-Lys plays a pivotal role in the self-assembly and conformation of collagen fibrils. D-Lys-assisted EDC/NHS-initiated cross-linking induces the formation of a carboxamide through activation of the side-chain -COOH group, followed by aminolysis of the O-isoacylurea intermediates by -NH2 groups, which are joined directly via isopeptide bonds. This leads to the formation of intra- and inter-helical cross-links. Modeling studies indicated that D-Lys binds to a collagen-like peptide (CLP) through multiple H-bonding and hydrophobic interactions. Orientational changes of collagenase on CLP-D-Lys are observed, which may decrease the accessibility of the CLP to degradation and stabilize it against the enzyme's action.
D-Lys has the lowest binding energy and improves fibrillar assembly and staggered alignment without undesired structural stiffness or aggregation. The proteolytic machinery is less well equipped to deal with the Coll-D-Lys scaffold than with Coll-L-Lys. The information derived from the present study could help in designing collagenolytically stable heterochiral collagen-based scaffolds for biomedical applications.

Keywords: collagen, collagenase, collagen like peptide, D-lysine, heterochiral collagen scaffold

Procedia PDF Downloads 384
487 Effects of Polymer Adsorption and Desorption on Polymer Flooding in Waterflooded Reservoir

Authors: Sukruthai Sapniwat, Falan Srisuriyachai

Abstract:

Polymer flooding is one of the most well-known methods in Enhanced Oil Recovery (EOR) technology. It can be implemented after either primary or secondary recovery, creating favorable conditions for the displacement mechanism in order to lower the residual oil in the reservoir. Polymer substances lower the mobility ratio of the process by increasing the viscosity of the injected water. Polymer flooding can therefore increase volumetric sweep efficiency, which leads to a better recovery factor. Moreover, polymer adsorption onto the rock surface can help decrease the permeability contrast in reservoirs with high heterogeneity. Since the absolute permeability is reduced, the effective permeability to water, which represents the flow ability of the injected fluid, is also reduced. Once polymer is adsorbed onto the rock surface, polymer molecules can be desorbed when different fluids are injected. This study evaluates the effects of the adsorption and desorption of polymer solutions on the oil recovery mechanism. A reservoir model is constructed with the reservoir simulation program STARS® commercialized by the Computer Modeling Group (CMG). Various polymer concentrations, starting times of the polymer flooding process, and polymer injection rates were evaluated with selected polymer desorption degrees of 0, 25, 50, 75, and 100%; the higher the value, the more adsorbed polymer returns to the flowing fluid. According to the results, polymer desorption lowers polymer consumption, especially at low concentrations. Furthermore, the starting time of polymer flooding and the injection rate affect oil production. The results show that waterflooding followed by earlier polymer flooding increases the oil recovery factor, and a higher injection rate also enhances recovery. Polymer concentration is related to polymer consumption through the two main benefits of polymer flooding described above.
Therefore, the polymer slug size should be optimized based on polymer concentration. Polymer desorption re-mobilizes polymer previously adsorbed onto the rock surface, resulting in increased sweep efficiency in the later period of the polymer flooding process. Even though waterflooding supports polymer injectivity, water cut at the producer can prematurely terminate oil production. A higher injection rate decreases polymer adsorption because it reduces the retention time of the polymer flooding process.
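The mobility-ratio benefit described above can be made concrete with the standard definition: M is the displacing water's mobility over the displaced oil's, and values at or below 1 indicate a favorable, piston-like displacement. The relative permeabilities and viscosities below are illustrative values, not the study's reservoir data.

```python
# Mobility ratio M = (k_rw / mu_w) / (k_ro / mu_o), water displacing oil.
def mobility_ratio(krw, mu_w, kro, mu_o):
    return (krw / mu_w) / (kro / mu_o)

krw, kro = 0.3, 0.8          # endpoint relative permeabilities (assumed)
mu_o = 5.0                   # oil viscosity, cP (assumed)

m_water = mobility_ratio(krw, 1.0, kro, mu_o)     # plain waterflood
m_polymer = mobility_ratio(krw, 20.0, kro, mu_o)  # polymer-thickened water

print(f"waterflood M = {m_water:.3f}, polymer flood M = {m_polymer:.3f}")
```

Raising the injected-water viscosity from 1 cP to 20 cP drops M from unfavorable (above 1, prone to viscous fingering) to well below 1, which is exactly the sweep-efficiency mechanism the abstract invokes.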

Keywords: enhanced oil recovery technology, polymer adsorption and desorption, polymer flooding, reservoir simulation

Procedia PDF Downloads 322
486 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis

Authors: Asif Arshid, Ying Huang, Denver Tolliver

Abstract:

The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials, including steel, wood, rock, and soil, each with its own level of heterogeneity and imperfection. The application of probabilistic methods to trackbed structural analysis that incorporate material and geometric variability remains deeply underexplored. The authors developed and validated a 3-dimensional FE-based numerical trackbed model, and in this study they investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their Coefficients of Variation (COV) while keeping their means constant. These variations are formulated using a Gaussian (normal) distribution. Two failure mechanisms in the subgrade, namely Progressive Shear Failure and Excessive Plastic Deformation, are examined. Preliminary results of the risk-based probabilistic analysis for Progressive Shear Failure revealed that variation in ballast depth is the most influential factor for the vertical stress at the top of the subgrade surface. In the case of Excessive Plastic Deformation in the subgrade layer, variations in its own depth and Young's modulus proved most important, while the ballast properties remained almost indifferent. For both failure modes, it is also observed that the reliability index for subgrade failure increases with increasing COV of the ballast depth and the subgrade Young's modulus.
The findings of this work are of particular significance for studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbed and for evaluating the associated risk. In addition, the approach provides a tool to supplement deterministic analysis procedures and decision-making for railroad maintenance.
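The reliability index has a simple closed form in the textbook case of independent, normally distributed resistance and load effect; the sketch below uses that case with illustrative numbers (not the study's FE results) to show how β and the corresponding failure probability respond to a change in COV.

```python
import math

def reliability_index(mu_r, cov_r, mu_s, cov_s):
    """beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) for independent,
    normally distributed resistance R and load effect S."""
    sigma_r = cov_r * mu_r
    sigma_s = cov_s * mu_s
    return (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)

def failure_probability(beta):
    # P_f = Phi(-beta), via the error function (stdlib only).
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Illustrative numbers: subgrade shear resistance vs. the vertical
# stress transmitted through the ballast (units arbitrary but matched).
beta_lo = reliability_index(mu_r=120.0, cov_r=0.10, mu_s=80.0, cov_s=0.15)
beta_hi = reliability_index(mu_r=120.0, cov_r=0.25, mu_s=80.0, cov_s=0.15)

print(f"COV 0.10: beta = {beta_lo:.2f}, Pf = {failure_probability(beta_lo):.2e}")
print(f"COV 0.25: beta = {beta_hi:.2f}, Pf = {failure_probability(beta_hi):.2e}")
```

In this standard formulation a larger COV widens the resistance distribution and lowers β (raising the failure probability), which is the basic trade-off a risk-based trackbed analysis quantifies layer by layer.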

Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade

Procedia PDF Downloads 136
485 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for SAR data of high temporal resolution, geosynchronous orbit (GEO) SAR is attracting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels, the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than high radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation is accomplished in four steps. (1) Read the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extract the primitives (triangles) that are visible from the SAR platform. (2) Compute the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generate the raw data with GEO SAR signal modeling; since the usual ‘stop-and-go’ model is not valid for GEO SAR, the range model must be reconsidered.
(4) Finally, generate the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given, and GEO SAR images for different attitudes, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
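Step (3)'s point about the range model can be illustrated with a toy range and phase history for a moving point target seen from geosynchronous altitude. The nadir geometry, wavelength, and ship speed below are illustrative assumptions, not the simulator's actual signal model.

```python
import numpy as np

C = 299_792_458.0            # speed of light, m/s
R_GEO = 42_164_000.0         # geosynchronous orbit radius, m
R_EARTH = 6_371_000.0        # mean Earth radius, m
WAVELENGTH = 0.24            # L-band example, m (assumed)

# Point target (a ship) moving at 10 m/s on the surface; satellite held
# fixed at nadir in this toy geometry.  Real GEO SAR geometry is far
# more involved -- this only shows the scale of the quantities.
t = np.linspace(-60.0, 60.0, 1201)       # slow time, s (long aperture)
ship_x = 10.0 * t                        # along-track ship position, m

h = R_GEO - R_EARTH                      # nadir distance to the surface
slant_range = np.sqrt(h**2 + ship_x**2)  # slant range history R(t)

# Two-way phase history of the echo, the quantity SAR focusing exploits.
phase = -4.0 * np.pi * slant_range / WAVELENGTH

# Round-trip delay is a sizeable fraction of a second, which is why the
# usual 'stop-and-go' assumption breaks down at geosynchronous altitude.
delay = 2.0 * slant_range[len(t) // 2] / C
print(f"nadir range {h/1e6:.2f} Mm, round-trip delay {delay:.3f} s")
```

With a round-trip delay near a quarter of a second, both the platform and a moving ship travel appreciably between transmit and receive, so the range model (and hence the phase history used for focusing) must account for that motion explicitly.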

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 171
484 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach

Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi

Abstract:

Dual water distribution systems have been proposed as a way to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual versus singular water distribution systems under stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that differ between dual and singular distribution infrastructure, based on system attributes such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, and operational expenditures, was used as input to the simulation model. The model was then used to simulate various scenarios of demand change, funding level, water price growth, and renewal strategy. The long-term resilience and robustness of each distribution infrastructure were evaluated using performance measures including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors, and Classification and Regression Tree (CART) analysis was adopted to assess the robustness of each system under the various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon.
The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) identification of renewal strategies and decision factors that enhance the long-term resilience and robustness of dual and singular water distribution systems under different stressors.
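The CART robustness assessment can be sketched as fitting a decision tree to a table of simulated scenarios, each labeled by whether performance stayed acceptable over the horizon. The scenario variables, thresholds, and labels below are invented for illustration; the tree's split rules then delineate the robust region of the scenario space.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
n = 500

# Synthetic scenario table: each row is one simulated 100-year run with
# sampled demand growth, funding level, and price growth; the label says
# whether the run stayed above a performance threshold ("robust").
demand_growth = rng.uniform(0.0, 0.03, n)
funding_level = rng.uniform(0.5, 1.5, n)
price_growth = rng.uniform(0.0, 0.05, n)
robust = ((funding_level > 0.9) & (demand_growth < 0.02)).astype(int)

X = np.column_stack([demand_growth, funding_level, price_growth])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, robust)

# The recovered split thresholds are the interpretable output: they say
# which combinations of stressors keep the system robust.
print(export_text(tree, feature_names=["demand", "funding", "price"]))
```

Because the labels here depend only on funding level and demand growth, the fitted tree ignores price growth and recovers thresholds near the true ones, which is exactly the kind of scenario-discovery reading CART supports.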

Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems

Procedia PDF Downloads 284
483 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation which allows the presentation of large and intricate datasets in a simple map-interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment has numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and is liable to bias towards the discipline and expertise that is involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, the need for an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. 
Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but their vulnerability to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
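The least-cost-routing step described above can be sketched in a few lines. The following is an illustrative Python sketch, not the project's GIS implementation: the toy rasters and the weighting of the debris constraint are hypothetical, and a standard Dijkstra search over a 4-connected grid stands in for the GIS least-cost-path tool.

```python
import heapq

def least_geocost_path(geocost, start, end):
    """Dijkstra least-cost route across a 2D geocost raster (4-connected)."""
    rows, cols = len(geocost), len(geocost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + geocost[nr][nc]  # penalty of entering the cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # reconstruct the route by walking back from the end terminal
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

# composite geocost = weighted sum of individual constraint rasters (toy data)
slope  = [[1, 1, 5], [1, 9, 5], [1, 1, 1]]
debris = [[0, 0, 4], [0, 9, 0], [0, 0, 0]]
geocost = [[s + 2 * d for s, d in zip(rs, rd)] for rs, rd in zip(slope, debris)]

route, cost = least_geocost_path(geocost, (0, 0), (2, 2))
# the route skirts the high-geocost cell at (1, 1)
```

The route automatically trades route length against hazard: the steep, debris-prone cell carries a geocost of 27 and is bypassed even though it lies on the shortest path.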

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 400
482 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, discarding the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
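As a concrete illustration of conflation as a product of PDFs: for normal inputs (used here because the product has a simple closed form; the paper's application works with exponential distributions), the conflated distribution is again normal with an inverse-variance weighted mean, the weighted-least-squares result mentioned above. The recovery-time figures below are hypothetical.

```python
def conflate_normals(dists):
    """Conflate independent normal estimates: the product of their PDFs is
    again normal, with inverse-variance (precision) weighted mean, which is
    equivalent to weighted least squares / best linear unbiased estimation."""
    weights = [1.0 / var for _, var in dists]
    mean = sum(w * mu for (mu, _), w in zip(dists, weights)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

# hypothetical recovery-time estimates (days): severe-event vs nuisance-event inputs
severe = (30.0, 16.0)    # mean 30 days, variance 16 (high uncertainty)
nuisance = (10.0, 1.0)   # mean 10 days, variance 1 (low uncertainty)
mean, var = conflate_normals([severe, nuisance])
# the conflated mean lies between the parents, pulled toward the
# lower-variance (nuisance) input, and the conflated variance shrinks
```

Note how the result differs from a plain average of the means (20 days): the low-variance input dominates, which is exactly the "favors the distribution with minimum variation" behavior described above.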

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 97
481 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study

Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang

Abstract:

Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. 
Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with new, modern General Data Protection Regulations (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.

Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media

Procedia PDF Downloads 79
480 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor

Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud

Abstract:

Milli-structured heat exchanger/reactors have recently been widely used, especially in the chemical industry, due to their enhanced heat and mass transfer performance compared to conventional apparatuses. In our work, the ‘DeanHex’ heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square cross-sectioned channel has a hydraulic diameter of 2 mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. Using the 1D approach to model the milli-channel significantly reduces the calculation time compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between simulation at the small scale and simulation at the reactor scale at a reasonable CPU cost. The heat transfer process between the 1D milli-channel and its 3D surroundings is modeled. The feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originating from two previous works. Temperature profiles along the channel axis obtained by simulation fit the experimental profiles for both cases. The next step is to integrate the liquid-liquid mass transfer model and to validate it with our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the ‘mixture model’ approach. The mass transfer behavior is represented by an overall volumetric mass transfer coefficient ‘kLa’ correlation obtained from our experimental results in the millimetric-size meandering channel.
The present work is a first step towards the scale-up of our ‘DeanHex’ expecting future industrialization of such equipment. Therefore, a generalized scaled-up model of the reactor comprising all the transfer processes will be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.
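The 1D channel energy balance underlying such a 1D/3D coupling can be sketched as a simple marching scheme along the channel axis. This is a minimal single-phase sketch, not the COMSOL model: the fluid properties, heat transfer coefficient, and channel parameters below are illustrative placeholders, not the DeanHex values.

```python
def channel_profile(T_in, T_wall, h, P, m_dot, cp, L, n=1000):
    """March the 1D steady energy balance m_dot*cp*dT/dx = -h*P*(T - T_wall)
    along the channel axis with explicit Euler steps (P = heated perimeter).
    In a full 1D/3D coupling, T_wall would come from the 3D solid model."""
    dx = L / n
    T = T_in
    profile = [T]
    for _ in range(n):
        T += -h * P * (T - T_wall) * dx / (m_dot * cp)
        profile.append(T)
    return profile

# illustrative values only (not the DeanHex geometry or operating point):
# water-like fluid cooling from 80 C toward a 20 C wall in a small channel
profile = channel_profile(T_in=80.0, T_wall=20.0, h=5000.0, P=0.008,
                          m_dot=0.001, cp=4180.0, L=0.5)
# the temperature relaxes exponentially toward the wall temperature
```

The exponential relaxation this produces is the 1D counterpart of the axial temperature profiles compared against experiments above; the 3D solid supplies the wall temperature at each axial position.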

Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification

Procedia PDF Downloads 127
479 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics

Authors: Mariya Dayana, Budhmal Jain

Abstract:

Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on an undulating terrain with an average height of 90 m above Mean Sea Level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and Runway End Safety Area (RESA) at both ends along the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a Reinforced Soil Slope (RSS) with a maximum slope of 65 degrees. An embankment fill of average 70 m height with steep slopes located in a high-rainfall area is a unique feature of this project. The design and construction were challenging, the alignment being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the fascia units to protect against surface failure. Face anchorage was also provided by wrapping the geogrids along the fascia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m in height were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at base level. The design of the RSS fill was done using ReSSA software and verified by PLAXIS 2D modeling. Both slip-surface failure and wedge failure cases were considered in static and seismic analyses for local and global failure. Laterite soil won from site excavation was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance.
This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of runway length and the RESA area.

Keywords: airport, embankment, gabion, high strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D

Procedia PDF Downloads 158
478 Macroeconomic Implications of Artificial Intelligence on Unemployment in Europe

Authors: Ahmad Haidar

Abstract:

Modern economic systems are characterized by growing complexity, and addressing their challenges requires innovative approaches. This study examines the implications of artificial intelligence (AI) on unemployment in Europe from a macroeconomic perspective, employing data modeling techniques to understand the relationship between AI integration and labor market dynamics. To understand the AI-unemployment nexus comprehensively, this research considers factors such as sector-specific AI adoption, skill requirements, workforce demographics, and geographical disparities. The study utilizes a panel data model, incorporating data from European countries over the last two decades, to explore the potential short-term and long-term effects of AI implementation on unemployment rates. In addition to investigating the direct impact of AI on unemployment, the study also delves into the potential indirect effects and spillover consequences. It considers how AI-driven productivity improvements and cost reductions might influence economic growth and, in turn, labor market outcomes. Furthermore, it assesses the potential for AI-induced changes in industrial structures to affect job displacement and creation. The research also highlights the importance of policy responses in mitigating potential negative consequences of AI adoption on unemployment. It emphasizes the need for targeted interventions such as skill development programs, labor market regulations, and social safety nets to enable a smooth transition for workers affected by AI-related job displacement. Additionally, the study explores the potential role of AI in informing and transforming policy-making to ensure more effective and agile responses to labor market challenges. 
In conclusion, this study provides a comprehensive analysis of the macroeconomic implications of AI on unemployment in Europe, highlighting the importance of understanding the nuanced relationships between AI adoption, economic growth, and labor market outcomes. By shedding light on these relationships, the study contributes valuable insights for policymakers, educators, and researchers, enabling them to make informed decisions in navigating the complex landscape of AI-driven economic transformation.
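The panel data model described above typically controls for country-specific fixed effects so that the AI-unemployment relationship is identified from within-country variation. A minimal sketch of the within (fixed-effects) estimator on a hypothetical two-country toy panel follows; the variable names and numbers are illustrative, not the study's data.

```python
def fixed_effects(y, x, entity):
    """Within (entity-demeaning) estimator for y_it = beta*x_it + a_i + e_it:
    demean y and x within each entity, then run univariate OLS on the
    demeaned data, which removes the entity fixed effects a_i."""
    yd, xd = list(y), list(x)
    for g in set(entity):
        idx = [i for i, e in enumerate(entity) if e == g]
        my = sum(y[i] for i in idx) / len(idx)
        mx = sum(x[i] for i in idx) / len(idx)
        for i in idx:
            yd[i] -= my
            xd[i] -= mx
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# hypothetical toy panel: 2 countries x 3 years, AI-adoption index vs unemployment
entity = ["DE", "DE", "DE", "FR", "FR", "FR"]
ai_index = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
unemployment = [5.0, 4.0, 3.0, 9.0, 8.0, 7.0]  # different levels, same within-slope
beta = fixed_effects(unemployment, ai_index, entity)
```

Pooled OLS on this toy data would be biased by the level difference between the two countries; demeaning recovers the common within-country slope regardless of those levels.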

Keywords: artificial intelligence, unemployment, macroeconomic analysis, european labor market

Procedia PDF Downloads 70
477 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization, and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach for efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers; from the distance data measured between them, motion profiles can be derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for the visualization of relative distance data. Since the database is already categorized by process type, classification methods (e.g. Support Vector Machines) from the field of supervised learning are used.
Achieving the necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors that account for possible signal interference sources (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, the correction models, and the methods used for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through targeted parameter variation; similar potential can be observed when varying the parameters of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation, and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: an automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting in a graphical user interface (GUI).
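The RSSI-to-distance step and the signal smoothing can be illustrated with the standard log-distance path-loss model. This is a generic sketch: the calibration constants below (the reference RSSI at 1 m and the path-loss exponent n) are assumptions, not the configured values from the study, and a plain trailing moving average stands in for the filters discussed above.

```python
def smooth(rssi, window=3):
    """Trailing moving-average filter to damp RSSI fluctuations."""
    out = []
    for i in range(len(rssi)):
        w = rssi[max(0, i - window + 1): i + 1]
        out.append(sum(w) / len(w))
    return out

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n)),
    where tx_power is the calibrated RSSI at 1 m (dBm) and n is the
    path-loss exponent (~2 in free space, higher in cluttered halls)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

raw = [-59.0, -65.0, -62.0, -80.0, -63.0]  # hypothetical beacon readings (dBm)
distances = [rssi_to_distance(r) for r in smooth(raw)]
```

Smoothing before conversion matters because the exponential model amplifies RSSI noise: the -80 dBm outlier alone would map to roughly 11 m, while its smoothed neighborhood maps to a far more plausible distance.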

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 125
476 Designing Offshore Pipelines Facing the Geohazard of Active Seismic Faults

Authors: Maria Trimintziou, Michael Sakellariou, Prodromos Psarropoulos

Abstract:

Nowadays, the exploitation of hydrocarbon reserves in deep seas and oceans, in combination with the need to transport hydrocarbons among countries, has made the design, construction, and operation of offshore pipelines very significant. Under this perspective, it is evident that many more offshore pipelines are expected to be constructed in the near future. Since offshore pipelines usually cross extended areas, they may face a variety of geohazards that impose substantial permanent ground deformations (PGDs) on the pipeline and potentially threaten its integrity. In a geohazard area, there are three options to proceed. The first is to avoid the problematic area through rerouting, which is usually regarded as an unfavorable solution due to its high cost. The second is to apply (if possible) mitigation/protection measures in order to eliminate the geohazard itself. The third, often appealing, option is to allow the pipeline to cross the geohazard area, provided that the pipeline has been verified against the expected PGDs. In areas with moderate or high seismicity, the design of an offshore pipeline is more demanding due to earthquake-related geohazards, such as landslides, soil liquefaction phenomena, and active faults. It is worth mentioning that although there is great experience worldwide in offshore geotechnics and pipeline design, experience in the seismic design of offshore pipelines is rather limited, since most pipelines have been constructed in non-seismic regions (e.g. North Sea, West Australia, Gulf of Mexico, etc.). The current study focuses on the seismic design of offshore pipelines against active faults.
After an extensive literature review of the provisions of the seismic norms worldwide and of the available analytical methods, the study simulates numerically (through finite-element modeling and strain-based criteria) the distress of offshore pipelines subjected to PGDs induced by active seismic faults at the seabed. Factors, such as the geometrical properties of the fault, the mechanical properties of the ruptured soil formations, and the pipeline characteristics, are examined. After some interesting conclusions regarding the seismic vulnerability of offshore pipelines, potential cost-effective mitigation measures are proposed taking into account constructability issues.

Keywords: offshore pipelines, seismic design, active faults, permanent ground deformations (PGDs)

Procedia PDF Downloads 581
475 The Relationship between Body Positioning and Badminton Smash Quality

Authors: Gongbing Shan, Shiming Li, Zhao Zhang, Bingjun Wan

Abstract:

Badminton originated in ancient civilizations in Europe and Asia more than 2000 years ago. Presently, it is played almost everywhere, with an estimated 220 million people playing badminton regularly, ranging from professionals to recreational players, and it is the second most played sport in the world after soccer. In Asia, the popularity of badminton and the involvement of people surpass soccer. Unfortunately, scientific research on badminton skills is hardly proportional to badminton's popularity. A search of the literature has shown that the body of biomechanical investigations is relatively small. One of the dominant skills in badminton is the forehand overhead smash, which accounts for about one fifth of attacks during games. Empirical evidence shows that one has to adjust the body position in relation to the incoming shuttlecock to produce a powerful and accurate smash. Therefore, positioning is a fundamental aspect influencing smash quality. A search of the literature has shown a dearth of studies on this fundamental aspect. The goals of this study were to determine the influence of positioning and training experience on smash quality in order to discover information that could help in learning and acquiring the skill. Using a 10-camera, 3D motion capture system (VICON MX, 200 frames/s) and a 15-segment, full-body biomechanical model, 14 skilled and 15 novice players were measured and analyzed. Results have revealed that body positioning has a direct influence on the quality of a smash, especially on the shuttlecock release angle and clearance height (passing over the net) of offensive players. The results also suggest that, for training proper positioning, one could adopt a self-selected comfortable position facing a statically hung shuttlecock and then step one foot back, a practical reference marker for learning. This perceptual marker could be applied in guiding the learning and training of beginners.
As one gains experience through repetitive training, improved limb coordination would further increase smash quality. The researchers hope that the findings will benefit practitioners in developing effective training programs for beginners.

Keywords: 3D motion analysis, biomechanical modeling, shuttlecock release speed, shuttlecock release angle, clearance height

Procedia PDF Downloads 492
474 Simulation of Colombian Exchange Rate to Cover the Exchange Risk Using Financial Options Like Hedge Strategy

Authors: Natalia M. Acevedo, Luis M. Jimenez, Erick Lambis

Abstract:

Imperfections in the capital market are used to argue the relevance of the corporate risk management function. With corporate hedging, the value of the company is increased by reducing the volatility of the expected cash flow, making it possible to face lower bankruptcy costs and financial difficulties without sacrificing the tax advantages of debt financing. To protect the cash flows of Colombian exporting firms from exchange rate trouble, this dissertation uses financial options on the exchange rate between the peso and the dollar to realize a financial hedge. In this study, a hedging strategy is designed for an exporting company in Colombia with the objective of preventing losses from fluctuations because, if the exchange rate falls, the company obtains fewer Colombian pesos from its exports than agreed. The exchange rate of Colombia is measured by the TRM (Representative Market Rate), representing the number of Colombian pesos per American dollar. First, the TRM is modelled through Geometric Brownian Motion; with this, the rate is simulated using Monte Carlo simulation, finding the mean TRM for three, six and twelve months. For financial hedging, currency options were used. The 6-month projection was covered with European-type currency options with a strike price of $2,780.47 for each month; this value corresponds to the last value of the historical TRM. In the settlement of the options each month, the price paid for the premium, calculated with the Black-Scholes method for currency options, was taken into account. Finally, with the price modeling and the Monte Carlo simulation, the effect of the exchange hedge with options on the exporting company was determined, by estimating the unit price at which dollars were converted in the scenario without coverage and in the scenario with coverage.
From these scenarios, it is determined that the TRM will have a bull trend and the exporting firm will be affected positively because it will get more pesos for each dollar. The results show that the financial options manage to reduce the exchange risk. The expected value with coverage is approximately equal to the expected value without coverage, but the 5% percentile with coverage is greater than without coverage. The foregoing indicates that in the worst scenarios the exporting companies will obtain better prices for the sale of their currency if they hedge.
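The two building blocks of the strategy, Monte Carlo simulation of the TRM under Geometric Brownian Motion and the premium of a European currency put, can be sketched as follows. The drift, volatility, and interest rates are illustrative assumptions (not the study's calibration), and the Garman-Kohlhagen variant of Black-Scholes, the standard form for currency options, is used for the premium.

```python
import math
import random

def simulate_gbm(s0, mu, sigma, T, steps, n_paths, seed=1):
    """Monte Carlo terminal values of a GBM exchange rate:
    dS = mu*S dt + sigma*S dW, using the exact log-Euler scheme."""
    rng = random.Random(seed)
    dt = T / steps
    finals = []
    for _ in range(n_paths):
        s = s0
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        finals.append(s)
    return finals

def gk_put(s0, k, T, r_dom, r_for, sigma):
    """Garman-Kohlhagen premium of a European currency put, the exporter's
    hedge against a falling rate (r_dom/r_for: domestic/foreign rates)."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(s0 / k) + (r_dom - r_for + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return k * math.exp(-r_dom * T) * N(-d2) - s0 * math.exp(-r_for * T) * N(-d1)

# illustrative parameters; the strike matches the last historical TRM from the text
finals = simulate_gbm(s0=2780.47, mu=0.05, sigma=0.12, T=0.5, steps=126, n_paths=2000)
premium = gk_put(s0=2780.47, k=2780.47, T=0.5, r_dom=0.05, r_for=0.02, sigma=0.12)
```

Comparing the hedged payoff (terminal rate plus put payoff minus premium) against the unhedged terminal rate across the simulated paths reproduces the kind of with/without-coverage comparison described above.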

Keywords: currency hedging, futures, geometric Brownian motion, options

Procedia PDF Downloads 123
473 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industry sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining energy sector activities in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed, and a framework has been developed for the market penetration of new efficient technologies in this sector. The amount of oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The statistical analysis developed and implemented in this research ranks the importance of the key factors affecting oil sands production in Alberta as: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil export (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) has been developed to cover the related technical, economic, and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada.
The market shares of novel in situ mining technologies with low energy and water use are assessed and calculated in the market penetration framework, including: 1) partial upgrading; 2) liquid addition to steam to enhance recovery (LASER); 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD); 4) cyclic solvent; 5) heated solvent; 6) wedge well; 7) enhanced modified steam and gas push (eMSAGP); 8) electro-thermal dynamic stripping process (ET-DSP); 9) Harris electro-magnetic heating applications (EMHA); 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
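One common way to model a technology's market penetration profile over such a planning horizon is a Bass diffusion model. The following sketch is illustrative only: the innovation and imitation coefficients are hypothetical, not values from the OSETMP framework.

```python
def bass_adoption(p, q, m, periods):
    """Bass diffusion model: new adoption per period is driven by an
    innovation coefficient p (external influence) and an imitation
    coefficient q (word-of-mouth), toward market potential m."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(cumulative)
    return path

# hypothetical coefficients for one in situ technology's market share
path = bass_adoption(p=0.03, q=0.38, m=1.0, periods=30)
# cumulative penetration follows the familiar S-curve, saturating toward m
```

Early adoption is dominated by the p term and later adoption by the q term, which is what produces the slow-start, rapid-growth, saturation shape typical of published penetration profiles.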

Keywords: diffusion models, market penetration, oil sands, energy technologies

Procedia PDF Downloads 326
472 Advancing Hydrogen Production Through Additive Manufacturing: Optimising Structures of High Performance Electrodes

Authors: Fama Jallow, Melody Neaves, Professor Mcgregor

Abstract:

The quest for sustainable energy sources has driven significant interest in hydrogen production as a clean and efficient fuel. Alkaline water electrolysis (AWE) has emerged as a prominent method for generating hydrogen, necessitating the development of advanced electrode designs with improved performance characteristics. Additive manufacturing (AM) by the laser powder bed fusion (LPBF) method presents an opportunity to tailor electrode microstructures and properties, enhancing their performance. This research proposes investigating the AM of electrodes with different lattice structures to optimize hydrogen production. The primary objective is to employ advanced modeling techniques to identify and select two optimal lattice structures for electrode fabrication. LPBF will be used to fabricate electrodes with precise control over lattice geometry, pore size, and distribution. The performance evaluation will encompass energy consumption and porosity analysis. AWE will assess energy efficiency, aiming to identify lattice structures with enhanced hydrogen production rates and reduced power requirements. Computed tomography (CT) scanning will analyze porosity to determine material integrity and mass transport characteristics. The research aims to bridge the gap between AM and hydrogen production by investigating the potential of lattice structures in electrode design. By systematically exploring lattice structures and their impact on performance, this study aims to provide valuable insights into the design and fabrication of highly efficient and cost-effective electrodes for AWE. The outcomes hold promise for advancing hydrogen production through AM. The research will have a significant impact on the development of sustainable energy sources. The findings from this study will help to improve the efficiency of AWE, making it a more viable option for hydrogen production.
This could lead to a reduction in our reliance on fossil fuels, which would have a positive impact on the environment. The research is also likely to have a commercial impact. The findings could be used to develop new electrode designs that are more efficient and cost-effective. This could lead to the development of new hydrogen production technologies, which could have a significant impact on the energy market.
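As an illustration of the porosity analysis step, the void fraction of a segmented (binarised) CT volume can be computed directly from its voxels; the array below is a synthetic stand-in for real scan data, not output from the study.

```python
import numpy as np

def porosity(binary_volume):
    """Porosity of a segmented CT volume.

    binary_volume: 3D array where 1 marks solid material and 0 marks void.
    Returns the void fraction (0 = fully dense, 1 = fully porous).
    """
    return 1.0 - binary_volume.mean()

# Synthetic 10x10x10 volume: a solid block containing a 5x5x5 void region
vol = np.ones((10, 10, 10), dtype=np.uint8)
vol[2:7, 2:7, 2:7] = 0
print(porosity(vol))  # 125 void voxels / 1000 voxels -> 0.125
```

In practice the segmentation threshold applied to the CT greyscale data dominates the result, which is why material integrity and pore connectivity are assessed alongside the raw void fraction.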

Keywords: hydrogen production, electrode, lattice structure, Africa

Procedia PDF Downloads 62
471 Durability Analysis of a Knuckle Arm Using VPG System

Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee

Abstract:

A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A former study suggested a lightweight design of a knuckle arm considering these structural performances and using metamodel-based optimization. Six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method. The finite element method was utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle, satisfying the design requirements. Then, its manufacturability was investigated by performing forging analysis. The forging was done as a hot process, and the product was made through two-step forging. As the final step of the development process, the durability is investigated by using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all of its information with the part manufacturer; thus, the part manufacturer is limited in predicting the durability performance at the full-car level. The eta/VPG software provides libraries of suspensions, tires, and roads, which are commonly used parts, making full-car modeling possible. First, the full car is modeled by referencing the following information: overall length 3,595 mm, overall width 1,595 mm, CVW (curb vehicle weight) 910 kg, front suspension MacPherson strut, rear suspension torsion beam axle, tire 235/65R17. Second, the road is selected as cobblestone; the cobblestone road condition is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm.
The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is suggested, and it can be seen that the developed knuckle arm satisfies the durability requirement at the full-car level. The VPG analysis is performed successfully even though it does not give an exact prediction, since the full-car model is a rough one. Thus, this approach can be used effectively when the details of the full car are not given.
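Because the optimization relies on kriging interpolation as the metamodel, a minimal one-dimensional kriging-style predictor is sketched below; the Gaussian covariance model, its length scale, and the sample responses are illustrative assumptions, not the settings or FE results used in the study.

```python
import numpy as np

def gauss_cov(d, length_scale=1.0):
    """Gaussian covariance model (an assumed choice for this sketch)."""
    return np.exp(-(d / length_scale) ** 2)

def simple_kriging(x_train, y_train, x_new, length_scale=1.0):
    """Simple kriging predictor with the sample mean as the known trend.

    Interpolates exactly at the training points, which is what makes
    kriging attractive as a metamodel for expensive FE responses.
    """
    mu = y_train.mean()
    K = gauss_cov(np.abs(x_train[:, None] - x_train[None, :]), length_scale)
    k = gauss_cov(np.abs(x_train[:, None] - x_new[None, :]), length_scale)
    weights = np.linalg.solve(K + 1e-12 * np.eye(len(x_train)), k)
    return mu + weights.T @ (y_train - mu)

x = np.array([0.0, 1.0, 2.0, 3.0])       # a made-up shape design variable
y = np.array([0.0, 0.8, 0.9, 0.1])       # made-up FE stress responses
pred = simple_kriging(x, y, np.array([1.0]))  # reproduces y at a sample point
```

A real metamodel-based optimization would build such a predictor over all six shape variables and search it instead of re-running the finite element model at every candidate design.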

Keywords: knuckle arm, structural optimization, metamodel, forging, durability, VPG (Virtual Proving Ground)

Procedia PDF Downloads 414
470 Identifying Environmental Adaptive Genetic Loci in Calotropis Procera (Estabragh): Population Genetics and Landscape Genetic Analyses

Authors: Masoud Sheidaei, Mohammad-Reza Kordasti, Fahimeh Koohdar

Abstract:

Calotropis procera (Aiton) W.T.Aiton (Apocynaceae) is an economically and medicinally important plant species: an evergreen, perennial shrub growing in arid and semi-arid climates that can tolerate very low annual rainfall (150 mm) and a dry season. The plant can also tolerate temperatures ranging from 20 to 30°C but is not frost tolerant. This plant species prefers free-draining sandy soils but can also grow in alkaline and saline soils. It is found at a range of altitudes, from exposed coastal sites to medium elevations up to 1300 m. Due to the morpho-physiological adaptations of C. procera and its ability to tolerate various abiotic stresses, this taxon can compete with desirable pasture species and forms dense thickets that interfere with stock management, particularly mustering activities. Calotropis procera grows only in the southern part of Iran, where it comprises a limited number of geographical populations. We used different population genetics and landscape genetic analyses to produce data on geographical populations of C. procera based on a molecular genetic study using SCoT molecular markers. First, we used spatial principal component analysis (sPCA), as it can analyze data in a reduced space and can be used for co-dominant markers as well as presence/absence data, as is the case for SCoT molecular markers. This method also carries out Moran's I and Mantel tests to reveal spatial autocorrelation and test for the occurrence of isolation by distance (IBD). We also performed random forest analysis to identify the importance of spatial and geographical variables on genetic diversity. Moreover, we used both RDA (redundancy analysis) and LFMM (latent factor mixed model) to identify the genetic loci significantly associated with geographical variables. A niche modelling analysis was carried out to predict the present potential distribution area of these plants and also the area predicted by the year 2050. The results obtained will be discussed in this paper.
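As an illustration of the Mantel test used to detect isolation by distance, a minimal permutation-based version is sketched below; the two small distance matrices are synthetic stand-ins for real geographic and genetic distance matrices, not data from the study.

```python
import numpy as np

def mantel(geo, gen, n_perm=999, seed=0):
    """Permutation Mantel test between two distance matrices.

    Returns the matrix correlation r and a permutation p-value.
    Correlation is computed over the upper triangle only, since the
    matrices are symmetric with zero diagonals.
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(geo, k=1)
    r_obs = np.corrcoef(geo[iu], gen[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(geo))
        r = np.corrcoef(geo[p][:, p][iu], gen[iu])[0, 1]
        if abs(r) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Synthetic example: genetic distance proportional to geographic distance,
# i.e. perfect isolation by distance
pts = np.array([0.0, 1.0, 2.0, 4.0, 7.0])
geo = np.abs(pts[:, None] - pts[None, :])
gen = 0.1 * geo
r, p = mantel(geo, gen)
```

A significant positive r, as produced here, is the signature of IBD that the sPCA workflow tests for on the real SCoT data.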

Keywords: population genetics, landscape genetics, Calotropis procera, niche modeling, SCoT markers

Procedia PDF Downloads 89
469 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel

Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani

Abstract:

Automobile manufacturers are constantly seeking means to reduce the weight of car bodies. The use of several steel grades in auto body assembly has been found to be a good technique to lighten vehicle weight. In recent years, the use of dual-phase (DP) steels, transformation-induced plasticity (TRIP) steels, and boron steels in some parts of the auto body has become a necessity because of their light weight. However, these steels are martensitic: when they undergo a fast heat treatment, the resulting microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the most used techniques in assembling auto bodies, becomes problematic in the case of these steels. RSW being indeed a process where steel is heated and cooled in a very short period of time, the resulting weld nugget is mostly fully martensitic, especially in the case of DP, TRIP, and boron steels, but that also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite in its turn must be avoided as much as possible when welding steel because it is the principal source of brittleness and weakens the weld nugget. Thus, this work aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembly. The phase transformation kinetics during RSW have been predicted through modelling of the whole welding process, and a technique called post weld heat treatment (PWHT) has been applied in order to reduce the martensite fraction in the weld nugget. Simulation has been performed for the AISI 1010 grade, and the results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite, and pearlite during the cooling of the weld nugget. Welding experiments have been done in parallel, and micrographic analyses show the presence of several phases in the weld nugget.
The experimental weld geometry and phase proportions are in good agreement with the simulation results, showing the validity of the model.
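For readers unfamiliar with how a martensite fraction is typically modelled during rapid cooling, a commonly used relation is the Koistinen-Marburger equation, sketched below; the martensite start temperature `Ms` and rate constant are generic values for a low-carbon steel, not parameters reported by the authors for AISI 1010.

```python
import math

def martensite_fraction(T, Ms=450.0, alpha=0.011):
    """Koistinen-Marburger martensite volume fraction on quenching.

    T: current temperature (deg C), Ms: martensite start temperature,
    alpha: rate constant (1/K). Ms and alpha are generic illustrative
    values for a low-carbon steel, not fitted to AISI 1010 here.
    """
    if T >= Ms:
        return 0.0
    return 1.0 - math.exp(-alpha * (Ms - T))

print(martensite_fraction(450.0))  # 0.0 at the martensite start temperature
print(round(martensite_fraction(25.0), 3))  # near-complete transformation
```

A PWHT that holds the nugget above Ms long enough lets diffusional products (ferrite, bainite, pearlite) form instead, which is exactly the effect the simulation predicts.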

Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics

Procedia PDF Downloads 112
468 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator

Authors: Ahmed Abdulrahman, Jalal Foroozesh

Abstract:

CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: the contribution of CO₂ to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase CO₂ diffusion from carbonated water (CW) to the oil phase and the CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil's physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. The material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Also, longitudinal and transverse dispersion coefficients have been added to account for the spatial distribution of CO₂ inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate the CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on the CO₂ spatial spreading (accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor.
A sensitivity analysis was done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients and that a low injection rate should be implemented during CWI.
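To illustrate the kind of finite-difference update such a simulator performs, a minimal explicit 1D advection-dispersion step for a CO₂ concentration profile is sketched below; the grid, velocity, and dispersion coefficient are arbitrary illustrative values, and the actual simulator is 2D, compositional, and fully implicit rather than explicit.

```python
import numpy as np

def step(c, v, D, dx, dt):
    """One explicit advection-dispersion update (upwind advection).

    c: concentration array, v: flow velocity, D: dispersion coefficient.
    Stability of this explicit scheme requires dt*(v/dx + 2*D/dx**2) <= 1.
    """
    cn = c.copy()
    cn[1:-1] = (c[1:-1]
                + dt * D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                - dt * v * (c[1:-1] - c[:-2]) / dx)
    cn[0] = 1.0          # injection boundary: carbonated water inflow
    cn[-1] = cn[-2]      # simple outflow boundary
    return cn

# March a sharp CO2 front through a 100-cell 1D domain
c = np.zeros(100)
c[0] = 1.0
for _ in range(500):
    c = step(c, v=1.0, D=0.5, dx=1.0, dt=0.4)
```

Raising `D` (as formation heterogeneity does in the sensitivity analysis) smears the front over more cells, which is the spatial-spreading effect the abstract attributes to mechanical mixing.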

Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation

Procedia PDF Downloads 167
467 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitor Side Effects Among Breast Cancer Women in Africa

Authors: Jean Paul M. Milambo

Abstract:

Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone-receptive breast cancer in postmenopausal women in various settings. Studies have shown cardiovascular events in some developed countries. To date, the data are sparse for evidence-based recommendations in African clinical settings due to a lack of cancer registries, capacity building, and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping adjunctive to standard care for timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess the knowledge of point-of-care testing (POCT) in six African countries, with participants reached by online survey and by telephone. The incremental cost-effectiveness ratio (ICER) was calculated using a diagnostic accuracy study, based on mathematical modeling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of genotyping from HyBeacon® probe technology to Sanger sequencing showed a sensitivity of 99% (95% CI: 94.55% to 99.97%), a specificity of 89.44% (95% CI: 87.25% to 91.38%), a PPV of 51% (95% CI: 43.77% to 58.26%), and an NPV of 99.88% (95% CI: 99.31% to 100.00%). Based on the mathematical model, the assumptions revealed that the ICER was R7,044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical, and genetic screenings, may improve cancer survivorship.
The feasibility of POCT was demonstrated, but implementation could be achieved by improving the integration of POCT within primary health care and referral cancer hospitals, with capacity-building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
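The reported accuracy metrics all derive from a 2x2 confusion matrix, and the ICER from a simple incremental cost/effect ratio. The sketch below shows both calculations on a hypothetical confusion matrix and made-up costs; none of these counts or costs come from the study.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical counts and costs, for illustration only
m = diagnostic_metrics(tp=99, fp=95, fn=1, tn=805)
extra_cost_per_unit_effect = icer(cost_new=12000, cost_old=5000,
                                  effect_new=3.0, effect_old=2.0)
```

Note how a rare condition yields a high NPV but a modest PPV even with 99% sensitivity, the same pattern seen in the reported results.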

Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors

Procedia PDF Downloads 72
466 Quantitative Seismic Interpretation in the LP3D Concession, Central of the Sirte Basin, Libya

Authors: Tawfig Alghbaili

Abstract:

The LP3D Field is located near the center of the Sirt Basin, in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation, which is mainly limestone interbedded with shale; its average thickness is 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic data interpretation was carried out, and well log data were analyzed. Six reflectors, corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations, were picked and mapped. Special attention was paid to fault interpretation because of the structural complexity of the area. Different attribute analyses were done to build a better understanding of the lateral extension of the structures and to obtain a clearer image of the fault blocks. Time-to-depth conversion was computed using a velocity model generated from check shot and sonic data. A simplified stratigraphic cross-section was drawn through wells A1, A2, A3, and A4-LP3D, demonstrating the distribution and thickness variations of the Beda reservoir across the study area. Petrophysical analysis of the wireline logs was also done, and cross plots of some petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was designed and run to generate fault, facies, and petrophysical models and to calculate the reservoir volumetrics. This study concluded that the depth structure map of the Beda Formation shows that the main structure in the study area is a north-south faulted anticline.
Based on the Beda reservoir models, volumetrics for the base case were calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and build a better understanding of the fault system in the area.
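A base-case STOIIP of this kind follows the standard volumetric formula. The sketch below shows the calculation with made-up reservoir properties; the area, porosity, water saturation, and formation volume factor are illustrative assumptions, not the LP3D model inputs.

```python
def stoiip_mmstb(area_acres, thickness_ft, porosity, sw, bo):
    """Volumetric stock tank oil initially in place, in MMSTB.

    STOIIP = 7758 * A * h * phi * (1 - Sw) / Bo, where 7758 converts
    acre-feet to reservoir barrels. All inputs below are illustrative.
    """
    stb = 7758.0 * area_acres * thickness_ft * porosity * (1.0 - sw) / bo
    return stb / 1.0e6

# Hypothetical inputs: 500 acres, 117.5 ft average pay (the reported Beda
# thickness), 18% porosity, 30% water saturation, Bo = 1.2 rb/stb
oil_in_place = stoiip_mmstb(500, 117.5, 0.18, 0.30, 1.2)
```

In the study, the equivalent inputs come cell by cell from the facies and petrophysical models rather than from single averaged values, and recoverable oil is then STOIIP multiplied by a recovery factor.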

Keywords: LP3D Field, Beda Formation, reservoir models, Seismic attributes

Procedia PDF Downloads 209