Search results for: numerical visualization
527 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, and can only be controlled within a limited range. The distortion is non-linear, particularly in a complex image acquisition system. Thus, distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in a complex image acquisition system, such as when microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates onto an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
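The abstract does not give the fitting procedure, but the core idea, fitting tensor-product 2D B-spline mapping functions that send distorted image coordinates to their ideal (distortion-free) positions from calibration correspondences, can be sketched as follows. This is a minimal sketch using NumPy for the least-squares solve; the spline degree, control-point count, and the synthetic distortion in the demonstration are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: i-th B-spline basis function of degree k over knots t."""
    if k == 0:
        if t[i] <= x < t[i + 1]:
            return 1.0
        # close the last non-empty interval so the domain endpoint is covered
        if x == t[-1] and t[i] < t[i + 1] == t[-1]:
            return 1.0
        return 0.0
    left = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, t, x)
    return left + right

def clamped_knots(n_ctrl, degree):
    """Clamped uniform knot vector on [0, 1] for n_ctrl control points."""
    n_internal = n_ctrl - degree - 1
    internal = [(j + 1) / (n_internal + 1) for j in range(n_internal)]
    return [0.0] * (degree + 1) + internal + [1.0] * (degree + 1)

def fit_mapping(pts_distorted, targets, n_ctrl=6, degree=3):
    """Least-squares fit of one tensor-product B-spline surface c(x, y) -> target coordinate."""
    t = clamped_knots(n_ctrl, degree)
    rows = []
    for (x, y) in pts_distorted:
        bx = [bspline_basis(i, degree, t, x) for i in range(n_ctrl)]
        by = [bspline_basis(j, degree, t, y) for j in range(n_ctrl)]
        rows.append([bx[i] * by[j] for i in range(n_ctrl) for j in range(n_ctrl)])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return t, coef

def evaluate(t, coef, n_ctrl, degree, x, y):
    """Map one distorted point (x, y) to its corrected coordinate."""
    bx = [bspline_basis(i, degree, t, x) for i in range(n_ctrl)]
    by = [bspline_basis(j, degree, t, y) for j in range(n_ctrl)]
    return sum(coef[i * n_ctrl + j] * bx[i] * by[j]
               for i in range(n_ctrl) for j in range(n_ctrl))
```

In practice one such surface is fitted per output coordinate (x and y), and the corrected coordinates are then fed to the usual pin-hole stereo reconstruction.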
Procedia PDF Downloads 498
526 Impact of Nanoparticles in Enhancement of Thermal Conductivity of Phase Change Materials in Thermal Energy Storage and Cooling of Concentrated Photovoltaics
Authors: Ismaila H. Zarma, Mahmoud Ahmed, Shinichi Ookawara, Hamdi Abo-Ali
Abstract:
Phase change materials (PCMs) are an ideal thermal storage medium. They are characterized by a high latent heat, which allows them to store large amounts of energy as the material transitions between physical states. Concentrated photovoltaic (CPV) systems are widely recognized as the most efficient form of photovoltaics (PV), and their thermal energy can be stored in PCMs. However, PCMs often have a low thermal conductivity, which leads to a slow transient response. This makes it difficult to quickly store and access the energy within PCM-based systems, so there is a need to improve transient responses and increase the thermal conductivity. The present study aims to investigate and analyze the melting and solidification process of PCMs enhanced by nanoparticles and held in a container. Heat flux from a concentrated photovoltaic system is applied in order to analyze the thermal performance and the impact of the nanoparticles. The work uses a two-dimensional model which takes the phase change phenomena into account based on the enthalpy method. Numerical simulations have been performed to investigate heat and flow characteristics using the governing equations and to ascertain the impact of the nanoparticle loading. The Rayleigh number, sub-cooling, the unsteady evolution of the melting front, and the velocity and temperature fields were also observed. The predicted results exhibited good agreement, showing thermal enhancement due to the presence of nanoparticles, which leads to a decrease in melting time.
Keywords: thermal energy storage, phase-change material, nanoparticle, concentrated photovoltaic
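The enthalpy method mentioned in the abstract tracks a single enthalpy field and recovers temperature from it, so the melting front needs no explicit tracking. A deliberately reduced one-dimensional sketch (not the authors' two-dimensional model; all parameters are nondimensional, illustrative values) looks like this:

```python
# One-dimensional explicit enthalpy-method sketch: a PCM slab, initially solid and
# slightly subcooled, is melted by a hot wall on the left; the right end is insulated.
nx = 50                      # grid cells
dx = 1.0 / nx                # cell size (nondimensional slab of length 1)
k, c, L = 1.0, 1.0, 5.0      # conductivity, specific heat, latent heat (illustrative)
Tm, Twall = 0.0, 1.0         # melting temperature and hot-wall temperature
dt = 0.2 * dx * dx * c / k   # time step well inside the explicit stability limit

def temperature(H):
    """Invert the enthalpy-temperature relation: sensible / mushy / liquid."""
    if H < 0.0:
        return Tm + H / c            # solid, below the melting point
    if H <= L:
        return Tm                    # mushy zone: absorbing latent heat at Tm
    return Tm + (H - L) / c          # fully molten

H = [-0.1 * c] * nx                  # start as solid, subcooled by 0.1 below Tm
melt_fraction = []
for step in range(4000):
    T = [temperature(h) for h in H]
    Hn = list(H)
    for i in range(nx):
        Tl = Twall if i == 0 else T[i - 1]       # Dirichlet hot wall (ghost value)
        Tr = T[i] if i == nx - 1 else T[i + 1]   # zero-flux (insulated) right end
        Hn[i] += dt * k * (Tl - 2.0 * T[i] + Tr) / (dx * dx)
    H = Hn
    melt_fraction.append(sum(1 for h in H if h >= L) / nx)
```

Nanoparticle loading would enter such a model through effective thermophysical properties (higher k, modified c and L), which is what accelerates the melting front.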
Procedia PDF Downloads 203
525 Research of Seepage Field and Slope Stability Considering Heterogeneous Characteristics of Waste Piles: A Less Costly Way to Reduce High Leachate Levels and Avoid Accidents
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun
Abstract:
Due to their high heaps and large volumes, complex waste layers, and high leachate levels, landfills are prone to environmental pollution and slope instability. It is therefore of great significance to research the heterogeneous seepage field and stability of landfills. This paper focuses on the heterogeneous characteristics of the landfill piles and analyzes the seepage field and slope stability of the landfill using statistical and numerical analysis methods. The calculated results are compared with field measurements and literature data to verify the reliability of the model, which may provide the basis for the design and the safe, eco-friendly operation of the landfill. The main innovations are as follows: (1) The saturated-unsaturated seepage equation of heterogeneous soil is derived theoretically. The heterogeneous landfill is regarded as composed of infinite layers of homogeneous waste, and a method for establishing the heterogeneous seepage model is proposed. The formation law of the stagnant water level of heterogeneous landfills is then studied. It is found that the maximum stagnant water level of a landfill is higher when the heterogeneous seepage characteristics are considered, which harms the stability of the landfill. (2) Considering the heterogeneous weight and strength characteristics of waste, a method of establishing a heterogeneous stability model is proposed and extended to the three-dimensional stability study. It is found that the distribution of heterogeneous characteristics has a great influence on the stability of the landfill slope. During the operation and management of the landfill, the stability of the reservoir bank should be considered alongside the capacity of the landfill.
Keywords: heterogeneous characteristics, leachate levels, saturated-unsaturated seepage, seepage field, slope stability
Procedia PDF Downloads 251
524 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents, delays due to traffic congestion, and various indirect costs. Much research has been done to identify the factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the crash probability of vehicles in the United States using machine learning, focusing on natural and structural causes and excluding spontaneous causes such as overspeeding. These factors range from weather conditions (precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity) to road structure features (bumps, roundabouts, no-exit roads, turning loops, give-way signs, etc.). Probabilities are divided into ten classes, and all predictions are based on multiclass classification techniques, which are supervised learning methods. This study considers all crashes collected by the US government across all states. To calculate the probability, the multinomial expected value was used and assigned as the classification label. We applied three classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the part played by natural and structural causes in crashes. The paper also provides in-depth insights through exploratory data analysis.
Keywords: road safety, crash prediction, exploratory analysis, machine learning
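The label construction the abstract describes, binning an empirical crash probability into ten ordinal classes that a multiclass classifier is then trained on, can be sketched roughly as follows. The condition categories, counts, and the probability cap are invented for illustration; the paper's actual feature set and binning rule are not specified here.

```python
from collections import Counter

# hypothetical crash records: the (weather, road feature) conditions at each crash
crashes = ([("rain", "roundabout")] * 27
           + [("clear", "bump")] * 4
           + [("fog", "no_exit")] * 16)
# hypothetical exposure: observed vehicle passages per condition
exposure = {("rain", "roundabout"): 1000,
            ("clear", "bump"): 1000,
            ("fog", "no_exit"): 500}

def probability_class(p, n_classes=10, p_max=0.05):
    """Map an empirical crash probability onto one of n_classes ordinal labels."""
    return min(int(p / p_max * n_classes), n_classes - 1)

counts = Counter(crashes)
labels = {cond: probability_class(counts[cond] / exposure[cond]) for cond in exposure}
# a supervised multiclass model (logistic regression, random forest, XGBoost, ...)
# would then be trained to predict these labels from the condition features
```

The ordinal labels, rather than raw counts, are what make the task a multiclass classification problem.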
Procedia PDF Downloads 111
523 Lateral-Torsional Buckling of Steel Girder Systems Braced by Solid Web Crossbeams
Authors: Ruoyang Tang, Jianguo Nie
Abstract:
Lateral-torsional bracing members are critical to the stability of girder systems during the construction phase of steel-concrete composite bridges, and the interaction of multiple girders plays an essential role in the determination of the buckling load. In this paper, an investigation is conducted on the lateral-torsional buckling behavior of steel girder systems composed of three or four I-shaped girders braced by solid web crossbeams. The buckling load for such girder systems is comprehensively analyzed, and an analytical solution is developed for uniform pressure loading conditions. Furthermore, post-buckling analysis including initial geometric imperfections is performed, and parametric studies in terms of bracing density, stiffness ratio, and the number and spacing of girders are presented in order to find the optimal bracing plans for an arbitrary girder layout. The theoretical solution of the critical load for the local buckling mode shows good agreement with the numerical results of the eigenvalue analysis. In addition, the parametric analysis shows that both bracing density and stiffness ratio have a significant impact on the initial stiffness, global stability, and failure mode of such girder systems. Taking the effect of initial geometric imperfections into consideration, an increase in bracing density between adjacent girders can effectively improve the bearing capacity of the structure, and a higher beam-girder stiffness ratio results in a more ductile failure mode.
Keywords: bracing member, construction stage, lateral-torsional buckling, steel girder system
Procedia PDF Downloads 124
522 Assessing the Cumulative Impact of PM₂.₅ Emissions from Power Plants by Using the Hybrid Air Quality Model and Evaluating the Contributing Salient Factor in South Taiwan
Authors: Jackson Simon Lusagalika, Lai Hsin-Chih, Dai Yu-Tung
Abstract:
Particles with an aerodynamic diameter of 2.5 micrometers or less, referred to as fine particulate matter (PM₂.₅), are easily inhaled and can travel deeper into the lungs than other particles in the atmosphere, where they may have detrimental health consequences. In this study, we use a hybrid model that combines CMAQ and AERMOD, as well as initial meteorological fields from the Weather Research and Forecasting (WRF) model, to study the impact of power plant PM₂.₅ emissions in South Taiwan, which frequently experiences high PM₂.₅ levels. The specific date of March 3, 2022 was chosen because a power outage prompted the bulk of power plants to shut down. It is hardly conceivable anywhere in the world to turn off the power for the sole purpose of doing research; this power outage and the resulting shutdown of power plants therefore offered a rare opportunity to evaluate the impact of air pollution driven by the power sector. Four numerical experiments were conducted in the study using Continuous Emission Monitoring System (CEMS) data, assuming that the power plants continued to function normally after the power outage. The hybrid model results revealed that power plants have a minor impact in the study region. However, when we examined the accumulation of PM₂.₅, we discovered that once the vortex at 925 hPa was established and moved to the north of Taiwan's coast, the study region experienced higher observed PM₂.₅ concentrations influenced by meteorological factors. This study recommends that decision-makers take into account not only control techniques, specifically emission reductions, but also the atmospheric and meteorological implications in future investigations.
Keywords: PM₂.₅ concentration, power plants, hybrid air quality model, CEMS, vorticity
Procedia PDF Downloads 76
521 Bifurcations of a System of Rotor-Ball Bearings with Waviness and Squeeze Film Dampers
Authors: Sina Modares Ahmadi, Mohamad Reza Ghazavi, Mandana Sheikhzad
Abstract:
Squeeze film damper (SFD) systems are often used in machines with high rotational speeds to reduce non-periodic behavior by providing external damping. These systems are frequently used in aircraft gas turbine engines. Some structural parameters are of great importance in designing such systems, such as the oil film thickness, C, and the outer race mass, mo. Moreover, there is a crucial parameter associated with the manufacturing process known as waviness. Geometric imperfections are called waviness when their wavelength is much longer than the Hertzian contact width; waviness is a considerable source of vibration in ball bearings. In this paper, a system of a flexible rotor and two ball bearings with floating ring squeeze film dampers and consideration of waviness has been modeled and solved by a numerical integration method, namely the Runge-Kutta method, to investigate the dynamic response of the system. The results show that increasing the number of wave lobes, which is due to inappropriate manufacturing, increases non-periodic and chaotic behavior. This result reveals the importance of manufacturing accuracy. Moreover, as long as C < 1.5×10⁻⁴ m, increasing the oil film thickness reduces unwanted vibrations and non-periodic behavior of the system. On the other hand, when C > 1.5×10⁻⁴ m, increasing the oil film thickness increases the chaotic and non-periodic responses. This result shows that although the presence of an oil film reduces non-periodic and chaotic behavior, the oil film has an optimal thickness. In addition, with increasing mo, the disc displacement amplitude increases. This result reveals the importance of utilizing light materials in manufacturing squeeze film dampers.
Keywords: squeeze-film damper, waviness, ball bearing, bifurcation
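The integration scheme named in the abstract, the classical fourth-order Runge-Kutta method, can be sketched for a generic first-order system x' = f(t, x); the rotor-bearing equations of motion, rewritten in first-order form, would be supplied as f. The oscillator in the usage test is only a stand-in to show the integrator's accuracy, not the paper's model.

```python
def rk4_step(f, t, x, h):
    """One classical 4th-order Runge-Kutta step for x' = f(t, x); x is a list of states."""
    k1 = f(t, x)
    k2 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def integrate(f, x0, t0, t1, n_steps):
    """Integrate x' = f(t, x) from t0 to t1 with fixed-step RK4."""
    h = (t1 - t0) / n_steps
    t, x = t0, list(x0)
    for _ in range(n_steps):
        x = rk4_step(f, t, x, h)
        t += h
    return x
```

For the rotor-bearing problem, f would bundle the bearing contact forces (including the waviness terms) and the squeeze film damper forces into the state derivatives.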
Procedia PDF Downloads 381
520 Topping Failure Analysis of Anti-Dip Bedding Rock Slopes Subjected to Crest Loads
Authors: Chaoyi Sun, Congxin Chen, Yun Zheng, Kaizong Xia, Wei Zhang
Abstract:
Crest loads are often encountered in hydropower, highway, open-pit, and other engineering rock slopes. Toppling failure is one of the most common deformation failure types of anti-dip bedding rock slopes. Analysis of such failure of anti-dip bedding rock slopes subjected to crest loads has an important influence on engineering practice. Based on the step-by-step analysis approach proposed by Goodman and Bray, a geo-mechanical model was developed, and a related analysis approach was proposed for the toppling failure of anti-dip bedding rock slopes subjected to crest loads. Using the transfer coefficient method, a formulation was derived for calculating the residual thrust at the slope toe and the support force required to meet the slope stability requirements under crest loads, which provides a scientific reference for the design and support of such slopes. Through slope examples, the influence of crest loads on the residual thrust and sliding ratio coefficient was investigated for cases of different block widths and slope cut angles. The results show that there exists a critical block width for such slopes. The influence of crest loads on the residual thrust is non-negligible when the block thickness is smaller than the critical value. Moreover, the influence of crest loads on slope stability increases with the slope cut angle, and the sliding ratio coefficient of anti-dip bedding rock slopes increases with the crest loads. Finally, the theoretical solutions and numerical simulations using the Universal Distinct Element Code (UDEC) were compared, and the consistent results show the applicability of both approaches.
Keywords: anti-dip bedding rock slope, crest loads, stability analysis, toppling failure
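The step-by-step (transfer coefficient) idea, passing a residual thrust from the crest block down to the toe with each block governed by the worse of its toppling and sliding conditions, can be sketched in a deliberately simplified form. The moment arms, the neglect of side-face friction, the uniform block geometry, and all numbers below are illustrative assumptions, not the paper's (or Goodman and Bray's) exact formulation.

```python
import math

def residual_thrust(n_blocks, b, h, unit_weight, alpha_deg, phi_deg, crest_load=0.0):
    """March from the crest block to the toe, transferring the residual thrust P.

    Simplifications (not the paper's formulation): every block has the same width b
    and height h, inter-block forces act at the block tops, friction on the block
    side faces is neglected, and the crest load acts only on the top block.
    """
    a = math.radians(alpha_deg)           # dip of the basal plane
    tan_phi = math.tan(math.radians(phi_deg))
    P = 0.0
    for n in range(n_blocks):
        W = unit_weight * b * h
        Q = crest_load if n == 0 else 0.0
        # toppling: moment balance about the block's downhill toe
        P_topple = (P * h
                    + 0.5 * W * (h * math.sin(a) - b * math.cos(a))
                    + Q * (h * math.sin(a) - 0.5 * b * math.cos(a))) / h
        # sliding: force balance along the basal plane
        P_slide = P + (W + Q) * (math.sin(a) - math.cos(a) * tan_phi)
        P = max(0.0, P_topple, P_slide)
    return P   # support force needed at the toe to resist this residual thrust
```

Even this schematic reproduces the abstract's qualitative findings: slender blocks transmit a growing thrust, sufficiently wide blocks are self-stable (the critical block width), and a crest load increases the residual thrust at the toe.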
Procedia PDF Downloads 179
519 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by changing the air supply diffuser type, angles, and velocity; diffuser locations and numbers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is used to solve the differential equations governing the conservation of mass, the three momentum components, and energy in the computation of the air flow distribution. Turbulence effects of the flow are represented by a well-established two-equation turbulence model; in this work, the so-called standard k-ε turbulence model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters in this work, used for numerical predictions of indoor air distribution and thermal comfort, are air dry bulb temperature, air velocity, relative humidity, and turbulence parameters. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model, and the PPD (Percentage People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
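Of the three comfort indices named, the ADPI is the simplest to state: at each sampled point, compute the effective draft temperature θ = (Tx − Tm) − 8.0·(Vx − 0.15), and report the percentage of points with −1.7 °C < θ < 1.1 °C and local velocity below 0.35 m/s. A minimal sketch (the sampled points in the test are invented, not from the paper):

```python
def adpi(points, dT_low=-1.7, dT_high=1.1, v_max=0.35):
    """ADPI: percentage of sampled (T degC, V m/s) points whose effective draft
    temperature theta = (Tx - Tmean) - 8.0 * (Vx - 0.15) lies in (dT_low, dT_high)
    and whose local velocity stays below v_max."""
    t_mean = sum(T for T, V in points) / len(points)
    ok = 0
    for T, V in points:
        theta = (T - t_mean) - 8.0 * (V - 0.15)
        if dT_low < theta < dT_high and V < v_max:
            ok += 1
    return 100.0 * ok / len(points)
```

For a well-performing diffuser layout, an ADPI above roughly 80% is usually targeted; the CFD solution supplies the (T, V) samples over the occupied zone.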
Procedia PDF Downloads 409
518 Assessment of a Coupled Geothermal-Solar Thermal Based Hydrogen Production System
Authors: Maryam Hamlehdar, Guillermo A. Narsilio
Abstract:
To enhance the feasibility of utilising geothermal hot sedimentary aquifers (HSAs) for clean hydrogen production, one approach is the implementation of solar-integrated geothermal energy systems. This detailed modelling study conducts a thermo-economic assessment of an advanced Organic Rankine Cycle (ORC)-based hydrogen production system that uses low-temperature geothermal reservoirs, with a specific focus on HSAs, over a 30-year period. In the proposed hybrid system, solar-thermal energy is used to raise the temperature of the water extracted from the geothermal production well. This temperature increase leads to a higher steam output, powering the turbine and subsequently enhancing the electricity output for running the electrolyser. Thermodynamic modelling of a parabolic trough solar (PTS) collector is developed and integrated with the modelling of a geothermal-based configuration. This configuration includes a closed regenerator cycle (CRC), a proton exchange membrane (PEM) electrolyser, and a thermoelectric generator (TEG). Following this, the study investigates the impact of solar energy use on the temperature enhancement of the geothermal reservoir and assesses the resulting consequences for the lifecycle performance of the hydrogen production system in comparison with a standalone geothermal system. The results indicate that, with an appropriate solar collector area, a combined solar-geothermal hydrogen production system outperforms a standalone geothermal system in both cost and rate of production. These findings underscore that a solar-assisted geothermal hybrid system holds the potential to generate lower-cost hydrogen with enhanced efficiency, thereby boosting the appeal of numerous low- to medium-temperature geothermal sources for hydrogen production.
Keywords: clean hydrogen production, integrated solar-geothermal, low-temperature geothermal energy, numerical modelling
Procedia PDF Downloads 68
517 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa
Authors: Samy A. Khalil, U. Ali Rahoma
Abstract:
Satellite data have been routinely utilized to estimate solar radiation and solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa over the ten-year period from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: the monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R²) varies from 0.93 to 0.99. The objective of this research is to provide a reliable representation of the world's solar radiation to aid in the use of solar energy in all sectors.
Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa
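The validation statistics quoted above (MBE, RMSE, MAE, and the correlation coefficient) are standard and straightforward to reproduce; a minimal sketch for comparing a reanalysis series against ground measurements follows, with invented sample numbers.

```python
import math

def validation_stats(model, obs):
    """MBE, RMSE, MAE and squared Pearson correlation between two equal-length series."""
    n = len(obs)
    diffs = [m - o for m, o in zip(model, obs)]
    mbe = sum(diffs) / n                                   # signed mean bias
    rmse = math.sqrt(sum(d * d for d in diffs) / n)        # penalizes large errors
    mae = sum(abs(d) for d in diffs) / n                   # mean absolute error
    mo, mm = sum(obs) / n, sum(model) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, model))
    var_o = sum((o - mo) ** 2 for o in obs)
    var_m = sum((m - mm) ** 2 for m in model)
    r2 = cov * cov / (var_o * var_m)                       # squared Pearson r
    return mbe, rmse, mae, r2
```

Running these per season (rather than over the whole record) is what exposes the seasonal error differences the abstract mentions.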
Procedia PDF Downloads 98
516 Mandatory Wellness Assessments for Medical Students at the University of Ottawa
Authors: Haykal. Kay-Anne
Abstract:
The health and well-being of students is a priority for the Faculty of Medicine at the University of Ottawa. The demands of medical studies are extreme, and many studies confirm that the prevalence of psychological distress is very high among medical students, higher than that of the general population of the same age. The main goal is to identify risk factors for mental health among medical students at the University of Ottawa. The secondary objectives are to determine the variation of these risk factors according to demographic variables, as well as to determine whether there is a change in the mental health of students between the first and third years of their studies. Medical students have a mandatory first- and third-year wellness check meeting. This assessment includes a questionnaire on demographic information, mental health, and risk factors such as physical health, sleep, social support, financial stress, education and career, stress, and drug and/or alcohol use. Student responses were converted to numerical values and analyzed statistically. The results show that 61% of the variation in the mean mental health score is explained by the following risk factors (R² = 0.61, F(9, 396) = 67.197, p < 0.01): lack of sleep and fatigue (β = 0.281, p < 0.001), lack of social support (β = 0.217, p < 0.001), poor study or career development (β = 0.195, p < 0.001), and increased stress and drug and alcohol use (β = -0.239, p < 0.001). No demographic variable has a significant effect on the presence of risk factors. In addition, a fixed-effects regression demonstrated significantly lower mental health (p < 0.1) among first-year students (M = 0.587, SD = 0.072) than among third-year students (M = 0.719, SD = 0.071). This preliminary study indicates the need to continue data collection and analysis to increase the significance of the results. As risk factors are present at the beginning of medical studies, it is important to offer resources to students very early on and to provide close monitoring and supervision.
Keywords: assessment of mental health, medical students, risk factors for mental health, wellness assessment
Procedia PDF Downloads 123
515 Potential Risks of Using Disconnected Composite Foundation Systems in Active Seismic Zones
Authors: Mohamed ElMasry, Ahmad Ragheb, Tareq AbdelAziz, Mohamed Ghazy
Abstract:
Choosing a suitable infrastructure system is becoming more challenging with the increasing demand for heavier structures. Piled raft foundations have been widely used around the world to support heavy structures without excessive settlement. In this system, piles are rigidly connected to the raft, and most of the load goes to the soil layer on which the piles bear. However, when soil profiles contain thick soft clay layers near the surface, or at relatively shallow depths, it is unfavorable to use the rigid piled raft foundation system. Consequently, the disconnected piled raft system was introduced as an alternative to the rigidly connected system. In this system, piles are disconnected from the raft by a cushion, mostly a granular interlayer. The cushion is used to redistribute the stresses between the piles and the subsoil. The piles are also used to stiffen the subsoil, thereby reducing settlement without being rigidly connected to the raft. However, the effect of seismic loading on such disconnected foundation systems remains a problem, since the soil profiles may include thick clay layers which raise the risk of amplification of dynamic earthquake loads. In this paper, the seismic behavior of connected and disconnected piled raft systems is studied through a numerical model using Midas GTS NX software. The study concerns the soil-structure interaction and the expected behavior of the systems. Advantages and disadvantages of each foundation approach are studied, and a comparison between the results is presented to show the effects of using disconnected piled raft systems in highly seismic zones. This was done by showing the excitation amplification in each of the foundation systems.
Keywords: soil-structure interaction, disconnected piled-raft, risks, seismic zones
Procedia PDF Downloads 265
514 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model
Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin
Abstract:
Open pits, which result from mining, fill with water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are several specific regulations concerning the water quality of pit lakes, and mining operations should keep the quality of groundwater above pre-defined standards. Therefore, an accurate, acceptable numerical model predicting pit lakes' water balance and water quality is needed in advance of mine excavation. We continue analyzing and developing the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called "PitMod", simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. Here, a lake is approximated as one-dimensional, horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, like temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. The model considers the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, and background turbulence, and handles equilibrium geochemistry by linking to PHREEQC for the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). However, unlike DYRESM, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is actively under development and can be customized as required for a particular site.
Keywords: pit lakes, mining, modeling, hydrology
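The one-dimensional layered idea behind such models, horizontally averaged vertical layers exchanging heat by diffusion with unstable stratification removed by convective mixing, can be sketched as follows. This is not PitMod's scheme: the layer count, diffusivity, and the simple "warmer water is lighter" density rule are assumptions for illustration (real fresh water is densest near 4 °C, and salinity matters too).

```python
def diffuse(T, kappa_dt_dz2):
    """One explicit vertical diffusion step between adjacent layers (insulated ends)."""
    n = len(T)
    Tn = list(T)
    for i in range(n):
        up = T[i] if i == 0 else T[i - 1]
        dn = T[i] if i == n - 1 else T[i + 1]
        Tn[i] += kappa_dt_dz2 * (up - 2.0 * T[i] + dn)
    return Tn

def convective_adjust(T):
    """Mix any layer pair where a deeper layer is warmer (here: lighter) than the one above."""
    T = list(T)
    unstable = True
    while unstable:
        unstable = False
        for i in range(len(T) - 1):
            if T[i + 1] > T[i] + 1e-12:      # deeper layer warmer -> overturn
                mixed = 0.5 * (T[i] + T[i + 1])
                T[i] = T[i + 1] = mixed
                unstable = True
    return T

# a toy pit lake profile: warm surface, warm intrusion at depth (e.g. groundwater inflow)
T = [15.0, 12.0, 10.0, 9.0, 13.0, 8.0]   # degC, top to bottom
heat0 = sum(T)
for _ in range(20):
    T = convective_adjust(diffuse(T, 0.1))
```

A full model layers many more processes on top of this core: surface fluxes, wind-driven mixing, inflows and outflows per layer, and per-layer equilibrium chemistry.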
Procedia PDF Downloads 158
513 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders
Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai
Abstract:
Due to the numerous advantages of steel corrugated web girders, their field of application is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly larger than that of conventional I-girders with flat webs, so the amount of structural steel required can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for the determination of the lateral-torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors made a thorough investigation of the LTB resistance of corrugated web girders. Finite element (FE) simulations have been performed to develop new design formulas for the determination of the LTB resistance of trapezoidally corrugated web girders. The FE model is developed using geometrically and materially nonlinear analysis with equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections cover the initial geometric imperfections and the residual stresses coming from rolling, welding, and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary magnitudes, considering only the first eigenmode shape imperfections. With the help of the validated FE model, an extended parametric study is carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated by the FE model, and the critical moments from the FE calculations were compared to previously proposed analytical calculations. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals are developed for the determination of the LTB resistance of trapezoidally corrugated web girders, through a modification factor on the design method for conventional flat web girders.
Keywords: corrugated web, lateral torsional buckling, critical moment, FE modeling
Procedia PDF Downloads 283512 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the well-known German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
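The Fenton-Wilkinson approximation at the heart of this model is compact enough to state: a sum of independent log-normal leg times (a changeover time after several legs) is approximated by a single log-normal whose first two moments match those of the sum. A sketch follows; the leg-time parameters in the test are invented, not Jukola data.

```python
import math

def fenton_wilkinson(params):
    """Given (mu_i, sigma_i) of independent log-normal leg times, return (mu_z, sigma_z)
    of the single log-normal matching the mean and variance of their sum."""
    mean = sum(math.exp(mu + 0.5 * s * s) for mu, s in params)
    var = sum(math.exp(2 * mu + s * s) * (math.exp(s * s) - 1.0) for mu, s in params)
    sigma_z2 = math.log(1.0 + var / (mean * mean))
    mu_z = math.log(mean) - 0.5 * sigma_z2
    return mu_z, math.sqrt(sigma_z2)
```

The fitted (mu_z, sigma_z) then feed the order-statistics place model; matching the first two moments is exactly what makes the single log-normal reproduce the mean and variance of the changeover time.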
511 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs
Authors: K. N. Dinesh Babu, P. K. Gargava
Abstract:
Energization of a transformer results in a sudden flow of current, an effect of core magnetization. This current is dominated by a second harmonic component, which in turn is used to segregate fault current from inrush current, thus guaranteeing proper operation of the relay. This additional security in the relay sometimes obstructs or delays differential protection in a specific scenario: when second harmonic content is present during a genuine fault. Such a scenario can result in isolation of the transformer by Buchholz and pressure release valve (PRV) protection, which act only after the fault has caused more extensive damage in the transformer. Such delays have a huge impact on insulation failure, and the chances of repairing or rectifying the fault at site become very slim. Sometimes this delay can cause a fire in the transformer, a situation that wreaks havoc on a sub-station. Such occurrences have also been observed in the field, where differential relay operation was delayed by 10-15 ms by second harmonic blocking under some specific conditions. These incidents have led to the need for an alternative solution to eradicate such unwarranted delays in operation in the future. Modern numerical relays, known as intelligent electronic devices (IEDs), are embedded with advanced protection features which permit higher flexibility and better provisions for tuning of protection logic and settings. Such flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second harmonic feature for blocking the differential protection with additional security. The analysis and precautionary measures carried out in this case have been simulated and discussed in this paper to ensure that similar solutions can be adopted to inhibit analogous issues in the future.
Keywords: differential protection, intelligent electronic device (IED), 2nd harmonic inhibit, inrush inhibit
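The conventional second-harmonic restraint that the abstract proposes to switch dynamically can be sketched as follows; the 15% threshold and the synthetic waveform are illustrative assumptions, not settings from the paper:

```python
import numpy as np

def second_harmonic_ratio(samples, samples_per_cycle):
    """Ratio of 2nd-harmonic to fundamental magnitude over one cycle."""
    spectrum = np.fft.rfft(samples[:samples_per_cycle])
    fundamental = abs(spectrum[1])          # bin 1: power frequency
    second = abs(spectrum[2])               # bin 2: 2nd harmonic
    return second / fundamental if fundamental > 0 else 0.0

def block_differential(samples, samples_per_cycle, threshold=0.15):
    """Classic harmonic restraint: inhibit the differential element when
    the 2nd-harmonic content exceeds the threshold (15% is illustrative)."""
    return second_harmonic_ratio(samples, samples_per_cycle) > threshold

# Synthetic inrush-like current: fundamental plus 30% second harmonic
n = 64
t = np.arange(n) / n
inrush = np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)
fault = np.sin(2 * np.pi * t)               # pure fundamental, genuine fault
```

A fault waveform with non-negligible second-harmonic content would also be blocked by this logic, which is exactly the delay scenario the paper addresses.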
510 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices
Authors: Amani Abdallah, Isam Shahrour
Abstract:
The quality of drinking water is a major public health concern. The control of this quality is generally performed in the laboratory, which requires a long time. This type of control is not adapted to accidental pollution from sudden events, which can have serious consequences for population health. Therefore, it is of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for ‘Smart and Sustainable Cities’ with a particular focus on the supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille1), including innovative equipment for real-time detection of abnormal events such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and their effectiveness on a network prototype of 50 m length. This part included the evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integral with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological (Escherichia coli) contaminants. All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.
Keywords: distribution system, drinking water, refraction index, sensor, real-time
509 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks
Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang
Abstract:
The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing low-cost and easy-to-deploy femtocells. Spectrum interference becomes more critical as value-added multimedia services grow in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aiming at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative logarithm interference function of the distance weight ratio, aiming at suppressing co-channel interference within the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be noticeably improved through the spectrum allocation scheme and that users' quality of service in the downlink can be satisfied. Besides, the average spectrum efficiency in the cellular network is significantly promoted, as the simulation results show.
Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation
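A distributed allocation of this kind can be sketched as best-response dynamics: each femto base station repeatedly switches to the channel that minimizes its co-channel interference until no player wants to deviate (a Nash equilibrium). The negative-logarithm interference form, reference distance, and station positions below are illustrative assumptions, not the paper's exact utility:

```python
import math

def interference(i, j, positions):
    """Negative-logarithm interference weight on a distance ratio:
    the closer two co-channel stations are, the larger the penalty
    (100 m is a hypothetical reference distance)."""
    d = math.dist(positions[i], positions[j])
    return -math.log(d / 100.0)

def best_response_allocation(positions, n_channels, iters=50):
    """Each femto BS (player) repeatedly switches to the channel that
    minimizes its co-channel interference, given the others' choices."""
    n = len(positions)
    channels = [0] * n
    for _ in range(iters):
        changed = False
        for i in range(n):
            def cost(c):
                return sum(interference(i, j, positions)
                           for j in range(n)
                           if j != i and channels[j] == c)
            best = min(range(n_channels), key=cost)
            if best != channels[i]:
                channels[i], changed = best, True
        if not changed:   # no player wants to deviate: equilibrium reached
            break
    return channels

positions = [(0, 0), (10, 0), (0, 10), (50, 50)]   # meters, hypothetical
alloc = best_response_allocation(positions, n_channels=2)
```

With two channels, the tightly clustered stations split across channels while the distant station contributes little interference either way.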
508 Historical Analysis of the Landscape Changes and the Eco-Environment Effects on the Coastal Zone of Bohai Bay, China
Authors: Juan Zhou, Lusan Liu, Yanzhong Zhu, Kuixuan Lin, Wenqian Cai, Yu Wang, Xing Wang
Abstract:
During the past few decades, there has been an increase in the number of coastal land reclamation projects for residential, commercial and industrial purposes in more and more coastal cities of China, which has led to the destruction of wetlands and the loss of sensitive marine habitats. Meanwhile, the influence and nature of these projects attract widespread public and academic concern. This study aims to identify trends in landscape change (especially coastal reclamation) and in the ecological environment, to understand how the two interact, and to offer a scientific basis for the development of regional plans. In the paper, a case study was carried out in the Bohai Bay area based on the analysis of remote sensing data. Land use maps were created for 1954, 1970, 1981, 1990, 2000 and 2010. Landscape metrics were calculated and illustrated that the degree of reclamation change was linked to the hydrodynamic environment and the macrobenthos community. The results indicated that the worst loss of initial areas occurred during 1954-1970, with 65.6% lost mostly to salt fields; by 2010, the coastal reclamation area had increased by more than 200 km² as artificial landscape. Numerical simulation of the tidal current field in 2003 and 2010 showed that the offshore flow velocity became faster (from 2-5 cm/s to 10-20 cm/s) and that the flow direction deviated. These significant changes of the coastline were not conducive to the dispersion and degradation of pollutants. Additionally, analysis of the dominant macrobenthos from 1958 to 2012 showed that Musculus senhousei (Benson, 1842), a disturbance-tolerant species, spread very fast and has been the predominant species in recent years.
Keywords: Bohai Bay, coastal reclamation, landscape change, spatial patterns
507 The Richtmyer-Meshkov Instability Impacted by the Interface with Different Components Distribution
Authors: Sheng-Bo Zhang, Huan-Hao Zhang, Zhi-Hua Chen, Chun Zheng
Abstract:
In this paper, the Richtmyer-Meshkov instability caused by the interaction between a shock wave and a circular helium (light-gas) cylinder with different component distributions has been studied numerically, using a high-resolution Roe scheme based on the two-dimensional unsteady Euler equations. The numerical results further discuss the deformation process of the gas cylinder and the wave structure of the flow field, and quantitatively analyze the characteristic dimensions (length, height, and central axial width) of the gas cylinder and the volume compression ratio of the cylinder over time. In addition, the flow mechanism of shock-driven interface gas mixing is analyzed from multiple perspectives by combining it with the flow field pressure, velocity, circulation, and gas mixing rate. Then the effects of different initial component distribution conditions on interface instability are investigated. The results show that as the diffuse interface transitions to a sharp interface, the reflection coefficient gradually increases on both sides of the interface. When the incident shock wave interacts with the cylinder, the transmission of the shock wave transitions from conventional to unconventional transmission. At the same time, the reflected shock wave is gradually strengthened and the transmitted shock wave is gradually weakened, which leads to an increase in the Richtmyer-Meshkov instability. Moreover, the Atwood number on both sides of the interface also increases as the diffuse interface transitions to a sharp interface, which leads to an increase in the Rayleigh-Taylor and Kelvin-Helmholtz instabilities. The increase in instability therefore leads to an increase in the circulation, resulting in an increase in the growth rate of the gas mixing rate.
Keywords: shock wave, He light cylinder, Richtmyer-Meshkov instability, Gaussian distribution
506 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. 
We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
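For readers unfamiliar with the algorithm being ported, a single Lenia update (FFT convolution with a ring kernel, a Gaussian growth mapping, and small-step integration) can be sketched in NumPy; the kernel radius and growth parameters follow common Lenia defaults and are assumptions, not the benchmark configuration:

```python
import numpy as np

def ring_kernel(n, radius=13):
    """Normalized smooth ring kernel, centered at the origin of the torus."""
    y, x = np.ogrid[:n, :n]
    dy = np.minimum(y, n - y)          # wrapped (toroidal) distances
    dx = np.minimum(x, n - x)
    r = np.sqrt(dx**2 + dy**2) / radius
    k = np.zeros((n, n))
    mask = (r > 0) & (r < 1)
    k[mask] = np.exp(4.0 - 1.0 / (r[mask] * (1.0 - r[mask])))  # bell shape
    return k / k.sum()

def lenia_step(world, kernel_fft, dt=0.1, m=0.15, s=0.015):
    """One Lenia update: FFT convolution, Gaussian growth, integration."""
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * kernel_fft))
    growth = 2.0 * np.exp(-((u - m) ** 2) / (2 * s**2)) - 1.0
    return np.clip(world + dt * growth, 0.0, 1.0)

n = 128
rng = np.random.default_rng(0)
world = rng.random((n, n))
kernel_fft = np.fft.fft2(ring_kernel(n))
for _ in range(10):                    # a short run on a 128x128 grid
    world = lenia_step(world, kernel_fft)
```

The per-step cost is dominated by the FFTs, which is why grid size and kernel radius drive the speedups reported for the GPU version.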
505 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production
Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia
Abstract:
Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The oil is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger for the production of steam driving the turbines of the power plant. Currently, we are seeking to develop PTCs with direct steam generation (DSG). This process consists of circulating water under pressure in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drop in these systems is an important parameter to ensure their proper operation. The determination of these losses is complex because of the presence of two phases, and most often we limit ourselves to describing them by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressure values and inlet flow rates ranging respectively from 3 to 10 MPa and from 0.4 to 0.6 kg/s. The comparison of the numerical results with experiment allows us to demonstrate the validity of some models according to the inlet pressures and flow rates in the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
Keywords: direct steam generation, parabolic trough collectors, pressure drop, empirical models
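As an example of the empirical correlations mentioned above, the homogeneous equilibrium model treats the liquid-vapor mixture as a single pseudo-fluid; the sketch below uses a Blasius (Fanning) friction factor and a McAdams mixture viscosity, with illustrative property values that are not the paper's data:

```python
import math

def homogeneous_pressure_gradient(G, x, rho_l, rho_v, mu_l, mu_v, D):
    """Frictional pressure gradient (Pa/m) from the homogeneous
    equilibrium model: mass flux G in kg/(m^2 s), vapor quality x,
    tube diameter D in m."""
    rho_h = 1.0 / (x / rho_v + (1.0 - x) / rho_l)   # homogeneous density
    mu_h = 1.0 / (x / mu_v + (1.0 - x) / mu_l)      # McAdams viscosity
    Re = G * D / mu_h                               # mixture Reynolds number
    f = 0.079 * Re ** -0.25                         # Blasius, turbulent flow
    return 2.0 * f * G**2 / (D * rho_h)

# Illustrative values only: saturated water near 6 MPa, 0.5 kg/s in a
# 0.05 m receiver tube (approximate properties, not from the paper)
m_dot, D = 0.5, 0.05
G = m_dot / (math.pi * D**2 / 4.0)                  # mass flux
dpdz = homogeneous_pressure_gradient(G, x=0.3, rho_l=758.0, rho_v=30.8,
                                     mu_l=9.6e-5, mu_v=1.9e-5, D=D)
```

Integrating this gradient along the tube, with the quality x evolving with absorbed heat, gives the pressure profile that the paper compares against experiment.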
504 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading
Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth
Abstract:
The ultimate load analysis of RC pile groups has assumed great significance under liquefying soil conditions, especially following post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation is in turn related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased amounts of lateral loading, a precise modeling of the inelastic behavior of the pile cross-section is done, considering the complete stress-strain behavior of concrete, with and without confinement, and of reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.
Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group
503 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical component emissions from construction materials, as well as enhancing mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of biobased materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Besides, bio-based materials are also hygroscopic materials: when in contact with the wet air of a surrounding environment, their porous structures enable a better capture of water molecules, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of considering the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth.
Several studies from the agri-food and automotive sectors published in the literature were selected and analysed, as the methodology deployed appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
502 Modeling of Thermally Induced Acoustic Emission Memory Effects in Heterogeneous Rocks with Consideration for Fracture Development
Authors: Vladimir A. Vinnikov
Abstract:
The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect. The influence of humidity on the thermally induced acoustic emission memory effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account and the influence of interference factors on the distinctness of the thermally induced acoustic emission memory effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.
Keywords: degree of rock disturbance, non-destructive testing, thermally induced acoustic emission memory effects, structure and texture of rocks
501 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
Voltage harmonic distortion is an important power quality issue due to the interaction between the widespread diffusion of non-linear, time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying mitigation arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to achieve required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filters that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD), where maintaining a given power factor within a specified range is desired. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
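The textbook sizing of a single tuned filter follows directly from the reactive power requirement and the tuning frequency: the capacitor supplies the compensation at the fundamental, and the inductor sets the series resonance at the targeted harmonic. A sketch (the 11 kV / 2 Mvar figures and the quality factor are hypothetical, not the paper's cases):

```python
import math

def single_tuned_filter(V_ll, Q_c, h, f=50.0, quality=40.0):
    """Size a single tuned passive filter: the capacitor supplies Q_c of
    reactive power at the fundamental, and the series inductor is chosen
    so the branch resonates at the h-th harmonic."""
    w = 2.0 * math.pi * f
    Xc = V_ll**2 / Q_c                 # capacitive reactance at fundamental
    C = 1.0 / (w * Xc)
    L = 1.0 / ((h * w) ** 2 * C)       # series resonance at h*f
    R = math.sqrt(L / C) / quality     # damping from the quality factor
    return C, L, R

# Illustrative 11 kV feeder with 2 Mvar compensation, tuned slightly
# below the 5th harmonic to allow for component tolerances
C, L, R = single_tuned_filter(V_ll=11e3, Q_c=2e6, h=4.9)
f_res = 1.0 / (2.0 * math.pi * math.sqrt(L * C))    # resonant frequency
```

A constrained optimizer, as in the paper, would then vary parameters such as Q_c and h subject to the VTHD, ITHD, and power factor constraints.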
500 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers as an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to get detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. During the modeling process, three primary problems existed: the incomparability of customer values, the multicollinearity problem among customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model incorporating all three methods is the most superior one for evaluating the influence of the other nearby supermarkets on customers' purchasing from a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
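Huff's gravity model, one of the three methods used, assigns each customer a patronage probability proportional to store attractiveness divided by a power of distance. A minimal sketch with hypothetical store data (the distance-decay exponent and attractiveness values are assumptions for illustration):

```python
def huff_probabilities(distances, attractiveness, lam=2.0):
    """Huff's gravity model: the probability that a customer patronizes
    store j is proportional to A_j / d_j**lam, normalized over stores."""
    weights = [a / d**lam for a, d in zip(attractiveness, distances)]
    total = sum(weights)
    return [w / total for w in weights]

# Home 1 km from the chain's store, 2 km and 3 km from two competitors;
# attractiveness proxied by floor area in m^2 (all values hypothetical)
probs = huff_probabilities(distances=[1.0, 2.0, 3.0],
                           attractiveness=[3000.0, 5000.0, 4000.0])
```

These probabilities give a way to estimate competitor influence on each customer without needing the competitors' ID-POS data, which is the situation the paper faces.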
499 Influence of Reinforcement Stiffness on the Performance of Back-to-Back Reinforced Earth Wall upon Rainwater Infiltration
Authors: Gopika Rajagopal, Sudheesh Thiyyakkandi
Abstract:
Back-to-back reinforced earth (RE) walls are extensively used these days as bridge abutments and highway ramps, owing to their cost efficiency and ease of construction. High quality select fill is the most suitable backfill material due to its excellent engineering properties and constructability. However, industries are often compelled to use low quality, locally available soil because of its ample availability on site, and several failure cases of such walls are reported, especially subsequent to rainfall events. The stiffness of the reinforcement is one of the major factors affecting the performance of RE walls. The present study focused on analyzing the effect of reinforcement stiffness on the performance of complete select fill, complete marginal fill, and hybrid-fill (i.e., a combination of select and marginal fills) back-to-back RE walls, immediately after construction and upon rainwater infiltration, through finite element modelling. A constant width to height (W/H) ratio of 3 and a height (H) of 6 m were considered for the numerical analysis, and the stiffness of the reinforcement layers was varied from 500 kN/m to 10000 kN/m. Results showed that reinforcement stiffness had a noticeable influence on the response of the RE wall, subsequent to construction as well as to rainwater infiltration. The facing displacement was found to decrease, and the maximum reinforcement tension and the factor of safety were observed to increase, with increasing stiffness of the reinforcement. However, beyond a stiffness of 5000 kN/m, no significant reduction in facing displacement was observed. The behavior of the fully marginal fill wall considered in this study was found to be reasonable, even after rainwater infiltration, when high stiffness reinforcement layers are used.
Keywords: back-to-back reinforced earth wall, finite element modelling, rainwater infiltration, reinforcement stiffness
498 Computation of Residual Stresses in Human Face Due to Growth
Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan
Abstract:
Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling are among the main sources. Extracting body organs from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations, while accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. In this paper, we have implemented a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which can counteract the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress no longer counteracting gravity. Considering these stresses has an important application in maxillofacial surgery: it helps surgeons to predict the changes after surgical operations and their consequences.
Keywords: growth, soft tissue, residual stress, finite element method
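The fixed-point idea can be illustrated on a toy one-degree-of-freedom problem: the reference position is shifted by the mismatch between the computed deformed position under gravity and the imaged target, until the two coincide. The linear "solver" below is a stand-in for a full finite element computation, and all values are hypothetical:

```python
def find_stress_free_config(x_target, deform, tol=1e-10, max_iter=100):
    """Fixed-point iteration for the fictitious stress-free reference:
    shift the reference guess by the residual between the computed
    deformed position and the imaged target position."""
    X = x_target                       # initial guess: imaged geometry
    for _ in range(max_iter):
        residual = deform(X) - x_target
        if abs(residual) < tol:
            break
        X -= residual                  # move the reference by the mismatch
    return X

# Toy 1-DOF stand-in for the FE solver: a tissue point sags by g/k
# under gravity (k: stiffness, g: gravitational acceleration)
k, g = 50.0, 9.81
def deform(X):
    return X + g / k                   # equilibrium position under gravity

X_ref = find_stress_free_config(x_target=1.0, deform=deform)
```

Loading the recovered reference X_ref with gravity reproduces the imaged geometry, which is the consistency condition the full 3D framework enforces.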