Search results for: finite element modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7092

4602 Dynamic Behavior of the Nanostructure of Load-Bearing Biological Materials

Authors: Mahan Qwamizadeh, Kun Zhou, Zuoqi Zhang, Yong Wei Zhang

Abstract:

Typical load-bearing biological materials such as bone, mineralized tendon, and shell are biocomposites made from both organic (collagen) and inorganic (biomineral) constituents. This remarkable class of materials, with intrinsically designed hierarchical structures, shows mechanical properties far superior to those of the weak components from which it is formed. Extensive investigations of the failure of biological materials have concentrated on static loading conditions. However, most of the damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading. The main question to be answered here is: what is the relation between the layout and architecture of load-bearing biological materials and their dynamic behavior? In this work, a staggered model has been developed based on the structure of natural materials at the nanoscale, and Finite Element Analysis (FEA) has been used to study the dynamic behavior of the structure of load-bearing biological materials, in order to answer why the staggered arrangement has been selected by nature for the nanocomposite structure of most biological materials. The results showed that staggered structures attenuate stress waves more efficiently than layered structures. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. In this work, the geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon. Furthermore, the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effect of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists design bioinspired materials in the laboratory.

Keywords: load-bearing biological materials, nanostructure, staggered structure, stress wave decay

Procedia PDF Downloads 439
4601 Modeling of Single Bay Precast Residential House Using Ruaumoko 2D Program

Authors: N. H. Hamid, N. M. Mohamed, S. A. Anuar

Abstract:

Precast residential houses in Malaysia are normally constructed using precast shear-key wall panels, and these precast wall panels are designed to BS 8110, which has no provision for earthquake loading. However, the safety of these houses under moderate and strong earthquakes is still questionable. Consequently, a full-scale residential house was designed, constructed, tested, and analyzed under in-plane lateral cyclic loading. Hysteresis loops were plotted from the experimental work and compared with hysteresis loops modeled using HYSTERES in the RUAUMOKO 2D program. The Modified Takeda hysteresis model was chosen because it reproduces a pattern similar to the experimental response. The program outputs the earthquake excitations, spectral displacements, pseudo-spectral accelerations, and the deformation shape of the structure. It can be concluded that this building suffers severe cracking and damage under moderate and severe earthquakes.

Keywords: precast shear-key, hysteresis loops, spectral displacements, deformation shape

Procedia PDF Downloads 444
4600 Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs

Authors: Verónica Díaz

Abstract:

A Cartesian graph, as a mathematical object, becomes a tool for configuring change. It is best understood through everyday problem-solving associated with its representation. Despite this, the current educational framework favors general graphs without consideration of their argumentation. Students are required to find the mathematical function without associating it with the development of graphical language. This research describes the configurations students make prior to Cartesian graphs when dealing with an everyday problem related to a time-and-distance variation phenomenon. The theoretical framework describes the conditions of study of the function and its modeling. This is a qualitative, descriptive study involving six undergraduate case studies carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person's movement, and two levels of analysis were identified. The first level aims to identify local and global graph interpretations; the second describes the degree of iconicity and referentiality of an image. According to the results, students did not draw figures before the Cartesian graph, highlighting the need for students to represent the context and the movement that causes the change in the phenomenon. From this, they produced Cartesian graphs representing changes in position and therefore achieved a global view of the graph. However, the local view only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement. This view does not make it possible to identify what happens on the graph when the movement characteristics change, based on possible paths in the person's walking speed.

Keywords: Cartesian graphs, higher education, movement modeling, problem solving

Procedia PDF Downloads 207
4599 The Effect of Context in Eliminating Interpretation Problems of Screen Subtitles for the Promotion of Intelligible Film Language

Authors: Ezzeldin M. T. Ali

Abstract:

Arguably, viewers hardly benefit from screen subtitles because of the inconsistency between scenarios and their subtitles. Research in this area provides an understanding of the association between scenarios and subtitles via context, and attempts to eliminate the inconsistency between contexts and screen subtitles while providing insights into the problem. Specifically, the study aims at examining the extent to which the understanding of screen subtitles depends on the force of linguistic and situational contexts, since context is assumed to have a powerful effect on the interpretation of the source text. Both descriptive and experimental methods were adopted for data collection. These included a test and paper-and-pencil questionnaires in which participants provided their impressions of the role of context in eliminating interpretation problems of screen subtitles. Participants had developed a good background in screen subtitles through watching films. Results showed that context is a powerful element in understanding screen subtitles. Results also revealed that communicative translation fits screen translation well, boosting contextual meaning. The association of context and communicative translation makes subtitles more economical and intelligible overall. Context is a central element for film language to be intelligible.

Keywords: communicative translation, context, scenario, powerful, intelligible

Procedia PDF Downloads 149
4598 Stability Analysis of Slopes during Pile Driving

Authors: Yeganeh Attari, Gudmund Reidar Eiksund, Hans Peter Jostad

Abstract:

In geotechnical practice, there is no standard method recognized by the industry to account for the reduction in the safety factor of a slope caused by soil displacement and pore pressure build-up during pile installation. Pile driving disturbs the soil, causing large strains and generating excess pore pressures in a zone that can extend many diameters from the installed pile, resulting in a decrease of the shear strength of the surrounding soil. This phenomenon may cause slope failure. Moreover, dissipation of the excess pore pressure set-up may weaken areas outside the volume of soil remoulded during installation. Because of complex interactions between changes in mean stress and shearing, it is challenging to predict the installation-induced pore pressure response. Furthermore, it is a complex task to follow the rate and path of pore pressure dissipation in order to analyze slope stability. In cohesive soils it is necessary to implement soil models that account for strain softening in the analysis. In the literature, several cases of slope failure due to pile driving activities have been reported, for instance, a landslide in Gothenburg that destroyed more than thirty houses and the Rigaud landslide in Quebec, which resulted in loss of life. Up to now, several methods have been suggested to predict the effect of pile driving on total and effective stress, on pore pressure changes, and on their effect on soil strength. However, this is still not well understood or agreed upon. In Norway, the general approaches applied by geotechnical engineers to this problem are based on old empirical methods with little rigorous theoretical background. While the limitations of such methods are discussed, this paper attempts to capture the reduction in the factor of safety of a slope during pile driving using coupled finite element analysis and the cavity expansion method. This is demonstrated by analyzing a case of slope failure due to pile driving in Norway.
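
For a first-order feel of the driving-induced excess pore pressure that feeds such an analysis, a classical closed-form cylindrical cavity expansion estimate can be sketched as below. This is a simplified Randolph-Wroth-type approximation, not the coupled finite element model used in the paper, and the strength, stiffness, and pile radius values are illustrative assumptions.

```python
import numpy as np

# Hypothetical undrained clay and pile parameters (illustrative only).
c_u = 40.0e3        # undrained shear strength [Pa]
G = 20.0e6          # shear modulus [Pa]
r0 = 0.25           # pile radius [m]

I_r = G / c_u                      # rigidity index
R_p = r0 * np.sqrt(I_r)            # radius of the plastic (remoulded) zone

r = np.linspace(r0, 8.0, 200)      # radial distance from the pile axis [m]
# Excess pore pressure within the plastic zone, assumed zero beyond it.
du = np.where(r < R_p, 2.0 * c_u * np.log(R_p / r), 0.0)

print(f"Plastic radius ~ {R_p:.2f} m, peak excess pore pressure ~ {du[0] / 1e3:.0f} kPa")
```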

Keywords: cavity expansion method, excess pore pressure, pile driving, slope failure

Procedia PDF Downloads 134
4597 Unsupervised Text Mining Approach to Early Warning System

Authors: Ichihan Tai, Bill Olson, Paul Blessner

Abstract:

Traditional early warning systems that alarm against crisis are generally based on structured or numerical data; therefore, a system that can make predictions based on unstructured textual data, an uncorrelated data source, is a great complement to the traditional early warning systems. The Chicago Board Options Exchange (CBOE) Volatility Index (VIX), commonly referred to as the fear index, measures the cost of insurance against market crash, and spikes in the event of crisis. In this study, news data is consumed for prediction of whether there will be a market-wide crisis by predicting the movement of the fear index, and the historical references to similar events are presented in an unsupervised manner. Topic modeling-based prediction and representation are made based on daily news data between 1990 and 2015 from The Wall Street Journal against VIX index data from CBOE.
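
A minimal sketch of this kind of pipeline is shown below: a bag-of-words model and latent Dirichlet allocation extract topic mixtures from news text, a simple classifier predicts fear-index spikes from those mixtures, and the nearest earlier day in topic space serves as an unsupervised historical reference. The tiny corpus, the spike labels, and the specific models (LDA, logistic regression) are illustrative assumptions rather than the authors' exact setup.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins for the WSJ corpus and the VIX-spike labels derived from CBOE data.
daily_news = [
    "bank failures spread fear across credit markets",
    "strong earnings lift stocks to record highs",
    "central bank signals emergency rate cut amid turmoil",
    "calm trading session with low volume",
]
vix_spike = np.array([1, 0, 1, 0])  # 1 = fear-index spike on the following day

# Bag-of-words -> latent topics (unsupervised step).
counts = CountVectorizer(stop_words="english").fit_transform(daily_news)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)          # per-day topic distributions

# Supervised step: predict spikes from the topic mixtures.
clf = LogisticRegression().fit(topic_mix, vix_spike)
print(clf.predict_proba(topic_mix)[:, 1])      # probability of a spike for each day

# Historical reference: nearest earlier day in topic space (unsupervised retrieval).
query = topic_mix[2]
closest = np.argmin(np.linalg.norm(topic_mix[:2] - query, axis=1))
print("Most similar earlier day:", daily_news[closest])
```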

Keywords: early warning system, knowledge management, market prediction, topic modeling

Procedia PDF Downloads 322
4596 Analysis of Ecological Footprint of Residents for Urban Spatial Restructuring

Authors: Taehyun Kim, Hyunjoo Park, Taehyun Kim

Abstract:

After a period of rapid economic development, Korea has recently entered a period of low growth due to population decline and aging. Because of urbanization around the metropolitan area and the hollowing out of local cities, the ecological capacity of cities is decreasing while ecological footprints are increasing, requiring a compact spatial plan to maintain urban functions. The purpose of this study is to analyze the relationship between urban spatial structure and residents' ecological footprints for sustainable spatial planning. To do this, we analyze the relationship between intra-urban spatial structure, such as net/gross density and service accessibility, and residents' ecological footprints for food, housing, transportation, goods, and services through a survey and structural equation modeling. The results of the study will be useful in establishing an implementation plan for the Sustainable Development Goals (SDGs), especially for sustainable cities and communities (SDG 11) and responsible consumption and production (SDG 12), in the future.

Keywords: ecological footprint, structural equation modeling, survey, sustainability, urban spatial structure

Procedia PDF Downloads 251
4595 Predicting Bridge Pier Scour Depth with SVM

Authors: Arun Goel

Abstract:

Prediction of the maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around a bridge pier in dimensional and non-dimensional form using linear regression, simple regression, and SVM (polynomial and RBF kernel) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM (polynomial and RBF) based modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations for predicting the scour depth of bridge piers. The results of the present study, based on the non-dimensional form of bridge pier scour, indicate an improvement in the performance of the SVM (polynomial and RBF) models compared with the dimensional form.
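
The comparison of regression and kernel-based models can be sketched with scikit-learn as below. The synthetic non-dimensional predictors and scour response stand in for the laboratory and field data the study actually uses, and the kernel hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical non-dimensional inputs (e.g. flow depth/pier width, Froude number,
# velocity ratio) against relative scour depth ds/b; real laboratory data would be used.
rng = np.random.default_rng(0)
X = rng.uniform(0.2, 3.0, size=(80, 3))
y = 1.2 * X[:, 0] ** 0.4 * X[:, 1] ** 0.6 + 0.05 * rng.standard_normal(80)

models = {
    "linear": LinearRegression(),
    "svm_poly": SVR(kernel="poly", degree=2, C=10.0, epsilon=0.01),
    "svm_rbf": SVR(kernel="rbf", C=10.0, gamma="scale", epsilon=0.01),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>8}: mean cross-validated R^2 = {r2:.3f}")
```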

Keywords: modeling, pier scour, regression, prediction, SVM (polynomial and RBF kernels)

Procedia PDF Downloads 436
4594 Analysis of Vibratory Signals Based on Local Mean Decomposition (LMD) for Rolling Bearing Fault Diagnosis

Authors: Toufik Bensana, Medkour Mihoub, Slimane Mekhilef

Abstract:

The use of vibration analysis has been established as the most common and reliable method in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are used in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally nonstationary and nonlinear and suffer strong noise interference, so it is essential to extract the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated signal is the instantaneous frequency (IF). The fault characteristic frequency of the rolling bearing can then be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
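
The final envelope-spectrum step can be sketched as below for a single product function. The PF here is a synthetic amplitude-modulated resonance rather than the output of a real LMD run, and the sampling rate and fault frequency are illustrative assumptions; the Hilbert transform is used only to obtain the instantaneous amplitude whose spectrum reveals the fault characteristic frequency.

```python
import numpy as np
from scipy.signal import hilbert

fs = 12_000                      # sampling frequency [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)

# Hypothetical PF component: a 3 kHz bearing resonance amplitude-modulated at a
# 105 Hz fault characteristic frequency, plus noise. In practice this would be
# the dominant product function returned by the LMD decomposition.
pf = (1 + 0.6 * np.cos(2 * np.pi * 105 * t)) * np.cos(2 * np.pi * 3000 * t)
pf += 0.2 * np.random.default_rng(0).standard_normal(t.size)

ia = np.abs(hilbert(pf))                        # instantaneous amplitude (envelope)
spectrum = np.abs(np.fft.rfft(ia - ia.mean()))
freqs = np.fft.rfftfreq(ia.size, 1 / fs)

peak = freqs[np.argmax(spectrum[freqs < 500])]  # search the low-frequency band
print(f"Dominant envelope frequency ~ {peak:.1f} Hz (fault characteristic frequency)")
```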

Keywords: fault diagnosis, rolling element bearing, local mean decomposition, condition monitoring

Procedia PDF Downloads 376
4593 Influence of Visual Merchandising Elements on Instant Purchase

Authors: Pooja Sharma, Renu Jain, Alka David

Abstract:

The primary goal of this research is to understand the many features of visual merchandising (VM) and impulsive, or instant, purchasing behavior. It aims to explain the link between visual merchandising and customer purchasing behavior. The review was compiled from research articles, professional journal articles, and the opinions of many authors. The paper also discusses the impact of different internal and external VM elements on instant purchasing. The visual merchandising elements are divided into two groups: interior elements (interior displays, space and layout, fixtures, mannequins, and attention-grabbing devices) and exterior elements (window displays, exterior signs, marquees, the entrance, color, and texture). By focusing on selected clothing stores in the four markets of Bhopal city, we found that the exterior elements (window display, color, and texture) and interior elements (mannequins, such as dummies, and fixtures, such as lighting) have a significant positive impact on instant buying.

Keywords: instant purchase, visual merchandising, instant buying behavior, consumer behavior, window display, fixtures, mannequins, marquees

Procedia PDF Downloads 98
4592 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward-skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared between on-ice and overground trials using a paired t-test. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently showed higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance, or other velocity modeling techniques in which maximal velocity is not required for a complete profile, may be needed.
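
The mono-exponential fit described above can be sketched with SciPy as below. The synthetic radar trace and the v_max and tau values are illustrative assumptions, and the final-acceleration check mirrors the plateau test used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp_velocity(t, v_max, tau):
    """Mono-exponential sprint velocity model: v(t) = v_max * (1 - exp(-t / tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

# Hypothetical radar trace of an on-ice 45 m sprint (time [s], velocity [m/s]).
t = np.linspace(0.0, 6.0, 120)
rng = np.random.default_rng(1)
v_measured = mono_exp_velocity(t, v_max=9.5, tau=1.4) + 0.15 * rng.standard_normal(t.size)

(v_max, tau), _ = curve_fit(mono_exp_velocity, t, v_measured, p0=(8.0, 1.0))

# Derived HFVP-style quantities and the end-of-sprint acceleration used to
# check whether a velocity plateau was actually reached.
a_initial = v_max / tau                               # modeled initial acceleration
a_final = (v_max / tau) * np.exp(-t[-1] / tau)        # modeled acceleration at sprint end
print(f"v_max = {v_max:.2f} m/s, tau = {tau:.2f} s, final acceleration = {a_final:.3f} m/s^2")
```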

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 88
4591 Numerical Investigation on Anchored Sheet Pile Quay Wall with Separated Relieving Platform

Authors: Mahmoud Roushdy, Mohamed El Naggar, Ahmed Yehia Abdelaziz

Abstract:

Anchored sheet pile walls have been used worldwide as front quay walls for decades. With the increase in vessel drafts and weights, these sheet pile walls need to be upgraded by increasing the depth of the dredging line in front of the wall. A system has recently been used to increase the depth in front of the wall by installing a separated platform supported on a deep foundation (a so-called relieving platform) behind the sheet pile wall; the platform is structurally separated from the front wall. This paper presents a numerical investigation, using finite element analysis, of the behavior of separated relieving platforms installed within existing anchored sheet pile quay walls. The investigation was carried out in two steps: a verification step followed by a parametric study. In the verification step, the numerical model was verified against field measurements performed by others. The validated model was then extended, within the parametric study, to a series of models with different backfill soils, separation gap widths, and numbers of pile rows supporting the platform. The results of the numerical investigation show that using stiff clay as backfill (neglecting consolidation) gives better performance for the front wall and the first pile row than sandy backfills. The degree of compaction of a sandy backfill slightly increases lateral deformations but reduces the bending moment acting on the pile rows, while its effect on the front wall is minor. In addition, increasing the separation gap width gradually increases the bending moments on the front wall regardless of the backfill soil type, while the effect on the pile rows is reversed (a gradual decrease). Finally, the paper studies the possibility of reducing the number of pile rows along with the separation, to take advantage of the positive effect of the separation on the piles.

Keywords: anchored sheet pile, relieving platform, separation gap, upgrade quay wall

Procedia PDF Downloads 73
4590 Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach

Authors: Adrian O'Hagan, Robert McLoughlin

Abstract:

Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel and Student-t copula may be used for this purpose. The copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence, through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as robust, efficient means of achieving this goal.
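
A nonparametric estimate of the upper tail dependence coefficient from the empirical copula can be sketched as below, using the limit definition lambda_U = lim_{q->1} (1 - 2q + C(q, q)) / (1 - q) evaluated at a few high quantiles. The two simulated LOB loss samples are illustrative stand-ins for historical or catastrophe-model output, and this particular estimator is one common choice rather than necessarily the authors' exact construction.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, q=0.95):
    """Nonparametric estimate of the upper tail dependence coefficient
    lambda_U = lim_{q->1} (1 - 2q + C(q, q)) / (1 - q),
    with C(q, q) replaced by the empirical copula of the two loss samples."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n + 1.0)    # pseudo-observations (scaled ranks)
    v = np.argsort(np.argsort(y)) / (n + 1.0)
    c_qq = np.mean((u <= q) & (v <= q))          # empirical copula evaluated at (q, q)
    return (1.0 - 2.0 * q + c_qq) / (1.0 - q)

# Hypothetical loss samples from two LOBs hit by common catastrophe events.
rng = np.random.default_rng(7)
shock = rng.pareto(2.5, 5000)                    # shared heavy-tailed driver
x = rng.lognormal(0.0, 0.5, 5000) + shock
y = rng.lognormal(0.2, 0.4, 5000) + shock

for q in (0.90, 0.95, 0.99):
    print(f"q = {q:.2f}: lambda_U estimate = {empirical_upper_tail_dependence(x, y, q):.3f}")
```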

Keywords: empirical copula, extreme events, insurance loss reserving, upper tail dependence coefficient

Procedia PDF Downloads 272
4589 A Machine Learning-Based Analysis of Autism Prevalence Rates across US States against Multiple Potential Explanatory Variables

Authors: Ronit Chakraborty, Sugata Banerji

Abstract:

There has been a marked increase in the reported prevalence of Autism Spectrum Disorder (ASD) among children in the US over the past two decades. This research analyzes the growth in state-level ASD prevalence against 45 different potentially explanatory factors, including socio-economic, demographic, healthcare, public policy, and political factors. The goal was to understand whether these factors have adequate predictive power to model the differential growth in ASD prevalence across the states and, if they do, which factors are the most influential. The key findings of this study are (1) confirmation that the chosen feature set has considerable power in predicting the growth in ASD prevalence, (2) identification of the most influential predictive factors, (3) given the nature of the most influential predictive variables, an indication that a considerable portion of the reported ASD prevalence differentials across states could be attributable to over- and under-diagnosis, and (4) identification of Florida as a key outlier state, pointing to a potential under-diagnosis of ASD there.
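
One way to rank the influence of the 45 candidate factors is a permutation-importance analysis over a fitted model, sketched below. The abstract does not name the specific learner, so the random forest, the synthetic 50-state dataset, and the generic factor labels are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical state-level dataset: 50 states x 45 candidate factors, with the
# target being the growth in reported ASD prevalence.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 45))
y = 0.8 * X[:, 4] - 0.5 * X[:, 12] + 0.1 * rng.standard_normal(50)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)

top = np.argsort(imp.importances_mean)[::-1][:5]
for idx in top:
    print(f"factor_{idx:02d}: importance = {imp.importances_mean[idx]:.3f}")
```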

Keywords: autism spectrum disorder, clustering, machine learning, predictive modeling

Procedia PDF Downloads 84
4588 Safety Analysis and Accident Modeling of Transportation in Srinagar City

Authors: Adinarayana Badveeti, Mohammad Shafi Mir

Abstract:

In Srinagar city, India, road safety is an important aspect of ecological balance and social well-being. A road accident creates a situation that leaves behind distress, sorrow, and suffering. Identification of the causes of road accidents therefore becomes highly essential for adopting the necessary preventive measures against such critical events. The damage created by road accidents is to a large extent irreparable and therefore demands attention to eradicate this continuously increasing and awful 'epidemic'. The road accident toll in India is among the highest in the world, with approximately 142,000 people killed on the roads each year. The Kashmir region is an ecologically sensitive place but lacks the necessary facilities and infrastructure for road transportation, ultimately resulting in road accidents, a critical event and a major problem for the common people of the region. The objective of this project is to study the safety situation of Srinagar city, to model accidents against the different factors that cause them, and to suggest possible remedies for reducing or eliminating road accidents.

Keywords: road safety, road accident, road infrastructure, accident modeling

Procedia PDF Downloads 240
4587 Studying the Theoretical and Laboratory Design of a Concrete Frame and Optimizing Its Design for Impact and Earthquake Resistance

Authors: Mehrdad Azimzadeh, Seyed Mohammadreza Jabbari, Mohammadreza Hosseinzadeh Alherd

Abstract:

This paper presents experimental results and analytical studies on increasing the resistance of single-span reinforced concrete frames to impact loading, on their modeling using optimization methods, and on optimizing the behavior of these frames under impact loads. During this study, about 30 designs for different frames were modeled and built using specialized software such as ANSYS and SAP, and their behavior was examined under variable impacts. Suitable strategies were then proposed for the frames in terms of concrete mixing in order to optimize the frame design. To reduce the weight of the frames, fine-grained stones had to be used. After designing about eight variants of each frame type, three samples were designed with the aim of controlling the impact-strength parameters, and a favorable frame shape for impact resistance was obtained: a solid frame with stout legs placed as far apart as possible, with a 3-degree gradient in the upper part of the beam.

Keywords: optimization, reinforced concrete, optimization methods, impact load, earthquake

Procedia PDF Downloads 164
4586 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture

Authors: Juan Huang, Hugo Ninanya

Abstract:

Oil has been used as a source of energy and as a feedstock for materials such as asphalt and rubber for many years, which is why new technologies have been implemented over time. However, research still needs to advance because of the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of them are equally effective for studying fracture propagation. Analytical methods, such as those based on linear elastic fracture mechanics, fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) can represent the elastoplastic behavior of a reservoir through a constitutive model; predictions in terms of displacements and pressure are therefore more reliable. In this work, a coupled hydro-geomechanical model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a two-dimensional model. A power law representing the rheological behavior of the fluid (shear-thinning, power index < 1) through the fractures, together with a leak-off rate permeating into the matrix, was considered. Results are presented in terms of the aperture and length of the fracture, the pressure within the fracture, and the fluid loss. A higher infiltration rate into the matrix was observed as the power index decreases. A sensitivity analysis is finally performed to identify the most influential factor in the fluid loss.
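
The power-law (Ostwald-de Waele) rheology mentioned above can be made concrete with a short sketch of the apparent viscosity over a range of shear rates. The consistency index, the power indices, and the shear-rate range below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Power-law rheology: tau = K * gamma_dot**n, so the apparent viscosity is
# mu_app = K * gamma_dot**(n - 1). For a shear-thinning fracturing fluid n < 1.
K = 0.5                                   # consistency index [Pa·s^n] (assumed)
gamma_dot = np.logspace(0, 3, 50)         # shear rate [1/s]

for n in (1.0, 0.8, 0.5):
    mu_app = K * gamma_dot ** (n - 1.0)   # apparent viscosity over the whole range
    print(f"n = {n:.1f}: mu_app at 10 1/s = {K * 10 ** (n - 1):.3f} Pa·s, "
          f"at 1000 1/s = {K * 1000 ** (n - 1):.4f} Pa·s")
```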

Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis

Procedia PDF Downloads 190
4585 Numerical Investigation of Fluid Outflow through a Retinal Hole after Scleral Buckling

Authors: T. Walczak, J. K. Grabski, P. Fritzkowski, M. Stopa

Abstract:

The objectives of the study are (i) to perform numerical simulations that permit an analysis of the dynamics of the subretinal fluid when an implant has induced scleral intussusception and (ii) to assess the impact of the physical parameters of the model on the flow rate. Computer simulations were created using the finite element method (FEM), based on a model that takes into account the interaction of a viscous fluid (the subretinal fluid) with a hyperelastic body (the retina). The purpose of the calculation was to investigate the dependence of the flow rate of subretinal fluid through a hole in the retina on different factors, such as the viscosity of the subretinal fluid, the material parameters of the retina, and the offset of the implant from the retinal hole. These simulations were performed for different speeds of eye movement that reflect the behavior of the eye during reading, REM, and saccadic movements. As in other works in the field of subretinal fluid flow, a stationary, one-sided, forced fluid flow was assumed in the considered area simulating the subretinal space. Additionally, a hyperelastic material model of the retina and a parameterized geometry of the considered model were adopted. The calculations also examined the influence of the direction of gravity, due to the position of the patient's head, on the trend of fluid outflow. The simulations revealed that fluid outflow from the retina becomes significant at an eyeball movement speed of 100°/sec. This speed is greater than in the case of reading but is four times less than saccadic movement. An increase in the viscosity of the fluid increased this beneficial effect. Further, the simulation results suggest that moderate eye movement speed is optimal and that the conventional prescription of avoiding routine eye movement following retinal detachment surgery could be relaxed. Additionally, to verify the numerical results, some calculations were repeated using a meshless method (the method of fundamental solutions), which is relatively fast and easy to implement. The paper has been supported by grant 02/21/DSPB/3477.

Keywords: CFD simulations, FEM analysis, meshless method, retinal detachment

Procedia PDF Downloads 330
4584 Comparison of Solar Radiation Models

Authors: O. Behar, A. Khellaf, K. Mohammedi, S. Ait Kaci

Abstract:

Up to now, most validation studies have been based on the MBE and RMSE and have therefore focused only on long- and short-term performance to test and classify solar radiation models. This traditional analysis does not take into account the quality of modeling and linearity. In our analysis we have tested 22 solar radiation models that are capable of providing instantaneous direct and global radiation at any given location worldwide. We introduce a new indicator, which we name the Global Accuracy Indicator (GAI), to examine the linear relationship between the measured and predicted values and the quality of modeling, in addition to long- and short-term performance. Note that the quality of the model is represented by the t-statistic test, the model linearity is given by the correlation coefficient, and the long- and short-term performance are given by the MBE and RMSE, respectively. An important finding of this research is that the use of the GAI avoids the flawed validation that can occur with the traditional methodology, which might result in erroneous predictions of solar power conversion system performance.
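
The four constituent statistics combined by such an indicator can be computed as sketched below, with the t-statistic in the form commonly used for solar radiation model validation (Stone, 1993), t = sqrt((n - 1) MBE^2 / (RMSE^2 - MBE^2)). The exact aggregation into the GAI is not given in the abstract, so only the inputs are shown, and the irradiance values are illustrative.

```python
import numpy as np

def validation_statistics(measured, predicted):
    """MBE, RMSE, t-statistic, and correlation coefficient for a model-measurement pair."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    n = measured.size
    err = predicted - measured
    mbe = err.mean()
    rmse = np.sqrt(np.mean(err ** 2))
    t_stat = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    r = np.corrcoef(measured, predicted)[0, 1]
    return mbe, rmse, t_stat, r

# Hypothetical measured vs. modeled global irradiance [W/m^2].
measured = np.array([120.0, 340.0, 560.0, 720.0, 810.0, 640.0, 410.0, 180.0])
predicted = np.array([130.0, 330.0, 585.0, 700.0, 835.0, 620.0, 430.0, 170.0])
mbe, rmse, t_stat, r = validation_statistics(measured, predicted)
print(f"MBE = {mbe:.1f}, RMSE = {rmse:.1f}, t = {t_stat:.2f}, r = {r:.4f}")
```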

Keywords: solar radiation model, parametric model, performance analysis, Global Accuracy Indicator (GAI)

Procedia PDF Downloads 331
4583 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that it can detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
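
The offline/online split behind the reduced basis idea can be sketched in a few lines: full-order snapshots are compressed with an SVD (proper orthogonal decomposition) and the online solve is a Galerkin projection onto that basis. The random stand-in stiffness matrix, the low-dimensional load family, and the mode count below are illustrative assumptions, not the paper's truss model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots, n_modes = 200, 40, 8

# Offline stage: a stand-in SPD "stiffness matrix" and snapshots of full-order
# solutions for loads drawn from a low-dimensional parameter family.
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)
B = rng.standard_normal((n_dof, 5))                       # 5 load parameters
snapshots = np.linalg.solve(K, B @ rng.standard_normal((5, n_snapshots)))
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :n_modes]                                        # POD / reduced basis

# Online stage: Galerkin projection gives a small n_modes x n_modes system.
f = B @ rng.standard_normal(5)                            # new load case from sensors
u_rb = V @ np.linalg.solve(V.T @ K @ V, V.T @ f)          # fast reduced-order response

u_full = np.linalg.solve(K, f)                            # reference full-order solve
print(f"relative error of the reduced-basis response: "
      f"{np.linalg.norm(u_full - u_rb) / np.linalg.norm(u_full):.2e}")
```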

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 83
4582 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
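
A minimal sketch of the core idea, an AdaGrad step driven by a loopless-SVRG gradient estimator, is given below on a toy least-squares finite sum. It follows the spirit of the AdaLVR construction described above rather than the paper's exact algorithm, and the step size, refresh probability, and problem data are illustrative assumptions.

```python
import numpy as np

def adagrad_lsvrg(grad_i, full_grad, w0, n, lr=0.5, p=None, eps=1e-8, iters=2000):
    """AdaGrad step sizes applied to a loopless-SVRG (L-SVRG) gradient estimator.
    grad_i(w, i) returns the gradient of the i-th summand; full_grad(w) the full gradient."""
    rng = np.random.default_rng(0)
    p = p if p is not None else 1.0 / n           # probability of refreshing the reference point
    w, w_ref = w0.copy(), w0.copy()
    mu = full_grad(w_ref)                         # stored full gradient at the reference point
    G = np.zeros_like(w0)                         # AdaGrad accumulator
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_ref, i) + mu  # loopless variance-reduced estimator
        G += g * g
        w -= lr * g / (np.sqrt(G) + eps)          # coordinate-wise adaptive step
        if rng.random() < p:                      # loopless reference update (no inner loop)
            w_ref = w.copy()
            mu = full_grad(w_ref)
    return w

# Toy finite-sum least-squares problem: f(w) = (1/n) sum_i (a_i·w - b_i)^2 / 2.
rng = np.random.default_rng(1)
n, d = 200, 10
A, w_true = rng.standard_normal((n, d)), rng.standard_normal(d)
b = A @ w_true
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]
full_grad = lambda w: A.T @ (A @ w - b) / n
w_hat = adagrad_lsvrg(grad_i, full_grad, np.zeros(d), n)
print("distance to the least-squares solution:", np.linalg.norm(w_hat - w_true))
```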

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 52
4581 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm

Authors: Muhammad Umar Kiani, Muhammad Shahbaz

Abstract:

The performance of any vehicle can be predicted through its design, modeling, and optimization, and design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle of attack profile. This angle of attack profile is the basic element for completing a specified mission. The flight program of the vehicle is optimized under constraints on the maximum allowed angle of attack and on the lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and its efficient capacity to explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that can work with continuous, discontinuous, linear, and non-linear performance metrics, and it eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system. In the case of launch vehicles, we are especially keen to achieve maximum performance and efficiency under different constraints. In a launch vehicle, the flight program is the prescribed variation of the vehicle pitch angle during the flight, which has a substantial influence on the reachable altitude, the accuracy of orbit insertion, and the aerodynamic loading. The results reveal that the angle of attack profile significantly affects the performance of the vehicle.
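
The overall optimization loop can be sketched as below with a differential evolution solver acting on a nodal parameterization of the angle of attack profile. The surrogate flight model, the load proxy, and the limits are deliberately toy placeholders for the 3-DOF MATLAB simulation described above, so the sketch only illustrates the structure of the evolutionary search, not the vehicle physics.

```python
import numpy as np
from scipy.optimize import differential_evolution

ALPHA_MAX = np.radians(12.0)        # maximum allowed angle of attack (assumed)
LOAD_LIMIT = 4.0                    # allowed axial/lateral load proxy [g] (assumed)

def surrogate_flight(alpha_nodes):
    """Placeholder for the 3-DOF simulation: returns (altitude proxy, peak load proxy)."""
    alpha = np.interp(np.linspace(0, 1, 50),
                      np.linspace(0, 1, alpha_nodes.size), alpha_nodes)
    altitude = np.sum(np.sin(alpha) * np.linspace(1.0, 0.2, 50)) / 50.0  # toy altitude gain
    peak_load = 1.0 + 30.0 * np.max(np.abs(np.diff(alpha)))              # toy load measure
    return altitude, peak_load

def objective(alpha_nodes):
    altitude, peak_load = surrogate_flight(np.asarray(alpha_nodes))
    penalty = 100.0 * max(0.0, peak_load - LOAD_LIMIT) ** 2              # soft load constraint
    return -altitude + penalty                                           # maximize altitude

bounds = [(0.0, ALPHA_MAX)] * 5                                          # 5 AoA profile nodes
result = differential_evolution(objective, bounds, seed=0, maxiter=200)
print("optimal AoA profile [deg]:", np.degrees(result.x).round(2))
```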

Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process

Procedia PDF Downloads 394
4580 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and highly varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purpose of developing their business. The recently decentralized data management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 415
4579 Fully Coupled Porous Media Model

Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li

Abstract:

This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow, finite element software tool. The fully implicit software accurately reproduces classical analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's problems investigating wellbore stability in poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes some of the issues in current industrial practice, where explicit solvers are used for the mechanical governing equations and implicit solvers only on the porous flow side; this can lead to instability and non-convergence in the coupled system and give results with a non-negligible degree of error. The specification of a fully monolithic implicit-implicit coupled porous media code means that both the seepage and mechanical equations are solved in one matrix system under a unified time-stepping scheme, which makes the problem definition much easier. When an explicit solver is used, additional inputs such as the damping coefficient and mass scaling factor are required; these are circumvented with a fully implicit solution. Further, improved accuracy is achieved because the solution does not depend on predictor-corrector methods for the pore fluid pressure, although at the potential cost of reduced stability. In testing this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems. These cases include (1) the Biot coefficient calculation, (2) consolidation theory with the Terzaghi analytical solution, (3) sedimentation theory with the Gibson analytical solution, and (4) Coussy's wellbore poroelastic analytical solutions.
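
The Terzaghi benchmark in case (2) has a classical series solution that such a code is typically verified against; a short sketch of that reference solution and the resulting average degree of consolidation is given below, with the normalization u0 = 1 and single drainage assumed for illustration.

```python
import numpy as np

def terzaghi_pore_pressure(Z, Tv, u0=1.0, n_terms=200):
    """Terzaghi 1-D consolidation series solution for the excess pore pressure,
    u(Z, Tv) = sum_m (2 u0 / M) sin(M Z) exp(-M^2 Tv) with M = pi (2m + 1) / 2,
    where Z = z / H (drainage at Z = 0, impermeable base at Z = 1) and
    Tv = c_v t / H^2 is the time factor."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2.0
    return np.sum((2.0 * u0 / M) * np.sin(np.outer(Z, M)) * np.exp(-(M ** 2) * Tv), axis=1)

Z = np.linspace(0.0, 1.0, 101)            # normalized depth below the drained boundary
for Tv in (0.05, 0.2, 0.5, 1.0):
    u = terzaghi_pore_pressure(Z, Tv)
    U = 1.0 - u.mean()                    # average degree of consolidation (u0 = 1)
    print(f"Tv = {Tv:.2f}: average degree of consolidation U ~ {U:.3f}")
```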

Keywords: coupled, implicit, monolithic, porous media

Procedia PDF Downloads 126
4578 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may pose great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, and it can be difficult because of the complexity of their fracture networks; this leads to geological uncertainties that matter for global petroleum reserves. The key challenges in carbonate reservoir modeling include the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling adopted here is a hybrid fracture modeling approach, combining the discrete fracture network (DFN) method and an implicit fracture network, which offers enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to demonstrate the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method treats fractures as discrete entities, capturing their geometry, orientation, and connectivity. However, it has significant disadvantages, since the number of fractures in the field can be very high and, due to limitations in the amount of main memory, it is very difficult to represent all of these fractures explicitly. By integrating data from image logs (formation micro-imager), core data, and fracture density logs, a discrete fracture network model can be constructed to represent the characteristics of the hydraulically relevant fractures. The results obtained from the DFN modeling approach provide valuable insights into the behavior of the East Siberia field's carbonate reservoir. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of the simulation results enables the identification of zones of increased fracturing and of optimization opportunities for reservoir development, with the potential application of enhanced oil recovery techniques, which were considered in further simulations on dual-porosity and dual-permeability models. This approach treats fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid modeling method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 61
4577 Material Chemistry Level Deformation and Failure in Cementitious Materials

Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo

Abstract:

Cementitious materials, an excellent example of highly complex, heterogeneous material systems, are cement-based systems that include cement paste, mortar, and concrete and are heavily used in civil infrastructure; although commonly used, they are among the most complex materials in terms of morphology and structure, far more so than, for example, crystalline metals. Processes and features occurring in nanometer-sized morphological structures affect the performance and deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating complex microstructures and associated molecular structures that vary with hydration. A fundamental understanding of the behavior and properties of cementitious materials can be gained through multi-scale modeling, starting from the material chemistry level at the atomistic scale, to explore their role and the effects manifested at larger length and engineering scales. This predictive modeling enables the understanding and study of the influence of material chemistry level changes and nanomaterial additives on the expected material characteristics and deformation behavior. Atomistic molecular dynamics modeling is required to couple material science to engineering mechanics: starting at the molecular level, a comprehensive description of the material's chemistry is required to understand the fundamental properties that govern behavior across each relevant length scale. Material chemistry level models and molecular dynamics simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and its associated deformation and failure. The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper focuses on our recent work on the shear and compressive deformation and failure behavior of CSH represented by the Jennite mineral structure. The deformation and failure behavior of traditionally hydrated CSH under shear and compressive loading, the effect of material chemistry changes on the predicted stress-strain behavior and on the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes in its chemistry will be discussed.

Keywords: cementitious materials, deformation, failure, material chemistry modeling

Procedia PDF Downloads 272
4576 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Furthermore, integrating data from multiple systems and technologies is also a challenge. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating different business models and revenue streams. This paper focuses on the issue that data are stored in data silos with different schemas and structures. Conventional approaches to addressing this issue involve data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
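
At its simplest, the semantic lifting step amounts to expressing a plant data point as triples against dictionary-style property identifiers and storing them in a graph; a minimal rdflib sketch is given below. The namespaces, the asset name, and the IRDI-style property key are placeholders invented for illustration, not the real ECLASS or AAS identifiers.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Placeholder namespaces: a company namespace and a dictionary-style property key.
# The identifier below is invented for illustration and is not a real ECLASS IRDI.
EX = Namespace("http://example.com/plant/")
DICT = Namespace("http://example.com/dictionary/")

g = Graph()
asset = EX["press-001"]
g.add((asset, RDF.type, EX.Asset))
g.add((asset, DICT["0173-1-02-XXXX-001"], Literal("Hydraulic press")))  # e.g. product designation
g.add((asset, DICT.maxOperatingPressure, Literal(250)))                 # bar, illustrative value

print(g.serialize(format="turtle"))
```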

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 79
4575 Post-Earthquake Damage Detection Using System Identification with a Pair of Seismic Recordings

Authors: Lotfi O. Gargab, Ruichong R. Zhang

Abstract:

A wave-based framework is presented for modeling seismic motion in multistory buildings and using the measured response for system identification, which can be utilized to extract important information regarding structural integrity. With one pair of building responses at two locations, a generalized model response is formulated based on wave propagation features and expressed as frequency and time response functions denoted, respectively, GFRF and GIRF. In particular, the GIRF is fundamental in tracking arrival times of impulsive wave motion initiated at the response level, which depend on local model properties. Matching the model and measured-structure responses can help in identifying model parameters and inferring building properties. To show the effectiveness of this approach, the Millikan Library in Pasadena, California is identified with recordings of the Yorba Linda earthquake of September 3, 2002.
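
A deconvolution-type impulse response between two recordings, in the spirit of the GIRF described above, can be sketched as below. The water-level regularization, the synthetic basement/roof pair, and the 0.1 s travel time are illustrative assumptions rather than the authors' exact formulation or the Millikan Library data.

```python
import numpy as np

def generalized_irf(x_ref, x_out, fs, eps=0.05):
    """Deconvolution of one recording by the other in the frequency domain:
    GIRF(t) = IFFT[ X_out(f) * conj(X_ref(f)) / (|X_ref(f)|^2 + water level) ]."""
    X_ref, X_out = np.fft.rfft(x_ref), np.fft.rfft(x_out)
    gfrf = X_out * np.conj(X_ref) / (np.abs(X_ref) ** 2 + eps * np.mean(np.abs(X_ref) ** 2))
    girf = np.fft.irfft(gfrf, n=len(x_ref))
    t = np.arange(len(x_ref)) / fs
    return t, girf

# Hypothetical pair of recordings: the roof response is a delayed, scaled copy of the
# basement response (0.1 s wave travel time) plus noise.
fs, n = 200.0, 4000
rng = np.random.default_rng(0)
basement = rng.standard_normal(n)
roof = 0.8 * np.roll(basement, int(0.1 * fs)) + 0.05 * rng.standard_normal(n)

t, girf = generalized_irf(basement, roof, fs)
print(f"Estimated wave travel time ~ {t[np.argmax(girf[: n // 2])]:.3f} s")
```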

Keywords: system identification, continuous-discrete mass modeling, damage detection, post-earthquake

Procedia PDF Downloads 359
4574 Numerical Study of the Influence of the Primary Stream Pressure on the Performance of the Ejector Refrigeration System Based on Heat Exchanger Modeling

Authors: Elhameh Narimani, Mikhail Sorin, Philippe Micheau, Hakim Nesreddine

Abstract:

Numerical models of the heat exchangers in an ejector refrigeration system (ERS) were developed and validated against experimental data. The models were based on the switched heat exchanger model using the moving boundary method and were capable of estimating the zone lengths, the outlet temperatures of both sides, and the heat loads at various experimental points. The developed models were utilized to investigate the influence of the primary flow pressure on the performance of an R245fa ERS, based on its coefficient of performance (COP) and exergy efficiency. It was shown numerically and confirmed experimentally that increasing the primary flow pressure slightly reduces the COP, while the exergy efficiency goes through a maximum before decreasing.

Keywords: coefficient of performance (COP), ejector refrigeration system (ERS), exergy efficiency (ηII), heat exchanger modeling, moving boundary method

Procedia PDF Downloads 187
4582 PWM-Based Control of D-STATCOM for Voltage Sag and Swell Mitigation in Distribution Systems

Authors: A. Assif

Abstract:

This paper presents the modeling of a prototype distribution static compensator (D-STATCOM) for voltage sag and swell mitigation in an unbalanced distribution system. The concept that an inverter can be used as a generalized impedance converter to realize either inductive or capacitive reactance is used here to mitigate power quality issues in distribution networks. The D-STATCOM is intended to replace the widely used Static Var Compensator (SVC). The scheme is based on the voltage source converter (VSC) principle. In this model, a PWM-based control scheme has been implemented to control the electronic valves of the VSC, and a phase-shift control algorithm is used for converter control. The D-STATCOM injects a current into the system to mitigate voltage sags. The D-STATCOM model has been built in MATLAB/Simulink. Simulations are first carried out to illustrate the use of the D-STATCOM in mitigating voltage sag in a distribution system. The simulation results prove that the D-STATCOM is capable of mitigating voltage sag as well as improving the power quality of the system.

Keywords: D-STATCOM, voltage sag, voltage source converter (VSC), phase shift control

Procedia PDF Downloads 328