Search results for: model estimation
16673 Application and Verification of Regression Model to Landslide Susceptibility Mapping
Authors: Masood Beheshtirad
Abstract:
Identification of regions with potential for landslide occurrence is one of the basic measures in natural resources management. Different landslide hazard mapping models have been proposed depending on environmental conditions and goals. In this research, a landslide hazard map was produced using a multiple regression model, and the applicability of this model was investigated in the Baghdasht watershed. The dependent variable is the landslide inventory map, and the independent variables consist of information layers for geology, slope, aspect, distance from river, distance from road, fault, and land use. To this end, existing landslides were identified and an inventory map was prepared. The landslide hazard map was then produced from the multiple regression model. The degree of similarity between the potential hazard classes and values of this model and the landslide inventory map was assessed in the SPSS environment. The results showed a significant correlation between the potential hazard classes and values of the model and the area of the landslides. The multiple regression model is therefore suitable for application in the Baghdasht watershed.
Keywords: landslide, mapping, multiple model, regression
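As a rough, hypothetical sketch of how such a cell-based multiple regression can be set up outside SPSS (layer names, sample sizes, coefficients and the use of scikit-learn are assumptions for illustration only, not the authors' workflow):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_cells = 5000                       # map cells in the watershed (synthetic)

# Independent variables: one column per information layer (values are synthetic)
X = np.column_stack([
    rng.integers(1, 6, n_cells),     # geology class
    rng.uniform(0, 45, n_cells),     # slope (degrees)
    rng.uniform(0, 360, n_cells),    # aspect (degrees)
    rng.uniform(0, 2000, n_cells),   # distance from river (m)
    rng.uniform(0, 2000, n_cells),   # distance from road (m)
    rng.uniform(0, 5000, n_cells),   # distance from fault (m)
    rng.integers(1, 4, n_cells),     # land-use class
])

# Dependent variable: landslide inventory (1 = landslide mapped in the cell), synthetic here
y = (0.04 * X[:, 1] - 0.0005 * X[:, 3] + rng.normal(0, 1, n_cells) > 0.5).astype(float)

model = LinearRegression().fit(X, y)
hazard_index = model.predict(X)                      # continuous susceptibility values
hazard_class = np.digitize(hazard_index, np.quantile(hazard_index, [0.25, 0.5, 0.75]))
print("coefficients:", model.coef_)
print("cells per hazard class:", np.bincount(hazard_class))
```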
Procedia PDF Downloads 325
16672 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target. The target position detected by each sonar is given in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least squares processing to each sonar's data to obtain the observation value. MATLAB simulations of the underwater acoustic environment are carried out, and the results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error of the RTQC method converges rapidly, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but an increase in random noise slows down its convergence rate. The LS method is an improvement of the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
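As a rough illustration of the offline bias-estimation step, the sketch below is a minimal, hypothetical example (not the authors' code): it generates noisy range/bearing measurements from a simulated sonar with constant distance and angle biases relative to a reference sonar, and recovers those biases both by averaging the measurement differences (in the spirit of the RTQC approach) and by an ordinary least-squares fit. Variable names, noise levels and the constant-offset bias model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True target track (m) and the true biases of the second sonar relative to the first
track = np.column_stack([np.linspace(1000, 3000, 200), np.linspace(500, 1500, 200)])
range_bias, angle_bias = 25.0, np.deg2rad(1.5)   # assumed systematic errors

def measure(points, r_bias=0.0, a_bias=0.0, noise_r=5.0, noise_a=np.deg2rad(0.2)):
    """Convert Cartesian track points to noisy, biased (range, bearing) measurements."""
    r = np.hypot(points[:, 0], points[:, 1]) + r_bias + rng.normal(0, noise_r, len(points))
    a = np.arctan2(points[:, 1], points[:, 0]) + a_bias + rng.normal(0, noise_a, len(points))
    return r, a

r_ref, a_ref = measure(track)                           # reference sonar (assumed unbiased)
r_obs, a_obs = measure(track, range_bias, angle_bias)   # biased sonar

# RTQC-style estimate: average of the measurement differences
print("RTQC-style:", np.mean(r_obs - r_ref), np.rad2deg(np.mean(a_obs - a_ref)))

# Least-squares estimate: fit a constant offset (equivalent to the mean here,
# but the LS formulation extends naturally to more elaborate bias models)
A = np.ones((len(track), 1))
b_r, *_ = np.linalg.lstsq(A, (r_obs - r_ref)[:, None], rcond=None)
b_a, *_ = np.linalg.lstsq(A, (a_obs - a_ref)[:, None], rcond=None)
print("LS:", b_r.item(), np.rad2deg(b_a.item()))
```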
Procedia PDF Downloads 169
16671 A Multi-Scale Contact Temperature Model for Dry Sliding Rough Surfaces
Authors: Jamal Choudhry, Roland Larsson, Andreas Almqvist
Abstract:
A multi-scale flash temperature model has been developed and validated against existing work. The core strength of the proposed model is that it can be adapted to predict the flash contact temperatures occurring in various types of sliding systems. In this paper, it is used to investigate how different surface roughness parameters affect the flash temperatures. The results show that the maximum flash temperature increases for decreasing Hurst exponents as well as for increasing values of the high-frequency cut-off. It was also shown that surface roughness does not influence the average interface temperature. The model predictions were validated against data from an experiment conducted in a pin-on-disc machine, which also showed the importance of including a wear model when simulating flash temperature development in a sliding system.
Keywords: multiscale, pin-on-disc, finite element method, flash temperature, surface roughness
Procedia PDF Downloads 118
16670 Prediction of Soil Liquefaction by Using UBC3D-PLM Model in PLAXIS
Authors: A. Daftari, W. Kudla
Abstract:
Liquefaction is a phenomenon in which the strength and stiffness of a soil are reduced by earthquake shaking or other rapid cyclic loading. Liquefaction and related phenomena have been responsible for huge amounts of damage in historical earthquakes around the world. Modelling of soil behaviour is the main step in the soil liquefaction prediction process. Nowadays, several constitutive models for sand have been presented; nevertheless, only some of them can capture this mechanism. One of the most useful models in this regard is the UBCSAND model. In this research, the capability of this model is assessed using the PLAXIS software. Real data from the 1987 Superstition Hills earthquake in the Imperial Valley were used. The results of the simulation show that the UBC3D-PLM model reproduces the observed trend.
Keywords: liquefaction, PLAXIS, pore-water pressure, UBC3D-PLM
Procedia PDF Downloads 310
16669 Cytology and Flow Cytometry of Three Japanese Drosera Species
Authors: Santhita Tungkajiwangkoon, Yoshikazu Hoshi
Abstract:
Three Japanese Drosera species are good models for studying genome organization; they form a highly specialized morphological group adapted for insect trapping and have shown anti-inflammatory and antibacterial effects, which explains why these plants are so appealing to botanists. Cytology and flow cytometry were used to investigate genetic stability and estimate ploidy in the three related species. The cytological and flow cytometric analyses were carried out on Drosera rotundifolia L., Drosera spatulata Labill., and Drosera tokaiensis. The cytological studies by fluorescence staining (DAPI) showed that D. tokaiensis is an allopolyploid (2n=6x=60, hexaploid), a natural hybrid polyploid of D. rotundifolia and D. spatulata. D. rotundifolia is a diploid with middle-sized metaphase chromosomes (2n=2x=20) as the paternal origin, and D. spatulata is a tetraploid with small metaphase chromosomes (2n=4x=40) as the maternal origin. Flow cytometry analysis was used to confirm the ploidy level and determine the DNA content of the plants. The 2C DNA value of D. rotundifolia was 2.8 pg, that of D. spatulata was 1.6 pg, and that of D. tokaiensis was 3.9 pg. The 2C DNA value of D. tokaiensis should be related to those of its parents, but in the present study it did not match the theoretical value expected for a hybrid with additive parental genomes. D. tokaiensis is thus likely a natural hybrid, and hybridization during natural evolution can cause genome reduction in plants.
Keywords: drosera, hybrid, cytology, flow cytometry
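As a hedged back-of-the-envelope reading of the additivity argument (the 2C values are taken from the abstract; the additive interpretation is an assumption), an allohexaploid that simply combined its parents' genomes would be expected to show roughly the sum of the parental 2C values,

\[
2C_{\text{expected}} \approx 2C_{\text{D. rotundifolia}} + 2C_{\text{D. spatulata}} = 2.8\,\text{pg} + 1.6\,\text{pg} = 4.4\,\text{pg},
\]

whereas the measured 3.9 pg falls short of this additive expectation, which is consistent with the genome reduction (downsizing) suggested in the abstract.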
Procedia PDF Downloads 384
16668 Numerical Analysis of Swirling Chamber Using Improved Delayed Detached Eddy Simulation Turbulence Model
Authors: Hamad M. Alhajeri
Abstract:
A swirling chamber is a promising cooling method for heavily thermally loaded parts such as turbine blades, owing to the additional circumferential velocity and therefore improved turbulent mixing of the fluid. This paper numerically investigates the effect of the turbulence model on heat convection in the swirling chamber. A grid independence analysis is conducted to obtain the proper grid dimension, and the work is validated against experimental data available in the literature. Flow analyses using the improved delayed detached eddy simulation turbulence model and the Reynolds-averaged Navier-Stokes k-ɛ turbulence model are carried out. The flow characteristics near the exit change when the improved delayed detached eddy simulation model is used.
Keywords: gas turbine, Nusselt number, flow characteristics, heat transfer
Procedia PDF Downloads 202
16667 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises and events, and also mobility for urban planning and for the design and management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions. Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand, and achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, cloud computing, decision-making, mobility demand, transportation
Procedia PDF Downloads 62
16666 Numerical Simulation of Wishart Diffusion Processes
Authors: Raphael Naryongo, Philip Ngare, Anthony Waititu
Abstract:
This paper deals with the numerical simulation of Wishart processes for a single-asset risky pricing model whose volatility is described by Wishart affine diffusion processes. The multi-factor specification of volatility makes the model flexible enough to fit stock market data for short or long maturities for better returns. The Wishart process is a stochastic process that is a positive semi-definite matrix-valued generalization of the square-root process. The aim of the study is to model the log asset stock returns under the double Wishart stochastic volatility model. The solution of the log-asset return dynamics for bi-Wishart processes is obtained through Euler-Maruyama discretization schemes. The numerical results on the asset returns are compared to those of existing models, such as the Heston stochastic volatility model and the double Heston stochastic volatility model.
Keywords: Euler schemes, log-asset return, infinitesimal generator, Wishart diffusion affine processes
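The abstract does not give the exact parameterisation used, so the following is only a minimal sketch, under assumed notation, of an Euler-Maruyama step for a 2x2 Wishart variance process X_t satisfying dX_t = (beta Q'Q + M X_t + X_t M')dt + sqrt(X_t) dW_t Q + Q' dW_t' sqrt(X_t); the projection back onto the positive semi-definite cone is one common way to keep the discretised matrix admissible. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

def psd_project(S):
    """Symmetrise and clip negative eigenvalues so the matrix stays positive semi-definite."""
    S = 0.5 * (S + S.T)
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def euler_wishart(X0, M, Q, beta, T=1.0, n_steps=1000):
    """Euler-Maruyama discretisation of a Wishart diffusion (assumed parameterisation)."""
    dt = T / n_steps
    X = X0.copy()
    path = [X.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=X.shape)
        sqX = np.real(sqrtm(X))
        drift = beta * Q.T @ Q + M @ X + X @ M.T
        diff = sqX @ dW @ Q + Q.T @ dW.T @ sqX
        X = psd_project(X + drift * dt + diff)
        path.append(X.copy())
    return np.array(path)

# Illustrative 2x2 parameters (assumed, not the paper's calibration)
X0 = np.array([[0.04, 0.01], [0.01, 0.09]])
M = np.array([[-1.0, 0.0], [0.0, -1.2]])
Q = np.array([[0.2, 0.0], [0.05, 0.15]])
path = euler_wishart(X0, M, Q, beta=3.0)
print("terminal variance matrix:\n", path[-1])
```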
Procedia PDF Downloads 378
16665 Input-Output Analysis in Laptop Computer Manufacturing
Authors: H. Z. Ulukan, E. Demircioğlu, M. Erol Genevois
Abstract:
The scope of this paper and the aim of the proposed model are to apply monetary input-output (I-O) analysis to point out the importance of reusing know-how and other requirements in order to reduce production costs in a manufacturing process for a laptop computer. The I-O approach using the monetary input-output model is employed to demonstrate the impacts of different factors in a manufacturing process. A sensitivity analysis showing the correlation between these different factors is also presented. It is expected that the recommended model would have an advantageous effect on the cost minimization process.
Keywords: input-output analysis, monetary input-output model, manufacturing process, laptop computer
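As a minimal illustration of the monetary I-O mechanics behind such an analysis (sector names and numbers are invented for the example, not taken from the paper), total output x required to meet final demand d follows the Leontief relation x = (I - A)^{-1} d, and small changes in the technical coefficients propagate to the output and cost figures:

```python
import numpy as np

# Hypothetical technical-coefficient matrix A for three inputs to laptop manufacturing
# (e.g. components, know-how/engineering, assembly); values are illustrative only.
A = np.array([
    [0.20, 0.05, 0.10],
    [0.10, 0.15, 0.05],
    [0.05, 0.10, 0.20],
])
d = np.array([100.0, 30.0, 50.0])          # final demand vector (monetary units)

L = np.linalg.inv(np.eye(3) - A)           # Leontief inverse
x = L @ d
print("required total output:", x)

# Simple sensitivity check: perturb one coefficient and observe the output change
A_pert = A.copy()
A_pert[1, 0] += 0.02                       # e.g. more know-how needed per unit of components
x_pert = np.linalg.inv(np.eye(3) - A_pert) @ d
print("output change from perturbation:", x_pert - x)
```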
Procedia PDF Downloads 391
16664 Space Vector PWM and Model Predictive Control for Voltage Source Inverter Control
Authors: Irtaza M. Syed, Kaamran Raahemifar
Abstract:
In this paper, we present a comparative assessment of Space Vector Pulse Width Modulation (SVPWM) and Model Predictive Control (MPC) for a two-level three-phase (2L-3P) Voltage Source Inverter (VSI). The VSI with its associated system is subjected to both control techniques, and the results are compared. Matlab/Simulink was used to model, simulate, and validate the control schemes. The findings of this study show that MPC is superior to SVPWM in terms of total harmonic distortion (THD) and implementation.
Keywords: voltage source inverter, space vector pulse width modulation, model predictive control, comparison
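The paper does not list its controller equations, so the snippet below is only a generic sketch of finite-control-set MPC for a 2L-3P VSI feeding an RL load: at every step the eight admissible switching states are enumerated, the load current one step ahead is predicted from a forward-Euler model, and the state minimising the current-tracking cost is applied. All parameters and names are illustrative assumptions, not the authors' setup.

```python
import numpy as np

Vdc, R, L, Ts = 400.0, 1.0, 10e-3, 50e-6     # assumed DC link, RL load and sampling time

# Eight switching states of a two-level three-phase inverter (Sa, Sb, Sc)
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def alpha_beta_voltage(s):
    """Phase voltages for a switching state, transformed to the alpha-beta frame."""
    va, vb, vc = (Vdc * (2 * s[i] - s[(i + 1) % 3] - s[(i + 2) % 3]) / 3 for i in range(3))
    return np.array([va, (vb - vc) / np.sqrt(3)])   # amplitude-invariant Clarke transform

def best_state(i_meas, i_ref):
    """Pick the switching state whose one-step current prediction is closest to the reference."""
    costs = []
    for s in states:
        v = alpha_beta_voltage(s)
        i_pred = i_meas + (Ts / L) * (v - R * i_meas)   # forward-Euler RL-load model
        costs.append(np.sum((i_ref - i_pred) ** 2))
    return states[int(np.argmin(costs))]

i_meas = np.array([2.0, -1.0])     # measured alpha-beta currents (A), illustrative
i_ref = np.array([5.0, 0.0])       # reference currents (A)
print("selected switching state:", best_state(i_meas, i_ref))
```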
Procedia PDF Downloads 508
16663 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction
Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar
Abstract:
In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH in order to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through the ARCH or GARCH (autoregressive conditional heteroskedasticity, generalized autoregressive conditional heteroskedasticity) test. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast the future values of the EEG. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then, different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of the mentioned ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we suggest a procedure to predict oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes.
Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG
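A minimal sketch of the two-stage fit described above, using the statsmodels and arch packages on a synthetic series standing in for an EEG channel; the model orders, data and forecast horizon are placeholders, not those selected by the paper's AIC search.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(0, 1, 2000)) * 0.01 + rng.normal(0, 1, 2000)  # stand-in for an EEG segment

# Stage 1: fit the mean dynamics with ARIMA (order chosen arbitrarily here)
arima_fit = ARIMA(y, order=(2, 0, 1)).fit()
resid = arima_fit.resid

# Stage 2: model the conditional heteroskedasticity of the ARIMA residuals with GARCH(1,1)
garch_fit = arch_model(resid, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")

# Forecast the conditional variance a few steps ahead; a sustained rise in the
# predicted volatility is the kind of signature used to flag an oncoming seizure.
var_forecast = garch_fit.forecast(horizon=10).variance.iloc[-1]
print(var_forecast)
```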
Procedia PDF Downloads 406
16662 Maturity Transformation Risk Factors in Islamic Banking: An Implication of Basel III Liquidity Regulations
Authors: Haroon Mahmood, Christopher Gan, Cuong Nguyen
Abstract:
Maturity transformation risk has been highlighted as one of the major causes of the recent global financial crisis. Basel III has proposed new liquidity regulations for the transformation function of banks in order to monitor this risk. Specifically, the net stable funding ratio (NSFR) is introduced to enhance medium- and long-term resilience against liquidity shocks. Islamic banking is widely accepted in many parts of the world and contributes a significant portion of the financial sector in many countries. Using a dataset of 68 fully fledged Islamic banks from 11 different countries over the period 2005-2014, this study analyzes various factors that may significantly affect maturity transformation risk in these banks. We utilize a two-step system GMM estimation technique on an unbalanced panel and find that bank capital, credit risk, financing, size, and market power are the most significant bank-specific factors. Gross domestic product and inflation are the significant macroeconomic factors influencing this risk, whereas bank profitability, asset efficiency, and income diversity are found to be insignificant in determining maturity transformation risk in the Islamic banking model.
Keywords: Basel III, Islamic banking, maturity transformation risk, net stable funding ratio
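For reference, the Basel III ratio underlying the analysis is defined, in its standard regulatory form (not reproduced from the paper), as

\[
\text{NSFR} = \frac{\text{Available Stable Funding (ASF)}}{\text{Required Stable Funding (RSF)}} \geq 100\%,
\]

so a lower NSFR corresponds to greater maturity transformation risk.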
Procedia PDF Downloads 415
16661 Simulation of Uniaxial Ratcheting Behaviors of SA508-3 Steel at Elevated Temperature
Authors: Jun Tian, Yu Yang, Liping Zhang, Qianhua Kan
Abstract:
Experimental results show that SA508-3 steel exhibits temperature-dependent cyclic softening and obvious ratcheting behavior, and dynamic strain ageing was observed in the temperature range of 200 ºC to 350 ºC. Based on these observations, a temperature-dependent cyclic plastic constitutive model is proposed by introducing nonlinear cyclic softening and kinematic hardening rules, and dynamic strain ageing is also incorporated into the constitutive model. Comparisons between experiments and simulations were carried out to validate the proposed model at elevated temperature.
Keywords: constitutive model, elevated temperature, ratcheting, SA 508-3
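The abstract does not reproduce its hardening equations; as a point of reference only, nonlinear kinematic hardening rules of the kind mentioned are typically of the Armstrong-Frederick (Chaboche) form,

\[
\dot{\boldsymbol{\alpha}} = \tfrac{2}{3}\,C\,\dot{\boldsymbol{\varepsilon}}^{p} - \gamma\,\boldsymbol{\alpha}\,\dot{p},
\]

where alpha is the backstress, epsilon-dot-p the plastic strain rate, p-dot the accumulated plastic strain rate, and C and gamma are material parameters; in a model such as the one proposed here, the temperature dependence and dynamic strain ageing would enter through parameters of this kind, though the paper's exact formulation is not given in the abstract.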
Procedia PDF Downloads 302
16660 Exploring the Energy Model of Cumulative Grief
Authors: Masica Jordan Alston, Angela N. Bullock, Angela S. Henderson, Stephanie Strianse, Sade Dunn, Joseph Hackett, Alaysia Black Hackett, Marcus Mason
Abstract:
The Energy Model of Cumulative Grief was created in 2018. It builds on historic stage theories of grief and is additionally unique in its focus on cultural responsiveness. The Energy Model of Cumulative Grief helps to train practitioners who work with clients dealing with grief and loss. This paper introduces this innovative model and explores how it positively impacted a convenience sample of 140 practitioners and individuals experiencing grief and loss. Respondents participated in webinars provided by the National Grief and Loss Center of America (NGLCA). Participants in this cross-sectional study completed one of three Grief and Loss Surveys created by the Grief and Loss Centers of America. Data analysis was conducted via SPSS and Survey Hero to examine the survey results. Results indicate that the Energy Model of Cumulative Grief was an effective resource for participants in addressing grief and loss; the majority of participants found the webinars helpful and a conduit to higher levels of hope. The findings suggest that using the Energy Model of Cumulative Grief is effective in providing culturally responsive grief and loss resources to practitioners and clients. There are far-reaching implications for the use of technology to provide hope, through the Energy Model of Cumulative Grief, to those suffering from grief and loss worldwide.
Keywords: grief, loss, grief energy, grieving brain
Procedia PDF Downloads 84
16659 An Investigation of Influential Factors in Adopting the Cloud Computing in Saudi Arabia: An Application of Technology Acceptance Model
Authors: Shayem Saleh ALresheedi, Lu Song Feng, Abdulaziz Abdulwahab M. Fatani
Abstract:
Cloud computing is an emerging concept in the technological sphere. Its development enables many applications to make information available online and on demand. It is becoming an essential element for businesses due to its ability to diminish the costs of IT infrastructure, and it is being adopted in Saudi Arabia. However, many factors affect its adoption. Several researchers in the field have overlooked the use of the TAM model for identifying the relevant factors and their impact on the adoption of cloud computing. This study focuses on evaluating the acceptability of cloud computing and analyzing its influencing factors using the Technology Acceptance Model (TAM) in Saudi Arabia. It suggests a model to examine the influential factors of the TAM model, along with the external factor of technical support, in adopting cloud computing. The proposed model has been tested through multiple hypotheses using calculation tools and data collected from customers through questionnaires. The findings of the study show that the TAM model, along with external factors, can be applied to measure the expected adoption of cloud computing. The study presents an investigation of the influential factors and further recommendations for adopting cloud computing in Saudi Arabia.
Keywords: cloud computing, acceptability, adoption, determinants
Procedia PDF Downloads 193
16658 Utilization of an Object Oriented Tool to Perform Model-Based Safety Analysis According to Extended Failure System Models
Authors: Royia Soliman, Salma ElAnsary, Akram Amin Abdellatif, Florian Holzapfel
Abstract:
Model-Based Safety Analysis (MBSA) is an approach in which the system and safety engineers share a common system model created using a model-based development process. The model can also be extended with the failure modes of the system components. There are two well-known approaches for adding fault behaviours to system models. The first is to embed the failure behaviour directly into the system design. The second is to develop a fault model separately from the system model and then combine the two independent models for safety analysis. This paper introduces a hybrid MBSA approach that uses informal, abstracted models to investigate failure behaviours. The approach combines various concepts such as directed graph traversal, event lists, and Constraint Satisfaction Problems (CSP), and is implemented in an object-oriented programming language. Components are abstracted to their failure logic and to the relationships between connected components. The implemented approach is tested on various flight control systems, including electrical and multi-domain examples. The various tests are analyzed, and a comparison with different approaches is presented.
Keywords: flight control systems, model based safety analysis, safety assessment analysis, system modelling
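A minimal, hypothetical sketch of the informal-abstraction idea (component objects carrying failure logic, connected in a directed graph that is traversed to propagate faults); the class names, signal states and toy architecture are invented for illustration and do not come from the paper's tool.

```python
class Component:
    """A component abstracted to its failure logic and its connections to upstream components."""
    def __init__(self, name, failure_logic):
        self.name = name
        self.failure_logic = failure_logic   # maps list of input states -> output state
        self.inputs = []                     # upstream components (directed-graph edges)

    def connect(self, upstream):
        self.inputs.append(upstream)

    def output(self, failed):
        """Propagate failures by traversing the graph towards the sources."""
        if self.name in failed:
            return "failed"
        in_states = [c.output(failed) for c in self.inputs]
        return self.failure_logic(in_states)

# Toy flight-control power chain: two redundant supplies feeding one actuator controller.
source_ok = lambda states: "ok"                                            # no upstream inputs
needs_any = lambda states: "ok" if any(s == "ok" for s in states) else "failed"

bus_a = Component("bus_a", source_ok)
bus_b = Component("bus_b", source_ok)
controller = Component("controller", needs_any)
controller.connect(bus_a)
controller.connect(bus_b)

# Enumerate failure combinations (a crude stand-in for the event-list / CSP analysis).
for failed in [set(), {"bus_a"}, {"bus_a", "bus_b"}]:
    print(sorted(failed) or "none", "->", controller.output(failed))
```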
Procedia PDF Downloads 164
16657 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concerns about volatility transmission among commodities. The problem is further exacerbated when the volatility of one commodity is driven by fluctuations in the prices of others, making decisions on hedging strategy both costly and potentially ineffective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat, and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing volatility spillovers under different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmissions between these commodities under different economic conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on economic conditions and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016 are used, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions. In addition, the hedge effectiveness results suggest the optimal cross-hedge strategies under different economic conditions, especially economic upturns and downturns.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
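For context, the minimum-variance cross-hedge ratio referred to here has the standard form (with the regime-dependent correlation supplied by the fitted copula; the paper's exact expression is not given in the abstract),

\[
h^{*}_{t} = \rho_{sf,t}\,\frac{\sigma_{s,t}}{\sigma_{f,t}},
\]

where sigma_{s,t} and sigma_{f,t} are the conditional volatilities of the spot and cross-futures returns from the Markov-switching GARCH margins and rho_{sf,t} is their conditional correlation; both differ between the upturn and downturn regimes, which is why the optimal hedge strategy changes with the economic condition.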
Procedia PDF Downloads 202
16656 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality
Authors: K. A. Adeleke
Abstract:
Often in epidemiological research, introducing a stratified Cox model can account for interactions of some inherent factors with major, noticeable factors. This work aims at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed with a view to handling multilevel factors that interact with others. The model is used as a tool to analyze infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with multilevel factors (tetanus, polio, and breastfeeding) correlated with the main factors (sex, size, and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The model, tested on the data, showed a good fit and performed differently depending on the levels of the interaction with the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information compared with the usual Cox model. Simulation results showed that the present method produces better estimates in terms of bias and lower standard errors and/or mean square errors.
Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables
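A minimal sketch of a lifelines-style stratified Cox fit on a toy data frame (column names and synthetic data are invented; the paper's own estimator is the proposed alternative stratified Cox model, not this off-the-shelf fit):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(24, n),               # survival time in months (synthetic)
    "death": rng.integers(0, 2, n),               # event indicator
    "sex": rng.integers(0, 2, n),                 # main covariates
    "size": rng.integers(0, 3, n),
    "caesarean": rng.integers(0, 2, n),
    "tetanus": rng.integers(0, 2, n),             # multilevel factor used as stratum
})

# Stratifying on the correlated multilevel factor lets each stratum keep its own
# baseline hazard while the regression coefficients are shared across strata.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death", strata=["tetanus"])
cph.print_summary()
```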
Procedia PDF Downloads 557
16655 Non-Universality in Barkhausen Noise Signatures of Thin Iron Films
Authors: Arnab Roy, P. S. Anil Kumar
Abstract:
We discuss angle-dependent changes in the Barkhausen noise signatures of thin epitaxial Fe films upon altering the angle of the applied field. We observe a sub-critical to critical phase transition in the hysteresis loop of the sample upon increasing the out-of-plane component of the applied field. The observations are discussed in the light of simulations of a 2D Gaussian Random Field Ising Model, with references to a reducible form of the Random Anisotropy Ising Model.
Keywords: Barkhausen noise, planar Hall effect, Random Field Ising Model, Random Anisotropy Ising Model
Procedia PDF Downloads 388
16654 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System
Authors: Deyu Zhou, Xiao Xue, Lizhen Cui
Abstract:
With increasing collaboration among various services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges for the evolution analysis of the CMSE. Many researchers have modelled and analyzed the evolution process of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. Therefore, this paper proposes an integrated artificial society model for the cloud manufacturing service economic system that considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts: the agent model, the environment model, and the rules model (Agent-Environment-Rules, AER): (1) the agent model considers important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behaviour mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational and experimental results.
Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks
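A skeletal, assumed rendering of the three AER parts as they might appear in an agent-based simulation loop; the class names, heuristics and market dynamics are illustrative only and are not the authors' implementation.

```python
import random

class ServiceAgent:
    """Agent model: heterogeneous, boundedly rational perceive-decide-act cycle."""
    def __init__(self, name, quality, price):
        self.name, self.quality, self.price = name, quality, price
        self.revenue = 0.0

    def perceive(self, state):
        return state["demand"]                     # partial view of the environment

    def decide(self, demand):
        # Bounded rationality: adjust price by a crude heuristic rather than full optimisation
        self.price *= 1.05 if demand > 0.5 else 0.95

    def act(self, state):
        if random.random() < self.quality * state["demand"]:
            self.revenue += self.price

class Environment:
    """Environment model: the shared activity space of all agents."""
    def __init__(self):
        self.state = {"demand": 0.5}

def rules(env):
    """Rules model: the external driving force of system evolution (e.g. demand shocks)."""
    env.state["demand"] = min(1.0, max(0.0, env.state["demand"] + random.uniform(-0.1, 0.1)))

agents = [ServiceAgent(f"svc{i}", random.uniform(0.3, 0.9), 10.0) for i in range(20)]
env = Environment()
for _ in range(100):                    # a toy computational experiment
    rules(env)
    for a in agents:
        a.decide(a.perceive(env.state))
        a.act(env.state)
print("mean revenue:", sum(a.revenue for a in agents) / len(agents))
```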
Procedia PDF Downloads 79
16653 Improving Post Release Outcomes
Authors: Michael Airton
Abstract:
This case study examines the development of a new service delivery model for prisons that focuses on using NGOs to provide more effective case management and post-release support functions. The model includes the co-design of the service delivery model and innovative commercial agreements that encourage embedded service providers within the prison and continuity of services post release, with outcomes-based payment mechanisms. The collaboration of prison staff, probation and parole officers, and NGOs is critical to the success of the model and its ability to deliver value and positive outcomes in relation to desistance from offending.
Keywords: collaborative service delivery, desistance, non-government organisations, post release support services
Procedia PDF Downloads 390
16652 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals
Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou
Abstract:
In data-driven prognostic methods, the prediction accuracy of remaining useful life estimation for bearings mainly depends on the performance of health indicators, which are usually constructed by fusing statistical features extracted from vibration signals. However, existing health indicators have two drawbacks: (1) different ranges of the statistical features make different contributions to the health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are used to extract time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images by using the continuous wavelet transform to form the original feature sets. Second, the convolutional and pooling layers of the convolutional neural network select the most sensitive features of the time-frequency images from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life
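A compact PyTorch sketch of the architecture as described (a per-frame CNN over CWT time-frequency images whose features feed a GRU); layer sizes, image resolution and the scalar health-indicator head are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CNNGRUHealthIndicator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Convolution + pooling layers select sensitive features from each time-frequency image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                       # -> 32 * 4 * 4 = 512 features per frame
        )
        self.gru = nn.GRU(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # scalar health indicator / RUL proxy

    def forward(self, x):
        # x: (batch, seq_len, 1, H, W) sequence of CWT images
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])            # prediction from the last time step

model = CNNGRUHealthIndicator()
dummy = torch.randn(8, 10, 1, 64, 64)           # 8 samples, 10 consecutive CWT frames each
print(model(dummy).shape)                       # torch.Size([8, 1])
```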
Procedia PDF Downloads 133
16651 Speeding up Nonlinear Time History Analysis of Base-Isolated Structures Using a Nonlinear Exponential Model
Authors: Nicolò Vaiana, Giorgio Serino
Abstract:
The nonlinear time history analysis of seismically base-isolated structures can require a significant computational effort when the behaviour of each seismic isolator is predicted by adopting the widely used differential-equation Bouc-Wen model. In this paper, a nonlinear exponential model, able to simulate the response of seismic isolation bearings within a relatively large displacement range, is described and adopted in order to reduce the numerical computations and speed up the nonlinear dynamic analysis. Compared to the Bouc-Wen model, the proposed one does not require the numerical solution of a nonlinear differential equation at each time step of the analysis. The seismic response of a 3D base-isolated structure with a lead rubber bearing system subjected to harmonic earthquake excitation is simulated by modelling each isolator with the proposed analytical model. The comparison of the numerical results and computational time with those obtained by modelling the lead rubber bearings using the Bouc-Wen model demonstrates the good accuracy of the proposed model and its capability to significantly reduce the computational effort of the analysis.
Keywords: base isolation, computational efficiency, nonlinear exponential model, nonlinear time history analysis
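For orientation, the differential-equation model that the proposed exponential model is benchmarked against is the Bouc-Wen hysteresis law, commonly written (in one standard form; the paper's exact constants are not given in the abstract) as

\[
F(t) = \alpha k\, x(t) + (1-\alpha)\, k\, z(t), \qquad
\dot{z} = \dot{x}\left[ A - \left(\beta\,\mathrm{sgn}(z\dot{x}) + \gamma\right) |z|^{n} \right],
\]

whose internal variable z must be integrated numerically at every time step of the analysis, which is precisely the cost the exponential model is designed to avoid.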
Procedia PDF Downloads 384
16650 Optimal Location of the I/O Point in the Parking System
Authors: Jing Zhang, Jie Chen
Abstract:
In this paper, we deal with the optimal I/O point location in an automated parking system. In this system, the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of AS/RS (automated storage and retrieval system) design, we obtain a continuous model in units of time. For the single-command cycle under the randomized storage policy, we calculate the probability density function of the system travel time and thus develop a travel time model, which is shown to perform well by comparison with the discrete case. Finally, we establish the optimal model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point is at the middle of the upper left-hand edge.
Keywords: parking system, optimal location, response time, S/R machine
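A small Monte Carlo sketch of the kind of comparison implied (the S/R machine moves simultaneously in both directions, so a single one-way trip takes the Chebyshev time max(tx, ty)); the rack dimensions, speeds and candidate I/O locations are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

W, H = 60.0, 20.0            # rack width and height (m), assumed
vx, vy = 2.0, 1.0            # independent horizontal/vertical speeds (m/s), assumed

def expected_travel_time(io, n=200_000):
    """Expected single-command travel time (to a random slot and back) from an I/O point."""
    slots = np.column_stack([rng.uniform(0, W, n), rng.uniform(0, H, n)])
    one_way = np.maximum(np.abs(slots[:, 0] - io[0]) / vx,
                         np.abs(slots[:, 1] - io[1]) / vy)   # Chebyshev travel time
    return 2 * one_way.mean()

for label, io in [("lower-left corner", (0.0, 0.0)),
                  ("middle of the left edge", (0.0, H / 2)),
                  ("upper-left corner", (0.0, H))]:
    print(f"{label}: {expected_travel_time(io):.1f} s")
```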
Procedia PDF Downloads 409
16649 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows
Authors: J. P. Panda, K. Sasmal, H. V. Warrior
Abstract:
Eddy viscosity models in turbulence modelling can be broadly classified as linear and nonlinear models. Linear formulations are simple and require fewer computational resources, but they have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for the Reynolds stress anisotropy is adopted for the formulation of the eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-epsilon model. The proposed model can be easily incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for the prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and other related quantities (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.
Keywords: eddy viscosity, turbulence modeling, GOTM, CFD
Procedia PDF Downloads 202
16648 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Within the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall, aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket-tip time series. In this context, the arrival pattern of rain gauge bucket-tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t). Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
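A minimal sketch of the DSPP idea used here: bucket-tip arrivals simulated as a Poisson process whose rate is modulated by an unobserved two-state Markov chain (a Markov-modulated Poisson process). The rates, transition intensities and horizon are placeholder values, not fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

rates = np.array([0.2, 5.0])          # tips per hour in the "dry-ish" and "rainy" states (assumed)
Q = np.array([[-0.1, 0.1],            # generator of the hidden Markov process X(t) (per hour)
              [ 0.5, -0.5]])

def simulate_mmpp(T=200.0):
    """Simulate bucket-tip times of a Markov-modulated Poisson process over [0, T] hours."""
    t, state, tips = 0.0, 0, []
    while t < T:
        hold = rng.exponential(1.0 / -Q[state, state])   # sojourn time in the current state
        end = min(t + hold, T)
        # Within a sojourn the process is homogeneous Poisson, so tips can be placed uniformly
        n = rng.poisson(rates[state] * (end - t))
        tips.extend(np.sort(rng.uniform(t, end, n)))
        t, state = end, 1 - state                        # two-state chain: switch to the other state
    return np.array(tips)

tips = simulate_mmpp()
print(f"{len(tips)} simulated bucket tips; mean inter-tip time "
      f"{np.mean(np.diff(tips)):.2f} h")
```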
Procedia PDF Downloads 278
16647 Two Quasiparticle Rotor Model for Deformed Nuclei
Authors: Alpana Goel, Kawalpreet Kalra
Abstract:
The study of the level structures of deformed nuclei is one of the most complex topics in nuclear physics. For the description of level structure, a simple model is good enough to bring out the basic features, which may then be further refined. The low-lying level structures of these nuclei can, therefore, be understood in terms of the Two Quasiparticle plus axially symmetric Rotor Model (TQPRM). The formulation of the TQPRM for deformed nuclei is presented. The analysis of available experimental data on two-quasiparticle rotational bands of deformed nuclei reveals unusual features such as signature dependence, odd-even staggering, signature inversion, and signature reversal. These signature effects are discussed within the framework of the TQPRM. The model is efficient in reproducing the large odd-even staggering and anomalous features observed in even-even and odd-odd deformed nuclei. The effects of particle-particle and Coriolis coupling are well established from the model. A detailed description of the model, with implications for deformed nuclei, is presented in the paper.
Keywords: deformed nuclei, signature effects, signature inversion, signature reversal
Procedia PDF Downloads 158
16646 Pressure Distribution, Load Capacity, and Thermal Effect with Generalized Maxwell Model in Journal Bearing Lubrication
Authors: M. Guemmadi, A. Ouibrahim
Abstract:
This numerical investigation aims to evaluate how a viscoelastic lubricant described by a generalized Maxwell model affects the pressure distribution, the load capacity, and the thermal effects in journal bearing lubrication. For this purpose, we use a CFD package complemented by adapted user-defined functions (UDFs) to solve the coupled equations of momentum, energy, and the viscoelastic (generalized Maxwell) model. Two parameters, viscosity and relaxation time, are varied to show how viscoelasticity substantially affects the pressure distribution, the load capacity, and the thermal transfer in comparison with a Newtonian lubricant. These results are also compared with the available published results.
Keywords: journal bearing, lubrication, Maxwell model, viscoelastic fluids, computational modelling, load capacity
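For reference, a generalized (multi-mode) Maxwell model superposes N Maxwell elements, each with its own relaxation time; in one standard form (the mode parameters and exact formulation used in the paper are not given in the abstract),

\[
\boldsymbol{\tau} = \sum_{k=1}^{N} \boldsymbol{\tau}_{k}, \qquad
\boldsymbol{\tau}_{k} + \lambda_{k}\, \overset{\nabla}{\boldsymbol{\tau}}_{k} = 2\,\eta_{k}\, \mathbf{D},
\]

where lambda_k and eta_k are the relaxation time and viscosity of mode k, D is the rate-of-deformation tensor, and the triangle denotes an upper-convected derivative; the two parameters varied in the study, viscosity and relaxation time, enter the lubrication problem through such mode equations.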
Procedia PDF Downloads 542
16645 Design of an Electric Vehicle Model with a Dynamo Drive Setup Using Model-Based Development (MBD) (EV Using MBD)
Authors: Gondu Vykunta Rao, Madhuri Bayya, Aruna Bharathi M., Paramesw Chidamparam, B. Murali
Abstract:
The increase in software content in today's electric vehicles is drawing attention to their wide-ranging characteristics, from low emissions to high efficiency, whereas chemical batteries have major shortcomings, such as limited cycle life, power density, and cost. For understanding and visualization, companies are turning toward the virtual vehicle to test their designs in software, which is known as simulation in the loop (SIL). In this project, in addition to the electric vehicle (EV) technology, we add a dynamo to the vehicle for regenerative braking. Traditionally, the dynamo principle is used for bicycle lighting. Here, by using the same mechanism, we both run the vehicle and charge it, progressing from system-level simulation to model in the loop (MIL) and then to hardware in the loop (HIL) using model-based development.
Keywords: electric vehicle, simulation in the loop (SIL), model in loop (MIL), hardware in loop (HIL), dynamos, model-based development (MBD), permanent magnet synchronous motor (PMSM), current control (CC), field-oriented control (FOC), regenerative braking
Procedia PDF Downloads 122
16644 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project, and it focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function that measures the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be effectively used to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and the prediction of the surface, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization
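A toy sketch of the fit-to-measurement idea (gradient-based minimisation of a regularized least-squares cost between a forward model and noisy data); the Gaussian "trench profile" stand-in, the parameter names and the use of scipy.optimize in place of the adjoint/TAPENADE machinery are all assumptions made for illustration, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

x = np.linspace(-2.0, 2.0, 200)                      # position across the trench (mm)

def forward(params, x):
    """Toy stand-in for the AWJM PDE: a Gaussian trench of given depth and width."""
    depth, width = params
    return -depth * np.exp(-(x / width) ** 2)

true_params = np.array([0.5, 0.8])
measured = forward(true_params, x) + rng.normal(0, 0.01, x.size)   # synthetic noisy profile

def cost(params, reg=1e-3):
    """Least-squares misfit plus a small Tikhonov term to stabilise the ill-posed fit."""
    misfit = forward(params, x) - measured
    return 0.5 * np.sum(misfit ** 2) + reg * np.sum(params ** 2)

result = minimize(cost, x0=np.array([0.1, 1.5]), method="L-BFGS-B",
                  bounds=[(1e-3, None), (1e-3, None)])
print("identified parameters:", result.x, " true:", true_params)
```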
Procedia PDF Downloads 316