Search results for: threshold models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7207


4807 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility

Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier

Abstract:

A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.
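The ligaments above are modeled as non-linear one-dimensional elastic elements. A common form for such a law (not necessarily the exact one used in this work) is a tension-only curve with a compliant "toe" region followed by a linear region; the constants below are illustrative, not taken from the paper:

```python
def ligament_force(strain, k=100.0, toe_strain=0.03):
    """Tension-only, piecewise non-linear ligament law: quadratic 'toe'
    region up to toe_strain, then linear, matched in value and slope
    (C1-continuous) at toe_strain. Constants are illustrative."""
    if strain <= 0.0:
        return 0.0  # ligaments carry no load in compression
    if strain < toe_strain:
        return k * strain**2 / (2.0 * toe_strain)  # compliant toe region
    return k * (strain - toe_strain / 2.0)         # linear region
```

The quadratic branch gives zero stiffness at zero strain and reaches the full stiffness k exactly at the toe strain, so the force-strain curve has no kink.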

Keywords: hybrid, modeling, fast simulation, lumbar spine

Procedia PDF Downloads 298
4806 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier

Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez

Abstract:

The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works of Le Corbusier's late career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether his use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also aimed to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions by assessing these projects using BIM models and energy performance simulations in DesignBuilder. Accurate models were built from original historical data gathered through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.

Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses

Procedia PDF Downloads 48
4805 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions

Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak

Abstract:

In this study, the solubility of Chevreul’s salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul’s salt used in the studies was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration 0.4 M, copper sulphate concentration 0.25 M, temperature 60°C, stirring speed 600 rev/min, pH 4, and reaction time 15 min). The selected parameters affecting solubility were reaction temperature, concentration of ammonium chloride, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on Chevreul’s salt solubility was examined, and the integrated rate expression of the dissolution rate was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul’s salt increases with increasing temperature, ammonium chloride concentration, and stirring speed, while it decreases with increasing solid/liquid ratio. Applying the experimental results to the kinetic models shows that the dissolution rate of Chevreul’s salt is controlled by diffusion through the ash (product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, with the effects of the parameters on Chevreul’s salt solubility, is: 1 − 3(1 − X)^(2/3) + 2(1 − X) = [2.96×10^13 · (C_A)^3.08 · (S/L)^−0.38 · (W)^1.23 · e^(−9001.2/T)] · t
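The integrated rate expression is the classic shrinking-core model under ash-layer diffusion control. A sketch of how it can be evaluated numerically, solving for conversion X at a given time by bisection; the parameter values in the test are illustrative, not the experimental conditions:

```python
import math

def ash_layer_lhs(X):
    """Left side of the ash-layer diffusion model: 1 - 3(1-X)^(2/3) + 2(1-X).
    Monotonically increases from 0 at X=0 to 1 at X=1."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

def rate_constant(C_A, SL, W, T):
    """Apparent rate constant from the fitted correlation in the abstract:
    k = 2.96e13 * C_A^3.08 * (S/L)^-0.38 * W^1.23 * exp(-9001.2/T),
    with T in kelvin and the other units as in the original correlation."""
    return 2.96e13 * C_A**3.08 * SL**-0.38 * W**1.23 * math.exp(-9001.2 / T)

def conversion_at(t, C_A, SL, W, T):
    """Invert ash_layer_lhs(X) = k*t for X by bisection on [0, 1)."""
    k = rate_constant(C_A, SL, W, T)
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ash_layer_lhs(mid) < k * t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is safe here because the left-hand side increases monotonically in X, so the equation has exactly one root for any non-negative k·t.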

Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics

Procedia PDF Downloads 291
4804 Research Analysis of Urban Area Expansion Based on Remote Sensing

Authors: Sheheryar Khan, Weidong Li, Fanqian Meng

Abstract:

The Urban Heat Island (UHI) effect is one of the foremost ecological and socioeconomic problems of urbanization. In this phenomenon, human-made urban areas replace the rural landscape with surfaces of higher thermal conductivity and urban warmth; as a result, the temperature in the city is higher than in the surrounding rural areas. To assess the evidence of this phenomenon in the Zhengzhou city area, the temperature variations in the urban area were observed following a systematic method. Landsat 8 satellite images from 2013 to 2015 were used to calculate the Urban Heat Island (UHI) effect, along with NPP-VIIRS night-time remote sensing data, to better identify the center of the built-up area. To further support the evidence, the correlation between land surface temperature and the normalized difference vegetation index (NDVI) was calculated using the red (band 4) and near-infrared (band 5) bands of the Landsat 8 data. A mono-window algorithm was applied to retrieve the land surface temperature (LST) distribution from Landsat 8 thermal bands 10 and 11, first converting digital numbers to top-of-atmosphere (TOA) radiance and then to at-sensor brightness temperature. Along with the Landsat 8 data, the NPP-VIIRS night-light data were preprocessed to extract the research area. The Landsat 8 and NPP night-light results were then compared to locate the center of the built-up area of Zhengzhou city.
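The NDVI and brightness-temperature steps can be sketched as follows. The band-10 thermal constants are the ones published for Landsat 8; the radiance rescaling factors are scene-specific values read from an image's MTL metadata, so the values used in practice (and in the usage below) are placeholders:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized Difference Vegetation Index from Landsat 8 band 4 (red)
    and band 5 (near-infrared) reflectance arrays."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

def brightness_temp(dn, ml, al, k1=774.8853, k2=1321.0789):
    """Convert band-10 digital numbers to at-sensor brightness temperature (K).
    ml, al are the scene's radiance rescaling factors from its MTL metadata;
    k1, k2 are the band-10 thermal constants published for Landsat 8."""
    radiance = ml * np.asarray(dn, dtype=float) + al  # TOA spectral radiance
    return k2 / np.log(k1 / radiance + 1.0)
```

With typical band-10 rescaling factors (ml ≈ 3.342e-4, al ≈ 0.1), mid-range digital numbers map to plausible brightness temperatures near 300 K.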

Keywords: built-up area, land surface temperature, mono-window algorithm, NDVI, remote sensing, threshold method, Zhengzhou

Procedia PDF Downloads 128
4803 Kýklos Dimensional Geometry: Entity Specific Core Measurement System

Authors: Steven D. P Moore

Abstract:

A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in ℝⁿ. This entity-specific derived geometric measurement system shares methods with fractal geometry, in which a ‘fractal transformation operator’ is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union equals the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method lies in allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from the celestial to the subatomic. This offers many possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits).

Keywords: Kyklos, geometry, measurement, celestial, dimension

Procedia PDF Downloads 155
4802 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI-processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is further being enhanced to simulate 1:1 model of Euro-zone (i.e. 322 million agents).
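The partitioning idea described above, keeping employer-employee interactions local to one MPI rank while balancing computational load across ranks, can be sketched without MPI itself. The greedy longest-processing-time heuristic below is an illustration of that balancing step, not the authors' actual partitioner:

```python
import heapq

def partition_firms(employee_counts, n_procs):
    """Greedy longest-processing-time partition: assign each firm (and its
    employees) to the currently lightest rank, so employer-employee
    interactions stay rank-local. Returns a dict rank -> list of firm ids."""
    loads = [(0, rank) for rank in range(n_procs)]
    heapq.heapify(loads)
    assignment = {rank: [] for rank in range(n_procs)}
    # Placing the largest firms first gives the classic LPT imbalance bound.
    for firm in sorted(range(len(employee_counts)),
                       key=lambda i: -employee_counts[i]):
        load, rank = heapq.heappop(loads)
        assignment[rank].append(firm)
        heapq.heappush(loads, (load + employee_counts[firm], rank))
    return assignment
```

In a real DMP implementation each returned subset would become the agent population of one MPI process, with the remaining interaction graphs replicated or proxied as the abstract describes.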

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 115
4801 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study analyzes climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modeling cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and the correlation between factors is low, so multicollinearity is not a problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in predicting cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production affect production positively. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
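The multiple linear regression step can be sketched with ordinary least squares; the predictors and data below are placeholders standing in for series such as temperature, rainfall, and fertilizer consumption, not the study's data:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares: returns [intercept, coef_1, ..., coef_p]
    for a design matrix X of predictor columns and a response vector y."""
    X = np.asarray(X, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef
```

The sign of each fitted coefficient is what supports statements like "temperature negatively affects yield" in the abstract, subject to the usual regression diagnostics.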

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 76
4800 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of which types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns about everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). In total, 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03).
However, after controlling for depression, only two specific complaints of remembering appointments, meetings, and engagements and understanding spoken directions and instructions were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p <0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.

Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 68
4799 Assessment of Climate Change Impact on Meteorological Droughts

Authors: Alireza Nikbakht Shahbazi

Abstract:

There are various factors that affect climate change, and drought is one of them. Efficient methods for estimating climate change impacts on drought therefore need to be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts is simulated for future periods (2011 to 2099). The standardized precipitation index (SPI) was selected as a drought index and calculated from mean monthly precipitation data in the Karoon3 watershed over 6, 12, and 24 month periods. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data for future periods were simulated under the general circulation models and IPCC climate change scenarios, and the drought status was then analyzed using SPI under climate change effects. Results showed that the differences between monthly maximum and minimum temperature will decrease under climate change, and spring precipitation will increase while summer and autumn rainfall will decrease. Precipitation occurs mainly between January and May in future periods, and the decline in summer and autumn precipitation leads to short-term droughts in the study region. The normal and wet SPI categories are more frequent under the B1 and A2 emission scenarios than under A1B.
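The SPI computation aggregates precipitation over a 6, 12, or 24 month window and maps the totals onto a standardized scale. A simplified z-score sketch of the idea (operational SPI fits a gamma distribution to the aggregates before transforming to the standard normal, a step omitted here):

```python
import math

def spi(precip, window):
    """Simplified SPI: aggregate monthly precipitation over `window` months,
    then standardize the rolling sums to zero mean and unit variance.
    A z-score stand-in for the gamma-based operational SPI."""
    sums = [sum(precip[i:i + window]) for i in range(len(precip) - window + 1)]
    mean = sum(sums) / len(sums)
    var = sum((s - mean) ** 2 for s in sums) / len(sums)
    sd = math.sqrt(var)
    return [(s - mean) / sd for s in sums]
```

Negative SPI values then indicate drier-than-normal windows (drought categories), positive values wetter-than-normal ones.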

Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed

Procedia PDF Downloads 229
4798 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees

Authors: M. Eskandarighadi, C. R. McGann

Abstract:

It is observed from past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and extends infinitely in the horizontal direction; therefore, tying the side boundaries together is one way to model this behavior, as wave passage is assumed to be purely vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, or 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of these factors for a better understanding of local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, make clear the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that account for the free-field motion that would exist far from the model domain and apply it to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.

Keywords: boundary condition, free-field, opensees, site response analysis, wave propagation

Procedia PDF Downloads 137
4797 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero-inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test is asymptotically standard normal under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero-inflation in the ZIP model with measurement error. In real data analysis, whether or not measurement error in covariates is considered, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will provide statistically more reliable and precise information.
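The excess-zeros comparison above follows directly from the Poisson zero probability P(Y=0) = e^(-λ). With the reported mean of 1.33, an expected zero frequency of 156 implies a sample of roughly 590 observations; that sample size is inferred here for illustration, not stated in the abstract:

```python
import math

def expected_zero_count(n, lam):
    """Expected number of zero counts among n observations if the counts
    were Poisson with mean lam: n * P(Y = 0) = n * exp(-lam)."""
    return n * math.exp(-lam)
```

When the observed zero frequency (206) clearly exceeds this Poisson expectation (about 156), a formal zero-inflation test such as the score test in the abstract is warranted.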

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 271
4796 Effects of Nitrogen Addition on Litter Decomposition and Nutrient Release in a Temperate Grassland in Northern China

Authors: Lili Yang, Jirui Gong, Qinpu Luo, Min Liu, Bo Yang, Zihe Zhang

Abstract:

Anthropogenic activities have increased nitrogen (N) inputs to grassland ecosystems. Knowledge of the impact of N addition on litter decomposition is critical to understanding ecosystem carbon cycling and its responses to global climate change. The aim of this study was to investigate the effects of N addition and litter type on litter decomposition in a semi-arid temperate grassland in Inner Mongolia, northern China, during growing and non-growing seasons, and to identify the relation between litter decomposition and C:N:P stoichiometry in the litter-soil continuum. Six levels of N addition were applied: CK, N1 (0 g N m−2 yr−1), N2 (2 g N m−2 yr−1), N3 (5 g N m−2 yr−1), N4 (10 g N m−2 yr−1) and N5 (25 g N m−2 yr−1). Litter decomposition rates and nutrient release differed greatly among N addition gradients and litter types. N addition promoted litter decomposition of S. grandis but had no significant influence on L. chinensis litter, indicating that S. grandis litter decomposition was more sensitive to N addition than that of L. chinensis. The critical threshold of N addition for promoting mixed litter decomposition was 10-25 g N m−2 yr−1. N addition altered the balance of C:N:P stoichiometry between litter, soil, and microbial biomass. During decomposition, the L. chinensis litter N:P was higher in the N2-N4 plots than in CK, while the S. grandis litter C:N was lower in the N3 and N4 plots, indicating that litter N or P content does not satisfy the demand of microbial decomposers as N addition increases. As a result, S. grandis litter exhibited net N immobilization, while L. chinensis litter exhibited net P immobilization. Mixed litter C:N:P stoichiometry satisfied the demand of microbial decomposers and showed net mineralization during the decomposition process. With increasing N deposition in the future, mixed litter would potentially promote C and nutrient cycling in this grassland ecosystem by increasing litter decomposition and nutrient release.

Keywords: C: N: P stoichiometry, litter decomposition, nitrogen addition, nutrient release

Procedia PDF Downloads 471
4795 The Impact of Digital Inclusive Finance on the High-Quality Development of China's Export Trade

Authors: Yao Wu

Abstract:

In the context of financial globalization, China has put forward the policy goal of high-quality development, and the digital economy, with its advantage in information resources, is driving China's export trade toward high-quality development. Given the long-standing financing constraints of small and medium-sized export enterprises, expanding the export scale of small and medium-sized enterprises has become a major hurdle for the development of China's export trade. This paper first adopts the hierarchical analysis method to establish an evaluation system for the high-quality development of China's export trade; second, panel data for 30 Chinese provinces from 2011 to 2018 are selected for empirical analysis to establish a model of the impact of digital inclusive finance on the high-quality development of China's export trade; and, based on the analysis of a heterogeneous-enterprise trade model, a mediating effect model is established to verify the mediating role of credit constraints in the high-quality development of China's export trade. Based on the above analysis, this paper concludes that digital inclusive finance, with its unique digital and inclusive nature, alleviates the credit constraint problem among SMEs, enhances the dual (extensive and intensive) margins of SMEs' exports, optimizes their export scale and structure, and promotes the high-quality development of regional and even national export trade. Finally, based on these findings, we propose insights and suggestions for digital inclusive finance to promote the high-quality development of export trade.

Keywords: digital inclusive finance, high-quality development of export trade, fixed effects, binary marginal effects

Procedia PDF Downloads 77
4794 Numerical and Sensitivity Analysis of Modeling the Newcastle Disease Dynamics

Authors: Nurudeen Oluwasola Lasisi

Abstract:

Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present Newcastle disease model equations with a novel quarantine-adjusted incidence and a linear incidence. We consider the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of disease-free equilibrium points was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out to identify the parameters to which disease transmission is most sensitive. This revealed that as the parameters β, ω, and Λ increase while other parameters are kept constant, the effective reproduction number R_ev increases, implying that these parameters increase the endemicity of the infection. Moreover, when the parameters μ, ε, γ, δ_1, and α increase while other parameters are kept constant, R_ev decreases, implying that these parameters decrease the endemicity of the infection, as they have negative sensitivity indices. The analytical results were verified numerically by the Differential Transformation Method (DTM), and quantitative views of the model equations were showcased. We established that as the contact rate (β) increases, R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the quarantined population decreases, R_ev decreases. The simulation results showed that the infected population increases as the susceptible population approaches zero, and the vaccinated population increases as the infected population decreases, with a simultaneous increase in the recovered population.
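Normalized sensitivity indices of the kind reported above can be computed numerically as (∂R/∂p)·(p/R). The reproduction-number expression below is a generic illustrative form, not the paper's actual R_ev:

```python
def R_ev(beta, Lam, mu, gamma):
    """Illustrative reproduction-number form (not the paper's expression):
    R_ev = beta * Lambda / (mu * (mu + gamma))."""
    return beta * Lam / (mu * (mu + gamma))

def sensitivity(f, params, name, h=1e-7):
    """Normalized sensitivity index (dR/dp) * p / R(p), estimated with a
    forward finite difference on the named parameter."""
    base = f(**params)
    bumped = dict(params)
    bumped[name] += h
    return (f(**bumped) - base) / h * params[name] / base
```

For this form the index with respect to β is exactly +1 (R_ev is linear in β) and the index with respect to γ is −γ/(μ+γ), matching the paper's pattern of positive and negative indices.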

Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical, Sensitivity analysis

Procedia PDF Downloads 33
4793 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize its decisions while improving its information at the same time. Several different algorithms and models, together with their applications to this problem, have been proposed. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
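As a point of reference for the regret quantity being modeled, a minimal epsilon-greedy bandit with cumulative expected regret can be sketched as follows; this is a textbook baseline for comparison, not the regret-regression method itself:

```python
import random

def epsilon_greedy(true_means, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy on Bernoulli arms. Returns cumulative expected regret:
    the summed gap between the best arm's mean and the pulled arm's mean."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean observed reward per arm
    best = max(true_means)
    regret = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)  # explore a random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        regret += best - true_means[arm]
    return regret
```

Regret-based comparisons (regret-regression vs. Q-learning) ask how quickly each method drives this cumulative gap toward sub-linear growth.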

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 441
4792 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement

Authors: Hu Zhenxing, Gao Jianxin

Abstract:

Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, and can only be controlled within a limited range. The distortion is non-linear, particularly in a complex image acquisition system; thus, distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in a complex image acquisition system, as in cases where microscopes and other complex lenses are involved. The errors of the distortion correction propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is applied as a prior step to convert distorted coordinates to their ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
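The mapping idea can be sketched as follows. A low-order polynomial surface is used here as a simplified stand-in for the paper's 2D B-spline basis, and the synthetic radial distortion is an assumption; the fit converts distorted calibration coordinates back to their ideal pin-hole positions.

```python
import numpy as np

# Hedged sketch of the correction idea: fit a smooth 2D mapping from
# distorted image coordinates to ideal (pin-hole) coordinates using
# calibration correspondences. A bicubic monomial basis stands in for
# the paper's 2D B-spline basis; the distortion model is illustrative.

def poly_basis(x, y):
    # Bicubic-style monomial basis in (x, y): 16 terms.
    return np.column_stack([x**i * y**j for i in range(4) for j in range(4)])

def fit_mapping(xd, yd, xi, yi):
    # Least-squares coefficients mapping distorted -> ideal, per axis.
    A = poly_basis(xd, yd)
    cx, *_ = np.linalg.lstsq(A, xi, rcond=None)
    cy, *_ = np.linalg.lstsq(A, yi, rcond=None)
    return cx, cy

def undistort(xd, yd, cx, cy):
    A = poly_basis(xd, yd)
    return A @ cx, A @ cy

# Synthetic calibration points with mild radial distortion.
rng = np.random.default_rng(0)
xi = rng.uniform(-1, 1, 400); yi = rng.uniform(-1, 1, 400)  # ideal points
r2 = xi**2 + yi**2
xd = xi * (1 + 0.05 * r2); yd = yi * (1 + 0.05 * r2)        # distorted points

cx, cy = fit_mapping(xd, yd, xi, yi)
xu, yu = undistort(xd, yd, cx, cy)
print("max residual on calibration points:", np.max(np.abs(xu - xi)))
```

Once fitted, the mapping is applied to every measured point before triangulation, so the stereo reconstruction can assume an ideal pin-hole camera.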

Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D

Procedia PDF Downloads 486
4791 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used, and to find the best wall function, two types, the standard and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity inlet condition at the flow inlet boundary, and a pressure outlet condition at the boundaries in contact with air provide the best possible results, and the standard k-ε turbulence model with the standard wall function yields results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between the numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, the difference between the numerical and experimental results increases as the P/R ratio decreases.

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 67
4790 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangle), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained to not only predict but also elucidate the mechanical and thermal conduct of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
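As a minimal sketch of the predictive step, the snippet below trains k-nearest neighbors (one of the methods the study compares) on synthetic microstructure descriptors; the rule-of-mixtures target used to generate the data is an illustrative assumption, not the study's homogenization results.

```python
import numpy as np

# Hedged sketch: predicting an effective property from microstructure
# descriptors with k-nearest neighbors. The synthetic data (a crude
# rule-of-mixtures proxy from volume fraction and contrast) is an
# assumption standing in for the study's database.

rng = np.random.default_rng(1)
n = 2000
vf = rng.uniform(0.05, 0.4, n)         # inclusion volume fraction
contrast = rng.uniform(10, 200, n)     # inclusion/matrix property ratio
X = np.column_stack([vf, contrast])
y = 1.0 + vf * (contrast - 1.0) * 0.5  # crude effective-modulus proxy

def knn_predict(X_train, y_train, X_query, k=5):
    mu, sd = X_train.mean(0), X_train.std(0)
    Xt = (X_train - mu) / sd           # standardize so neither feature dominates
    Xq = (X_query - mu) / sd
    preds = []
    for q in Xq:
        d = np.linalg.norm(Xt - q, axis=1)
        idx = np.argsort(d)[:k]        # k nearest training points
        preds.append(y_train[idx].mean())
    return np.array(preds)

X_tr, y_tr, X_te, y_te = X[:1600], y[:1600], X[1600:], y[1600:]
pred = knn_predict(X_tr, y_tr, X_te)
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print("R^2 on held-out samples:", round(r2, 3))
```

The same train/test protocol applies unchanged to the richer feature set (inclusion shape, coordinates, orientation) and to the other learners the study benchmarks.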

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 46
4789 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion are reviewed. This review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behavior, chemical behavior, electrochemical behavior, or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a model selected through a rigorous literature study. The program covers one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c) of 0.4, 0.5, and 0.6, and two cover depths of 25 mm and 50 mm; 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. From a pre-screening of the candidate models, the selected model accounts for the mechanical properties, the chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results showed that the selected model was accurate to within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion compared to the experimental output. Half-cell potential readings are also used to assess the corrosion probability, and the results showed that the mass loss is proportional to the magnitude of the negative half-cell potential readings obtained.
Additionally, a statistical analysis is carried out to determine the factor with the greatest influence on the time to corrode the reinforcement due to chloride diffusion. The factors considered are the w/c ratio, bar diameter, and cover depth. The analysis, performed with the Minitab statistical software, showed that cover depth has a more significant effect on the time to crack the concrete under chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
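The electrochemical core of many such time-to-cracking models is Faraday's law, which relates the corrosion current to steel mass loss; a hedged sketch follows, with the corrosion current density, exposed area, and exposure time chosen purely for illustration, not taken from the experiments above.

```python
# Hedged sketch: steel mass loss from a corrosion current via Faraday's
# law, a building block of the time-to-cracking models reviewed. All
# numeric inputs in the example are illustrative assumptions.

F = 96485.0      # Faraday constant, C/mol
M_FE = 55.85     # molar mass of iron, g/mol
Z = 2            # electrons transferred per Fe -> Fe2+ reaction

def mass_loss(i_corr_A_per_cm2, area_cm2, t_seconds):
    # m = M * I * t / (z * F), in grams of steel consumed.
    return M_FE * i_corr_A_per_cm2 * area_cm2 * t_seconds / (Z * F)

def time_to_loss(m_crit_g, i_corr_A_per_cm2, area_cm2):
    # Invert Faraday's law for the time to reach a critical mass loss
    # (e.g. the loss filling the porous zone before expansive pressure).
    return m_crit_g * Z * F / (M_FE * i_corr_A_per_cm2 * area_cm2)

# Example: 10 uA/cm^2 over 50 cm^2 of bar surface for one year.
year = 365 * 24 * 3600
print("one-year mass loss (g):", round(mass_loss(10e-6, 50.0, year), 2))
```

Full prediction models layer onto this the chloride diffusion time to depassivation and the concrete's resistance to the expansive rust pressure, which is where cover depth enters.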

Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion

Procedia PDF Downloads 204
4788 Survival Analysis of Identifying the Risk Factors of Affecting the First Recurrence Time of Breast Cancer: The Case of Tigray, Ethiopia

Authors: Segen Asayehegn

Abstract:

Introduction: In Tigray, Ethiopia, breast cancer is, after cervical cancer, one of the most common cancer health problems for women. Objectives: This article aims to identify the prospective and potential risk factors affecting the time to first recurrence of breast cancer in patients in Tigray, Ethiopia. Methods: The data were taken from patients' medical records registered between January 2010 and January 2020, covering a sample of 1842 breast cancer patients. Powerful non-parametric and parametric shared frailty survival regression models were applied, and model comparisons were performed. Results: Of the 1842 breast cancer patients, about 1290 (70.02%) recovered from the disease. The median cure time from breast cancer was found to be 12.8 months. The model comparison favored the lognormal parametric shared frailty survival regression model, which indicated that treatment, stage of breast cancer, smoking habit, and marital status significantly affect the first recurrence of breast cancer. Conclusion: Factors such as treatment, stage of cancer, and marital status improved, while smoking habit worsened, the time to cure breast cancer. Recommendation: The authors therefore recommend that, to reduce breast cancer health problems, the regional health sector facilities be improved. More importantly, concerned bodies and medical doctors should emphasize the identified factors during treatment, and general awareness programs on these factors should be given to the community.
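Before fitting parametric shared-frailty models, recurrence-free survival is typically summarized non-parametrically; the sketch below implements a Kaplan-Meier estimator on toy follow-up times with censoring (all values illustrative, not the Tigray data).

```python
# Hedged sketch: Kaplan-Meier estimate of a recurrence-free survival
# curve, the usual non-parametric first step before fitting parametric
# shared-frailty models. The toy times and event flags are illustrative.

def kaplan_meier(times, events):
    # times: follow-up in months; events: 1 = recurrence, 0 = censored.
    data = sorted(zip(times, events))
    s = 1.0
    at_risk = len(data)
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        n = sum(1 for tt, _ in data if tt == t)   # all leaving the risk set at t
        if d > 0:
            s *= 1 - d / at_risk                  # survival drops only at events
            curve.append((t, s))
        at_risk -= n
        i += n
    return curve

times  = [3, 5, 5, 8, 12, 12, 15, 20, 24, 30]
events = [1, 1, 0, 1,  1,  0,  1,  0,  1,  0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Parametric alternatives such as the lognormal shared-frailty model selected in the study then add covariates (treatment, stage, smoking, marital status) and a cluster-level frailty term on top of this baseline picture.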

Keywords: acceleration factor, breast cancer, Ethiopia, shared frailty survival models, Tigray

Procedia PDF Downloads 125
4787 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of MOOC (Massive Open Online Course) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN), and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore different word-level encoding functions. We have implemented these methods on our MOOC dataset, which delivered the most performant results compared with a public sentiment-analysis dataset that is further used to cross-examine the models' results.

Keywords: deep learning, data mining, gender prediction, MOOCs

Procedia PDF Downloads 125
4786 On the Perceived Awareness of Physical Education Teachers on Adoptable ICTs for PE

Authors: Tholokuhle T. Ntshakala, Seraphin D. Eyono Obono

Abstract:

Nations still find it quite difficult to win mega sport competitions despite the major contribution of sport to society in terms of social and economic development, personal health, and education. Even though the world of sports has been transformed into a huge global economy, it is important to note that the first step in sport is usually its introduction to children at school through physical education, or PE. In other words, nations that do not win mega sport competitions also suffer from weak and neglected PE systems. This neglect of PE systems is the main motivation of this research, which aims to examine the factors affecting the perceived awareness of physical education teachers of the ICTs that are adoptable for the teaching and learning of physical education. Two types of research objectives materialize this aim: relevant theories are identified for the analysis of the perceived ICT awareness of PE teachers, and subsequent models are compiled and designed from the existing literature; these theories and models are then tested empirically through a survey of PE teachers from the Camperdown magisterial district of the KwaZulu-Natal province of South Africa. The main hypothesis at the heart of this study is the relationship between the demographics of PE teachers, their behavior both as individuals and as social entities, and their perceived awareness of the ICTs that are adoptable for PE, as postulated by the existing literature; except that this study categorizes human behavior under performance expectancy, computer attitude, and social influence. This hypothesis was partially confirmed by the survey conducted in this research, in the sense that performance expectancy and teachers' age, gender, computer usage, and class size were found to be the only factors affecting their awareness of ICTs for physical education.

Keywords: human behavior, ICT awareness, physical education, teachers

Procedia PDF Downloads 252
4785 Study on the Impact of Power Fluctuation, Hydrogen Utilization, and Fuel Cell Stack Orientation on the Performance Sensitivity of PEM Fuel Cell

Authors: Majid Ali, Xinfang Jin, Victor Eniola, Henning Hoene

Abstract:

The performance of proton exchange membrane (PEM) fuel cells is sensitive to several factors, including power fluctuations, hydrogen utilization, and the orientation of the fuel cell stack. In this study, we investigate the impact of these factors on the performance of a PEM fuel cell. We start by analyzing the power fluctuations that are typical of renewable energy systems and their effects on the performance of a 50 W fuel cell. Next, we examine the hydrogen utilization rate (0-1000 mL/min) and its impact on the cell's efficiency and durability. Finally, we investigate the orientation of the fuel cell stack (three different positions), which can significantly affect the cell's lifetime and overall performance. Our analysis is based on experimental results, further validated by comparison with simulations and manufacturer data. The results indicate that power fluctuations can cause significant variations in the fuel cell's voltage and current, leading to a reduction in its performance. Moreover, we show that increasing the hydrogen utilization rate beyond a certain threshold leads to a decrease in the fuel cell's efficiency. Finally, our analysis demonstrates that the orientation of the fuel cell stack can affect its performance and lifetime due to the non-uniform distribution of reactants and products. In summary, our study highlights the importance of considering power fluctuations, hydrogen utilization, and stack orientation in designing and optimizing PEM fuel cell systems. The findings can be useful for researchers and engineers working on the development of fuel cell systems for various applications, including transportation, stationary power generation, and portable devices.
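The voltage-current sensitivity discussed above can be illustrated with a textbook polarization model, where the cell voltage is the open-circuit value minus activation, ohmic, and concentration losses; all coefficients below are generic illustrative values, not parameters of the 50 W stack tested in the paper.

```python
import math

# Hedged sketch: a textbook PEM polarization curve,
#   V = E_oc - activation loss - ohmic loss - concentration loss,
# showing how operating current (and hence power fluctuation) moves the
# cell along the curve. All coefficients are generic illustrative values.

def cell_voltage(i, e_oc=1.0, a=0.06, i0=1e-4, r=0.2, m=3e-5, n=8.0):
    # i: current density (A/cm^2)
    v_act = a * math.log(i / i0)   # activation loss (Tafel form)
    v_ohm = r * i                  # ohmic loss
    v_conc = m * math.exp(n * i)   # concentration (mass-transport) loss
    return e_oc - v_act - v_ohm - v_conc

for i in (0.1, 0.4, 0.8):
    v = cell_voltage(i)
    print(f"i={i:.1f} A/cm^2  V={v:.3f} V  P={v * i:.3f} W/cm^2")
```

A fluctuating load sweeps the operating point along this curve, which is one simple way to rationalize the voltage and current variations reported under power fluctuations.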

Keywords: fuel cell, proton exchange membrane, renewable energy, power fluctuation, experimental

Procedia PDF Downloads 117
4784 Structural Characterization of TIR Domains Interaction

Authors: Sara Przetocka, Krzysztof Żak, Grzegorz Dubin, Tadeusz Holak

Abstract:

Toll-like receptors (TLRs) play a central role in the innate immune response and inflammation by recognizing pathogen-associated molecular patterns (PAMPs). A fundamental basis of TLR signalling is the recruitment and association of adaptor molecules that contain the structurally conserved Toll/interleukin-1 receptor (TIR) domain. MyD88 (myeloid differentiation primary response gene 88) is the universal adaptor for TLRs and cooperates with Mal (MyD88 adapter-like protein, also known as TIRAP) in the TLR4 response, which is predominantly involved in inflammation, host defence, and carcinogenesis. To date, two possible models of the MyD88, Mal, and TLR4 interactions have been proposed. The aim of our studies is to confirm or refute the presented models and accomplish a full structural characterization of the TIR domain interaction. Using molecular cloning methods, we obtained several constructs of the MyD88 and Mal TIR domains with GST or 6xHis tags. Gel filtration as well as pull-down analysis confirmed that the recombinant TIR domains of MyD88 and Mal bind in complexes. To examine whether the obtained complexes are homo- or heterodimers, we carried out cross-linking of the TIR domains with the BS3 compound combined with mass spectrometry. To investigate which amino acid residues are involved in this interaction, NMR titration experiments were performed: a 15N-labelled MyD88-TIR solution was complemented with non-labelled Mal-TIR. The results unambiguously indicate that MyD88-TIR interacts with Mal-TIR. Moreover, the 2D spectra demonstrated that Mal-TIR self-dimerization occurs simultaneously, which is necessary to create the proper scaffold for the Mal-TIR and MyD88-TIR interaction. The final step of this study will be the crystallization of the MyD88 and Mal TIR domain complex. This crystal structure and the characterization of its interface will have an impact on the understanding of the TLR signalling pathway and may be used in the development of new anti-cancer treatments.

Keywords: cancer, MyD88, TIR domains, Toll-like receptors

Procedia PDF Downloads 279
4783 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy over a moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). The spectra of the pasta samples were treated with these different mathematical pre-treatments before being used to build models between the spectral information and the moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR can be used as an effective tool for the rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.
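The selected pipeline, MMN pre-processing followed by PLS calibration, can be sketched as below; the synthetic two-band "spectra" and noise level are assumptions standing in for real FT-NIR measurements, and the PLS1 (NIPALS) implementation is a minimal one.

```python
import numpy as np

# Hedged sketch: min-max normalization (MMN) pre-processing followed by
# a small PLS1 (NIPALS) calibration, mirroring the selected pipeline.
# The synthetic spectra below are illustrative stand-ins for FT-NIR data.

def min_max_normalize(X):
    lo = X.min(axis=1, keepdims=True)
    hi = X.max(axis=1, keepdims=True)
    return (X - lo) / (hi - lo)        # each spectrum scaled to [0, 1]

def pls1_fit(X, y, n_comp=3):
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)         # weight vector
        t = Xc @ w                     # scores
        p = Xc.T @ t / (t @ t)         # loadings
        q = yc @ t / (t @ t)
        Xc = Xc - np.outer(t, p)       # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression vector
    return B, X.mean(0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

rng = np.random.default_rng(0)
moisture = rng.uniform(10, 15, 60)                  # % (w.b.), toy reference values
bands = np.linspace(0, 1, 50)                       # pseudo wavenumber axis
g_water = np.exp(-((bands - 0.3) ** 2) / 0.01)      # moisture-sensitive band
g_ref = np.exp(-((bands - 0.7) ** 2) / 0.01)        # constant reference band
spectra = (2.0 + np.outer(moisture, g_water) + 50.0 * g_ref
           + rng.normal(0, 0.05, (60, 50)))

Xn = min_max_normalize(spectra)
B, xm, ym = pls1_fit(Xn, moisture)
rmse = np.sqrt(np.mean((pls1_predict(Xn, B, xm, ym) - moisture) ** 2))
print("calibration RMSE (% moisture):", round(rmse, 3))
```

In practice the model is selected by cross-validation (RMSECV) on the calibration set and confirmed on the independent validation and prediction sets, as the study describes.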

Keywords: FT-NIR, pasta, moisture determination, food engineering

Procedia PDF Downloads 247
4782 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times often frustrates customers. This paper suggests a technical way of reducing this pain through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j = 1, 2). The primary condition for implementing the FCFS-m policy on these service rates is that either (m+1)μ2 > μ1 > mμ2 or (m+1)μ1 > μ2 > mμ1 must be satisfied. Waiting customers prefer server 1 whenever it becomes available for service, and server 2 is brought into service if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ2* of server 2 is illustrated in a specific numerical exercise that equalizes the average queue length cost with the service cost. Assuming that server 1 dynamically adjusts the service rate to μ1 (with μ2 = 0) while the system size is strictly less than T = m + 2, and to μ1 + μ2 (with μ2 > 0) when the system size is greater than or equal to T, the corresponding steady-state results of the M/M1+M2/1 queue have been deduced from those of the M/M1,M2/2 queue. To show that this investigation has a viable application, the results of the M/M1+M2/1 queue are applied to the processing of waiting messages at a single computer node and to measuring the power consumption of the node.
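A hedged sketch of the FCFS-m policy is given below as an event-driven simulation estimating the mean sojourn time; the rates λ = 2.0, μ1 = 2.5, μ2 = 1.0 are illustrative choices satisfying (m+1)μ2 > μ1 > mμ2 for m = 2, and simulation stands in here for the paper's analytical steady-state results.

```python
import random

# Hedged sketch: event-driven simulation of a two-server Poisson queue
# under an FCFS threshold ("m") policy -- server 2 is used only while
# the number waiting exceeds m. All rates are illustrative assumptions.

def simulate(lam, mu1, mu2, m, departures=20000, seed=42):
    rng = random.Random(seed)
    queue = []                 # arrival times of waiting customers (FCFS)
    busy1 = busy2 = None       # per server: (departure_time, arrival_time)
    next_arr = rng.expovariate(lam)
    total_sojourn, served = 0.0, 0
    while served < departures:
        t = min(x for x in (next_arr,
                            busy1[0] if busy1 else None,
                            busy2[0] if busy2 else None) if x is not None)
        if t == next_arr:                      # arrival event
            queue.append(t)
            next_arr = t + rng.expovariate(lam)
        elif busy1 is not None and t == busy1[0]:   # departure, server 1
            total_sojourn += t - busy1[1]; served += 1; busy1 = None
        else:                                       # departure, server 2
            total_sojourn += t - busy2[1]; served += 1; busy2 = None
        # Dispatch: server 1 whenever it is free; server 2 only while
        # the number waiting exceeds the threshold m.
        if queue and busy1 is None:
            a = queue.pop(0)
            busy1 = (t + rng.expovariate(mu1), a)
        if len(queue) > m and busy2 is None:
            a = queue.pop(0)
            busy2 = (t + rng.expovariate(mu2), a)
    return total_sojourn / served

print("mean sojourn time:", round(simulate(2.0, 2.5, 1.0, 2), 3))
```

Such a simulation is a quick way to cross-check the analytical queue length and waiting time distributions, or to trace the cost-balancing choice of μ2 numerically.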

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 354
4781 Challenges and Pedagogical Strategies in Teaching Chemical Bonding: Perspectives from Moroccan Educators

Authors: Sara Atibi, Azzeddine Atibi, Salim Ahmed, Khadija El Kababi

Abstract:

The concept of chemical bonding is fundamental in chemistry education, ubiquitous in school curricula, and essential to numerous topics in the field. Mastery of this concept enables students to predict and explain the physical and chemical properties of substances. However, chemical bonding is often regarded as one of the most complex concepts for secondary and higher education students to comprehend, due to the underlying complex theory and the use of abstract models. Teachers also encounter significant challenges in conveying this concept effectively. This study aims to identify the difficulties and alternative conceptions faced by Moroccan secondary school students in learning about chemical bonding, as well as the pedagogical strategies employed by teachers to overcome these obstacles. A survey was conducted involving 150 Moroccan secondary school physical science teachers, using a structured questionnaire comprising closed, open-ended, and multiple-choice questions. The results reveal frequent student misconceptions, such as the octet rule, molecular geometry, and molecular polarity. Contributing factors to these misconceptions include the abstract nature of the concepts, the use of models, and teachers' difficulties in explaining certain aspects of chemical bonding. The study proposes improvements for teaching chemical bonding, such as integrating information and communication technologies (ICT), diversifying pedagogical tools, and considering students' pre-existing conceptions. These recommendations aim to assist teachers, curriculum developers, and textbook authors in making chemistry more accessible and in addressing students' misconceptions.

Keywords: chemical bonding, alternative conceptions, chemistry education, pedagogical strategies

Procedia PDF Downloads 8
4780 Understanding Space, Citizenship and Assimilation in the Context of Migration in North-Eastern Region of India

Authors: Mukunda Upadhyay, Rakesh Mishra, Rajni Singh

Abstract:

This paper is an attempt to understand the abstract concepts of space, citizenship, and migration in the north-eastern region of India. In the twentieth century, researchers and thinkers related citizenship and migration through national models. The national models of jus soli and jus sanguinis provide space and rights only to those who are either born in the territory or share a common descent. Space ensures rights, citizenship ensures space, and for many migrants, citizenship is the ultimate goal in the host country. Migrants intending to settle down in the destination region begin to adapt and assimilate in their new homes; in many cases, they may also retain the culture and values of their place of origin. The difference between the degrees of retention and assimilation may then determine the likelihood of conflict between the host society and migrants, and such conflicts are fuelled by the political aspirations of a few individuals on both sides. The north-eastern part of India is a mixed community, with many linguistic and religious groups sharing a common geopolitical space, and every community has its own unique history, culture, and identity. Since the second half of the nineteenth century, this region has been experiencing both internal migration from other states and immigration from the neighbouring countries, resulting in interactions among various cultures and ethnicities. Over time, migration has taken a bitter form, with problems concentrated around acquiring rights through space and citizenship. Political tensions resulting from host hostility and migrant resistance have ruined the social order in a few areas. To resolve these issues, proper intervention has to be carried out with the involvement of the national and international community.

Keywords: space, citizenship, assimilation, migration, rights

Procedia PDF Downloads 405
4779 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with a K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal ones are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models employed are Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
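The imbalance-handling step can be sketched with plain SMOTE, which interpolates between a minority sample and one of its k nearest minority neighbors; the K-means-clustered variant used in the paper would first partition the minority class and apply the same idea per cluster. The toy "fraud" cluster below is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: plain SMOTE oversampling of the minority (fraud) class.
# The paper uses a K-means-clustered variant; this shows the core
# interpolation idea on illustrative synthetic data.

def smote(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest, skipping the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                     # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
fraud = rng.normal([5, 5], 0.5, size=(20, 2))  # scarce minority class
synthetic = smote(fraud, n_new=80)
print(synthetic.shape, synthetic.mean(0).round(1))
```

The balanced set (original plus synthetic minority samples) then feeds the feature extraction and classifier stages of the pipeline.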

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 115
4778 Analytical Modelling of the Moment-Rotation Behavior of Top and Seat Angle Connection with Stiffeners

Authors: Merve Sagiroglu

Abstract:

Earthquake-resistant steel structure design requires taking into account the behavior of beam-column connections besides the basic properties of the structure, such as material and geometry. Beam-column connections play an important role in the behavior of frame systems, and accounting for their behavior in the analysis and design of steel frames is important because it represents the actual behavior of the frames; the behavior of the connections should therefore be well known. The most important action transmitted by connections in the structural system is the moment, and the rotational deformation is customarily expressed as a function of the moment in the connection. The moment-rotation curve is thus the best expression of the behavior of a beam-to-column connection. Designed connections produce various moment-rotation curves according to the elements of the connection and their placement. The only way to obtain such a curve directly is through real-scale experiments; experiments on some connections have been carried out and collected in a databank, and models have been formed from this databank to express connection behavior. In this study, theoretical work has been carried out to model the real behavior of top and seat angle connections with stiffeners. Two stiffeners in the top and seat angles are used to increase the stiffness of the connection, and two stiffeners in the beam web to prevent local buckling, in this beam-to-column connection. Mathematical models have been developed using the database of beam-to-column connection experiments previously assembled by the authors. Using the test data, analytical expressions have been developed to obtain the moment-rotation curve for connection details whose test data are not available. The connection has been dimensioned in various shapes, and the effect of the dimensions of the connection elements on the behavior has been examined.
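A common analytical form for such semi-rigid moment-rotation curves is the Richard-Abbott four-parameter model, sketched below; the parameter values are illustrative, not the fitted values for the stiffened top-and-seat-angle connection studied.

```python
# Hedged sketch: the Richard-Abbott four-parameter moment-rotation
# model, a widely used analytical form for semi-rigid connections.
# The parameter values below are illustrative assumptions only.

def moment(theta, rki=40000.0, rkp=1500.0, m0=120.0, n=1.5):
    # theta: rotation (rad); rki/rkp: initial and strain-hardening
    # stiffness (kN*m/rad); m0: reference moment (kN*m); n: shape factor.
    dr = (rki - rkp) * theta
    return dr / (1 + abs(dr / m0) ** n) ** (1 / n) + rkp * theta

for th in (0.001, 0.005, 0.02):
    print(f"theta={th:.3f} rad  M={moment(th):.1f} kN*m")
```

Fitting the four parameters to the experimental databank gives a closed-form curve for connection details whose test data are unavailable, which is the purpose of the analytical expressions developed in the study.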

Keywords: top and seat angle connection, stiffener, moment-rotation curves, analytical study

Procedia PDF Downloads 165