Search results for: GARCHX models
5267 Behavior Factors Evaluation for Reinforced Concrete Structures
Authors: Muhammad Rizwan, Naveed Ahmad, Akhtar Naeem Khan
Abstract:
Seismic behavior factors are evaluated for the performance assessment of low-rise reinforced concrete (RC) frame structures, based on an experimental study of unidirectional dynamic shake table testing of two 1/3rd reduced-scale two-storey frames: a code-conforming special moment resisting frame (SMRF) model and a non-compliant model of similar characteristics but built in low-strength concrete. The models were subjected to a scaled accelerogram record of the 1994 Northridge earthquake to deform the test models to the final collapse stage in order to obtain the structural response parameters. The fully compliant model exhibited a more stable beam-sway response, experiencing beam flexural yielding and ground-storey column base yielding upon being subjected to 100% of the record. The response modification factors (R factors) obtained for the code-compliant and deficient prototype structures were 7.5 and 4.5 respectively, which are about 10% and 40% less than the UBC-97 specified value for special moment resisting reinforced concrete frame structures.
Keywords: Northridge 1994 earthquake, reinforced concrete frame, response modification factor, shake table testing
Procedia PDF Downloads 169
5266 Determination of Inflow Performance Relationship for Naturally Fractured Reservoirs: Numerical Simulation Study
Authors: Melissa Ramirez, Mohammad Awal
Abstract:
The Inflow Performance Relationship (IPR) of a well is a relation between the oil production rate and the flowing bottom-hole pressure. This relationship is an important tool for petroleum engineers to understand and predict well performance. In the petroleum industry, IPR correlations are used to design and evaluate well completions, optimize well production, and design artificial lift. The most commonly used IPR correlation models, those of Vogel and Wiggins, are applicable to homogeneous and isotropic reservoir data. In this work, a new IPR model is developed to determine the inflow performance relationship of oil wells in a naturally fractured reservoir. A 3D black-oil reservoir simulator is used to develop the oil mobility function for the studied reservoir. Based on simulation runs, four flow rates were run to record the oil saturation and calculate the relative permeability for a naturally fractured reservoir. The new method uses the results of a well test analysis along with permeability and pressure-volume-temperature data in the fluid flow equations to obtain the oil mobility function. Comparisons between the new method and two popular correlations for non-fractured reservoirs indicate the necessity of developing and using an IPR correlation specifically derived for fractured reservoirs.
Keywords: inflow performance relationship, mobility function, naturally fractured reservoir, well test analysis
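For context, the Vogel correlation that the abstract names as a baseline can be sketched in a few lines. This is the standard textbook form of Vogel's IPR for a solution-gas-drive well, not the authors' new fractured-reservoir model, and the reservoir pressure and open-flow potential below are made-up illustrative numbers:

```python
def vogel_rate(q_max, p_r, p_wf):
    """Vogel's IPR correlation:
    q / q_max = 1 - 0.2*(p_wf/p_r) - 0.8*(p_wf/p_r)**2
    where p_r is average reservoir pressure and p_wf the
    flowing bottom-hole pressure."""
    ratio = p_wf / p_r
    return q_max * (1.0 - 0.2 * ratio - 0.8 * ratio ** 2)

# Sweep bottom-hole pressure to trace the IPR curve (illustrative values)
p_r = 3000.0    # average reservoir pressure, psi (assumed)
q_max = 1500.0  # absolute open-flow potential, STB/d (assumed)
curve = [(p_wf, vogel_rate(q_max, p_r, p_wf)) for p_wf in range(0, 3001, 500)]
```

A fractured-reservoir IPR such as the one proposed here would replace this single-curve correlation with a mobility function derived from well-test and PVT data.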
Procedia PDF Downloads 278
5265 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track
Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink
Abstract:
The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, in that dynamic calculations overestimate the actual responses and therefore lead to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and lack of knowledge. By using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties, independent of the bearing structure.
Several mechanical models for the ballasted track, consisting of one or more continuous spring-damper elements, were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available for bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it is also shown that a new approach, which makes it possible to calculate the damping factor, provides results that are close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges
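For readers unfamiliar with Lehr's damping factor, the quantity the derived equations target is the same one conventionally estimated from a measured free-decay response via the logarithmic decrement. The following sketch shows that conventional estimate only (the paper's own determination equations are not reproduced here), with a synthetic decay signal standing in for a bridge measurement:

```python
import math

def lehr_damping_from_decay(peaks):
    """Estimate Lehr's damping factor (damping ratio zeta) from
    successive free-decay amplitude peaks.  The logarithmic
    decrement is delta = ln(x_i / x_{i+1}); then
    zeta = delta / sqrt(4*pi**2 + delta**2)."""
    decrements = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decrements) / len(decrements)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic free-decay peaks for a known damping ratio of 2% (assumed)
zeta_true = 0.02
delta = 2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2)
peaks = [math.exp(-delta * n) for n in range(5)]
zeta_est = lehr_damping_from_decay(peaks)
```

Averaging the decrement over several peak pairs, as above, is the usual way to reduce measurement noise in practice.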
Procedia PDF Downloads 163
5264 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool
Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung
Abstract:
High-speed and high-precision machining have become the most important technologies in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restricts process efficiency. Therefore, selection of appropriate cutting conditions is important to prevent the occurrence of chatter. In addition, vibration of the spindle tool also affects the surface quality, which implies that surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study aimed to investigate the influence of the machining conditions on the surface roughness and the vibration of the spindle tool. To this end, a series of machining tests were conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors. The surface roughness of the machined parts was examined using a white light interferometer. The response surface methodology (RSM) was employed to establish mathematical models for predicting surface finish and tool vibration, respectively. The correlation between the surface roughness and spindle tool vibration was also analyzed by ANOVA. Based on the machining tests, machined surfaces with or without chattering were marked on the lobe diagram as verification of the machining conditions. Using multivariable regression analysis, the mathematical models for predicting the surface roughness and tool vibrations were developed from the machining parameters: cutting depth (a), feed rate (f) and spindle speed (s). The predicted roughness agrees well with the measured roughness, with an average error of 10%. The average error of the tool vibrations between the measurements and the predictions of the mathematical model is about 7.39%.
In addition, the tool vibration under various machining conditions was found to correlate positively with the surface roughness (r = 0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and vibration level of the spindle tool under different cutting conditions, which can help to select appropriate cutting parameters and to monitor the machining conditions to achieve high surface quality in milling operations.
Keywords: machining parameters, machining stability, regression analysis, surface roughness
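The multivariable regression step described above can be illustrated with a minimal least-squares fit of roughness against the three machining parameters. The data below are synthetic and generated from an assumed linear law purely to show the mechanics; the paper's actual RSM models and coefficients are not reproduced:

```python
import numpy as np

# Synthetic machining data (illustrative, not the paper's measurements)
rng = np.random.default_rng(0)
a = rng.uniform(0.2, 2.0, 40)       # cutting depth, mm (assumed range)
f = rng.uniform(0.05, 0.3, 40)      # feed rate, mm/tooth (assumed range)
s = rng.uniform(4000, 12000, 40)    # spindle speed, rpm (assumed range)
Ra = 0.1 + 0.3 * a + 2.0 * f - 1e-5 * s + rng.normal(0, 0.01, 40)

# Least-squares fit of Ra = c0 + c1*a + c2*f + c3*s
X = np.column_stack([np.ones_like(a), a, f, s])
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)
pred = X @ coef
mape = np.mean(np.abs((pred - Ra) / Ra)) * 100  # average percentage error
```

An RSM study would typically extend this design matrix with quadratic and interaction terms before assessing significance with ANOVA.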
Procedia PDF Downloads 230
5263 Simulation of Scaled Model of Tall Multistory Structure: Raft Foundation for Experimental and Numerical Dynamic Studies
Authors: Omar Qaftan
Abstract:
Earthquakes can cause tremendous loss of human life and severe damage to many civil engineering structures, especially tall buildings. The response of a multistory structure subjected to earthquake loading is a complex task, and it requires study by both physical and numerical modelling. In many circumstances, scale models on a shaking table may be a more economical option than similar full-scale tests. A shaking table apparatus is a powerful tool that offers the possibility of understanding the actual behaviour of structural systems under earthquake loading. A set of scaling relations is required to predict the behaviour of the full-scale structure. Selecting the scale factors is the most important step in the simulation of the prototype by the scaled model. In this paper, the principles of the scale modelling procedure are explained in detail, and the simulation of a scaled multi-storey concrete structure for dynamic studies is investigated. A procedure for a complete dynamic simulation analysis is investigated experimentally and numerically with a scale factor of 1/50. The frequency-domain response and lateral displacement of both the numerical and experimental scaled models are determined. The procedure allows accounting for the actual dynamic behaviour of the full-size prototype structure and the scaled model. The procedure is adapted to determine the effects of the tall multi-storey structure on a raft foundation. Four generated accelerograms complying with EC8 were used as inputs for the time history motions. The output results of the experimental work, expressed in terms of displacements and accelerations, are compared with those obtained from a conventional fixed-base numerical model. Four time histories were applied in both the experimental and numerical models, and it was concluded that the experimental model has acceptable output accuracy compared with the numerical model output.
Therefore, this modelling methodology is valid and qualified for different shaking table experiments.
Keywords: structure, raft, soil, interaction
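The scale-factor selection the abstract emphasises follows standard similitude relations. The sketch below lists one commonly used set, assuming gravitational acceleration is preserved between model and prototype (scale factor 1); this is a generic textbook set, not necessarily the exact relations adopted in the paper:

```python
def scale_factors(length_scale):
    """Prototype-to-model scale factors for a geometric length scale
    (e.g. 50 for a 1/50 model), with the acceleration scale fixed
    at 1 so that gravity effects are preserved."""
    lam = float(length_scale)
    return {
        "length": lam,
        "acceleration": 1.0,
        "time": lam ** 0.5,        # t ~ sqrt(L / a)
        "frequency": lam ** -0.5,  # inverse of the time scale
        "velocity": lam ** 0.5,    # v ~ a * t
        "displacement": lam,       # d ~ a * t**2
    }

sf = scale_factors(50)  # the 1/50 model used in this study
```

In practice the input accelerograms are compressed in time by the time scale factor, and measured model displacements are multiplied back up by the length scale.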
Procedia PDF Downloads 135
5262 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery
Authors: Chun-Lang Chang, Chun-Kai Liu
Abstract:
In this study, patients who had undergone total knee replacement surgery, drawn from the 2010 National Health Insurance database, were adopted as the study participants. The important factors were screened and selected through literature collection and interviews with physicians. Through the Cross Entropy Method (CE), Genetic Algorithm Logistic Regression (GALR), and Particle Swarm Optimization (PSO), the weights of the factors were obtained. In addition, the weights from the respective algorithms, coupled with Excel VBA, were adopted to construct the Case Based Reasoning (CBR) system. Statistical tests show that GALR and PSO produced no significant differences, and the accuracy of both models was above 97%. Moreover, the area under the ROC curve for these two models also exceeded 0.87. This study shall serve as a reference for medical staff to assist clinical assessment of infections, in order to effectively enhance medical service quality and efficiency, avoid unnecessary medical waste, and substantially contribute to resource allocation in medical institutions.
Keywords: Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization, Total Knee Replacement Surgery
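The core retrieval step of a CBR system driven by factor weights, as described above, can be sketched as a weighted nearest-neighbour lookup. Everything below is illustrative: the feature values, the weight vector, and the case structure are assumptions, not the study's screened factors or tuned weights:

```python
def cbr_retrieve(query, cases, weights):
    """Return the stored case most similar to the query under a
    weighted nearest-neighbour similarity.  Features are assumed
    already normalised to [0, 1]."""
    def similarity(case):
        score = sum(w * (1 - abs(q - c))
                    for w, q, c in zip(weights, query, case["features"]))
        return score / sum(weights)
    return max(cases, key=similarity)

# Hypothetical past cases: features stand in for screened risk factors,
# the label for the infection outcome; weights might come from CE/GA/PSO.
cases = [
    {"features": [0.9, 0.2, 0.8], "infection": True},
    {"features": [0.1, 0.3, 0.2], "infection": False},
]
best = cbr_retrieve([0.85, 0.25, 0.75], cases, weights=[0.5, 0.2, 0.3])
```

The retrieved case's outcome then informs the assessment for the new patient, which is the "reuse" step of the CBR cycle.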
Procedia PDF Downloads 320
5261 Forecasting Age-Specific Mortality Rates and Life Expectancy at Births for Malaysian Sub-Populations
Authors: Syazreen N. Shair, Saiful A. Ishak, Aida Y. Yusof, Azizah Murad
Abstract:
In this paper, we forecast age-specific Malaysian mortality rates and life expectancy at birth by gender and ethnic group, including Malay, Chinese and Indian. Two mortality forecasting models are adopted: the original Lee-Carter model and its recent modified version, the product-ratio coherent model. While the first forecasts the mortality rates for each subpopulation independently, the latter accounts for the relationship between sub-populations. The evaluation of both models is performed using out-of-sample forecast errors: mean absolute percentage errors (MAPE) for mortality rates and mean forecast errors (MFE) for life expectancy at birth. The best model is then used to perform long-term forecasts up to the year 2030, the year when Malaysia is expected to become an aged nation. Results suggest that in terms of overall accuracy, the product-ratio model performs better than the original Lee-Carter model. The association with the lower-mortality group (Chinese) in the subpopulation model can improve the forecasts for the higher-mortality groups (Malay and Indian).
Keywords: coherent forecasts, life expectancy at births, Lee-Carter model, product-ratio model, mortality rates
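The Lee-Carter model used as the baseline above decomposes log mortality as log m(x,t) = a_x + b_x * k_t and is classically fitted by SVD. The sketch below shows that fit on a small synthetic rate matrix (not Malaysian data), under the usual identifiability constraints sum(b_x) = 1 and sum(k_t) = 0:

```python
import numpy as np

def lee_carter_fit(log_m):
    """Fit log m(x,t) = a_x + b_x * k_t by SVD.
    log_m is an ages x years matrix of log mortality rates."""
    a_x = log_m.mean(axis=1)              # average age pattern
    centred = log_m - a_x[:, None]
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()         # normalise so sum(b_x) = 1
    k_t = S[0] * Vt[0] * U[:, 0].sum()    # mortality index over time
    return a_x, b_x, k_t

# Synthetic rates with known structure (illustrative only)
ages, years = 5, 10
a_true = np.linspace(-6.0, -2.0, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(2.0, -2.0, years)   # sums to zero
log_m = a_true[:, None] + b_true[:, None] * k_true[None, :]

a_hat, b_hat, k_hat = lee_carter_fit(log_m)
```

In the independent variant this fit is run per subpopulation; the product-ratio coherent model instead fits Lee-Carter-type structures to the geometric mean of the subpopulation rates and to the ratios against that mean.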
Procedia PDF Downloads 216
5260 Application of Stochastic Models to Annual Extreme Streamflow Data
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The Auto-regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast future values. For the analysis, annual extreme streamflow data from the Jelogir Majin station (upstream of the Karkheh dam reservoir) for the years 1958–2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of the annual extreme streamflow was removed using first-order differencing (d=1) for the development of the ARIMA model. Interestingly, the ARIMA(4,1,1) model was found to be most suitable for simulating the annual extreme streamflow of the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and for assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
Keywords: stochastic models, ARIMA, extreme streamflow, Karkheh river
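The difference-then-fit-then-integrate workflow behind an ARIMA forecast can be sketched without a statistics package. The sketch below is a simplified stand-in: it applies first differencing (d=1) and a least-squares AR(4) fit, omitting the MA(1) term of the paper's ARIMA(4,1,1), and the input series is a placeholder, not the Jelogir Majin record:

```python
import numpy as np

def ar_forecast_differenced(series, p=4, steps=10):
    """Simplified ARIMA-style forecast: first-difference the series,
    fit AR(p) with intercept by least squares, then integrate the
    forecast differences back to the original level."""
    series = np.asarray(series, float)
    diff = np.diff(series)                 # d = 1
    n = len(diff)
    y = diff[p:]
    X = np.column_stack([np.ones(n - p)] +
                        [diff[p - k:n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    history, forecasts, level = list(diff), [], series[-1]
    for _ in range(steps):
        lags = history[-1:-p - 1:-1]       # last p differences, newest first
        nxt = coef[0] + float(np.dot(coef[1:], lags))
        history.append(nxt)
        level += nxt
        forecasts.append(level)
    return np.array(forecasts)

flows = np.arange(50.0)  # placeholder series standing in for streamflow
fc = ar_forecast_differenced(flows, p=4, steps=10)
```

A production analysis would instead use a full ARIMA implementation (as the SAS/SPSS runs in the study did) so the MA term and diagnostics are handled properly.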
Procedia PDF Downloads 146
5259 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features as commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the framework of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds Stress Model, RSM) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture applying the highly parallel flow solver Code_Saturne, typically using 32768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and by comparison to other databases. The results showed that an RSM represents an appropriate choice with respect to modeling high-Reynolds-number flow cases. In particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 381
5258 Parameters Adjustment of the Modified UBCSand Constitutive Model for the Potentially Liquefiable Sands of Santiago de Cali-Colombia
Authors: Daniel Rosero, Johan S. Arana, Sebastian Arango, Alejandro Cruz, Isabel Gomez-Gutierrez, Peter Thomson
Abstract:
Santiago de Cali is located in southwestern Colombia in a high seismic hazard zone. About 50% of the city is on the banks of the Cauca River, which is the second most important river in the country and whose alluvial deposits contain potentially liquefiable sands. Among the methods used to study a site's liquefaction potential is the finite element method, which uses constitutive models to simulate the soil response for different load types. Among the different constitutive models, the Modified UBCSand stands out for studying the seismic behavior of sands, and especially the liquefaction phenomenon. In this paper, the dynamic behavior of a potentially liquefiable sand of Santiago de Cali is studied by cyclic triaxial and CPTu tests. Subsequently, the behavior of the sand is simulated using the Modified UBCSand constitutive model, whose parameters are calibrated using the results of the cyclic triaxial and CPTu tests. The above is done with the aim of analysing the applicability of the constitutive model to the geotechnical problems associated with liquefaction in the city.
Keywords: constitutive model, cyclic triaxial test, dynamic behavior, liquefiable sand, modified ubcsand
Procedia PDF Downloads 271
5257 Some Accuracy Related Aspects in Two-Fluid Hydrodynamic Sub-Grid Modeling of Gas-Solid Riser Flows
Authors: Joseph Mouallem, Seyed Reza Amini Niaki, Norman Chavez-Cussy, Christian Costa Milioli, Fernando Eduardo Milioli
Abstract:
Sub-grid closures for filtered two-fluid models (fTFM) useful in large scale simulations (LSS) of riser flows can be derived from highly resolved simulations (HRS) with microscopic two-fluid modeling (mTFM). Accurate sub-grid closures require accurate mTFM formulations as well as accurate correlation of relevant filtered parameters to suitable independent variables. This article deals with both of those issues. The accuracy of mTFM is addressed by assessing the impact of gas sub-grid turbulence on HRS filtered predictions. A gas-turbulence-like effect is artificially inserted by means of a stochastic forcing procedure implemented in physical space over the momentum conservation equation of the gas phase. The correlation issue is addressed by introducing a three-filtered-variable correlation analysis (three-marker analysis) performed under a variety of different macro-scale conditions typical of risers. While the more elaborate correlation procedure clearly improved accuracy, accounting for gas sub-grid turbulence had no significant impact on predictions.
Keywords: fluidization, gas-particle flow, two-fluid model, sub-grid models, filtered closures
Procedia PDF Downloads 122
5256 Visualizing the Commercial Activity of a City by Analyzing the Data Information in Layers
Authors: Taras Agryzkov, Jose L. Oliver, Leandro Tortosa, Jose Vicent
Abstract:
This paper aims to demonstrate how network models can be used to understand and to deal with some aspects of urban complexity. As is well known, the Theory of Architecture and Urbanism has for decades been using intellectual tools based on the ‘sciences of complexity’ as a strategy to propose theoretical approaches about cities and about architecture. In this sense, it is possible to find a vast literature in which, for instance, network theory is used as an instrument to understand very diverse questions about cities: from their commercial activity to their heritage condition. The contribution of this research consists of adding one step of complexity to this process: instead of working with one single primal graph, as is usually done, we show how new network models arise from the consideration of two different primal graphs interacting in two layers. When we model an urban network through a mathematical structure like a graph, the city is usually represented by a set of nodes and edges that reproduce its topology, with the data generated or extracted from the city embedded in it. All this information is normally displayed in a single layer. Here, we propose to separate the information into two layers so that we can evaluate the interaction between them. Besides, the two layers may be composed of structures that do not have to coincide: from this bi-layer system, groups of interactions emerge, suggesting reflections and, in consequence, possible actions.
Keywords: graphs, mathematics, networks, urban studies
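The bi-layer structure described above can be sketched as two independent adjacency structures joined by explicit inter-layer (coupling) edges. The class and the node names below are illustrative assumptions, not the authors' actual data model:

```python
class TwoLayerNetwork:
    """Minimal two-layer network: each layer is its own undirected
    graph (e.g. layer A a street network, layer B commercial
    premises), plus coupling edges linking nodes across layers."""

    def __init__(self):
        self.layers = {"A": {}, "B": {}}  # layer -> node -> neighbour set
        self.coupling = []                # list of (node_in_A, node_in_B)

    def add_edge(self, layer, u, v):
        g = self.layers[layer]
        g.setdefault(u, set()).add(v)
        g.setdefault(v, set()).add(u)

    def couple(self, a_node, b_node):
        self.coupling.append((a_node, b_node))

    def interactions(self, a_node):
        """B-layer nodes reachable from a_node via one coupling hop."""
        return {b for a, b in self.coupling if a == a_node}

net = TwoLayerNetwork()
net.add_edge("A", "s1", "s2")        # hypothetical street segment nodes
net.add_edge("B", "shop1", "shop2")  # hypothetical commercial nodes
net.couple("s1", "shop1")
```

Because the two layers need not share topology, the interesting quantities live in the coupling list: iterating over it yields exactly the inter-layer interaction groups the paper proposes to study.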
Procedia PDF Downloads 179
5255 Predictive Value of ¹⁸F-Fluorodeoxyglucose Accumulation in Visceral Fat Activity to Detect Epithelial Ovarian Cancer Metastases
Authors: A. F. Suleimanov, A. B. Saduakassova, V. S. Pokrovsky, D. V. Vinnikov
Abstract:
Relevance: Epithelial ovarian cancer (EOC) is the most lethal gynecological malignancy, with relapse occurring in about 70% of advanced cases with poor prognoses. The aim of the study was to evaluate functional visceral fat activity (VAT), measured by ¹⁸F-fluorodeoxyglucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT), as a predictor of metastases in epithelial ovarian cancer (EOC). Materials and methods: We assessed 53 patients with histologically confirmed EOC who underwent ¹⁸F-FDG PET/CT after surgical treatment and courses of chemotherapy. Age, histology, stage, and tumor grade were recorded. Functional VAT activity was measured by the maximum standardized uptake value (SUVₘₐₓ) using ¹⁸F-FDG PET/CT and tested as a predictor of later metastases in eight abdominal locations (RE – Epigastric Region, RLH – Left Hypochondriac Region, RRL – Right Lumbar Region, RU – Umbilical Region, RLL – Left Lumbar Region, RRI – Right Inguinal Region, RP – Hypogastric (Pubic) Region, RLI – Left Inguinal Region) and the pelvic cavity (P) in adjusted regression models. We also identified the best areas under the curve (AUC) for SUVₘₐₓ with the corresponding sensitivity (Se) and specificity (Sp). Results: In both the adjusted regression models and the ROC analysis, ¹⁸F-FDG accumulation in RE (cut-off SUVₘₐₓ 1.18; Se 64%; Sp 64%; AUC 0.669; p = 0.035) could predict later metastases in EOC patients, as opposed to age, sex, primary tumor location, tumor grade, and histology. Conclusions: VAT SUVₘₐₓ is significantly associated with later metastases in EOC patients and can be used as their predictor.
Keywords: ¹⁸F-FDG, PET/CT, EOC, predictive value
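The cut-off with paired sensitivity and specificity reported above is typically found by scanning candidate thresholds on the ROC curve. The sketch below shows one common criterion, Youden's J; the uptake values and outcome labels are synthetic illustrations, not patient data, and the study's own cut-off selection rule is not stated in the abstract:

```python
import numpy as np

def best_cutoff(values, labels):
    """Scan candidate cut-offs and return (cutoff, J, Se, Sp) for the
    threshold maximising Youden's J = Se + Sp - 1, classifying
    value >= cutoff as positive."""
    values = np.asarray(values, float)
    labels = np.asarray(labels, bool)
    best = (None, -1.0, 0.0, 0.0)
    for cut in np.unique(values):
        pred = values >= cut
        se = (pred & labels).sum() / labels.sum()
        sp = (~pred & ~labels).sum() / (~labels).sum()
        j = se + sp - 1
        if j > best[1]:
            best = (cut, j, se, sp)
    return best

# Hypothetical SUVmax readings; metastatic cases tend to score higher
suv = [0.6, 0.8, 0.9, 1.0, 1.2, 1.3, 1.5, 1.9]
meta = [0, 0, 0, 0, 1, 0, 1, 1]
cut, j, se, sp = best_cutoff(suv, meta)
```

Reporting the AUC alongside the cut-off, as the study does, summarises discrimination across all thresholds rather than at the selected one.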
Procedia PDF Downloads 62
5254 Uncovering Underwater Communication for Multi-Robot Applications via CORSICA
Authors: Niels Grataloup, Micael S. Couceiro, Manousos Valyrakis, Javier Escudero, Patricia A. Vargas
Abstract:
This paper benchmarks the underwater communication technologies that can be integrated into a swarm of underwater robots by proposing an underwater robot simulator named CORSICA (Cross platfORm wireleSs communICation simulator). Underwater exploration relies increasingly on the use of mobile robots, called Autonomous Underwater Vehicles (AUVs). These robots are able to reach goals in harsh underwater environments without resorting to human divers. The introduction of swarm robotics in these scenarios would facilitate the accomplishment of complex tasks at lower cost. However, swarm robotics requires the implementation of communication systems to be operational and exhibits non-deterministic behaviour. Inter-robot communication is one of the key challenges in swarm robotics, especially in underwater scenarios, as communication must cope with severe restrictions and perturbations. This paper starts by presenting the underwater propagation models of acoustic and electromagnetic waves; it also reviews existing transmitters embedded in current robots and simulators. It then proposes CORSICA, which allows validating choices in terms of protocol and communication strategies, whether they concern robot-robot or human-robot interactions. The paper finishes with a presentation of possible integrations according to the literature review, and of the potential to bring CORSICA to an industrial level.
Keywords: underwater simulator, robot-robot underwater communication, swarm robotics, transceiver and communication models
Procedia PDF Downloads 299
5253 Domain Driven Design vs Soft Domain Driven Design Frameworks
Authors: Mohammed Salahat, Steve Wade
Abstract:
This paper presents and compares SSDDD, the "Systematic Soft Domain Driven Design Framework", to DDD, the "Domain Driven Design Framework", as a soft-systems approach to information systems development. The framework uses SSM as a guiding methodology, within which we have embedded a sequence of design tasks based on UML, leading to the implementation of a software system using the Naked Objects framework. This framework has been used in action research projects that involved the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous work; a comparison between SSDDD and DDD is presented in this paper to show how SSDDD improves on DDD as an approach to modelling and implementing business domain perspectives for information systems development. The comparison process, the results, and the improvements are presented in the following sections of this paper.
Keywords: domain-driven design, soft domain-driven design, naked objects, soft language
Procedia PDF Downloads 296
5252 Studying on Pile Seismic Operation with Numerical Method by Using FLAC 3D Software
Authors: Hossein Motaghedi, Kaveh Arkani, Siavash Salamatpoor
Abstract:
Piles are important for the safe and economical design of tall and heavy structures, so the response of a single pile under dynamic load is of particular interest. The factors influencing single-pile response are the pile geometry, the soil properties, and the applied loads. In this study, the finite difference numerical method, implemented in the FLAC 3D software, is used to evaluate single-pile behavior under the peak ground acceleration (PGA) of the El Centro earthquake record in California (1940). The results of these models are compared with the experimental results of other researchers and approximately coincide with the experimental data; for example, the maximum moment and displacement at the top of the pile correspond to other experimental results of previous researchers. Furthermore, this paper attempts to evaluate the interaction properties between soil and pile. The results show that by increasing the pile diameter, the pile-top displacement decreases, while by increasing the length of the pile, the top displacement increases. Also, by increasing the stiffness ratio of pile to soil, the moment produced in the pile body increases, and taller piles have more interaction with the soil and higher inertia. These results can directly support the optimal design of pile dimensions.
Keywords: pile seismic response, interaction between soil and pile, numerical analysis, FLAC 3D
Procedia PDF Downloads 386
5251 Giftedness Cloud Model: A Psychological and Ecological Vision of Giftedness Concept
Authors: Rimeyah H. S. Almutairi, Alaa Eldin A. Ayoub
Abstract:
The aim of this study was to identify empirical and theoretical studies that explored giftedness theories and identification, in order to assess and synthesize the mechanisms, outcomes, and impacts of gifted identification models. Thus, we sought to provide an evidence-informed answer as to how current giftedness theories work and how effective they are, in order to develop a model that incorporates the advantages of existing models and avoids their disadvantages as much as possible. We conducted a systematic literature review (SLR). The disciplined analysis resulted in a final sample consisting of 30 appropriate studies. The results indicated that: (a) there is no uniform and consistent definition of giftedness; (b) researchers are using several inconsistent criteria to detect the gifted; and (c) the detection of talent is largely limited to early ages, with obvious neglect of adults. This study contributes to the development of the Giftedness Cloud Model (GCM), which is defined as a model that attempts to interpret giftedness within an interactive psychological and ecological framework. GCM aims to help a talented individual reach the giftedness core and manifest talent in creative productivity or invention. Besides that, GCM suggests classifying giftedness into four levels: mastery, excellence, creative productivity, and manifestation. In addition, GCM presents an idea for distinguishing between talent and giftedness.
Keywords: giftedness cloud model, talent, systematic literature review, giftedness concept
Procedia PDF Downloads 165
5250 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision
Authors: Lianzhong Zhang, Chao Huang
Abstract:
Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor segmentation accuracy, and their parameter adjustments are cumbersome and difficult to meet actual needs. Also, current sea-land segmentation adopts traditional deep learning models based on Convolutional Neural Networks (CNNs). At present, the transformer architecture has achieved great success in the field of natural images, but its application to radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture with enhanced edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 score reach 96.36%, 84.54%, 99.74%, and 98.05% respectively, which are superior to those of the mainstream segmentation models and have high practical application value.
Keywords: SAR, sea-land segmentation, deep learning, transformer
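The evaluation metrics quoted above (IoU, precision, F1) have standard definitions that can be computed directly from binary masks. The toy masks below are illustrative, not Gaofen-3 data:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """IoU, precision, recall and F1 for binary masks
    (1 = land, 0 = sea)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = (pred & truth).sum()    # land predicted as land
    fp = (pred & ~truth).sum()   # sea predicted as land
    fn = (~pred & truth).sum()   # land predicted as sea
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1

pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0]])
iou, precision, recall, f1 = segmentation_metrics(pred, truth)
```

The paper's edge precision is the same precision computed only over pixels near the sea-land boundary, which is why the edge-supervision branch targets it specifically.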
Procedia PDF Downloads 179
5249 Awareness in the Code of Ethics for Nurse Educators among Nurse Educators, Nursing Students and Professional Nurses at the Royal Thai Army, Thailand
Authors: Wallapa Boonrod
Abstract:
The Thai National Education Act 1999 required all educational institutions to receive an external quality evaluation at least once every five years. The purpose of this study was to compare awareness of the code of ethics for nurse educators among nurse educators, professional nurses, and nursing students under the Royal Thai Army Nurse College. The sample consisted of 51 nurse educators, 200 nursing students, and 340 professional nurses from the Army nursing college and hospital, selected by stratified random sampling techniques. The descriptive statistics indicated that nurse educators, nursing students, and professional nurses had different levels of awareness of the 9 roles of nurse educators: Nurse, Reliable Sacrifice, Intelligence, Giver, Nursing Skills, Teaching Responsibility, Unbiased Care, Tie to Organization, and Role Model. The code of ethics for nurse educators (CENE) measurement models derived from the awareness of nurse educators, professional nurses, and nursing students were well fitted to the empirical data. The CENE models were invariant in form but variant in factor loadings. Thai Army nurse educators strive to create a learning environment that nurtures the highest nursing potential and standards in their nursing students.Keywords: awareness of the code of ethics for nurse educators, nursing college and hospital under The Royal Thai Army, Thai Army nurse educators, professional nurses
Procedia PDF Downloads 449
5248 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco
Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui
Abstract:
The logistic regression (LR) and multivariate adaptive regression spline (MarSpline) methods are applied and verified for the analysis of the landslide susceptibility map of Oudka, Morocco, using a geographical information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology, and lithology, eight landslide-related factors were calculated or extracted: elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and the Normalized Difference Vegetation Index (NDVI). Using these factors, landslide susceptibility indexes were calculated by the two mentioned methods. Before the calculation, the database was divided into two parts: the first for model training and the second for validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. The verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).Keywords: landslide susceptibility mapping, regression logistic, multivariate adaptive regression spline, Oudka, Taounate
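The success and prediction rates cited are AUC values. Conceptually, AUC is the probability that a randomly chosen landslide cell receives a higher susceptibility index than a randomly chosen stable cell; a minimal pure-Python sketch under that interpretation (the names and data are illustrative, not from the study):

```python
def auc(landslide_scores, stable_scores):
    """Pairwise AUC: the share of (landslide, stable) cell pairs whose
    susceptibility indexes are ranked correctly; ties count as half."""
    wins = 0.0
    for s_pos in landslide_scores:
        for s_neg in stable_scores:
            if s_pos > s_neg:
                wins += 1.0
            elif s_pos == s_neg:
                wins += 0.5
    return wins / (len(landslide_scores) * len(stable_scores))
```

An AUC of 1.0 corresponds to perfect separation of landslide and stable cells, and 0.5 to a random ranking, which is why values above 0.9 for both models indicate strong fits.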
Procedia PDF Downloads 186
5247 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit Model (MNL) is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen, was selected for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and number of trips all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, centred on reducing the travel time of public transportation and of non-motorized modes, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed; after parameter optimization, their prediction accuracies were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, which had the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after implementation of the policy, the share of public transportation in plans 1 and 2 increased by 14.04% and 9.86%, respectively, while the share of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips thus decreased markedly while the proportion of public transport trips increased. The measures can be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and can provide a reference for relevant departments when formulating transportation policies.Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation
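For illustration, the mode shares an MNL model predicts follow from a softmax over the systematic utilities of the alternatives; the sketch below uses invented utilities, not the coefficients estimated in the paper:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: softmax over systematic utilities."""
    m = max(utilities)                            # shift for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for (bus, car, bicycle) given one traveller's attributes
shares = mnl_probabilities([0.2, 1.0, -0.5])
```

A policy that changes an attribute (e.g. reduces public transport travel time) shifts the corresponding utility, and the predicted mode shares shift with it, which is how the plan effects above are evaluated.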
Procedia PDF Downloads 138
5246 System for the Detecting of Fake Profiles on Online Social Networks Using Machine Learning and the Bio-Inspired Algorithms
Authors: Sekkal Nawel, Mahammed Nadir
Abstract:
The proliferation of online activities on Online Social Networks (OSNs) has captured significant user attention. However, this growth has been accompanied by the emergence of fraudulent accounts that do not represent real individuals and that violate privacy regulations within social network communities. Consequently, it is imperative to identify and remove these profiles to enhance the security of OSN users. In recent years, researchers have turned to machine learning (ML) to develop strategies and methods to tackle this issue, and numerous studies have compared various ML-based techniques. However, the existing literature still lacks a comprehensive examination, especially one considering different OSN platforms, and the utilization of bio-inspired algorithms has been largely overlooked. Our study conducts an extensive comparative analysis of fake profile detection techniques in online social networks. The results indicate that both supervised and unsupervised machine learning models are effective for detecting fake profiles on social media. To achieve optimal results, we have incorporated six bio-inspired algorithms to enhance the performance of fake profile identification.Keywords: machine learning, bio-inspired algorithm, detection, fake profile, system, social network
Procedia PDF Downloads 66
5245 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human errors. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to perform an analysis of ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as ResNet-50 convolutional neural network (CNN), 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform other models in terms of recall and F1 score using a five-fold average score of 98.88% and 98.87%, respectively. 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50
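The core operation of the 1-D CNN evaluated here is a sliding dot product along the ECG signal; a minimal sketch of that layer follows (the filter values are illustrative, not learned weights):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, the basic operation of a 1-D CNN layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A simple difference filter responds strongly to sharp transitions,
# such as the QRS complex of a heartbeat
response = conv1d([0, 0, 1, 5, 1, 0, 0], [-1, 1])
```

In a trained network, many such filters are learned from data and stacked with nonlinearities and pooling, so that later layers respond to whole beat morphologies rather than single edges.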
Procedia PDF Downloads 127
5244 Predictive Modelling Approaches in Food Processing and Safety
Authors: Amandeep Sharma, Digvaijay Verma, Ruplal Choudhary
Abstract:
Food processing is an activity across the globe that helps in the better handling of agricultural produce, including dairy, meat, and fish. The operations carried out in the food industry include raw material quality and authenticity assessment; sorting and grading; processing into various products using thermal treatments such as heating, freezing, and chilling; packaging; and storage at the appropriate temperature to maximize the shelf life of the products. All this is done to safeguard the food products and to ensure their distribution up to the consumer. Approaches to developing predictive models based on mathematical or statistical tools, as well as empirical models, have been reported for various milk processing activities, including plant maintenance and wastage. Recently, AI has become a key factor in the fourth industrial revolution. AI plays a vital role in the food industry, not only in quality and food security but also in areas such as manufacturing, packaging, and cleaning. A new conceptual model was developed which shows that a smaller sample size would suffice, as only spectra would be required to predict the other values; this leads to savings on the raw materials and chemicals otherwise used for experimentation during research and new product development. It would be a futuristic approach if these tools could be further combined with mobile phones, through software development, for real-time application in the field for quality checks and product traceability.Keywords: predictive modelling, ANN, AI, food
Procedia PDF Downloads 81
5243 Ontology Mapping with R-GNN for IT Infrastructure: Enhancing Ontology Construction and Knowledge Graph Expansion
Authors: Andrey Khalov
Abstract:
The rapid growth of unstructured data necessitates advanced methods for transforming raw information into structured knowledge, particularly in domain-specific contexts such as IT service management and outsourcing. This paper presents a methodology for automatically constructing domain ontologies using the DOLCE framework as the base ontology. The research focuses on expanding ITIL-based ontologies by integrating concepts from ITSMO, followed by the extraction of entities and relationships from domain-specific texts through transformers and statistical methods like formal concept analysis (FCA). In particular, this work introduces an R-GNN-based approach for ontology mapping, enabling more efficient entity extraction and ontology alignment with existing knowledge bases. Additionally, the research explores transfer learning techniques using pre-trained transformer models (e.g., DeBERTa-v3-large) fine-tuned on synthetic datasets generated via large language models such as LLaMA. The resulting ontology, termed IT Ontology (ITO), is evaluated against existing methodologies, highlighting significant improvements in precision and recall. This study advances the field of ontology engineering by automating the extraction, expansion, and refinement of ontologies tailored to the IT domain, thus bridging the gap between unstructured data and actionable knowledge.Keywords: ontology mapping, knowledge graphs, R-GNN, ITIL, NER
Procedia PDF Downloads 13
5242 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach
Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton
Abstract:
Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted from the diameter and height of the tree at the start of the growth period; whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways; and how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, the Pearson correlation coefficient was examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed-effect modelling approaches, separately for thinned and unthinned stands. The developed models were validated using two approaches: first, against a subset of data that was not used in model fitting; and second, by validating the model of one growth period with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth, and the prediction improved when competition was included in the model.
The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices were validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or the competitors within about 10 m (by distance) of the base of the subject tree, are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, we recommend including neighbourhood competition when predicting growth, and considering thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies with larger plots, and with more plots than used in the present study, are recommended.Keywords: competition, growth, model, thinning
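As one example of the spatial competition indices compared in studies of this kind, Hegyi's index sums neighbour-to-subject diameter ratios weighted by inverse distance within a fixed radius, matching the roughly 10 m neighbourhood suggested above (a sketch with invented tree data, not the Inglehope measurements):

```python
import math

def hegyi_index(subject, neighbours, radius=10.0):
    """Hegyi competition index: sum of (d_j / d_i) / dist_ij over neighbours
    within `radius` metres of the subject tree.
    Trees are (x, y, dbh) tuples; dbh in cm, coordinates in metres."""
    xi, yi, di = subject
    ci = 0.0
    for xj, yj, dj in neighbours:
        dist = math.hypot(xj - xi, yj - yi)
        if 0.0 < dist <= radius:
            ci += (dj / di) / dist
    return ci

# One neighbour 5 m away (included), one 20 m away (beyond the radius, excluded)
ci = hegyi_index((0.0, 0.0, 30.0), [(3.0, 4.0, 45.0), (20.0, 0.0, 60.0)])
```

The distance cut-off is exactly what limits the field data needed per subject tree, which is the practical saving the abstract points to.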
Procedia PDF Downloads 124
5241 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures, and most existing language models have difficulty capturing the long-distance dependencies between different structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities and to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. The model is trained on a self-constructed dataset which this research independently labeled to measure judgment similarities, thereby filling the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which exploits the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate the model's significant advancements in handling similarity comparisons within extensive legal judgments.
By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
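Once judgments are embedded by such a model, similar-judgment retrieval typically reduces to ranking by cosine similarity over the embedding vectors; a hedged sketch (the two-dimensional vectors below merely stand in for document embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_similar(query_vec, corpus):
    """Return corpus ids sorted by descending cosine similarity to the query."""
    return sorted(corpus, key=lambda doc_id: cosine(query_vec, corpus[doc_id]),
                  reverse=True)

corpus = {"case_a": [1.0, 0.0], "case_b": [0.9, 0.1], "case_c": [0.0, 1.0]}
ranking = rank_similar([1.0, 0.05], corpus)
```

In a real system the vectors would be the Longformer pooled outputs, and an approximate nearest-neighbour index would replace the exhaustive scan for large corpora.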
Procedia PDF Downloads 71
5240 Women in the Soviet Press during the Great Patriotic War (1941-1945)
Authors: Nani Manvelishvili
Abstract:
Soviet propaganda tried to shape a common public opinion through the Soviet press. Activating propaganda gained special importance for increasing the fighting ability of the military and of the people behind the front during the Great Patriotic War (1941-1945). The state intervened directly in the press and created characters who were supposed to be role models for society. New female role models, supported by the authorities, were identified. The representation of the mother, the warrior woman, the working woman, the victim, the feminine woman, and others in these publications aimed to raise the fighting ability of Soviet citizens and to incite patriotism. This paper analyzes the Soviet press, specifically the newspaper "Komunisti", written and published in Soviet Georgia during the Great Patriotic War. The study aims to identify propagandistic content in the press that served Soviet ideology during the war. The Soviet press had the greatest impact on the formation of public opinion, and the Soviet government actively used this resource to increase combat capability. While at the beginning of the war women were expected to replace men, by the end of the war propaganda moved to reassert conservative gender politics, and women returned to their traditional roles.Keywords: Great Patriotic War, Soviet Georgia, women in war, women's history, Soviet press
Procedia PDF Downloads 98
5239 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection
Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay
Abstract:
With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on static characteristics, such as what the user knows (knowledge-based methods) or what he or she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his or her unique behavior while interacting with electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged.
By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey
Procedia PDF Downloads 119
5238 Phenomenological Ductile Fracture Criteria Applied to the Cutting Process
Authors: František Šebek, Petr Kubík, Jindřich Petruška, Jiří Hůlka
Abstract:
The present study is aimed at the cutting process of circular cross-section rods, where fracture is used to separate one rod into two pieces. By incorporating a phenomenological ductile fracture model into the explicit formulation of the finite element method, the process can be analyzed without the need to perform many real experiments, which could be expensive in the case of repetitive testing under different conditions. In the present paper, AISI 1045 steel was examined: tensile tests of smooth and notched cylindrical bars were conducted, together with biaxial testing of notched tube specimens, to calibrate the material constants of selected phenomenological ductile fracture models. These were implemented into Abaqus/Explicit through the user subroutine VUMAT and used for cutting process simulation. As the calibration process is based on variables which cannot be obtained directly from experiments, numerical simulations of the fracture tests are an inevitable part of the calibration. Finally, experiments on the cutting process were carried out, and the predictive capability of the selected fracture models is discussed. Concluding remarks summarize the experience gained with both the calibration and the application of particular ductile fracture criteria.Keywords: ductile fracture, phenomenological criteria, cutting process, explicit formulation, AISI 1045 steel
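Criteria of this phenomenological family typically accumulate damage as plastic strain increments weighted by a triaxiality-dependent fracture strain; the sketch below uses a Johnson-Cook-style form with invented constants, not the calibrated AISI 1045 values:

```python
import math

def fracture_strain(triaxiality, d1=0.05, d2=3.44, d3=2.12):
    """Johnson-Cook-style fracture strain as a function of stress triaxiality.
    The constants d1..d3 are illustrative placeholders."""
    return d1 + d2 * math.exp(-d3 * triaxiality)

def damage(strain_increments, triaxialities):
    """Accumulated damage D = sum(d_eps_p / eps_f); fracture is predicted at D >= 1."""
    return sum(de / fracture_strain(t)
               for de, t in zip(strain_increments, triaxialities))

# Two increments of plastic strain at a constant triaxiality of 1.0
D = damage([0.1, 0.1], [1.0, 1.0])
```

In an explicit FE implementation such as a VUMAT, this accumulation runs per integration point per time step, and the element is deleted or its stress zeroed once D reaches 1.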
Procedia PDF Downloads 453