Search results for: prediction model accuracy

18497 Shear Stress and Effective Structural Stress Fields of an Atherosclerotic Coronary Artery

Authors: Alireza Gholipour, Mergen H. Ghayesh, Anthony Zander, Stephen J. Nicholls, Peter J. Psaltis

Abstract:

A three-dimensional numerical model of an atherosclerotic coronary artery is developed for the determination of high-risk situations and hence heart attack prediction. Employing the finite element method (FEM) in ANSYS, a fluid-structure interaction (FSI) model of the artery is constructed to determine the shear stress distribution as well as the von Mises stress field. A flexible model of an atherosclerotic coronary artery conveying pulsatile blood is developed incorporating three-dimensionality, the artery's tapered shape via a linear function for the artery wall distribution, motion of the artery, blood viscosity via non-Newtonian flow theory, blood pulsation via a one-period heartbeat, hyperelasticity via the Mooney-Rivlin model, viscoelasticity via the Prony series shear relaxation scheme, and micro-calcification inside the plaque. The material properties used to relate the stress field to the strain field have been extracted from clinical data from previous in-vitro studies. The determined stress fields have the potential to be used as a predictive tool for plaque rupture and dissection. The results show that stress concentration due to micro-calcification increases the von Mises stress significantly, so the chance of developing a crack inside the plaque increases. Moreover, blood pulsation varies the stress distribution substantially in some cases.
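
As an illustration of the stress post-processing step described above, the sketch below computes the von Mises equivalent stress from a single Cauchy stress tensor, the kind of per-element quantity an FSI/FEM solver such as ANSYS returns; the numerical stress state is an arbitrary placeholder, not data from the study.

```python
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

# Illustrative stress state (kPa) at one wall element near a calcified spot
sigma = np.array([[120.0, 15.0,  5.0],
                  [ 15.0, 80.0, 10.0],
                  [  5.0, 10.0, 60.0]])
print(f"von Mises stress: {von_mises(sigma):.1f} kPa")
```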

Keywords: atherosclerosis, fluid-structure interaction, coronary arteries, pulsatile flow

Procedia PDF Downloads 177
18496 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction

Authors: Khaled Barkaoui

Abstract:

Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrase complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency and frequency of errors, but not accuracy ratings, syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrase complexity, structural variety), or lexical complexity (lexical density, variation, and sophistication), exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.

Keywords: second language writing, fluency, accuracy, complexity, longitudinal

Procedia PDF Downloads 156
18495 Kinetics of Hydrogen Sulfide Removal from Biogas Using Biofilm on Packed Bed of Salak Fruit Seeds

Authors: Retno A. S. Lestari, Wahyudi B. Sediawan, Siti Syamsiah, Sarto

Abstract:

Sulfur-oxidizing bacteria were isolated and then grown on salak fruit seeds, forming a biofilm on the surface. Their performance in sulfide removal was observed experimentally. To do so, the salak fruit seeds covered with biofilm were used as packing material in a cylinder. Biogas obtained from biological treatment, containing 27.95 ppm of hydrogen sulfide, was passed through the packed bed. The hydrogen sulfide in the biogas was absorbed into the biofilm and then degraded by the microbes in the biofilm. The hydrogen sulfide concentrations at various axial positions and times were analyzed. A set of simple kinetic models for the rate of sulfide removal and bacterial growth is proposed. Since the biofilm is very thin, the sulfide concentration in the biofilm at a given axial position is assumed to be uniform. The resulting simultaneous ordinary differential equations were solved numerically using the Runge-Kutta method, and the parameter values were obtained by curve fitting. The accuracy of the proposed model was tested by comparing the calculated results with the experimental data. The model describes the removal of sulfide by the biofilter in the packed bed well. The biofilter removed 89.83% of the hydrogen sulfide in the feed after 2.5 h of operation at a biogas flow rate of 30 L/h.
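
A minimal sketch of the kind of kinetics workflow described above: a pair of assumed rate equations (first-order sulfide degradation coupled to logistic biomass growth) integrated with a Runge-Kutta scheme via SciPy. The rate expressions and parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kinetics(t, y, k, mu, X_max, Y):
    """y = [S, X]: sulfide concentration (ppm) and biomass (arbitrary units)."""
    S, X = y
    r_S = -k * S * X                                  # sulfide degradation by the biofilm
    r_X = mu * X * (1 - X / X_max) + Y * (-r_S)       # growth, partly fed by the sulfide
    return [r_S, r_X]

k, mu, X_max, Y = 0.05, 0.1, 5.0, 0.01                # illustrative parameters
sol = solve_ivp(kinetics, (0.0, 2.5), [27.95, 1.0], args=(k, mu, X_max, Y),
                method="RK45", dense_output=True)
S_end = sol.y[0, -1]
print(f"sulfide removal after 2.5 h: {100 * (1 - S_end / 27.95):.1f} %")
```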

Keywords: sulfur-oxidizing bacteria, salak fruit seeds, biofilm, packing material, biogas

Procedia PDF Downloads 225
18494 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has been presented, which considers touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows encoding the spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for manipulation actions, which involve at most two hands. Here, we would like to extend this approach to a whole-body action descriptor and build a joint activity representation structure. For this purpose, we perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the unity of the predefined manipulation actions. By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a substantial impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform to integrate with body-limb descriptors and increases system performance considerably, especially in complex real-time applications such as human-robot interaction prediction.

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 129
18493 Concussion Prediction for Speed Skater Impacting on Crash Mats by Computer Simulation Modeling

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Concussion in speed skaters often occurs when skaters fall on the ice and impact the crash mats during practices and competition races. Gaining insight into these impact interactions is of essential interest, as it is directly related to skaters' potential health risks and injuries. Precise concussion measurements are very difficult, making computer simulation the only reliable way to analyze such accidents. This research aims to create a multi-body model of the crash mat and skater using Solidworks, develop a computer simulation model of the skater-mat impact using ANSYS software, and predict the skater's concussion severity by evaluating the head injury criterion (HIC) from the resulting accelerations. The developed method and results help in understanding the relationship between impact parameters and concussion risk for speed skaters and inform the design of crash mats and skating rink layouts by considering athletes' health risks.
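
The head injury criterion mentioned above is computed from the resultant head acceleration a(t) (in g) as HIC = max over windows [t1, t2] of (t2 - t1) * [(1/(t2 - t1)) * integral of a dt]^2.5. A minimal sketch of that evaluation on a synthetic acceleration pulse, not simulation output from the study, is given below.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head Injury Criterion from a resultant acceleration trace a(t) in g (HIC15 by default)."""
    best = 0.0
    # cumulative trapezoidal integral of a(t), so cum[j] - cum[i] = integral from t[i] to t[j]
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Synthetic half-sine impact pulse: 80 g peak over 10 ms (illustrative only)
t = np.linspace(0.0, 0.02, 501)
a = 80.0 * np.sin(np.pi * t / 0.01) * (t <= 0.01)
print(f"HIC15 = {hic(t, a):.0f}")
```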

Keywords: computer simulation modeling, concussion, impact, speed skater

Procedia PDF Downloads 145
18492 Image Reconstruction Method Based on L0 Norm

Authors: Jianhong Xiang, Hao Xiang, Linyu Wang

Abstract:

Compressed sensing (CS) has a wide range of applications in sparse signal reconstruction. Aiming at the problems of low recovery accuracy and long reconstruction time of existing reconstruction algorithms in medical imaging, this paper proposes a corrected smoothed L0 algorithm based on compressed sensing (CSL0). First, an approximate hyperbolic tangent function (AHTF) that more closely resembles the L0 norm is proposed to approximate it. Secondly, in view of the "sawtooth phenomenon" of the steepest descent method and the sensitivity of the modified Newton method to the choice of initial value, the steepest descent method and the modified Newton method are combined and jointly optimized to improve the reconstruction accuracy. Finally, the CSL0 algorithm is simulated on various images. The results show that the proposed algorithm improves the reconstruction accuracy of the test images by 0-0.98 dB.
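
A minimal sketch of a smoothed-L0 style recovery loop in the spirit of the approach above: a smooth surrogate of the L0 norm (here a tanh-based approximation, an assumption since the paper's AHTF is not reproduced) is reduced by gradient steps, each followed by a projection back onto the measurement constraint Ax = y, while the smoothing parameter decreases. It is not the authors' CSL0 algorithm, only the generic smoothed-L0 scheme it builds on.

```python
import numpy as np

def smoothed_l0_sketch(A, y, sigma_min=1e-3, sigma_decay=0.7, inner_iters=5, mu=1.0):
    """Sparse recovery by minimizing a smooth surrogate of ||x||_0 subject to A x = y,
    with a gradually decreasing smoothing parameter sigma."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                    # minimum-L2-norm starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient of sum_i tanh(x_i^2 / (2 sigma^2)), the assumed tanh surrogate
            u = np.minimum(x ** 2 / (2.0 * sigma ** 2), 30.0)   # clip to avoid cosh overflow
            grad = (x / sigma ** 2) / np.cosh(u) ** 2
            x = x - mu * sigma ** 2 * grad            # push toward sparsity
            x = x - A_pinv @ (A @ x - y)              # project back onto A x = y
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 100, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = smoothed_l0_sketch(A, A @ x_true)
print("max abs recovery error:", np.max(np.abs(x_hat - x_true)))
```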

Keywords: smoothed L0, compressed sensing, image processing, sparse reconstruction

Procedia PDF Downloads 121
18491 Phytoadaptation in Desert Soil Prediction Using Fuzzy Logic Modeling

Authors: S. Bouharati, F. Allag, M. Belmahdi, M. Bounechada

Abstract:

In the context of forecasting the ecological effects of desertification, the purpose of this study is to develop a predictive model of the growth and adaptation of species under arid-environment and bioclimatic conditions. The impact of climate change and the desertification phenomenon is the result of combined effects in the magnitude and frequency of these phenomena. Because the data involved in the phytopathogenic process and bacterial growth in arid soil occur in an uncertain environment due to their complexity, a suitable methodology for the analysis of these variables is necessary. The basic principles of fuzzy logic are perfectly suited to this process. As input variables, we consider the physical parameters, soil type, nature of the bacteria, and the plant species concerned. The output variable is the adaptability of the species, expressed by the growth rate or extinction. In conclusion, we present possible strategies for adaptation, with or without shifting plantation areas, and the nature of adequate vegetation.
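
A minimal Mamdani-style sketch of the kind of fuzzy inference described above, with triangular membership functions and two illustrative rules mapping soil moisture and temperature to an adaptability score. The variables, universes, and rules are assumptions for illustration only, not the authors' rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def adaptability(moisture, temperature):
    """Tiny Mamdani-style inference: two inputs -> adaptability score in [0, 1]."""
    # Fuzzification (illustrative universes: moisture in %, temperature in deg C)
    dry, wet = tri(moisture, 0, 10, 40), tri(moisture, 30, 70, 100)
    mild, hot = tri(temperature, 5, 20, 35), tri(temperature, 25, 40, 55)
    # Rule 1: wet AND mild -> high adaptability; Rule 2: dry AND hot -> low adaptability
    w_high = min(wet, mild)
    w_low = min(dry, hot)
    # Weighted-average defuzzification over singleton outputs {low: 0.1, high: 0.9}
    if w_high + w_low == 0:
        return 0.5
    return (0.9 * w_high + 0.1 * w_low) / (w_high + w_low)

print(f"adaptability(moisture=15%, T=42C) = {adaptability(15, 42):.2f}")
print(f"adaptability(moisture=60%, T=22C) = {adaptability(60, 22):.2f}")
```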

Keywords: climate changes, dry soil, phytopathogenicity, predictive model, fuzzy logic

Procedia PDF Downloads 327
18490 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models use only drilling parameters to estimate pore pressure, while others predict the formation pressure based on log data. All of these models require different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then using only one or at most two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different AI methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for different trends, unlike other models, which require a specific trend (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage of fast processing speed and high performance (a correlation coefficient of 0.997 and an average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (a correlation coefficient of 0.998 and an average absolute percentage error of 0.17%).
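
The two evaluation measures quoted above, the correlation coefficient and the average absolute percentage error (AAPE), can be computed as in the sketch below; the pressure values are placeholders, not field data from the study.

```python
import numpy as np

def correlation_coefficient(actual, predicted):
    """Pearson correlation coefficient between measured and predicted pore pressure."""
    return np.corrcoef(actual, predicted)[0, 1]

def aape(actual, predicted):
    """Average absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Placeholder values (psi), not data from the study
actual = np.array([4850.0, 5020.0, 5110.0, 5300.0])
predicted = np.array([4862.0, 5010.0, 5125.0, 5290.0])
print(f"R = {correlation_coefficient(actual, predicted):.3f}, "
      f"AAPE = {aape(actual, predicted):.2f} %")
```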

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 152
18489 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets within the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload on the aggregation node; second, and most importantly, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
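
A simplified, sequential sketch of the condensation idea behind the FCNN rule: keep growing a prototype subset until 1-NN on that subset classifies every training point correctly. The full FCNN rule and its MapReduce parallelization are more involved; this only illustrates the reduction step on synthetic data.

```python
import numpy as np

def condensed_subset(X, y, seed=0):
    """CNN-style condensation sketch: return indices S such that 1-NN on X[S]
    classifies every training point correctly."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    S = [order[0]]                                    # start with a single prototype
    changed = True
    while changed:
        changed = False
        for i in order:
            d = np.linalg.norm(X[S] - X[i], axis=1)   # 1-NN against the current subset
            if y[S][np.argmin(d)] != y[i]:
                S.append(i)                           # absorb the misclassified point
                changed = True
    return np.array(S)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
S = condensed_subset(X, y)
print(f"reduction rate: {100 * (1 - len(S) / len(X)):.1f} %")
```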

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 257
18488 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic

Authors: Aneta Oblouková, Eva Vítková

Abstract:

The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research focuses on the analysis of the development of the average water and sewerage charge rate prices in the Czech Republic in the years 1994-2021 and on the validation of the chosen methodology for predicting their development. The research is based on data collection; the data were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating the predicted average water and sewerage charge rate prices. The real values of the average prices in the years 1994-2018 were obtained from the Czech Statistical Office and converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for the years 2019-2021. The average prices for the years 2019-2021 were also predicted using the chosen method, the linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were then compared. The research result is a validation of the chosen mathematical technique as a suitable technique for this purpose.
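
A minimal sketch of the validation procedure described above: fit a linear trend, price = b0 + b1 * year, on 1994-2018, predict 2019-2021, and compare against observed values. The series below is a synthetic placeholder, not the Czech Statistical Office data.

```python
import numpy as np

# Placeholder series: average water + sewerage price (CZK/m3), NOT the CZSO data
years_fit = np.arange(1994, 2019)
prices_fit = (15.0 + 2.9 * (years_fit - 1994)
              + np.random.default_rng(0).normal(0, 1.5, len(years_fit)))

# Linear trend estimate: price = b0 + b1 * year
b1, b0 = np.polyfit(years_fit, prices_fit, deg=1)

years_val = np.array([2019, 2020, 2021])
predicted = b0 + b1 * years_val
observed = np.array([88.0, 91.5, 94.2])              # placeholder "real" values

for yr, p, o in zip(years_val, predicted, observed):
    print(f"{yr}: predicted {p:.1f}, observed {o:.1f}, error {100 * (p - o) / o:+.1f} %")
```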

Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate

Procedia PDF Downloads 121
18487 The Influence of the Concentration and Temperature on the Rheological Behavior of Carbonyl-Methylcellulose

Authors: Mohamed Rabhi, Kouider Halim Benrahou

Abstract:

The rheological properties of carbonyl-methylcellulose (CMC) at different concentrations (25000, 50000, 60000, 80000 and 100000 ppm) and different temperatures were studied. We found that the rheological behavior of all CMC solutions is pseudo-plastic and follows the Ostwald-de Waele model. The objective of this work is the modeling of CMC flow by the Cross model, which gives the variation of the viscosity with the shear rate. This model allowed us to fit the rheological characteristics of the CMC solutions more clearly. A comparison between the Cross model and the Ostwald model was made. The Cross model fitting parameters were determined by numerical simulation so as to bring the model curves close to the experimental curve. Our study has shown that the Cross model describes the flow of CMC well at low concentrations.
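
For reference, the two viscosity models compared above are the Ostwald-de Waele law, eta(gdot) = K * gdot^(n-1), and the Cross model, eta(gdot) = eta_inf + (eta0 - eta_inf) / (1 + (lambda * gdot)^m). The sketch below fits both to a synthetic shear-thinning flow curve standing in for the CMC measurements; the parameter values are illustrative, not the paper's fitted constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def ostwald(gdot, K, n):
    """Ostwald-de Waele (power-law) apparent viscosity."""
    return K * gdot ** (n - 1.0)

def cross(gdot, eta0, eta_inf, lam, m):
    """Cross model apparent viscosity."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gdot) ** m)

# Synthetic shear-thinning flow curve (Pa.s vs 1/s) standing in for CMC data
gdot = np.logspace(-1, 3, 30)
eta_meas = cross(gdot, 2.0, 0.01, 0.5, 0.8) * (1 + 0.03 * np.random.default_rng(0).normal(size=gdot.size))

p_ost, _ = curve_fit(ostwald, gdot, eta_meas, p0=[1.0, 0.5], bounds=(0, np.inf))
p_cross, _ = curve_fit(cross, gdot, eta_meas, p0=[1.0, 0.01, 1.0, 1.0], bounds=(0, np.inf))
print("Ostwald K, n:", p_ost)
print("Cross eta0, eta_inf, lambda, m:", p_cross)
```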

Keywords: CMC, rheological modeling, Ostwald model, cross model, viscosity

Procedia PDF Downloads 409
18486 Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility

Authors: Etienne Provencal, David L. St-Pierre

Abstract:

A facility operating only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada, under the name Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced, so that a comprehensive model can help improve the overall performance of the business. The EGM individual performance model contains eight variables under study (Game Title, Progressive jackpot, Bonus Round, Minimum Coin-in, Maximum Coin-in, Denomination, Slant Top and Position). Using data from Quebec City's SdjQ, a linear regression analysis explains 90.80% of the EGM performance. Moreover, the results show a behavior slightly different from that of a casino. The addition of GameTitle as a factor to predict EGM performance is one of the main contributions of this paper; the choice of the game (GameTitle) is very important. Games in better positions do not perform significantly better than games located elsewhere on the gaming floor. Progressive jackpots have a positive and significant effect on the individual performance of EGMs. The impact of BonusRound on the dependent variable is significant but negative. The effect of Denomination is significant but weakly negative. As expected, the language of an EGM does not impact its individual performance. This paper highlights some possible improvements by indicating which features are performing well, and recommendations are given to increase EGM performance.

Keywords: EGM, linear regression, model prediction, slot operations

Procedia PDF Downloads 258
18485 Mathematical Based Forecasting of Heart Attack

Authors: Razieh Khalafi

Abstract:

Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow to part of the heart stops, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or of one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analyzing ECG signals using the correlation dimension. To test the model, a set of ECG signals from patients before and after heart attacks was used, and the ability of the model to forecast the behavior of these signals was checked. The results show that this methodology can forecast the ECG and, accordingly, heart attack with high accuracy.
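
A minimal Grassberger-Procaccia style sketch of the correlation dimension estimate used above: delay-embed the signal, compute the correlation sum C(r), and take the slope of log C(r) versus log r. The input series is a synthetic stand-in, not a clinical ECG recording, and the embedding parameters are illustrative assumptions.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=4, tau=5):
    """Grassberger-Procaccia estimate: slope of log C(r) vs log r."""
    X = delay_embed(x, dim, tau)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d[np.triu_indices(len(X), k=1)]               # pairwise distances, i < j
    radii = np.logspace(np.log10(np.percentile(d, 5)),
                        np.log10(np.percentile(d, 50)), 10)
    C = np.array([np.mean(d < r) for r in radii])     # correlation sum
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Synthetic quasi-periodic series as a stand-in for a real ECG recording
t = np.linspace(0, 20, 1000)
x = np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 0.31 * t)
print(f"estimated correlation dimension: {correlation_dimension(x):.2f}")
```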

Keywords: heart attack, ECG, random walk, correlation dimension, forecasting

Procedia PDF Downloads 545
18484 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo any retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market trends which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 252
18483 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to N-gram feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the bag-of-words approach, and the score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a support vector machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
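
A minimal word-level Smith-Waterman sketch showing the local alignment score that underlies the feature extraction described above; the scoring scheme and the two text snippets are illustrative assumptions, not the clinical corpus used in the study.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between token sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
    return H.max()

# Word-level alignment of two clinical-style snippets (illustrative only)
doc1 = "patient presents with morbid obesity and type 2 diabetes".split()
doc2 = "history of obesity and type 2 diabetes mellitus".split()
print("local alignment score:", smith_waterman(doc1, doc2))
```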

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 300
18482 3D Model of Rain-Wind Induced Vibration of Inclined Cable

Authors: Viet-Hung Truong, Seung-Eock Kim

Abstract:

Rain-wind induced vibration of inclined cables is a special aerodynamic phenomenon because it is easily influenced by many factors, especially the distribution of the rivulet and the wind velocity. This paper proposes a new 3D model of an inclined cable based on a single degree-of-freedom model. The aerodynamic forces are first established and verified against existing results from a 2D model. The 3D model of the inclined cable is then developed and applied to assess the effects of the wind velocity distribution and the continuity of rivulets on the cable. Finally, an inclined cable model with small sag is investigated.

Keywords: 3D model, rain - wind induced vibration, rivulet, analytical model

Procedia PDF Downloads 494
18481 Numerical Crashworthiness Investigations of a Full-Scale Composite Fuselage Section

Authors: Redouane Lombarkia

Abstract:

This work applies a new material model developed and validated for plain-weave fabric CFRP composites, which are commonly used in stanchions in the sub-cargo section of aircraft. It deals with the development of a numerical model of the fuselage section of a commercial aircraft based on the explicit finite element method (FEM) within the Abaqus/Explicit commercial code. The aim of this work is the evaluation of the energy absorption capabilities of a full-scale composite fuselage section, including sub-cargo stanchions. Drop tests were carried out from a free-fall height of about 5 m and an impact velocity of about 6 m/s. To assess the prediction efficiency of the proposed numerical modeling procedure, a comparison with experimental results from the literature was performed. We demonstrate the ability of the proposed methodology to capture crash damage mechanisms well compared to the experimental results.

Keywords: crashworthiness, fuselage section, finite elements method (FEM), stanchions, specific energy absorption SEA

Procedia PDF Downloads 100
18480 The Impact of the Cross Race Effect on Eyewitness Identification

Authors: Leah Wilck

Abstract:

Eyewitness identification is arguably one of the most utilized practices within our legal system; however, exoneration cases indicate that this practice may lead to accuracy and conviction errors. The purpose of this study was to examine the effect of the cross-race effect, the phenomenon in which people are able to more easily and accurately identify faces from within their own racial category, on the accuracy of eyewitness identification. Participants watched three separate videos of a perpetrator trying to steal a bicycle. In each video, the perpetrator was of a different race and gender: a Black male, a White male, and a White female. After watching each video, participants were asked to recall everything they could about the perpetrator they had witnessed. The initial results of the study did not show the expected effect of the cross-race effect on eyewitness identification accuracy. These surprising results are discussed in terms of cross-race bias and recognition theory as well as applied implications.

Keywords: cross race effect, eyewitness identification, own-race bias, racial profiling

Procedia PDF Downloads 166
18479 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking

Authors: Handie Pramana Putra, Ani Dijah Rahajoe

Abstract:

The proliferation of smart devices and advancements in mobile communication technologies, together with the widespread influence of e-commerce, have permeated various facets of life. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identifying abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, from user login to the e-commerce platform through to the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in the pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.

Keywords: database, data analysis, DPNE, extended data flow, e-commerce

Procedia PDF Downloads 60
18478 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Real-time implementation capability is then discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying, depending on whether the geometry of the orbit is circular or elliptic. The constraints can be given as linear inequalities on inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, while it could be further advanced based on the challenges mentioned throughout the paper and the gaps that remain unaddressed.
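
A minimal sketch of the three design elements named above (prediction model, constraints, cost) for a double-integrator relative-motion model, posed as a convex QP with CVXPY. The dynamics, weights, horizon, and input bound are illustrative assumptions, not a flight-ready formulation from the paper.

```python
import numpy as np
import cvxpy as cp

# Prediction model: discrete double integrator (position, velocity), dt = 1 s
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])

N = 20                                   # prediction horizon
x0 = np.array([10.0, 0.0])               # initial relative position / velocity (assumed)
Q, R = np.diag([1.0, 0.1]), 0.01 * np.eye(1)
u_max = 0.5                              # actuator (thrust) limit, assumed

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)   # objective cost function
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],     # prediction model
                    cp.abs(u[:, k]) <= u_max]                     # input constraint
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", u.value[:, 0])
```

In a receding-horizon implementation, only this first control move is applied before the problem is re-solved at the next step with updated measurements.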

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 113
18477 Comparative Study of Titanium and Polyetheretherketone Cranial Implant Using Finite Element Model

Authors: Khaja Moiduddin, Sherif Mohammed Elseufy, Hisham Alkhalefah

Abstract:

Recent advances in three-dimensional (3D) printing, medical imaging, and implant design may alter how craniomaxillofacial surgeons construct individualized treatments using patient data. By utilizing medical image data, medical professionals can obtain detailed information about a patient's injuries, enabling them to conduct a thorough preoperative assessment while ensuring the implant's accuracy. However, selecting the right implant material requires careful consideration of various mechanical properties. This study aims to compare two commonly used implant materials for cranial reconstruction: titanium (Ti6Al4V) and polyetheretherketone (PEEK). Biomechanical analysis was performed to study the implant behavior, keeping the implant design and fixation constant in both cases. A finite element model was created and analyzed under loading conditions. The finite element analysis shows that although Ti6Al4V is stronger than PEEK, the mechanical strength of PEEK is adequate to bear the loads of the adjacent bone tissue.

Keywords: cranial reconstruction, titanium implants, PEEK, finite element model

Procedia PDF Downloads 71
18476 Numerical Approach of RC Structural Members Exposed to Fire and After-Cooling Analysis

Authors: Ju-young Hwang, Hyo-Gyoung Kwak, Hong Jae Yim

Abstract:

This paper introduces a numerical analysis method for reinforced-concrete (RC) structures exposed to fire and compares the results with experimental results. The proposed analysis method for RC structures under high temperature consists of two steps. The first step is to determine the temperature distribution across the section through a heat transfer analysis using the time-temperature curve. After determination of the temperature distribution, a nonlinear analysis follows. By considering material and geometric non-linearity together with the temperature distribution, the nonlinear analysis predicts the behavior of the RC structure under fire as a function of exposure time. The proposed method is validated by comparison with experimental results. Finally, a prediction model describing the state of after-cooling concrete is also introduced, based on the results of an additional experiment. The product of this study is expected to be embedded in a smart structure monitoring system against fire in u-City.

Keywords: RC structures, heat transfer analysis, nonlinear analysis, after-cooling concrete model

Procedia PDF Downloads 372
18475 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

Authors: Benjamin Leiby, Darryl Ahner

Abstract:

This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values well over a common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The methodology evaluation includes observing computation time, model fit, and the comparison of known values to replaced values created through imputation. Overall, the country conflict dataset illustrates promise with modeling first-order interactions while presenting a need for further refinement that mimics predictive mean matching.
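
A minimal sketch of stochastic regression imputation in the spirit of the approach above: regress the incomplete variable on correlated complete ones, then add a randomly drawn model residual to each prediction so the imputations carry uncertainty. The data below are synthetic, not the country conflict dataset, and the single-equation setup omits the tailorable tolerances and interaction terms discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: x1, x2 complete; y has roughly 30% missingness
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=1.0, size=n)
y_obs = y.copy()
y_obs[rng.random(n) < 0.30] = np.nan

mask = ~np.isnan(y_obs)
X = np.column_stack([np.ones(n), x1, x2])

# Fit OLS on the observed rows, keep residuals to represent the uncertainty
beta, *_ = np.linalg.lstsq(X[mask], y_obs[mask], rcond=None)
residuals = y_obs[mask] - X[mask] @ beta

# Impute: regression prediction plus a randomly drawn observed residual
y_imp = y_obs.copy()
y_imp[~mask] = X[~mask] @ beta + rng.choice(residuals, size=(~mask).sum())
print(f"imputed {(~mask).sum()} values, RMSE vs truth: "
      f"{np.sqrt(np.mean((y_imp[~mask] - y[~mask]) ** 2)):.2f}")
```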

Keywords: correlation, country conflict, imputation, stochastic regression

Procedia PDF Downloads 124
18474 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.
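
The Fenton-Wilkinson step referred to above approximates a sum of independent log-normal leg times by a single log-normal whose first two moments match those of the sum. A minimal sketch, with assumed leg parameters rather than Jukola data, is given below together with a Monte Carlo check.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Fenton-Wilkinson approximation: parameters (mu_S, sigma_S) of a log-normal
    matching the first two moments of a sum of independent log-normals."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    mean = np.sum(np.exp(mus + sigmas ** 2 / 2))
    var = np.sum((np.exp(sigmas ** 2) - 1) * np.exp(2 * mus + sigmas ** 2))
    sigma_s2 = np.log(1 + var / mean ** 2)
    mu_s = np.log(mean) - sigma_s2 / 2
    return mu_s, np.sqrt(sigma_s2)

# Illustrative 7-leg relay: log-parameters of each leg time (minutes), assumed
leg_mu = np.log([35, 42, 38, 70, 48, 41, 55])
leg_sigma = np.full(7, 0.12)
mu_s, sigma_s = fenton_wilkinson(leg_mu, leg_sigma)

# Check against Monte Carlo simulation of the final changeover (cumulative) time
rng = np.random.default_rng(0)
samples = np.sum(rng.lognormal(leg_mu, leg_sigma, size=(100_000, 7)), axis=1)
print(f"FW mean: {np.exp(mu_s + sigma_s ** 2 / 2):.1f} min, "
      f"MC mean: {samples.mean():.1f} min")
```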

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 128
18473 A Study of the Performance Parameter for Recommendation Algorithm Evaluation

Authors: C. Rana, S. K. Jain

Abstract:

The enormous amount of Web data has challenged its efficient usage in the past few years. As such, a range of techniques is applied to tackle this problem; prominent among them are personalization and recommender systems. In fact, these are the tools that assist users in finding relevant information on the web, and most e-commerce websites apply such tools in one way or another. In the past decade, a large number of recommendation algorithms have been proposed to tackle such problems. However, there has not been much research on the evaluation criteria for these algorithms; the traditional accuracy and classification metrics are still used for evaluation, which provides only a static view. This paper studies how the evolution of user preferences over a period of time can be mapped in a recommender system using a new evaluation methodology that explicitly uses the time dimension. We also present different types of experimental setups that are generally used for recommender system evaluation. Furthermore, an overview of the major accuracy metrics, and of metrics that go beyond the scope of accuracy as researched in the past few years, is also discussed in detail.

Keywords: collaborative filtering, data mining, evolutionary, clustering, algorithm, recommender systems

Procedia PDF Downloads 419
18472 The Role of Artificial Intelligence in Concrete Constructions

Authors: Ardalan Tofighi Soleimandarabi

Abstract:

Artificial intelligence has revolutionized the concrete construction industry and improved processes by increasing efficiency, accuracy, and sustainability. This article examines the applications of artificial intelligence in predicting the compressive strength of concrete, optimizing mixing plans, and improving structural health monitoring systems. Artificial intelligence-based models, such as artificial neural networks (ANN) and combined machine learning techniques, have shown better performance than traditional methods in predicting concrete properties. In addition, artificial intelligence systems have made it possible to improve quality control and real-time monitoring of structures, which helps in preventive maintenance and increases the life of infrastructure. Also, the use of artificial intelligence plays an effective role in sustainable construction by optimizing material consumption and reducing waste. Although the implementation of artificial intelligence is associated with challenges such as high initial costs and the need for specialized training, it will create a smarter, more sustainable, and more affordable future for concrete structures.

Keywords: artificial intelligence, concrete construction, compressive strength prediction, structural health monitoring, stability

Procedia PDF Downloads 26
18471 Finite Element Analysis for Earing Prediction Incorporating the BBC2003 Material Model with Fully Implicit Integration Method: Derivation and Numerical Algorithm

Authors: Sajjad Izadpanah, Seyed Hadi Ghaderi, Morteza Sayah Irani, Mahdi Gerdooei

Abstract:

In this research work, a sophisticated yield criterion known as BBC2003, capable of describing planar anisotropic behaviors of aluminum alloy sheets, was integrated into the commercial finite element code ABAQUS/Standard via a user subroutine. The complete formulation of the implementation process using a fully implicit integration scheme, i.e., the classic backward Euler method, is presented, and relevant aspects of the yield criterion are introduced. In order to solve nonlinear differential and algebraic equations, the line-search algorithm was adopted in the user-defined material subroutine (UMAT) to expand the convergence domain of the iterative Newton-Raphson method. The developed subroutine was used to simulate a challenging computational problem with complex stress states, i.e., deep drawing of the anisotropic aluminum alloy AA3105. The accuracy and stability of the developed subroutine were confirmed by comparing the numerically predicted earing and thickness variation profiles with the experimental results, which showed excellent agreement. The integration of the BBC2003 yield criterion into ABAQUS/Standard represents a significant contribution to the field of computational mechanics and provides a useful tool for analyzing the mechanical behavior of anisotropic materials subjected to complex loading conditions.

Keywords: BBC2003 yield function, plastic anisotropy, fully implicit integration scheme, line search algorithm, explicit and implicit integration schemes

Procedia PDF Downloads 78
18470 Wind Power Forecasting Using Echo State Networks Optimized by Big Bang-Big Crunch Algorithm

Authors: Amir Hossein Hejazi, Nima Amjady

Abstract:

In recent years, due to environmental concerns, traditional energy sources have increasingly been replaced by renewable ones. Wind energy, as the fastest-growing renewable energy source, accounts for a considerable share of electricity markets. With this fast growth of wind energy worldwide, owners and operators of wind farms, transmission system operators, and energy traders need reliable and secure forecasts of wind energy production. In this paper, a new forecasting strategy is proposed for short-term wind power prediction based on Echo State Networks (ESN). The forecast engine utilizes a state-of-the-art training process, including a dynamical reservoir with a high capability to learn the complex dynamics of wind power or wind vector signals. The study is made more interesting by incorporating the prediction of wind direction into the forecast strategy. The Big Bang-Big Crunch (BB-BC) evolutionary optimization algorithm is adopted to adjust the free parameters of the ESN-based forecaster. The proposed method is tested on real-world hourly data to show the efficiency of the forecasting engine for prediction of both the wind vector and the wind power output of aggregated wind power production.
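
A minimal echo state network sketch illustrating the reservoir-plus-ridge-readout structure described above, applied to a toy series; the BB-BC tuning of the free parameters (spectral radius, leak rate, regularization) is not reproduced here, and all values are illustrative assumptions.

```python
import numpy as np

class EchoStateNetwork:
    """Minimal ESN: fixed random reservoir, ridge-regression readout."""
    def __init__(self, n_in, n_res=200, rho=0.9, leak=0.3, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in + 1))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))  # set spectral radius
        self.leak, self.ridge = leak, ridge

    def _states(self, U):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in U:
            pre = self.W_in @ np.concatenate(([1.0], u)) + self.W @ x
            x = (1 - self.leak) * x + self.leak * np.tanh(pre)     # leaky reservoir update
            states.append(x)
        return np.array(states)

    def fit(self, U, y):
        X = self._states(U)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ y)                   # ridge readout
        return self

    def predict(self, U):
        return self._states(U) @ self.W_out

# Toy wind-power-like series: predict the next value from the two previous ones
t = np.arange(3000)
p = np.sin(0.07 * t) * np.sin(0.013 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
U, y = np.column_stack([p[:-2], p[1:-1]]), p[2:]
esn = EchoStateNetwork(n_in=2).fit(U[:2500], y[:2500])
rmse = np.sqrt(np.mean((esn.predict(U[2500:]) - y[2500:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```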

Keywords: wind power forecasting, echo state network, big bang-big crunch, evolutionary optimization algorithm

Procedia PDF Downloads 577
18469 Atomic Clusters: A Unique Building Motif for Future Smart Nanomaterials

Authors: Debesh R. Roy

Abstract:

Understanding the origin and growth mechanism of nanomaterials from a fundamental unit is a big challenge for scientists. Recently, immense attention has been devoted to the prediction of exceptionally stable atomic cluster units as the building units for future smart materials. The present study is a systematic investigation of the stability and electronic properties of a series of bimetallic (semiconductor-alkaline earth) clusters, viz., BxMg3 (x = 1-5), in search of exceptionally and/or unusually stable motifs. A very popular hybrid exchange-correlation functional, B3LYP, as proposed by A. D. Becke, along with a larger basis set, viz., 6-31+G[d,p], is employed for this purpose under the density functional formalism. The magic stability among the concerned clusters is explained using the jellium model. It is evident from the present study that the magic stability of the B4Mg3 cluster arises from jellium shell closure.

Keywords: atomic clusters, density functional theory, jellium model, magic clusters, smart nanomaterials

Procedia PDF Downloads 533
18468 Robust Image Registration Based on an Adaptive Normalized Mutual Information Metric

Authors: Huda Algharib, Amal Algharib, Hanan Algharib, Ali Mohammad Alqudah

Abstract:

Image registration is an important topic for many imaging systems and computer vision applications. Standard image registration techniques, such as mutual information/normalized mutual information-based methods, have limited performance because they do not consider spatial information or the relationships between neighbouring pixels or voxels. In addition, the amount of image noise may significantly affect the registration accuracy. Therefore, this paper proposes an efficient method that explicitly considers the relationships between adjacent pixels: the gradient information of the reference and scene images is extracted first, and then the cosine similarity of the extracted gradient information is computed and used to improve the accuracy of the standard normalized mutual information measure. Our experimental results on different data types (i.e., CT, MRI and thermal images) show that the proposed method outperforms a number of image registration techniques in terms of accuracy.
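
A minimal sketch of the two ingredients described above: normalized mutual information computed from the joint intensity histogram, and a gradient cosine-similarity term. The way they are combined here (a simple weighted sum) and the test images are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy
    return (h(px) + h(py)) / h(pxy)

def gradient_cosine(a, b, eps=1e-12):
    """Mean cosine similarity between the image gradient fields of a and b."""
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    dot = gax * gbx + gay * gby
    norm = np.sqrt(gax ** 2 + gay ** 2) * np.sqrt(gbx ** 2 + gby ** 2) + eps
    return np.mean(dot / norm)

# Synthetic reference/scene pair: same structure, different intensity mapping + noise
rng = np.random.default_rng(0)
ref = np.tile(np.sin(np.linspace(0, 6, 128)), (128, 1))
scene = 0.7 * ref + 0.1 * rng.normal(size=ref.shape)
score = normalized_mutual_information(ref, scene) + 0.5 * gradient_cosine(ref, scene)
print(f"combined similarity score: {score:.3f}")
```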

Keywords: image registration, mutual information, image gradients, image transformations

Procedia PDF Downloads 252