Search results for: STS benchmark dataset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1465

895 Learning to Recommend with Negative Ratings Based on Factorization Machine

Authors: Caihong Sun, Xizi Zhang

Abstract:

Rating prediction is an important problem for recommender systems: the task is to predict the rating a user would give to an item. Most existing algorithms for this task ignore the effect of the negative ratings users give to items, yet in practice negative ratings have a significant impact on users’ purchasing decisions. In this paper, we present a rating prediction algorithm based on factorization machines that accounts for the effect of negative ratings, inspired by loss aversion theory. We develop a concave and a convex negative disgust function to evaluate the negative ratings. Experiments are conducted on the MovieLens dataset. The experimental results demonstrate the effectiveness of the proposed methods in comparison with four other state-of-the-art approaches, and show that negative ratings matter considerably for the accuracy of rating prediction.
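
The abstract does not include the model itself; as a point of reference, a second-order factorization machine prediction can be sketched in Python as below (a minimal sketch; the paper's concave/convex negative disgust functions are its own contribution and are not reproduced here).

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x  : (n,) feature vector (e.g., one-hot user and item indicators)
    w0 : global bias; w : (n,) linear weights; V : (n, k) latent factors
    """
    linear = w0 + w @ x
    # O(k*n) identity: sum_{i<j} <v_i,v_j> x_i x_j
    #                = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]
    s = V.T @ x
    s2 = (V ** 2).T @ (x ** 2)
    return linear + 0.5 * np.sum(s ** 2 - s2)

rng = np.random.default_rng(0)
n, k = 6, 3
x = np.zeros(n); x[[0, 4]] = 1.0  # hypothetical: user 0 rated item 4
print(fm_predict(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```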

Keywords: factorization machines, feature engineering, negative ratings, recommendation systems

Procedia PDF Downloads 222
894 A Survey on Genetic Algorithm for Intrusion Detection System

Authors: Prikhil Agrawal, N. Priyanka

Abstract:

With millions of new Internet users every day, it is essential to maintain highly reliable and secure data communication between organizations. Although there are various traditional security techniques such as antivirus software, password protection, data encryption, biometrics, and firewalls, network security remains a major issue for leading companies. Intrusion detection systems (IDSs) have therefore become an essential security component, as they can detect various network attacks and respond quickly to such occurrences. IDSs are used to detect unauthorized access to a computer system. This paper surveys intrusion detection techniques based on the genetic algorithm (GA) approach. Intrusion detection has become a challenging task owing to the growth of heterogeneous computer networks and their many vulnerabilities. The damage caused to organizations by malicious intrusions can be mitigated, and even deterred, by using this powerful tool.

Keywords: genetic algorithm (GA), intrusion detection system (IDS), dataset, network security

Procedia PDF Downloads 272
893 Context-Aware Recommender System Using Collaborative Filtering, Content-Based Algorithm and Fuzzy Rules

Authors: Xochilt Ramirez-Garcia, Mario Garcia-Valdez

Abstract:

Contextual recommendations are implemented in recommender systems to improve user satisfaction: the recommender makes accurate recommendations suited to a particular situation, achieving personalized recommendations. The context provides information relevant to the recommender system and is used as a filter to select items relevant to the user. This paper presents a context-aware recommender system that combines collaborative filtering and content-based techniques with fuzzy rules to recommend items within a context. The system is tested on a TripAdvisor dataset, and the accuracy of the recommendations is evaluated with the mean absolute error (MAE).
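
For reference, the MAE used for evaluation is the standard

$$ \mathrm{MAE} = \frac{1}{N} \sum_{(u,i)} \left| \hat{r}_{u,i} - r_{u,i} \right| $$

over the N test ratings, where r̂ is the predicted and r the actual rating.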

Keywords: algorithms, collaborative filtering, intelligent systems, fuzzy logic, recommender systems

Procedia PDF Downloads 401
892 Using Historical Data for Stock Prediction

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock prices of tech companies. To this end, we use a dataset consisting of the past five years of stock prices of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, k-nearest neighbors (KNN), and a sequential neural network) and algorithms (multiplicative weight update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data alone is enough to obtain high accuracy, and that a simple algorithm such as linear regression performs similarly to more sophisticated models while taking less time and fewer resources to implement.
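
As an illustration of the kind of baseline comparison described (not the authors' code; the windowed features, synthetic price series, and error metric below are assumptions), a scikit-learn sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

def make_windows(prices, window=5):
    """Predict the next price from the previous `window` prices."""
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    return X, prices[window:]

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(0.1 * rng.standard_normal(1250))  # stand-in series
X, y = make_windows(prices)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

for model in (LinearRegression(), KNeighborsRegressor(n_neighbors=5)):
    model.fit(X_tr, y_tr)
    err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te) * 100
    print(f"{type(model).__name__}: {err:.2f}% mean relative error")
```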

Keywords: finance, machine learning, opening price, stock market

Procedia PDF Downloads 157
891 Performance Analysis of Artificial Neural Network with Decision Tree in Prediction of Diabetes Mellitus

Authors: J. K. Alhassan, B. Attah, S. Misra

Abstract:

Human beings have the ability to make logical decisions, and although human decision-making is often optimal, it is insufficient when a huge amount of data must be classified. Medical datasets are a vital ingredient in predicting patients' health conditions, and the best predictions call for the most suitable machine learning algorithms. This work compared the performance of artificial neural network (ANN) and decision tree algorithms (DTA) on several performance metrics using diabetes data. The evaluations were done using the Weka software, and the results show that the DTA performed better than the ANN. Multilayer perceptron (MLP) and radial basis function (RBF) were the two ANN algorithms used, while REPTree and LADTree were the DTA models. The root mean squared errors (RMSE) were 0.3913 for MLP, 0.3625 for RBF, 0.3174 for REPTree, and 0.3206 for LADTree.

Keywords: artificial neural network, classification, decision tree algorithms, diabetes mellitus

Procedia PDF Downloads 388
890 Bank Concentration and Industry Structure: Evidence from China

Authors: Jingjing Ye, Cijun Fan, Yan Dong

Abstract:

The development of the financial sector plays an important role in shaping industrial structure. However, evidence on the micro-level channels through which this relation manifests remains relatively sparse, particularly for developing countries. In this paper, we compile an industry-by-city dataset based on manufacturing firms and registered banks in 287 Chinese cities from 1998 to 2008. Based on a difference-in-differences approach, we find that a highly concentrated banking sector decreases the competitiveness of firms in each manufacturing industry. There are two main reasons: (i) bank accessibility successfully fosters firm expansion within each industry, but only for sufficiently large enterprises; (ii) state-owned enterprises are favored by the banking industry in China. The results are robust to alternative concentration and external finance dependence measures.

Keywords: bank concentration, China, difference-in-difference, industry structure

Procedia PDF Downloads 373
889 The Effect of Mandatory International Financial Reporting Standards Reporting on Investors' Herding Practice: Evidence from EU Equity Markets

Authors: Mohammed Lawal Danrimi, Ervina Alfan, Mazni Abdullah

Abstract:

The purpose of this study is to investigate whether the adoption of International Financial Reporting Standards (IFRS) encourages information-based trading and mitigates investors’ herding practice in emerging EU equity markets. Utilizing a modified non-linear model of cross-sectional absolute deviation (CSAD), we find that the hypothesis that mandatory IFRS adoption improves the information set of investors and reduces irrational investment behavior may in some cases be incorrect, and the reverse may be true. For instance, with regard to herding concerns, the new reporting benchmark has rather aggravated investors’ herding practice. However, we also find that mandatory IFRS adoption does not appear to be the only instigator of the observed herding practice; national institutional factors, particularly regulatory quality, political stability and control of corruption, also significantly contribute to investors’ herd formation around the new reporting regime. The findings would be of interest to academics, regulators and policymakers in performing a cost-benefit analysis of the so-called better reporting regime, as well as financial statement users who make decisions based on firms’ fundamental variables, treating them as significant indicators of future market movement.
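
For context, the baseline CSAD herding test of Chang, Cheng, and Khorana (2000), on which modified non-linear variants such as the one used here build, is

$$ \mathrm{CSAD}_t = \frac{1}{N} \sum_{i=1}^{N} \left| R_{i,t} - R_{m,t} \right|, \qquad \mathrm{CSAD}_t = \alpha + \gamma_1 \left| R_{m,t} \right| + \gamma_2 R_{m,t}^2 + \varepsilon_t, $$

where a significantly negative γ₂ is read as evidence of herding; the abstract does not spell out the study's specific modification.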

Keywords: equity markets, herding, IFRS, CSAD

Procedia PDF Downloads 162
888 A Deep Learning Approach to Subsection Identification in Electronic Health Records

Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan

Abstract:

Subsection identification, in the context of electronic health records (EHRs), is the task of identifying the sections that are important for downstream tasks such as auto-coding. In this work, we classify the text present in EHRs according to the information it carries, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification task, and then discuss methods from the literature. We try two approaches: traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods outperform the feature-extraction-based machine learning models.
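
A minimal sketch of the traditional feature-extraction baseline referred to (the dataset is private, so the snippets and section labels below are hypothetical stand-ins):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical EHR snippets and subsection labels.
texts = ["chief complaint: chest pain for two days",
         "rx: metformin 500 mg twice daily",
         "family history of type 2 diabetes"]
labels = ["complaint", "medication", "history"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["prescribed lisinopril 10 mg daily"]))
```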

Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification

Procedia PDF Downloads 193
887 Investigation on Flexural Behavior of Non-Crimp 3D Orthogonal Weave Carbon Composite Reinforcement

Authors: Sh. Minapoor, S. Ajeli

Abstract:

Non-crimp three-dimensional (3D) orthogonal carbon fabrics are useful textile reinforcements for composites. In this paper, the flexural and bending properties of a carbon non-crimp 3D orthogonal woven reinforcement are investigated experimentally. The study focuses on understanding and measuring the main bending parameters, including flexural stress, strain, and modulus. For this purpose, the three-point bending test method is used and the load-displacement curves are analyzed. The influence of several weave parameters, such as yarn type, geometry of the structure, and fiber volume fraction, on the bending behavior of the non-crimp 3D orthogonal carbon fabric is investigated. The results also provide a dataset for simulating the flexural behavior of non-crimp 3D orthogonal weave carbon composite reinforcement.
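
For reference, in a three-point bending test of a specimen of span L, width b, and depth d, the standard (ASTM D790-style) formulas for flexural stress, strain, and modulus from load F, mid-span deflection D, and the initial load-deflection slope m are

$$ \sigma_f = \frac{3FL}{2bd^2}, \qquad \epsilon_f = \frac{6Dd}{L^2}, \qquad E_f = \frac{L^3 m}{4bd^3}. $$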

Keywords: non-crimp 3D orthogonal weave, carbon composite reinforcement, flexural behavior, three-point bending

Procedia PDF Downloads 283
886 USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification

Authors: Kilari Nikhil, Ankur Tibrewal, Srinivas Kruthiventi S. S.

Abstract:

Conventional speaker identification systems often fail to capture the diverse variations present in speech data because of their fixed-scale architectures. In this research, we propose a CNN-based architecture, USENet, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb1 dataset without any pre-training. First, we adopt a U-Net-inspired design to extract features at multiple scales, enabling the model to capture speech characteristics effectively. Second, we introduce the squeeze-and-excitation block to enhance spatial feature learning. The proposed architecture shows significant advances in speaker identification, outperforming existing methods, and holds promise for future research in this domain.
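
A minimal PyTorch sketch of the squeeze-and-excitation block the abstract introduces (the channel count and reduction ratio are illustrative; the paper's exact configuration is not given):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by a learned gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation: per-channel gate
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        s = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * s                                  # rescale feature maps

out = SEBlock(64)(torch.randn(2, 64, 40, 100))       # e.g., mel-spectrogram features
```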

Keywords: multi-scale feature extraction, squeeze and excitation, VoxCeleb1 speaker identification, mel-spectrograms, USENet

Procedia PDF Downloads 51
885 U-Net Based Multi-Output Network for Lung Disease Segmentation and Classification Using Chest X-Ray Dataset

Authors: Jaiden X. Schraut

Abstract:

Medical image segmentation of chest X-rays is used to identify and differentiate lung cancer, pneumonia, COVID-19, and similar respiratory diseases. Widespread application of computer-supported perception methods in the diagnostic pipeline has been demonstrated to increase prognostic accuracy and to help doctors treat patients efficiently. Modern models attempt segmentation and classification separately and improve diagnostic efficiency; however, to further enhance this process, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional CNN module for an auxiliary classification output. The proposed model achieves a final Jaccard index of 0.9634 for image segmentation and a final accuracy of 0.9600 for classification on the COVID-19 Radiography Database.
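
For reference, the Jaccard index (intersection over union) reported for the segmentation output can be computed for binary masks as follows (a standard definition, not the paper's code):

```python
import numpy as np

def jaccard_index(pred, target, eps=1e-7):
    """IoU between two binary masks (arrays of 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```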

Keywords: chest X-ray, deep learning, image segmentation, image classification

Procedia PDF Downloads 116
884 Stable Diffusion, Context-to-Motion Model to Augment Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work is designed to facilitate the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, the user's nonverbal communication (facial expressions, gestures, paralinguistics), scene context, and object recognition. Although the approach can also be applied to other tasks, such as walking, the focus is on prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, making them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback, meaning it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset is then used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can control the robotic or prosthetic system in real time: it receives sensory input from the system and uses it to generate the control signals that drive the motors or actuators responsible for moving the system. Overall, the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users, since hand gestures and body language shape communication and social interaction, offering users a way to maximize their quality of life, social interaction, and gesture communication.

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 86
883 Impact of Financial Technology Growth on Bank Performance in Gulf Cooperation Council Region

Authors: Ahmed BenSaïda

Abstract:

This paper investigates the association between financial technology (FinTech) growth and bank performance in the Gulf Cooperation Council (GCC) region. The analysis is conducted on a panel dataset containing annual observations of banks covering the period from 2012 to 2021. FinTech growth is set as an explanatory variable for three proxies of bank performance: return on assets (ROA), return on equity (ROE), and net interest margin (NIM). Moreover, several control variables are added to the model, including bank-specific and macroeconomic variables. The results are significant: all proxies of bank performance are negatively affected by the growth of FinTech startups. Consequently, banks are urged to invest proactively in FinTech startups and engage in partnerships to avoid the risk of disruption.
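
A generic form of the panel specification described (the abstract does not list the exact controls or estimator) is

$$ \mathrm{Perf}_{it} = \alpha + \beta\,\mathrm{FinTech}_{t} + \gamma' X_{it} + \mu_i + \varepsilon_{it}, $$

where Perf_{it} is ROA, ROE, or NIM for bank i in year t, FinTech_t proxies FinTech growth, X_{it} collects the bank-specific and macroeconomic controls, and μ_i is a bank effect.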

Keywords: financial technology, bank performance, GCC countries, panel regression

Procedia PDF Downloads 61
882 Experimental Investigation of Fluid Dynamic Effects on Crystallisation Scale Growth and Suppression in Agitation Tank

Authors: Prasanjit Das, M. M. K. Khan, M. G. Rasul, Jie Wu, I. Youn

Abstract:

Mineral scale formation is a more serious problem in the mineral industry than in other process industries. To better understand scale growth and suppression, an experimental model is proposed in this study for the supersaturated crystallised solutions commonly found in mineral process plants. In this experiment, the surface crystallisation of potassium nitrate (KNO3) on the wall of an agitation tank and the effects of agitation on scale growth and suppression are studied. The new quantitative scale suppression model predicts that scale growth is enhanced at lower agitation speeds, while at higher agitation speeds the scale suppression rate increases due to the increased flow erosion effect. Lab-scale agitation tanks with and without baffles were used as benchmarks in this study. The fluid dynamic effects on scale growth and suppression in the agitation tank were investigated with three impellers of different size (diameters 86, 114, and 160 mm; model A310 with a flow number of 0.56) over a range of rotational speeds (up to 700 rpm) and solution concentrations (4.5, 4.75, and 5.25 mol/dm3). For further elucidation, the effects of impeller size on the wall-surface scale growth and suppression rates, as well as on the bottom settled scale accumulation rate, are also discussed. Emphasis is placed on applications in the mineral industry, although the results are also relevant to other industrial applications.

Keywords: agitation tank, crystallisation, impeller speed, scale

Procedia PDF Downloads 202
881 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models

Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini

Abstract:

The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using an unbounded estimator, namely the least-squares (LS) estimator, which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and the analysis of a real dataset.
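
One common form of the classical criterion, for a linear model with p parameters fitted to n observations, is

$$ \mathrm{SIC}(p) = n \log \hat{\sigma}^2_{LS} + p \log n, $$

and the robust variant proposed here replaces the LS residual scale with an MM-estimate of scale, bounding the influence of outlying observations.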

Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion

Procedia PDF Downloads 124
880 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software-defined networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, the heterogeneity, dynamic traffic flows, and complexity of IoT networks demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent SDN, owing to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but software-defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms a benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flows.

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 93
879 Comparison of Sourcing Process in Supply Chain Operation References Model and Business Information Systems

Authors: Batuhan Kocaoglu

Abstract:

Although they use powerful systems like ERP (enterprise resource planning), companies still cannot easily benchmark their processes and measure process performance based on predefined SCOR (Supply Chain Operations Reference) terms. The purpose of this research is to identify common and corresponding processes and to present a conceptual model for modeling and measuring the purchasing process of an organization. The main steps of the study are: a literature review of the 'procure to pay' process in ERP systems; a literature review of the 'sourcing' process in the SCOR model; and the development of a conceptual model integrating the 'sourcing' process of the SCOR model with the 'procure to pay' process of the ERP model. We examine the similarities and differences between the two models. The proposed framework is based on assumptions drawn from (1) the body of literature and (2) the authors' experience in the field of enterprise and logistics information systems. The modeling framework provides a structured and systematic way to model and decompose the necessary information from conceptual representation to process element specification. This conceptual model will help organizations build quality purchasing measurement instruments and tools, and the proposed adaptations to ERP systems and the SCOR model will yield a more benchmarkable, worldwide-standard business process.

Keywords: SCOR, ERP, procure to pay, sourcing, reference model

Procedia PDF Downloads 345
878 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks

Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó

Abstract:

One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the indicator normalization process is shown for a few indicators, such as the citation count, the P-index, and a local version of the PageRank indicator. The fat-tailed distribution of the article indicators enables us to perform the normalization process successfully.

Keywords: citation networks, cross-field normalization, local cluster detection, scientometric indicators

Procedia PDF Downloads 182
877 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 10
876 The Role of Leapfrogging: Cross-Level Interactions and MNE Decision-Making in Conflict-Settings

Authors: Arrian Cornwell, Larisa Yarovaya, Mary Thomson

Abstract:

This paper seeks to examine the transboundary nature of foreign subsidiary exit vs. stay decisions when threatened by conflict in a host country. Using the concepts of nested vulnerability and teleconnections, we show that the threat of conflict can transcend bounded territories and have non-linear outcomes for actors, institutions and systems at broader scales of analysis. To the best of our knowledge, this has not been done before. By introducing the concepts of ‘leapfrogging upwards’ and ‘cascading downwards’, we develop a two-stage model which characterises the impacts of conflict as transboundary phenomena. We apply our model to a dataset of 266 foreign subsidiaries in six conflict-afflicted host countries over 2011-2015. Our results indicate that information is transmitted upwards and subsequent pressure flows cascade downwards, which, in turn, influence exit decisions.

Keywords: subsidiary exit, conflict, information transmission, pressure flows, transboundary

Procedia PDF Downloads 251
875 Attention Multiple Instance Learning for Cancer Tissue Classification in Digital Histopathology Images

Authors: Afaf Alharbi, Qianni Zhang

Abstract:

The identification of malignant tissue in histopathology slides is of significant importance in both clinical settings and pathology research. This paper introduces a methodology for automatically categorizing cancerous tissue using a multiple-instance learning (MIL) framework, developed to learn the Bernoulli distribution of the bag-label probability with neural networks. Furthermore, we put forward a neural-network-based permutation-invariant aggregation operator, equivalent to an attention mechanism, which is applied in the multiple-instance learning network. Through empirical evaluation on an openly available colon cancer histopathology dataset, we provide evidence that our approach surpasses various conventional deep learning methods.
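
A minimal PyTorch sketch of the attention-based, permutation-invariant MIL pooling described (embedding sizes are illustrative):

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Weighted average of instance embeddings with learned attention."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)   # tanh branch
        self.w = nn.Linear(hidden, 1)     # scalar attention score per instance

    def forward(self, H):                 # H: (num_instances, dim)
        a = torch.softmax(self.w(torch.tanh(self.V(H))), dim=0)
        return (a * H).sum(dim=0), a      # bag embedding and attention weights

pool = AttentionMILPooling()
head = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())  # Bernoulli bag label
H = torch.randn(40, 512)                 # 40 patch embeddings (hypothetical)
z, attn = pool(H)
p_malignant = head(z)
```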

Keywords: attention multiple instance learning, MIL and transfer learning, histopathological slides, cancer tissue classification

Procedia PDF Downloads 83
874 Spatial Point Process Analysis of Dengue Fever in Tainan, Taiwan

Authors: Ya-Mei Chang

Abstract:

This research applies spatio-temporal point process methods to dengue fever data from Tainan. The spatio-temporal intensity function of the dataset is assumed to be separable. Kernel estimation, a widely used approach for estimating intensity functions, is adopted; the intensity function is very helpful for studying the relationship between the spatio-temporal point process and covariates. Because the covariate effects might be nonlinear, a nonparametric smoothing estimator is used to detect nonlinearity in the covariate effects, and a fitted parametric model describes the influence of the covariates on dengue fever. The correlation between the data points is detected by the K-function. The results of this research could provide useful information to help the government and stakeholders make decisions.
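
Separability here means the first-order intensity factorizes as

$$ \lambda(s, t) = \lambda_1(s)\,\lambda_2(t), $$

with each factor typically estimated by a kernel smoother, e.g. \(\hat{\lambda}_1(s) = \sum_i K_h(s - s_i)\) up to edge correction, where \(K_h\) is a kernel with bandwidth h.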

Keywords: dengue fever, spatial point process, kernel estimation, covariate effect

Procedia PDF Downloads 332
873 Performance Prediction Methodology of Slow Aging Assets

Authors: M. Ben Slimene, M.-S. Ouali

Abstract:

Asset management of urban infrastructure faces a multitude of challenges that must be overcome to obtain a reliable measurement of performance. Predicting the performance of slowly aging systems is one of those challenges; it helps the asset manager investigate specific failure modes and undertake the appropriate maintenance and rehabilitation interventions to avoid catastrophic failures, as well as to optimize maintenance costs. This article presents a methodology for modeling the deterioration of slowly degrading assets based on an operating history. It consists of extracting degradation profiles by grouping together assets that exhibit similar degradation sequences, using an unsupervised classification (clustering) technique from artificial intelligence. The resulting clusters are used to build the performance prediction models. The methodology is applied to a sample stormwater drainage culvert dataset.

Keywords: artificial Intelligence, clustering, culvert, regression model, slow degradation

Procedia PDF Downloads 87
872 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to that research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which equals the mean age at death in a life table) and the variance in age at death, using seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that a simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to estimate the variance in age at death for six countries, three with high e0 values and three with lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated with this approach, which can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
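
For reference, Taylor's Law states that the variance is an approximate power function of the mean,

$$ \sigma^2 \approx a\,\mu^{b} \quad\Longleftrightarrow\quad \log \sigma^2 \approx \log a + b \log \mu, $$

so with μ = e0 the log of the variance in age at death is approximately linear in log e0; the fitted coefficients used in the study are not given in the abstract.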

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 316
871 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly: they are gathered without manipulating the animal (e.g., scats, feathers, and hairs). Nevertheless, their use has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples; genotype matching and population estimation algorithms stand out as important analysis tools that have been adapted to deal with these errors. Despite this recent development of analysis methods, there is still a lack of empirical comparisons of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially on endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures, while a similarity was observed between the genotypes matched by Colony and Cervus, which is not surprising given the similarity of their pairwise-likelihood and clustering algorithms. The ETLM matches showed almost no similarity to the genotypes matched by the other methods; its different clustering system and error model seem to lead to a more selective matching, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently across the datasets, with a consensus between them for only one dataset. BayesN produced both higher and lower estimates than Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity, that is, different capture rates between individuals. In these examples, homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 125
870 Probabilistic Seismic Loss Assessment of Reinforced Concrete (RC) Frame Buildings Pre- and Post-Rehabilitation

Authors: A. Flora, A. Di Lascio, D. Cardone, G. Gesualdi, G. Perrone

Abstract:

This paper considers the seismic assessment and retrofit of a pilotis-type RC frame building that was designed for gravity loads only, prior to the introduction of seismic design provisions. Pilotis-type RC frame buildings, featuring a uniform infill throughout the height and an open ground floor, were, and still are, quite popular all over the world, as they offer large open areas very suitable for retail space at the ground floor. These architectural advantages, however, are detrimental to the building's seismic behavior, as they can give rise to a soft-storey collapse mechanism. Extensive numerical analyses are carried out to quantify and benchmark the performance of the selected building, both in terms of overall collapse capacity and expected losses. Alternative retrofit strategies are then examined, including: (i) steel jacketing of RC columns and beam-column joints, (ii) steel bracing, and (iii) seismic isolation. The expected annual loss (EAL) of the case-study building, pre- and post-rehabilitation, is evaluated following a probabilistic approach. The breakeven time of each solution is computed by comparing the initial cost of the retrofit intervention with the expected benefit in terms of EAL reduction.
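
On a simple, undiscounted reading, the breakeven time of a retrofit compares its initial cost with the annual benefit,

$$ T_{BE} = \frac{C_{retrofit}}{\mathrm{EAL}_{pre} - \mathrm{EAL}_{post}}, $$

a first-order sketch; the study's probabilistic treatment may additionally discount future losses.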

Keywords: expected annual loss, reinforced concrete buildings, seismic loss assessment, seismic retrofit

Procedia PDF Downloads 227
869 Mechanical Properties of Spark Plasma Sintered 2024 AA Reinforced with TiB₂ and Nano Yttrium

Authors: Suresh Vidyasagar Chevuri, D. B. Karunakar Chevuri

Abstract:

The main advantages of metal matrix nanocomposites (MMNCs) include excellent mechanical performance, good wear resistance, and a low creep rate. Their fabrication is quite a challenge, involving processing techniques such as spark plasma sintering (SPS). The objective of the present work is to fabricate aluminum-based MMNCs with small additions of yttrium using spark plasma sintering and to evaluate their mechanical and microstructural properties. Samples of 2024 AA with yttrium contents ranging from 0.1 to 0.5 wt%, keeping TiB₂ constant at 1 wt%, were fabricated by SPS. Hardness was measured with a Vickers hardness testing machine, and the metallurgical characterization of the samples was carried out by optical microscopy (OM), field emission scanning electron microscopy (FE-SEM), and X-ray diffraction (XRD). An unreinforced 2024 AA sample was also fabricated as a benchmark against which to compare the properties of the developed composite. It was found that the yttrium addition increases the above-mentioned properties up to a point between 0.3 and 0.4 wt%, beyond which they decrease gradually. High density is achieved in the samples fabricated by spark plasma sintering compared to other fabrication routes, and a uniform distribution of yttrium is observed.

Keywords: spark plasma sintering, 2024 AA, yttrium addition, microstructure characterization, mechanical properties

Procedia PDF Downloads 210
868 Quantum Kernel Based Regressor for Prediction of Non-Markovianity of Open Quantum Systems

Authors: Diego Tancara, Raul Coto, Ariel Norambuena, Hoseein T. Dinani, Felipe Fanchini

Abstract:

Quantum machine learning is a growing research field that aims to perform machine learning tasks assisted by a quantum computer. Kernel-based quantum machine learning models are paradigmatic examples, in which the kernel involves quantum states and the Gram matrix is calculated from the overlap between these states. With the kernel at hand, a regular machine learning model is used for the learning process. In this paper we investigate quantum support vector machine and quantum kernel ridge regression models to predict the degree of non-Markovianity of a quantum system. We perform digital quantum simulations of amplitude damping and phase damping channels to create our quantum dataset. We elaborate different kernel functions to map the data and kernel circuits to compute the overlap between quantum states, and we observe good performance of the models.
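
A minimal NumPy sketch of the fidelity-style Gram matrix such kernel models build from state overlaps (on hardware the overlaps come from kernel circuits, e.g. a SWAP test; here they are taken directly from state vectors, and the random states are a toy stand-in):

```python
import numpy as np

def fidelity_kernel(A, B):
    """Gram matrix K_ij = |<psi_i|phi_j>|^2 for rows of normalized state vectors."""
    return np.abs(A.conj() @ B.T) ** 2

rng = np.random.default_rng(0)
psi = rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)   # normalize each state
K = fidelity_kernel(psi, psi)                       # feed to an SVM / kernel ridge
```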

Keywords: quantum, machine learning, kernel, non-markovianity

Procedia PDF Downloads 154
867 Advanced Concrete Crack Detection Using Light-Weight MobileNetV2 Neural Network

Authors: Li Hui, Riyadh Hindi

Abstract:

Concrete structures frequently suffer from crack formation, a critical issue that can significantly reduce their lifespan by allowing damaging agents to enter. Traditional crack detection depends on manual visual inspection, which relies heavily on the experience and expertise of the inspectors and their tools. In this study, a more efficient, computer-vision-based approach is introduced, using the lightweight MobileNetV2 neural network. A dataset of 40,000 images was used to develop a specialized crack evaluation algorithm. The analysis indicates that MobileNetV2 matches the accuracy of traditional CNN methods while being more efficient due to its smaller size, making it well suited to mobile device applications. The effectiveness and reliability of the new method were validated through experimental testing, highlighting its potential as an automated solution for crack detection in concrete structures.
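
A minimal transfer-learning sketch of a MobileNetV2 crack classifier in Keras (image size, classification head, and training setup are illustrative assumptions, not the study's configuration):

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                       # start from a frozen backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # crack / no-crack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets assumed
```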

Keywords: concrete crack, computer vision, deep learning, MobileNetV2 neural network

Procedia PDF Downloads 45
866 Segmentation of Korean Words on Korean Road Signs

Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon

Abstract:

This paper introduces an effective method for segmenting Korean text (place names in Korean) from Korean road sign images. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of this visual information and extraction of the Korean place names from road sign images make it possible to avoid extensive manual input to a nationwide road sign management database. We propose a series of problem-specific heuristics that correctly segment the Korean place names, which are the most crucial information, from the other information by effectively leaving out non-text content. Experimental results on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.

Keywords: segmentation, road signs, characters, classification

Procedia PDF Downloads 427