Search results for: STS benchmark dataset
571 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy
Authors: Wenhao Lan, Ning Li, Qiang Tong
Abstract:
To improve the registration accuracy of a source point cloud and a template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and the sensitivity to the initial relative position of the source and template point clouds is reduced. The primary contributions of this paper are the use of PointNetLK to avoid the non-convexity of traditional point cloud registration and the use of the regularity of the OBB to avoid the local-optimum problem in the PointNetLK context.
Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB
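As background for the OBB construction the method relies on, a minimal PCA-based sketch of computing an oriented bounding box for a point cloud might look as follows (NumPy; the PCA approach and function name are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def oriented_bounding_box(points):
    """Estimate an OBB from an (N, 3) point cloud via PCA of its covariance."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigenvectors of the covariance matrix give the box axes.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    local = centered @ eigvecs            # express points in box coordinates
    mins, maxs = local.min(axis=0), local.max(axis=0)
    extents = maxs - mins                 # side lengths along each axis
    center = centroid + eigvecs @ ((mins + maxs) / 2)
    return center, eigvecs, extents

cloud = np.random.rand(1000, 3)
center, axes, extents = oriented_bounding_box(cloud)
```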
Procedia PDF Downloads 152
570 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method
Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson
Abstract:
Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is critical to the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN, using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks the Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model was able to reduce the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
Keywords: adversarial examples, attack, computer vision, image processing
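The FGSM step the paper builds on is a single signed-gradient update; a minimal PyTorch sketch (the model, loss choice, and epsilon budget are placeholders, not the paper's exact setup):

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    """One FGSM step: move the input along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Adversarial image = original + epsilon * sign(gradient), kept in [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```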
Procedia PDF Downloads 193
569 Factors Affecting the Profitability of Commercial Banks: An Empirical Study of Indian Banking Sector
Authors: Neeraj Gupta, Jitendra Mahakud
Abstract:
The banking system plays a major role in the Indian economy and is the payment gateway for most financial transactions. Banking has undergone a major transition that is still in progress. Banking reforms following liberalization in 1991 led to the establishment of foreign banks in the country. The foreign banks are not listed on the Indian stock markets and have increased competition, capturing a significant share of revenue from the public sector banks, which are still the major players in the Indian banking sector. The performance of the banking sector depends on internal (bank-specific) as well as external (market-specific and macroeconomic) factors. Profitability in the banking sector is affected by numerous factors, which can be internal or external. The present study examines the internal and external factors that are likely to affect the profitability of Indian banks. The sample consists of a panel dataset of 64 commercial banks in India, comprising 1088 observations over the years 1998 to 2016. The dynamic panel GMM estimator of Arellano and Bond has been used. The study revealed that capital adequacy ratio, deposits, age, labour productivity, non-performing assets, inflation, and concentration have a significant effect on the performance measures.
Keywords: banks in India, bank performance, bank productivity, banking management
Procedia PDF Downloads 274
568 Online Handwritten Character Recognition for South Indian Scripts Using Support Vector Machines
Authors: Steffy Maria Joseph, Abdu Rahiman V, Abdul Hameed K. M.
Abstract:
Online handwritten character recognition is a challenging field in artificial intelligence. The classification success rate of current techniques decreases when the dataset involves similarity and complexity in stroke styles, number of strokes, and variations in stroke characteristics. Malayalam is a complex South Indian language spoken by about 35 million people, especially in Kerala and the Lakshadweep islands. In this paper, we consider significant feature extraction for the similar stroke styles of Malayalam. The extracted feature set is also suitable for the recognition of other handwritten South Indian languages such as Tamil, Telugu, and Kannada. A classification scheme based on support vector machines (SVM) is proposed to improve the accuracy of classification and recognition of online Malayalam handwritten characters. SVM classifiers are well suited to real-world applications. The contribution of various features towards recognition accuracy is analysed, and performance for different SVM kernels is also studied. A graphical user interface has been developed for reading and displaying the characters. Different writing styles are collected for each of the 44 characters. Various features are extracted and used for classification after preprocessing of the input data samples. The highest recognition accuracy, 97%, is obtained experimentally with the best feature combination and a polynomial kernel in the SVM.
Keywords: SVM, MATLAB, Malayalam, South Indian scripts, online handwritten character recognition
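A polynomial-kernel SVM classifier of the kind described can be set up in a few lines of scikit-learn; the synthetic features below are a stand-in for the extracted stroke features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for stroke feature vectors over 44 character classes.
X, y = make_classification(n_samples=2200, n_features=30, n_informative=20,
                           n_classes=44, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```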
Procedia PDF Downloads 576
567 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models
Authors: Bipasha Sen, Aditya Agarwal
Abstract:
A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages that share a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), limiting computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks have been made to improve parallelization, thereby reducing training time. In this work, we propose Reed, a simple system based on 1D convolutions that uses very short context to improve training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times by at least factors of 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition
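A short-context 1D-convolutional encoder over raw waveforms, in the spirit of what the abstract describes, might be sketched in PyTorch as follows (the layer sizes and phone-set size are invented for illustration, not Reed's actual configuration):

```python
import torch
import torch.nn as nn

class RawWaveformEncoder(nn.Module):
    """Stack of short-context 1D convolutions over raw audio samples."""
    def __init__(self, n_phones=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=5), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_phones)   # per-frame phone logits

    def forward(self, wav):                    # wav: (batch, samples)
        feats = self.net(wav.unsqueeze(1))     # (batch, 256, frames)
        return self.head(feats.transpose(1, 2))

logits = RawWaveformEncoder()(torch.randn(2, 16000))   # 1 s of 16 kHz audio
```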
Procedia PDF Downloads 124
566 Performance Comparison of Outlier Detection Techniques Based Classification in Wireless Sensor Networks
Authors: Ayadi Aya, Ghorbel Oussama, M. Obeid Abdulfattah, Abid Mohamed
Abstract:
Nowadays, many wireless sensor networks are deployed in the real world to collect valuable raw sensed data. The challenge is to extract high-level knowledge from this huge amount of data, and the identification of outliers can lead to the discovery of useful and meaningful knowledge. In the field of wireless sensor networks, an outlier is defined as a measurement that deviates from the normal behavior of the sensed data. Many outlier detection techniques for WSNs have been extensively studied in the past decade, with a focus on classification-based algorithms that identify outliers in real transaction datasets. This survey aims at providing a structured and comprehensive overview of the existing research on classification-based outlier detection techniques as applicable to WSNs. We have identified the key hypotheses used by these approaches to differentiate between normal and outlier behavior, and we try to provide an accessible and succinct understanding of the classification-based techniques. Furthermore, we identify the advantages and disadvantages of the different classification-based techniques and present a comparative guide with useful paradigms for promoting outlier detection research in various WSN applications, along with suggested opportunities for future research.
Keywords: Bayesian networks, classification-based approaches, KPCA, neural networks, one-class SVM, outlier detection, wireless sensor networks
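One of the surveyed technique families, the one-class SVM, learns a boundary around normal sensor behavior and flags deviations; a minimal sketch on simulated readings (all values invented for illustration):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Simulated sensor readings: mostly normal values plus a few faulty spikes.
rng = np.random.default_rng(0)
normal = rng.normal(loc=25.0, scale=1.0, size=(500, 1))   # e.g. temperature
faulty = rng.normal(loc=40.0, scale=2.0, size=(10, 1))
readings = np.vstack([normal, faulty])

detector = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
labels = detector.predict(readings)   # +1 = normal, -1 = outlier
print("flagged outliers:", int((labels == -1).sum()))
```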
Procedia PDF Downloads 499
565 Influence of Solenoid Configuration on Electromagnetic Acceleration of Plunger
Authors: Shreyansh Bharadwaj, Raghavendra Kollipara, Sijoy C. D., R. K. Mittal
Abstract:
Utilizing the Lorentz force to propel an electrically conductive plunger through a solenoid represents a fundamental application of electromagnetism. The parameters of the solenoid significantly influence the force exerted on the plunger, impacting its response. A parametric study has been carried out to understand the effect of these parameters on the force acting on the plunger and to determine the combination of parameters that yields the fastest response. The analysis uses an algorithm capable of simulating the scenario of a plunger accelerating within a solenoid. The authors have focused on several key configuration parameters of the solenoid: the inter-layer gap (in the case of a multi-layer solenoid), different conductor diameters, varying numbers of turns, and different numbers of layers. The primary objective of this paper is to discern how alterations in these parameters affect the force applied to the plunger. Through extensive numerical simulations, a dataset has been generated and used to construct informative plots. These plots provide visual representations of the relationships between the solenoid configuration parameters and the resulting force exerted on the plunger, which can further be used to deduce scaling laws. This research endeavors to offer valuable insights into optimizing solenoid configurations for enhanced electromagnetic acceleration, thereby contributing to advancements in electromagnetic propulsion technology.
Keywords: Lorentz force, solenoid configuration, electromagnetic acceleration, parametric analysis, simulation
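A toy version of such a simulation can be written as an Euler integration of the reluctance force F = (I²/2)·dL/dx; the inductance-gradient profile and all numbers below are purely illustrative assumptions, not the authors' model:

```python
import numpy as np

def plunger_trajectory(current, dL_dx, mass=0.05, dt=1e-5, steps=2000):
    """Euler integration of a plunger under the reluctance force F = I^2/2 * dL/dx."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        force = 0.5 * current**2 * dL_dx(x)
        v += force / mass * dt
        x += v * dt
    return x, v   # final position (m) and velocity (m/s)

# Hypothetical inductance gradient, strongest near the solenoid mouth (H/m).
dL_dx = lambda x: 2.0 * np.exp(-(x / 0.02) ** 2)

for current in (10.0, 15.0, 20.0):   # sweep one configuration knob
    print(current, plunger_trajectory(current, dL_dx))
```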
Procedia PDF Downloads 51
564 Optimization of a Convolutional Neural Network for the Automated Diagnosis of Melanoma
Authors: Kemka C. Ihemelandu, Chukwuemeka U. Ihemelandu
Abstract:
The incidence of melanoma has been increasing rapidly over the past two decades, making melanoma a current public health crisis. Unfortunately, even as screening efforts continue to expand in an effort to ameliorate the death rate from melanoma, there is a need to improve diagnostic accuracy and decrease misdiagnosis. Artificial intelligence (AI), a new frontier in patient care, has the potential to improve the accuracy of melanoma diagnosis. The convolutional neural network (CNN), a form of deep neural network most commonly applied to analyzing visual imagery, has been shown to outperform humans in pattern recognition. However, there are noted limitations in the accuracy of CNN models. Our aim in this study was the optimization of convolutional neural network algorithms for the automated diagnosis of melanoma. We hypothesized that optimal selection of the momentum and batch hyperparameters increases model accuracy. The most successful model developed during this study showed that a momentum of 0.25 and a batch size of 2 led to superior performance and a faster model training time, with an accuracy of ~83% after nine hours of training. We did notice a lack of diversity in the dataset used, with a class imbalance favoring lighter vs. darker skin tones. Training-set image transformations did not result in superior model performance in our study.
Keywords: melanoma, convolutional neural network, momentum, batch hyperparameter
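In PyTorch, the two hyperparameters the study tunes enter through the SGD optimizer and the batch dimension; a minimal sketch with the reported values (the ResNet-18 backbone and learning rate are assumptions for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torchvision

# Standard CNN backbone with the momentum and batch size reported above.
model = torchvision.models.resnet18(num_classes=2)   # benign vs. melanoma
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.25)
criterion = nn.CrossEntropyLoss()

batch = torch.randn(2, 3, 224, 224)                  # batch size of 2
labels = torch.tensor([0, 1])

optimizer.zero_grad()
loss = criterion(model(batch), labels)
loss.backward()
optimizer.step()
```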
Procedia PDF Downloads 101
563 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering two distinct limiting states, independent or perfectly dependent, for component damage states; however, to the best of our knowledge, no procedure is available to account for loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions. The method is based on closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system covering all stories' cost equations by means of the introduced "substituted matrixes of mass and stiffness". Costs are treated as probabilistic variables with definite statistical factors (median and standard deviation) and a presumed probability distribution. To supplement the proposed procedure and demonstrate the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility has been proven between the damage costs estimated by the proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, the insufficiency of employing a modification factor to incorporate occurrence-probability dependencies between stories has been revealed, owing to the discrepant amounts of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss can also be concluded, given the greater compatibility of loss results in higher stories than in lower ones, whereas including only a limited number of cost modes provides an acceptable level of accuracy and avoids the time-consuming calculations of high-mode situations.
Keywords: dependency, story-cost, cost modes, engineering demand parameter
Procedia PDF Downloads 181
562 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX Through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract:
Unmanned surface vehicles (USVs) are valuable for their ability to perform dangerous and time-consuming tasks on the water, and object detection is significant in these applications. However, inherent challenges, such as the complex distribution of obstacles, reflections from shore structures, and water surface fog, hinder the object detection performance of USVs. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland-surface environment, utilizing vision sensors and 3+1D millimeter-wave (MMW) radar. MMW radar is complementary to vision sensors, providing robust environmental information. The radar 3D point cloud is transferred to a 2D radar pseudo-image, unifying the radar and vision information formats by utilizing a point transformer. We propose a multi-source object detection network (RV-YOLOX) based on radar-vision fusion for the inland waterways environment. Performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects in poor lighting conditions.
Keywords: inland waterways, YOLO, sensor fusion, self-attention
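The radar-to-pseudo-image step can be illustrated with a simple rasterization; this sketch is a simplified stand-in for the point-transformer conversion the paper actually uses, and the grid size and ranges are invented:

```python
import numpy as np

def radar_pseudo_image(points, grid=(128, 128), x_range=(0, 100), y_range=(-50, 50)):
    """Rasterize radar points (x, y, velocity) into a 2D pseudo-image.

    Each cell stores the mean radial velocity of the points falling in it.
    """
    img = np.zeros(grid, dtype=np.float32)
    counts = np.zeros(grid, dtype=np.int32)
    xs = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * (grid[0] - 1)).astype(int)
    ys = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * (grid[1] - 1)).astype(int)
    keep = (xs >= 0) & (xs < grid[0]) & (ys >= 0) & (ys < grid[1])
    for x, y, v in zip(xs[keep], ys[keep], points[keep, 2]):
        img[x, y] += v
        counts[x, y] += 1
    return img / np.maximum(counts, 1)

pseudo = radar_pseudo_image(np.random.rand(500, 3) * [100, 100, 5] - [0, 50, 0])
```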
Procedia PDF Downloads 127
561 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University
Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat
Abstract:
Scientific researchers face challenges in the analysis of very large data sets, which are growing at a noticeable rate in today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this task; the Hadoop framework is suitable for many different hardware platforms. In this research, a scientific Linux cluster for big data analysis (SLBD) is presented. SLBD runs open-source software with large computational capacity on a high-performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN. SLBD consists of a fast switch and Gigabit-Ethernet cards connecting four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster using the MapReduce algorithm. The MapReduce algorithm divides the task into smaller tasks, which are assigned to the network nodes; the algorithm then collects the results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data resulting from different applications. SLBD also provides high performance, high throughput, high availability, expandability, and cluster scalability.
Keywords: big data platforms, Cloudera Manager, Hadoop, MapReduce
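The MapReduce flow the cluster relies on can be illustrated with a tiny single-process word-count sketch; on Hadoop the map and reduce phases would run distributed across the four nodes:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs for each word in a document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each key."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

documents = ["big data on the cluster", "data across the cluster"]
# In Hadoop the map tasks run on different nodes; here they run in a loop.
print(reduce_phase(chain.from_iterable(map_phase(d) for d in documents)))
```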
Procedia PDF Downloads 361
560 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, for the first time in the literature on this subject, we propose a new method for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of a household's satisfactory state (i.e., satisfaction) is defined on a satisfaction string. The duration of the satisfactory state over a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence it is impossible to measure or know simultaneously the values of these observables from the dataset. We therefore crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e., corrected standard deviations) obtained through the logarithmic psychophysical law of Weber and Fechner.
Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality
Procedia PDF Downloads 441
559 A Recommender System for Dynamic Selection of Undergraduates' Elective Courses
Authors: Adewale O. Ogunde, Emmanuel O. Ajibade
Abstract:
The task of selecting a few elective courses from the variety of available elective courses has been a difficult one for many students over the years. In many higher institutions, guidance counselors or level advisers are employed to assist students in picking the right choice of courses. In reality, these counselors and advisers are often overloaded with too many students to attend to and sometimes do not have enough time for them. In most cases, the academic strength of the student, based on past results, is not considered in the new choice of electives. Recommender systems implement advanced data analysis techniques to help users find items of interest by producing a predicted likeliness score or a list of top recommended items for a given active user. In this work, therefore, a collaborative filtering-based recommender system was developed that dynamically recommends elective courses to undergraduate students based on their past grades in related courses. This approach employs the k-nearest neighbor algorithm to discover hidden relationships between the related courses students have passed in the past and the currently available elective courses. A dataset of real students' results was used to build and test the recommendation model. The developed system will not only improve the academic performance of students but also help reduce the workload on level advisers and school counselors.
Keywords: collaborative filtering, elective courses, k-nearest neighbor algorithm, recommender systems
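The neighborhood step of such a collaborative filter can be sketched with scikit-learn's NearestNeighbors; the grade matrix and elective sets below are toy data (note that the query student appears among its own neighbors, which the final set difference handles):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows: students; columns: grades (0-100) in related past courses (toy data).
grades = np.array([
    [85, 70, 90, 60],
    [80, 75, 88, 65],
    [50, 90, 40, 85],
    [55, 88, 45, 80],
])
electives_taken = {0: {"AI"}, 1: {"AI", "Robotics"}, 2: {"Finance"}, 3: {"Finance"}}

knn = NearestNeighbors(n_neighbors=2).fit(grades)
_, idx = knn.kneighbors(grades[[0]])           # neighbors of student 0
# Recommend electives taken by academically similar students.
recs = set().union(*(electives_taken[i] for i in idx[0])) - electives_taken[0]
print(recs)                                    # e.g. {'Robotics'}
```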
Procedia PDF Downloads 167
558 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling
Authors: Farzaneh Khorram
Abstract:
This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint the blocks with the highest potential energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as plurigaussian and turning bands, demonstrated more realistic outcomes compared to cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.
Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging
Procedia PDF Downloads 70
557 US Foreign Aids and Its Institutional and Non-Institutional Impacts in the Middle East, Africa, Southeast Asia, and Latin America (2000 - 2020)
Authors: Mahdi Fakheri, Mohammad Mohsen Mahdizadeh Naeini
Abstract:
This paper addresses an understudied aspect of U.S. foreign aid between the years 2000 and 2020. Despite a growing body of literature on the impacts of U.S. aid, the question of how the United States uses its foreign aid to change developing countries has remained unanswered. As foreign aid is a tool of United States foreign policy, answering this question can reveal the future that the U.S. prefers for developing countries and that secures its national interest. This paper explores USAID's official dataset, which includes data on foreign aid to the Middle East, Africa, Latin America, and Southeast Asia from 2000 to 2020. Through an empirical analysis, this paper argues that the focus of U.S. foreign aid is evenly divided between institutional and non-institutional (i.e., slight enhancement of the status quo) changes. The former is induced by training and education, funding initiatives and projects, building capacity and increasing the efficiency of human, operational, and management sectors, and enhancing people's living conditions. Moreover, it is demonstrated that the political, military, cultural, economic, and judicial institutions are among those the U.S. has planned to change in the aforementioned period and regions.
Keywords: USAID, foreign aid, development, developing countries, Middle East, Africa, Southeast Asia, Latin America
Procedia PDF Downloads 190
556 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course
Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu
Abstract:
This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. Data comprising 140 rows pertaining to the performance of two batches of students were used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian copula were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the AdaBoost classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian copula yielded logistic regression (LR) as the best model. Both models were then tested for accuracy against the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian copula data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian copula data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, Gaussian copula CTGAN
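The CTGAN step can be reproduced with the open-source `ctgan` package; this is a hedged sketch with toy gradebook columns rather than the paper's pipeline (in older package versions the class is named CTGANSynthesizer):

```python
import pandas as pd
from ctgan import CTGAN

# Toy stand-in for the real gradebook: numeric scores and a categorical grade.
real = pd.DataFrame({
    "quiz_avg": [72, 85, 90, 60, 78, 88, 95, 55],
    "hw_avg":   [80, 82, 95, 65, 75, 90, 97, 50],
    "grade":    ["C", "B", "A", "D", "C", "B", "A", "F"],
})

model = CTGAN(epochs=100)
model.fit(real, discrete_columns=["grade"])
synthetic = model.sample(100)      # synthetic rows mimicking the real data
print(synthetic.head())
```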
Procedia PDF Downloads 44
555 Regression of Hand Kinematics from Surface Electromyography Data Using an Long Short-Term Memory-Transformer Model
Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh
Abstract:
Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints in the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data is converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
Keywords: surface electromyography, LSTM-transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding
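An LSTM front-end followed by a Transformer encoder, regressing joint angles from sEMG windows, might be sketched in PyTorch as follows (layer sizes and channel counts are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class LSTMTransformer(nn.Module):
    """LSTM front-end plus Transformer encoder, regressing joint angles."""
    def __init__(self, n_channels=16, hidden=64, n_joints=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, emg):              # emg: (batch, time, channels)
        seq, _ = self.lstm(emg)
        seq = self.encoder(seq)
        return self.head(seq[:, -1])     # joint angles at the last time step

angles = LSTMTransformer()(torch.randn(8, 200, 16))   # 8 windows of 200 samples
```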
Procedia PDF Downloads 18
554 Parametric Modeling for Survival Data with Competing Risks Using the Generalized Gompertz Distribution
Authors: Noora Al-Shanfari, M. Mazharul Islam
Abstract:
The cumulative incidence function (CIF) is a fundamental approach for analyzing survival data in the presence of competing risks; it estimates the marginal probability of each competing event. Parametric modeling of the CIF has the advantage of fitting various shapes of CIF and estimating the impact of covariates with maximum efficiency. To calculate the covariate influence on the total CIF using a parametric model, it is essential to parametrize the baseline of the CIF. As the CIF is an improper function by nature, it is necessary to utilize an improper distribution when applying parametric models. The Gompertz distribution, which is an improper distribution, is limited in its applicability as it only accounts for monotone hazard shapes. The generalized Gompertz distribution, however, can adapt to a wider range of hazard shapes, including unimodal, bathtub, and monotonically increasing or decreasing hazards. In this paper, the generalized Gompertz distribution is used to parametrize the baseline of the CIF, and the parameters of the proposed model are estimated using the maximum likelihood approach. The proposed model is compared with the existing Gompertz model using the Akaike information criterion. Appropriate statistical test procedures and model-fitting criteria are used to test the adequacy of the model. Both models are applied to the 'colon' dataset, which is available in the "biostat3" package in R.
Keywords: competing risks, cumulative incidence function, improper distribution, parametric modeling, survival analysis
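As a rough illustration of the maximum-likelihood step, the sketch below fits a generalized Gompertz CDF by numerical MLE. The parameterization F(t) = (1 - exp(-(λ/γ)(e^(γt) - 1)))^α is one common form found in the literature and is an assumption here, as are the toy event times; the paper's CIF version additionally allows the improper case:

```python
import numpy as np
from scipy.optimize import minimize

def gen_gompertz_cdf(t, lam, gamma, alpha):
    """One common parameterization of the generalized Gompertz CDF."""
    return (1.0 - np.exp(-(lam / gamma) * (np.exp(gamma * t) - 1.0))) ** alpha

def neg_log_lik(params, t):
    lam, gamma, alpha = np.abs(params)   # keep parameters positive in this sketch
    eps = 1e-6
    # Numerical derivative of the CDF as a stand-in for the closed-form density.
    pdf = (gen_gompertz_cdf(t + eps, lam, gamma, alpha) -
           gen_gompertz_cdf(t, lam, gamma, alpha)) / eps
    return -np.sum(np.log(np.maximum(pdf, 1e-300)))

times = np.random.default_rng(1).gamma(2.0, 2.0, size=200)   # toy event times
fit = minimize(neg_log_lik, x0=[0.1, 0.1, 1.0], args=(times,), method="Nelder-Mead")
print(np.abs(fit.x))
```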
Procedia PDF Downloads 108
553 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
Authors: Arian Hosseini, Mahmudul Hasan
Abstract:
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.
Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing
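The "verification-based step model ensemble" idea can be illustrated with a cheap screening model followed by small verifiers; this scikit-learn sketch is a schematic analogue, not the paper's CNN ensemble:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# "Think small, think many": a cheap model screens, small verifiers confirm.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
screen = LogisticRegression(max_iter=500).fit(X, y)
verifiers = [DecisionTreeClassifier(max_depth=4, random_state=s).fit(X, y)
             for s in range(3)]

def predict_cascade(x):
    x = x.reshape(1, -1)
    if screen.predict_proba(x)[0, 1] < 0.5:   # cheap first pass
        return 0
    votes = sum(int(v.predict(x)[0]) for v in verifiers)
    return int(votes >= 2)                    # flag only if verifiers agree

print(predict_cascade(X[0]))
```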
Procedia PDF Downloads 55
552 Filtering Intrusion Detection Alarms Using Ant Clustering Approach
Authors: Ghodhbani Salah, Jemili Farah
Abstract:
With the growth of cyber attacks, information safety has become an important issue all over the world. Many firms rely on security technologies such as intrusion detection systems (IDSs) to manage information technology security risks. IDSs are considered the last line of defense in securing a network and play a very important role in detecting a large number of attacks. However, the main problem with today's most popular commercial IDSs is that they generate a high volume of alerts and a huge number of false positives. This drawback has become the main motivation for many research papers in the IDS area. Hence, in this paper, we present a data mining technique to assist network administrators in analyzing and reducing the false positive alarms produced by an IDS and in increasing detection accuracy. Our data mining technique is an unsupervised clustering method based on a hybrid ANT algorithm. This algorithm discovers clusters of intruders' behavior without prior knowledge of the possible number of classes; we then apply the K-means algorithm to improve the convergence of the ANT clustering. Experimental results on a real dataset show that our proposed approach is efficient, with a high detection rate and a low false alarm rate.
Keywords: intrusion detection system, alarm filtering, ANT class, ant clustering, intruders' behaviors, false alarms
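The hybrid refinement step, where K-means polishes clusters found by the ant algorithm, can be sketched by seeding scikit-learn's KMeans with externally discovered centers; the "ant" pass below is a random stand-in for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Alarm feature vectors (toy stand-in for IDS alert attributes).
alerts, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Stand-in for the ant-clustering pass: rough centers discovered without a
# preset class count (here just a random subset, purely for illustration).
rough_centers = alerts[np.random.default_rng(0).choice(len(alerts), 3)]

# K-means then refines those centers, improving convergence.
refined = KMeans(n_clusters=3, init=rough_centers, n_init=1).fit(alerts)
print(refined.cluster_centers_)
```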
Procedia PDF Downloads 404
551 Trip Reduction in Turbo Machinery
Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto
Abstract:
Industrial plant uptime is of the utmost importance for reliable, profitable, and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus on the efforts required to minimise them. The performance of these CTQs is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through the following elements:
1. Real-time machine operational parameters available remotely, capturing the signatures of malfunctions including the related boundary conditions.
2. A real-time, analytics-based alerting system available remotely.
3. Remote access to trip logs and alarms from the control system to identify the cause of events.
4. Continuous support to field engineers through remote connection with subject matter experts.
5. Live tracking of key CTQs.
6. Benchmarking against the fleet.
7. Breakdown of the cause of failure to component level.
8. Investigation of top contributors and identification of design and operational root causes.
9. Implementation of corrective and preventive actions.
10. Assessment of the effectiveness of implemented solutions using reliability growth models (see the sketch after this abstract).
11. Development of analytics for predictive maintenance.
With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains the approach and provides successful case studies; in particular, 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reducing the number of trips and improving MTBT.
Keywords: reliability, availability, sustainability, digital infrastructure, Weibull, effectiveness, automation, trips, fail start
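For the reliability-growth assessment in step 10, a Crow-AMSAA (Weibull-process) fit over cumulative trip times is one standard approach; a minimal sketch, with invented trip times for illustration:

```python
import numpy as np

def crow_amsaa_fit(failure_times, total_time):
    """MLE for the Crow-AMSAA model N(t) = lam * t**beta (time-terminated test)."""
    n = len(failure_times)
    beta = n / np.sum(np.log(total_time / np.asarray(failure_times)))
    lam = n / total_time**beta
    return lam, beta          # beta < 1 indicates reliability growth

trips = [400, 1100, 2500, 4100, 7900]   # cumulative operating hours at each trip
lam, beta = crow_amsaa_fit(trips, total_time=10000)
mtbt_now = 1 / (lam * beta * 10000**(beta - 1))   # instantaneous MTBT estimate
print(f"beta={beta:.2f}, current MTBT ~ {mtbt_now:.0f} h")
```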
Procedia PDF Downloads 77
550 Graph Neural Networks and Rotary Position Embedding for Voice Activity Detection
Authors: YingWei Tan, XueFeng Ding
Abstract:
Attention-based voice activity detection models have gained significant attention in recent years due to their fast training speed and ability to capture a wide contextual range. The inclusion of a multi-head style and position embedding in the attention architecture is crucial: having multiple attention heads allows for differential focus on different parts of the sequence, while position embedding provides guidance for modeling dependencies between elements at various positions in the input sequence. In this work, we propose an approach that considers each head as a node, enabling the application of graph neural networks (GNNs) to identify correlations among the different nodes. In addition, we adopt an implementation named rotary position embedding (RoPE), which encodes absolute positional information into the input sequence by a rotation matrix and naturally incorporates explicit relative position information into a self-attention module. We evaluate the effectiveness of our method on a synthetic dataset, and the results demonstrate its superiority over the baseline CRNN in scenarios with a low signal-to-noise ratio, while also exhibiting robustness across different noise types. In summary, our proposed framework effectively combines the strengths of CNNs and RNNs (LSTMs) and further enhances detection performance through the integration of graph neural networks and rotary position embedding.
Keywords: voice activity detection, CRNN, graph neural networks, rotary position embedding
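RoPE itself is compact enough to show in full; this sketch rotates channel pairs by position-dependent angles so that attention scores depend only on relative offsets (the base of 10000 follows the common convention and is an assumption here):

```python
import torch

def rotary_position_embedding(x):
    """Apply RoPE to x of shape (batch, seq_len, dim); dim must be even.

    Pairs of channels are rotated by a position-dependent angle, so dot
    products between positions depend only on their relative offset.
    """
    batch, seq_len, dim = x.shape
    half = dim // 2
    freqs = 1.0 / (10000 ** (torch.arange(half) / half))        # per-pair rates
    angles = torch.arange(seq_len)[:, None] * freqs[None, :]    # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

queries = rotary_position_embedding(torch.randn(2, 100, 64))
```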
Procedia PDF Downloads 76
549 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI
Authors: Zahra Alipour, Amirreza Moheb Afzali
Abstract:
In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
Keywords: YOLOv8, MediaPipe, finger tracking, joint estimation, human-computer interaction (HCI)
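A combined pipeline of the kind evaluated, detection with Ultralytics YOLOv8 followed by MediaPipe landmark estimation inside each detected region, might be sketched as follows. The weights file is an assumption (stock YOLOv8 weights are COCO-trained, whereas the paper fine-tunes on EgoHands); landmark index 8 is MediaPipe's index fingertip:

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

detector = YOLO("yolov8m.pt")                 # assumed weights, see note above
hands = mp.solutions.hands.Hands(max_num_hands=1)

frame = cv2.imread("frame.jpg")
for box in detector(frame)[0].boxes.xyxy:     # crop each detected region
    x1, y1, x2, y2 = map(int, box)
    crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
    result = hands.process(crop)              # joint landmarks inside the crop
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]   # index fingertip
        print("index tip (relative):", tip.x, tip.y)
```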
Procedia PDF Downloads 13
548 Developing a Green Information Technology Model in Australian Higher-Educational Institutions
Authors: Mahnaz Jafari, Parisa Izadpanahi, Francesco Mancini, Muhammad Qureshi
Abstract:
The advancement of information technology (IT) has been an intrinsic element of the developments of the 21st century, bringing benefits such as increased economic productivity. However, its widespread application has also been associated with inadvertent negative impacts on society and the environment, necessitating selective interventions to mitigate these impacts. This study responded to this need by developing a Green IT Rating Tool (GIRT) for higher education institutions (HEIs) in Australia to evaluate the sustainability of IT-related practices from an environmental, social, and economic perspective; each dimension must be considered equally to achieve sustainability. The development of the GIRT was informed by the views of interviewed IT professionals, whose opinions formed the basis of a framework listing Green IT initiatives in order of importance as perceived by the interviewees. This framework formed the base of the GIRT, which identifies Green IT initiatives (such as videoconferencing as a substitute for long-distance travel) and the associated weighting of each practice. The proposed sustainable Green IT model could be integrated into existing IT systems, leading to significant reductions in carbon emissions and e-waste and to improvements in energy efficiency. The development of the GIRT and the findings of this study have the potential to inspire other organizations to adopt sustainable IT practices, positively impact the environment, and be used as a reference by IT professionals and decision-makers to evaluate IT-related sustainability practices. The GIRT could also serve as a benchmark for HEIs to compare their performance with other institutions and to track their progress over time. Additionally, the study's results suggest that virtual and cloud-based technologies could reduce e-waste and energy consumption in the higher education sector. Overall, this study highlights the importance of incorporating Green IT practices into the IT systems of HEIs to contribute to a more sustainable future.
Keywords: green information technology, international higher-educational institution, sustainable solutions, environmentally friendly IT systems
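A rating tool of this shape reduces to a weighted score over initiative ratings; the sketch below is hypothetical throughout, with invented initiatives and weights that merely illustrate the structure, not the GIRT's actual content:

```python
# Hypothetical GIRT-style scoring: weighted Green IT initiatives (the
# initiative names and weights are invented placeholders, not the study's).
weights = {
    "videoconferencing_over_travel": 0.30,
    "cloud_consolidation": 0.25,
    "e_waste_recycling": 0.25,
    "energy_efficient_hardware": 0.20,
}

def girt_score(ratings):
    """Weighted average of 0-5 initiative ratings, scaled to 0-100."""
    total = sum(weights[k] * ratings[k] for k in weights)
    return total / 5 * 100

print(girt_score({"videoconferencing_over_travel": 4, "cloud_consolidation": 3,
                  "e_waste_recycling": 5, "energy_efficient_hardware": 2}))
```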
Procedia PDF Downloads 76
547 Data-Driven Insights Into Juvenile Recidivism: Leveraging Machine Learning for Rehabilitation Strategies
Authors: Saiakhil Chilaka
Abstract:
Juvenile recidivism presents a significant challenge to the criminal justice system, impacting both the individuals involved and broader societal safety. This study aims to identify the key factors influencing recidivism and successful rehabilitation outcomes by utilizing a dataset of over 25,000 individuals from the NIJ Recidivism Challenge. We employed machine learning techniques, particularly Random Forest classification, combined with SHAP (SHapley Additive exPlanations) for model interpretability. Our findings indicate that supervision risk score, percent days employed, and education level are critical factors affecting recidivism, with higher levels of supervision, successful employment, and education contributing to lower recidivism rates. Conversely, gang affiliation emerged as a significant risk factor for reoffending. The model achieved an accuracy of 68.8%, highlighting its utility in identifying high-risk individuals and informing targeted interventions. These results suggest that a comprehensive approach involving personalized supervision, vocational training, educational support, and anti-gang initiatives can significantly reduce recidivism and enhance rehabilitation outcomes for juveniles, providing critical insights for policymakers and juvenile justice practitioners.
Keywords: juvenile, justice system, data analysis, SHAP
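The Random Forest + SHAP workflow described maps to a short scikit-learn/shap sketch; the synthetic features stand in for the NIJ variables (supervision risk score, percent days employed, and so on), and output shapes vary slightly across shap versions:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the NIJ features.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP values attribute each prediction to individual features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
shap.summary_plot(shap_values, X[:100])   # global feature-importance view
```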
Procedia PDF Downloads 25
546 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu
Abstract:
Aiming at the problems of low accuracy and susceptibility to cultural relic deterioration in the generation of high-resolution archaeology drawings by current image generation algorithms, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the high-resolution image generation network serving as the backbone, which enhances line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to achieve the generation of high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.
Keywords: archaeology drawings, digital heritage, image generation, deep learning
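The abstract does not specify which attention mechanism is used; as one plausible illustration, a SAGAN-style self-attention block that could be inserted into a generator backbone looks like this:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention block, one way to add attention to a generator."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (b, hw, c//8)
        k = self.k(x).flatten(2)                    # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)         # (b, hw, hw)
        v = self.v(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

features = SelfAttention2d(64)(torch.randn(1, 64, 32, 32))
```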
Procedia PDF Downloads 60
545 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags
Authors: Zhang Shuqi, Liu Dan
Abstract:
To address problems in intelligent network anomaly traffic detection models, such as low detection accuracy caused by the lack of training samples and poor performance on small-sample attack detection, a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training, is proposed. Generating adversarial data with category labels can enhance the training effect and improve classification accuracy and model robustness. F-ACGAN consists of three steps: feature preprocessing, which includes data type conversion, dimensionality reduction, normalization, etc.; the design of a generative adversarial network model with feature learning ability, whose sample generation is improved through adversarial iterations between generator and discriminator; and the addition of an adversarial disturbance factor along the gradient direction of the classification model, which improves the diversity and antagonism of the generated data and promotes learning of adversarial classification features. An experiment constructing a classification model on the UNSW-NB15 dataset shows that with F-ACGAN enhancement of the basic model, classification accuracy improved by 8.09% and the F1 score improved by 6.94%.
Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation
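The "category tags" element corresponds to the auxiliary-classifier part of an ACGAN; a minimal sketch of the discriminator-side loss (the wiring is illustrative, not the paper's exact network):

```python
import torch
import torch.nn.functional as F

def acgan_discriminator_loss(d_real, d_fake, cls_real, cls_fake, labels):
    """ACGAN-style loss: real/fake discrimination plus auxiliary class loss.

    d_*: real/fake logits; cls_*: class logits; labels: the category tags of
    real samples and the labels the generator was conditioned on.
    """
    adv = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
           F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    aux = F.cross_entropy(cls_real, labels) + F.cross_entropy(cls_fake, labels)
    return adv + aux
```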
Procedia PDF Downloads 106
544 Contrasted Mean and Median Models in Egyptian Stock Markets
Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid
Abstract:
Emerging-market return distributions show significant departures from normality: they are characterized by fatter tails relative to the normal distribution and exhibit substantial levels of skewness and kurtosis. Therefore, the classical Markowitz mean-variance framework is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. Markowitz mean-variance analysis can be used in cases of moderate non-normality, where it still provides a good approximation of expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models have been introduced to account for the insufficiency of describing a portfolio by only its first two moments, while the median model has been introduced as a robust statistic that is less affected by outliers than the mean. Tail-risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced in place of variance to capture the effect of risk. In this research, higher-moment models, including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK), are formulated as single-objective non-linear programming (NLP) problems, and median models, including Median-Value-at-Risk (MedVaR) and Median-Mean Absolute Deviation (MedMAD), are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to benchmark portfolios and tested on real financial data from the Egyptian main index EGX30. The results show that all the median models outperform the higher-moment models, as they provide higher final wealth for the investor over the entire period of study. In addition, the results confirm the inapplicability of the classical Markowitz mean-variance framework to the Egyptian stock market, as it resulted in very low realized profits.
Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming
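A mean-variance-skewness portfolio of the kind formulated as a single-objective NLP can be sketched with SciPy; the trade-off weights and fat-tailed toy returns are assumptions for illustration, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=(500, 5)) * 0.01   # fat-tailed toy returns

def mvs_objective(w, lam_var=1.0, lam_skew=1.0):
    """Mean-Variance-Skewness: reward mean and skewness, penalize variance."""
    port = returns @ w
    skew = np.mean((port - port.mean()) ** 3) / port.std() ** 3
    return -(port.mean() - lam_var * port.var() + lam_skew * skew)

n = returns.shape[1]
res = minimize(mvs_objective, x0=np.ones(n) / n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
               bounds=[(0, 1)] * n)
print(np.round(res.x, 3))   # long-only weights summing to one
```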
Procedia PDF Downloads 315
543 TerraEnhance: High-Resolution Digital Elevation Model Generation using GANs
Authors: Siddharth Sarma, Ayush Majumdar, Nidhi Sabu, Mufaddal Jiruwaala, Shilpa Paygude
Abstract:
Digital Elevation Models (DEMs) are digital representations of the Earth's topography, which include information about the elevation, slope, aspect, and other terrain attributes. DEMs play a crucial role in various applications, including terrain analysis, urban planning, and environmental modeling. In this paper, TerraEnhance is proposed, a distinct approach for high-resolution DEM generation using Generative Adversarial Networks (GANs) combined with Real-ESRGANs. By learning from a dataset of low-resolution DEMs, the GANs are trained to upscale the data by 10 times, resulting in significantly enhanced DEMs with improved resolution and finer details. The integration of Real-ESRGANs further enhances visual quality, leading to more accurate representations of the terrain. A post-processing layer is introduced, employing high-pass filtering to refine the generated DEMs, preserving important details while reducing noise and artifacts. The results demonstrate that TerraEnhance outperforms existing methods, producing high-fidelity DEMs with intricate terrain features and exceptional accuracy. These advancements make TerraEnhance suitable for various applications, such as terrain analysis and precise environmental modeling.
Keywords: DEM, ESRGAN, image upscaling, super resolution, computer vision
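The high-pass post-processing layer can be illustrated with a classic unsharp-masking-style filter; sigma and strength below are illustrative tuning knobs, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_refine(dem, sigma=3.0, strength=0.5):
    """Sharpen a DEM by adding back its high-frequency component.

    The high-pass part is the DEM minus its Gaussian-smoothed version.
    """
    lowpass = gaussian_filter(dem, sigma=sigma)
    highpass = dem - lowpass
    return dem + strength * highpass

dem = np.random.rand(256, 256).astype(np.float32)   # stand-in for a generated DEM
refined = highpass_refine(dem)
```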
Procedia PDF Downloads 11
542 Alloy Design of Single Crystal Ni-base Superalloys by Combined Method of Neural Network and CALPHAD
Authors: Mehdi Montakhabrazlighi, Ercan Balikci
Abstract:
The neural network (NN) method is applied to the alloy development of single crystal Ni-base superalloys with low density and improved mechanical strength. A dataset of 1200 records, which includes the chemical composition of the alloys, applied stress, and temperature as inputs and density and time to rupture as outputs, is used for training and testing the network. Thermodynamic phase-diagram modeling of the screened alloys is performed with the Thermo-Calc software to model the equilibrium phases and also microsegregation during solidification processing. The model is first trained on 80% of the data, and the remaining 20% is used to test it. Comparing the predicted values with the experimental ones showed that a well-trained network is capable of accurately predicting the density and time-to-rupture strength of Ni-base superalloys. The modeling results are used to determine the effect of alloying elements, stress, temperature, and gamma-prime phase volume fraction on the rupture strength of Ni-base superalloys. This approach is in line with the Materials Genome Initiative and the integrated computational materials engineering approaches promoted recently with the aim of reducing the cost and time of developing new alloys for critical aerospace components. This work has been funded by TUBITAK under grant number 112M783.
Keywords: neural network, rupture strength, superalloy, Thermo-Calc
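An 80/20 train-test workflow for a network mapping composition, stress, and temperature to density and rupture life can be sketched with scikit-learn; the synthetic data and network size below merely stand in for the 1200-record dataset and the paper's architecture:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in: composition fractions + stress + temperature -> density, rupture life.
rng = np.random.default_rng(0)
X = rng.random((1200, 10))
y = np.column_stack([8.5 + X[:, :5].sum(axis=1),          # pseudo density
                     1e3 * np.exp(-X[:, 8] - X[:, 9])])   # pseudo rupture time

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000))
model.fit(X_train, y_train)
print("R^2 on held-out 20%:", model.score(X_test, y_test))
```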
Procedia PDF Downloads 316