Search results for: STS benchmark dataset
541 Application of Supervised Deep Learning-based Machine Learning to Manage Smart Homes
Authors: Ahmed Al-Adaileh
Abstract:
Renewable energy sources, domestic storage systems, controllable loads and machine learning technologies will be key components of future smart home management systems. This paper presents an energy management scheme that uses a Deep Learning (DL) approach to support smart home management systems consisting of a standalone photovoltaic system, a storage unit, a heating, ventilation and air-conditioning system, and a set of conventional and smart appliances. The objective of the proposed scheme is to apply DL-based machine learning to predict various running parameters within a smart home's environment in order to achieve maximum comfort levels for occupants, reduced electricity bills, and less dependency on the public grid. The problem is formulated using reinforcement learning, where decisions are taken by applying a continuous-time Markov Decision Process. The main contribution of this research is the proposed framework, which applies DL to enrich the system's supervised dataset and thereby effectively support smart home systems. A case study involving a set of conventional and smart appliances with dedicated processing units in an inhabited building demonstrates the validity of the proposed framework. A visualization graph shows "before" and "after" results.
Keywords: smart homes systems, machine learning, deep learning, Markov Decision Process
Procedia PDF Downloads 205
540 Predictive Modeling of Student Behavior in Virtual Reality: A Machine Learning Approach
Authors: Gayathri Sadanala, Shibam Pokhrel, Owen Murphy
Abstract:
In the ever-evolving landscape of education, Virtual Reality (VR) environments offer a promising avenue for enhancing student engagement and learning experiences. However, understanding and predicting student behavior within these immersive settings remain challenging tasks. This paper presents a comprehensive study on the predictive modeling of student behavior in VR using machine learning techniques. We introduce a rich dataset capturing student interactions, movements, and progress within a VR orientation program. The dataset is divided into training and testing sets, allowing us to develop and evaluate predictive models for various aspects of student behavior, including engagement levels, task completion, and performance. Our machine learning approach leverages a combination of feature engineering and model selection to reveal hidden patterns in the data. We employ regression and classification models to predict student outcomes, and the results showcase promising accuracy in forecasting behavior within VR environments. Furthermore, we demonstrate the practical implications of our predictive models for personalized VR-based learning experiences and early intervention strategies. By uncovering the intricate relationship between student behavior and VR interactions, we provide valuable insights for educators, designers, and developers seeking to optimize virtual learning environments.
Keywords: interaction, machine learning, predictive modeling, virtual reality
Procedia PDF Downloads 144
539 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm
Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu
Abstract:
Forecasting models have a great impact on prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on fuzzy time series-based forecasting methods to solve forecasting problems. A forecasting model's accuracy depends fully on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Different hybrid forecasting models have combined fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model that deals with first-order as well as high-order fuzzy time series and uses particle swarm optimization to improve forecast accuracy. The proposed method uses the historical enrollments of the University of Alabama as its dataset in the forecasting process. Firstly, we apply an automatic clustering algorithm to calculate the appropriate intervals for the historical enrollments. Then, particle swarm optimization and fuzzy time series are combined, which shows better forecasting accuracy than other existing forecasting models.
Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model
Procedia PDF Downloads 252
538 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as synthetic speech. Linear Bayes Classifiers and Multilayer Neural Networks have been used to classify 11 feature vectors obtained from the sensors on the glove into one of the 27 ASL alphabets and a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant and portable.
Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove
Procedia PDF Downloads 304
537 E-Government Development in Nigeria, 'Bank Verification No': An Anti-Corruption Tool
Authors: Ernest C. Nwadinobi, Amanda Peart, Carl Adams
Abstract:
Leading countries such as the USA, the UK and some European countries have moved their focus beyond merely developing e-government platforms and electronic services that give citizens and customers access to information; they have also made significant backroom changes to support the electronic services being provided. E-government has moved from just providing electronic information to citizens and customers to actually serving their needs. In developing countries like Nigeria, e-government is being used as an anti-corruption tool. The introduction of the Bank Verification Number (BVN) scheme by the Central Bank of Nigeria has helped the government not only save money but also protect customers' transactions and enhance confidence in the banking sector. This has helped curtail the high rate of cyber and financial crime that had become part of the system. The use of the BVN as an anti-corruption tool in Nigeria came at a time when there was a need for openness, accountability, and discipline, after years of treasury looting and reckless handling of finances. As there has been no defined method for measuring the strength or success of e-government development, in this case the BVN, in Nigeria, progress will remain at the same level. The implementation strategy of the BVN in Nigeria has mostly been a quick-fix, quick-win solution. In fact, there is little or no evidence of a framework for e-government. Like other leading countries, Nigeria needs proper implementation of strategy and framework, especially towards a customer-oriented process that accommodates every administrative body of government institutions, including private business, rather than focusing on an inflexible organisational structure. The development of e-government must have a strategy and framework for it to work, and this strategy must encompass every public administration and not be limited to any individual bodies or organizations. A defined framework or monitoring method must be put in place to help evaluate and benchmark e-government development, and it must follow the same concepts or principles. Through critical analysis of the existing methods, this paper identifies areas that must be included in the existing approach so that e-government development can be channelled towards its defined strategic objectives.
Keywords: Bank Verification No (BVN), quick-fix, anti-corruption, quick-win
Procedia PDF Downloads 164
536 Customized Design of Amorphous Solids by Generative Deep Learning
Authors: Yinghui Shang, Ziqing Zhou, Rong Han, Hang Wang, Xiaodi Liu, Yong Yang
Abstract:
The design of advanced amorphous solids, such as metallic glasses, with targeted properties through artificial intelligence signifies a paradigmatic shift in physical metallurgy and materials technology. Here, we developed a machine-learning architecture that facilitates the generation of metallic glasses with targeted multifunctional properties. Our architecture integrates a state-of-the-art unsupervised generative adversarial network model with supervised models, allowing the incorporation of general prior knowledge, derived from thousands of data points across a vast range of alloy compositions, into the creation of data points for a specific type of composition; this overcomes the common issue of data scarcity typically encountered in the design of a given type of metallic glass. Using our generative model, we have successfully designed copper-based metallic glasses, which display exceptionally high hardness or a remarkably low modulus. Notably, our architecture can not only explore uncharted regions in the targeted compositional space but also permits self-improvement after experimentally validated data points are added to the initial dataset for subsequent cycles of data generation, hence paving the way for the customized design of amorphous solids without human intervention.
Keywords: metallic glass, artificial intelligence, mechanical property, automated generation
Procedia PDF Downloads 57
535 Fraud Detection in Credit Cards with Machine Learning
Authors: Anjali Chouksey, Riya Nimje, Jahanvi Saraf
Abstract:
Online transactions have increased dramatically in this new 'social-distancing' era. With online transactions, fraud in online payments has also increased significantly. Fraud is a significant problem in various industries such as insurance and banking. These frauds include leaking sensitive information related to the credit card, which can easily be misused. With the government also pushing online transactions, e-commerce is booming. But due to increasing fraud in online payments, e-commerce industries are suffering a great loss of trust from their customers. These companies are finding credit card fraud to be a big problem. People have started using online payment options and thus are becoming easy targets of credit card fraud. In this research paper, we discuss machine learning algorithms for this task. We applied a decision tree, XGBoost, k-nearest neighbour, logistic regression, random forest, and SVM to a dataset of online credit card transactions. We test all these algorithms for detecting fraud cases using the confusion matrix and F1 score, and calculate the accuracy score for each model, to identify which algorithm can be used in detecting fraud.
Keywords: machine learning, fraud detection, artificial intelligence, decision tree, k nearest neighbour, random forest, XGBOOST, logistic regression, support vector machine
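The comparison described is straightforward to reproduce. Below is a minimal sketch (not the authors' code) of such a model bake-off: synthetic, class-imbalanced data stands in for the real transaction features and fraud labels, and XGBoost could be added via the xgboost package.

```python
# Illustrative sketch: comparing classifiers for credit card fraud detection
# with accuracy, F1 score, and confusion matrix. Synthetic data stands in
# for the real transaction features X and binary labels y (1 = fraud).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm": SVC(),  # an XGBClassifier from the xgboost package could be added here
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, y_pred), f1_score(y_te, y_pred))
    print(confusion_matrix(y_te, y_pred))
```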
Procedia PDF Downloads 149
534 Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images
Authors: Khitem Amiri, Mohamed Farah
Abstract:
Image classification is a challenging task and is gaining lots of interest since it helps us understand the content of images. Recently, Deep Learning (DL) based methods have given very interesting results on several benchmarks. For hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and to the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches have given fair classification accuracy. In this paper, we address the curse of dimensionality by reducing the number of bands and replacing the HSI channels with channels representing radiometric indices. Therefore, instead of using all the HSI bands, we compute radiometric indices such as NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), etc., and we use the combination of these indices as input for the Deep Belief Network (DBN) based classification model. Thus, we keep almost all the pertinent spectral information while considerably reducing the size of the image. In order to test our image representation, we applied our method to several HSI datasets, including the Indian Pines and Jasper Ridge datasets, and it gave results comparable to state-of-the-art methods while considerably reducing training and testing time.
Keywords: hyperspectral images, deep belief network, radiometric indices, image classification
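To illustrate the index-based representation, here is a minimal sketch (not the authors' code) of computing NDVI and NDWI from a hyperspectral cube; the band positions are assumptions, since the actual wavelengths depend on the sensor.

```python
# Illustrative sketch: replacing hyperspectral bands with radiometric
# indices before classification. Band indices below are hypothetical.
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index, e.g. NDVI = (NIR - Red) / (NIR + Red)."""
    return (a - b) / (a + b + 1e-9)  # small epsilon avoids division by zero

hsi = np.random.rand(145, 145, 200)                         # stand-in cube (rows, cols, bands)
nir, red, green = hsi[..., 50], hsi[..., 29], hsi[..., 19]  # assumed band positions

ndvi = normalized_difference(nir, red)       # vegetation index
ndwi = normalized_difference(green, nir)     # water index
features = np.stack([ndvi, ndwi], axis=-1)   # compact input for the DBN classifier
print(features.shape)                        # (145, 145, 2) instead of 200 bands
```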
Procedia PDF Downloads 280
533 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks
Authors: Khalid Ali, Manar Jammal
Abstract:
In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms, which aim at predicting future O-RAN traffic from previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. On this basis, two ML models were proposed and evaluated based on their prediction capabilities.
Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity
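As a flavor of the proactive approach, the sketch below (an illustration under assumptions, not the paper's models) fits an ARIMA model to a synthetic daily-periodic traffic series and forecasts a day ahead; an LSTM could be trained on the same series.

```python
# Illustrative sketch: forecasting O-RAN VNF traffic from past observations
# with ARIMA. The synthetic series and the ARIMA order are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                                   # two weeks of hourly samples
traffic = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

model = ARIMA(traffic, order=(2, 0, 2)).fit()            # order chosen for illustration
forecast = model.forecast(steps=24)                      # predict the next day
print(forecast[:5])                                      # scale VNF resources ahead of demand
```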
Procedia PDF Downloads 228
532 Synthesis, Characterization and Photocatalytic Activity of Electrospun Zinc and/or Titanium Oxide Nanofibers for Methylene Blue Degradation
Authors: Zainab Dahrouch, Beatrix Petrovičová, Claudia Triolo, Fabiola Pantò, Angela Malara, Salvatore Patanè, Maria Allegrini, Saveria Santangelo
Abstract:
Synthetic dyes dispersed in water cause environmental damage and have harmful effects on human health. Methylene blue (MB) is broadly used as a dye in the textile, pharmaceutical, printing, cosmetics, leather, and food industries. The complete removal of MB is difficult due to the presence of aromatic rings in its structure. The present study is focused on electrospun nanofibers (NFs) with engineered architecture and surface to be used as catalysts for the photodegradation of MB. Ti and/or Zn oxide NFs are produced by electrospinning precursor solutions with different Ti:Zn molar ratios (from 0:1 to 1:0). Subsequent calcination and cooling steps are operated at fast rates to generate porous NFs with capture centers that reduce the recombination rate of the photogenerated charges. The comparative evaluation of the NFs as photocatalysts for the removal of MB from an aqueous solution with a dye concentration of 15 µM under UV irradiation shows that the binary (wurtzite ZnO and anatase TiO₂) oxides exhibit higher catalytic activity compared to the ternary (ZnTiO₃ and Zn₂TiO₄) oxides. The higher band gap and lower crystallinity of the ternary oxides are responsible for their lower photocatalytic activity. The optimal load for the wurtzite ZnO is 0.66 mg mL⁻¹, giving a degradation rate of 7.94×10⁻² min⁻¹. The optimal load for anatase TiO₂ is lower (0.33 mg mL⁻¹) and the corresponding rate constant (1.12×10⁻¹ min⁻¹) is higher. This finding (higher activity with lower load) is of crucial importance for scaling up the process to an industrial scale. Indeed, the anatase NFs outperform even the commonly used P25-TiO₂ benchmark. Besides, they can be reused twice without any regeneration treatment, with 5.2% and 18.7% activity decrease after the second and third use, respectively. Thanks to the scalability of the electrospinning technique, this laboratory-scale study provides a perspective towards the sustainable large-scale manufacture of photocatalysts for the treatment of industry effluents.
Keywords: anatase, capture centers, methylene blue dye, nanofibers, photodegradation, zinc oxide
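The rate constants quoted above follow from pseudo-first-order kinetics, ln(C₀/C) = kt. The sketch below (with made-up concentration readings, not the paper's data) shows how such a constant is extracted from a degradation time series.

```python
# Illustrative sketch: extracting a pseudo-first-order rate constant k
# from photodegradation data via ln(C0/C) = k t. Readings are stand-ins.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50, 60])          # irradiation time, min
c = np.array([15.0, 7.2, 3.4, 1.6, 0.8, 0.4, 0.2]) # MB concentration, micromolar

k, _ = np.polyfit(t, np.log(c[0] / c), 1)          # slope of ln(C0/C) vs t
print(f"rate constant k = {k:.3e} min^-1")         # compare against 7.94e-2 / 1.12e-1
```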
Procedia PDF Downloads 157
531 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences
Authors: Alisha Khanal, Gokhan Saygili
Abstract:
It is commonly observed that aftershocks follow a mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock and its aftershocks. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks are significant. This paper extends the empirical sliding displacement models to flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
Keywords: seismic slope stability, mainshock, aftershock, landslide, earthquake, flexible slopes
Procedia PDF Downloads 146
530 Integrative Analysis of Urban Transportation Network and Land Use Using GIS: A Case Study of Siddipet City
Authors: P. Priya Madhuri, J. Kamini, S. C. Jayanthi
Abstract:
Assessment of land use and transportation networks is essential for sustainable urban growth, urban planning, efficient public transportation systems, and reducing traffic congestion. The study focuses on land use, population density, and their correlation with the road network for future development. The scope of the study covers inventory and assessment of the road network dataset (line features) at the city, zonal, or ward level, extracted from very high-resolution satellite data (spatial resolution < 0.5 m) at 1:4000 map scale with ground truth verification. The road network assessment is carried out by computing various indices that measure road coverage and connectivity; in this study, it is carried out at the municipal and ward levels. In order to identify gaps, road coverage and connectivity were associated with urban land use, built-up area, and population density in the study area. Ward-wise road connectivity and coverage maps have been prepared. To assess the relationship between road network metrics, correlation analysis is applied. The study's conclusions are beneficial for effective road network planning and for detecting gaps in the road network at the ward level in association with urban land use, existing built-up area, and population.
Keywords: road connectivity, road coverage, road network, urban land use, transportation analysis
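Connectivity indices of the kind mentioned are easy to compute once the road network is represented as a graph. A minimal sketch (toy edges, not the study's data) of the common beta and gamma indices:

```python
# Illustrative sketch: connectivity indices for a ward-level road network
# graph. In the study the network would come from digitized satellite data.
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])  # junctions and road links

e, v = g.number_of_edges(), g.number_of_nodes()
beta = e / v                          # beta index: links per node
gamma = e / (3 * (v - 2))             # gamma index: observed vs. maximum links (planar)
print(f"beta = {beta:.2f}, gamma = {gamma:.2f}")
```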
Procedia PDF Downloads 35
529 Random Forest Classification for Population Segmentation
Authors: Regina Chua
Abstract:
To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the most predictive ten or fewer questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consists of a collection of individual decision trees whose combined votes yield a predicted segment with robust precision and recall scores compared to a single tree. A random 70-30 stratified sampling was used for training the algorithm, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10 with 20 instead of 254 features and 10 instead of 60 questions. With acceptable accuracy in prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments.
Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling
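A minimal sketch of the setup described (synthetic data standing in for the survey responses; not the author's code): a depth-capped Random Forest on a 70-30 stratified split, then a refit on the top 20 features by importance.

```python
# Illustrative sketch: Random Forest segment classifier with a 70-30
# stratified split, a depth cap of 10, and importance-based feature pruning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import numpy as np

X, y = make_classification(n_samples=7000, n_features=254, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, max_depth=10, random_state=0)
rf.fit(X_tr, y_tr)
print("accuracy, all features:", rf.score(X_te, y_te))

top20 = np.argsort(rf.feature_importances_)[::-1][:20]   # prioritize 20 features
rf_small = RandomForestClassifier(n_estimators=300, max_depth=10, random_state=0)
rf_small.fit(X_tr[:, top20], y_tr)
print("accuracy, 20 features:", rf_small.score(X_te[:, top20], y_te))
```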
Procedia PDF Downloads 95
528 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study is composed of six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds Stress Model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32,768 or more processors in parallel. Visualisations were produced with ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons to other databases. An RSM proved an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
Procedia PDF Downloads 382
527 Structural Health Monitoring of Buildings–Recorded Data and Wave Method
Authors: Tzong-Ying Hao, Mohammad T. Rahmani
Abstract:
This article presents a structural health monitoring (SHM) method based on changes in wave traveling times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and of the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) were found to decrease by approximately 24% initially during the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is proven suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method
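The core computation of the wave method can be illustrated in a few lines: estimate the travel time of a pulse between two sensor levels and divide the inter-sensor height by it. The sketch below uses synthetic IRFs and an assumed sensor spacing, not the recorded Torre Central data.

```python
# Illustrative sketch: shear wave velocity from impulse response functions
# (IRFs) at two floors. A drop in this velocity indicates possible damage.
import numpy as np

fs = 200.0                       # sampling rate, Hz
h = 24.0                         # height between basement and roof sensors, m (assumed)
t = np.arange(0, 2, 1 / fs)

irf_base = np.exp(-((t - 0.50) ** 2) / 1e-4)   # pulse arriving at the basement
irf_roof = np.exp(-((t - 0.58) ** 2) / 1e-4)   # same pulse at the roof, delayed

lag = np.argmax(np.correlate(irf_roof, irf_base, mode="full")) - (t.size - 1)
travel_time = lag / fs
print(f"shear wave velocity ~ {h / travel_time:.0f} m/s")
```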
Procedia PDF Downloads 369
526 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its sell-in and sell-out time series data, which are forecasted on a weekly and monthly scale for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined such that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep neural networks (DNN), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
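A minimal sketch of such an ensemble (assumptions throughout; not Nestlé's implementation): DeepAR forecasts are assumed to already exist as an array (e.g., from GluonTS), and an XGBoost regressor trained on features that include lagged sell-out is blended with them by simple averaging. Requires the xgboost package.

```python
# Illustrative sketch: blending DeepAR forecasts with an XGBoost regressor
# whose features interlink sell-out and sell-in. All data are stand-ins.
import numpy as np
from xgboost import XGBRegressor

n = 500
rng = np.random.default_rng(0)
sell_out_lag = rng.normal(100, 10, n)            # last observed sell-out
deepar_forecast = rng.normal(100, 8, n)          # stand-in DeepAR sell-in forecast
actual_sell_in = 0.6 * sell_out_lag + 0.4 * deepar_forecast + rng.normal(0, 3, n)

X = np.column_stack([deepar_forecast, sell_out_lag])  # sell-out feeds the sell-in model
xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(X[:400], actual_sell_in[:400])

blend = 0.5 * deepar_forecast[400:] + 0.5 * xgb.predict(X[400:])  # simple average ensemble
print("holdout MAE:", np.mean(np.abs(blend - actual_sell_in[400:])))
```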
Procedia PDF Downloads 276
525 Automated End-to-End Pipeline Processing Solution for Autonomous Driving
Authors: Ashish Kumar, Munesh Raghuraj Varma, Nisarg Joshi, Gujjula Vishwa Teja, Srikanth Sambi, Arpit Awasthi
Abstract:
Autonomous driving vehicles are revolutionizing the transportation system of the 21st century. This has been made possible by intensive research into robust, reliable, and intelligent programs that can perceive and understand their environment and make decisions based on that understanding. It is a very data-intensive task, with data coming from multiple sensors, and the amount of data directly affects the performance of the system. Researchers have to design the preprocessing pipeline for different datasets with different sensor orientations and alignments before the dataset can be fed to the model. This paper proposes a solution that unifies all the data from different sources into a uniform format using the intrinsic and extrinsic parameters of the sensor used to capture the data, allowing the same pipeline to use data from multiple sources at a time. This also means easy adoption of new or in-house generated datasets. The solution also automates the complete deep learning pipeline from preprocessing to post-processing for various tasks, allowing researchers to design multiple custom end-to-end pipelines. Thus, the solution takes care of the input and output data handling, saving the time and effort spent on it and allowing more time for model improvement.
Keywords: augmentation, autonomous driving, camera, custom end-to-end pipeline, data unification, lidar, post-processing, preprocessing
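The calibration-driven unification described boils down to applying extrinsic and intrinsic transforms. A minimal sketch (made-up matrices, not the paper's parameters) of projecting lidar points into a camera image:

```python
# Illustrative sketch: unifying lidar points into a camera frame with
# extrinsic (rotation R, translation t) and intrinsic (K) parameters.
import numpy as np

K = np.array([[721.5, 0.0, 609.6],     # intrinsics: focal lengths and principal point
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # extrinsics: lidar-to-camera rotation (assumed)
t = np.array([0.0, -0.08, -0.27])      # extrinsics: lidar-to-camera translation, m

points_lidar = np.random.rand(1000, 3) * [40, 10, 2] + [5, -5, -1]  # stand-in scan

cam = points_lidar @ R.T + t           # transform into the camera coordinate frame
cam = cam[cam[:, 2] > 0]               # keep points in front of the camera
uv = cam @ K.T                         # project with the pinhole model
uv = uv[:, :2] / uv[:, 2:3]            # normalize to pixel coordinates
print(uv[:3])
```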
Procedia PDF Downloads 125
524 Design of an Ensemble Learning Behavior Anomaly Detection Framework
Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia
Abstract:
Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to safeguard their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to company information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threat of malicious user activity effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present test results for several detection methods on a representative access control dataset; some of the explored classifiers give up to 99% accuracy.
Keywords: cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing
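As a flavor of the ensemble idea (a hedged sketch, not the authors' framework), the snippet below soft-votes three classifiers over synthetic features standing in for access-log attributes such as login times and resources touched.

```python
# Illustrative sketch: an ensemble vote over several classifiers for
# flagging anomalous user behavior on imbalanced, synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, weights=[0.95], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
                ("svm", SVC(probability=True))],
    voting="soft",                      # average predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```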
Procedia PDF Downloads 128
523 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy
Authors: Jemal Ebrahim Dessie, Lukács Zsolt
Abstract:
High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to their poor formability at room temperature, warm and hot forming have been advised. However, warm and hot forming methods need more steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous, since ordinary tools can be used. However, springback of the supersaturated sheets and their thinning are critical challenges that must be resolved when using this technique. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and times, using a specially designed and developed furnace, in order to optimize the W-temper heat treatment temperature. A U-shaped bending test was carried out at different time intervals between the W-temper heat treatment and the forming operation. Finite element analysis (FEA) of U-bending was conducted using AutoForm, aiming to validate the experimental results. A uniaxial tension-unloading test was performed to determine the kinematic hardening behavior of the material, which was then calibrated in the finite element code using systematic process improvement (SPI). In the simulation, the effects of friction coefficient and blank holder force were considered. Springback parameters were evaluated using the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was larger for longer time intervals between the W-temper heat treatment and the forming operation. The die radius was the most influential parameter for flange springback, whereas the change of shape on the sidewall shows an overall increasing tendency as the punch radius increases relative to the die radius. The springback angles on the flange and sidewall appear to be more strongly influenced by the coefficient of friction than by the blank holder force, and the effect increases with increasing blank holder force.
Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper
Procedia PDF Downloads 100
522 Correlation between Speech Emotion Recognition Deep Learning Models and Noises
Authors: Leah Lee
Abstract:
This paper examines how noise affects deep learning models for speech emotion recognition, in order to see whether or not noise masks emotions. The deep learning models used are plain convolutional neural networks (CNN), an auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), the Toronto Emotional Speech Set (TESS), and the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset. To make the combined dataset four times bigger, the audio files are augmented with stretch and pitch transformations. From the augmented datasets, five different features are extracted as inputs to the models. There are eight different emotions to be classified. The noise variations are white noise, dog barking, and cough sounds, and the variation in the signal-to-noise ratio (SNR) is 0, 20, and 40. In summary, for each deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, the accuracy and receiver operating characteristic (ROC) are checked.
Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16
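Mixing noise into speech at a controlled SNR is a standard step. Below is a minimal sketch (stand-in signals, not the paper's data) for the white-noise case; dog-bark or cough recordings could be substituted for the noise array.

```python
# Illustrative sketch: mixing noise into clean speech at a target
# signal-to-noise ratio (in dB), as in the 0/20/40 SNR conditions described.
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10*log10(p_speech / p_noise_scaled) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in utterance
white = rng.normal(0, 1, 16000)
noisy_20db = add_noise_at_snr(speech, white, snr_db=20.0)
print(noisy_20db.shape)
```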
Procedia PDF Downloads 78
521 Adolescent Social Anxiety, School Satisfaction, and School Absenteeism; Findings from Young-HUNT3 and Norwegian National Education Data
Authors: Malik D. Halidu, Cathrine F. Moe, Tommy Haugan
Abstract:
Purpose: The demand for effective school-based interventions addressing adolescents' unmet mental health needs is growing. Grounded in the functional contextualism approach, this study investigates the role of school satisfaction (SS) in buffering school absenteeism (SAB) among adolescents experiencing social anxiety (SA). Methods: A unique and large population-based sample of adolescents (upper secondary school pupils; n = 1864) was drawn from the Young-HUNT 3 survey dataset merged with the Norwegian national educational registry. Moderation regression analysis was performed using Stata 17. Results: We find a statistically significant moderating role of school satisfaction in the relationship between social anxiety and school absenteeism (β = -0.109, p < 0.01) among upper secondary school pupils. Among socially anxious adolescents with a higher perceived quality of school life, school satisfaction functions as a buffer, reducing the positive relationship between SA and SAB. However, there was no statistically significant association between social anxiety and school absenteeism for adolescents with low school satisfaction. Conclusion: Overall, the study's hypothesized model was statistically supported and contributes to the discourse that school satisfaction, as a target of school-based interventions, can effectively improve school outcomes (e.g., reduced absenteeism) among socially anxious pupils.
Keywords: social anxiety, school satisfaction, school absenteeism, Norwegian adolescent
Procedia PDF Downloads 91
520 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually they are detected only after going viral in cyberspace, when it is too late for prevention. An early detection system with fairly good accuracy is needed to reduce the societal conflicts caused by social media posts that attack individuals, groups, and the government in Indonesia. The purpose of this study is to find an early detection model for Twitter using machine learning that achieves the highest accuracy among several methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table and graph of the accuracy of the six detection models on the Indonesian-language Twitter dataset and identified the best model with the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
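A minimal sketch of the SVM-GA combination (not the paper's implementation): a tiny genetic algorithm tunes the SVM hyperparameters C and gamma by cross-validated accuracy. Synthetic features stand in for the TF-IDF vectors of Indonesian-language tweets.

```python
# Illustrative sketch: genetic algorithm over SVM hyperparameters (C, gamma).
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, random_state=0)
random.seed(0)

def fitness(genes):                       # genes = (log10 C, log10 gamma)
    clf = SVC(C=10 ** genes[0], gamma=10 ** genes[1])
    return cross_val_score(clf, X, y, cv=3).mean()

pop = [(random.uniform(-2, 3), random.uniform(-4, 1)) for _ in range(10)]
for _ in range(10):                       # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                     # selection: keep the fittest
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = random.sample(parents, 2)
        child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.3),   # crossover + mutation
                 (a[1] + b[1]) / 2 + random.gauss(0, 0.3))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, acc={fitness(best):.3f}")
```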
Procedia PDF Downloads 129
519 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights
Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan
Abstract:
The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To effectively analyze huge datasets, efficient NoSQL databases are needed. The analysis of post-COVID-19 health and well-being outcomes and the evaluation of the effectiveness of government efforts during the pandemic are made possible by this research's integration of several datasets, which cuts down query processing time and creates predictive visual artifacts. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Effective data retrieval and analysis are made possible by spreading the datasets across a sharded database and indexing the individual shards. Analysis of the connections between governmental activities, poverty levels, and post-pandemic well-being is the key goal: we evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that supports the advancement of the UN sustainable development goals, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
Keywords: big data, COVID-19, health, indexing, NoSQL, sharding, scalability, well being
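In MongoDB terms, the recommendation above corresponds to a few administrative operations. A minimal sketch with PyMongo (hypothetical database, collection, and shard-key names, not the study's deployment):

```python
# Illustrative sketch: creating an index with PyMongo and enabling sharding
# through admin commands. Assumes the client connects to a mongos router.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
col = client["pandemic"]["wellbeing"]

# Index the fields used by the analytical queries to cut scan time.
col.create_index([("country", ASCENDING), ("date", ASCENDING)])

# Shard the collection on a compound key so documents spread across shards.
client.admin.command("enableSharding", "pandemic")
client.admin.command("shardCollection", "pandemic.wellbeing",
                     key={"country": 1, "date": 1})
```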
Procedia PDF Downloads 71
518 Bioengineering of a Plant System to Sustainably Remove Heavy Metals and to Harvest Rare Earth Elements (REEs) from Industrial Wastes
Authors: Edmaritz Hernandez-Pagan, Kanjana Laosuntisuk, Alex Harris, Allison Haynes, David Buitrago, Michael Kudenov, Colleen Doherty
Abstract:
Rare earth elements (REEs) are critical metals for modern electronics, green technologies, and defense systems. However, due to their dispersed nature in the Earth's crust, frequent co-occurrence with radioactive materials, and similar chemical properties, acquiring and purifying REEs is costly and environmentally damaging, restricting access to these metals. Plants could serve as resources for bioengineering REE mining systems. Although there is limited information on how REEs affect plants at the cellular and molecular level, plants with high REE tolerance and hyperaccumulation have been identified. This dissertation aims to develop a plant-based system for harvesting REEs from industrial waste material, with a focus on acid mine drainage (AMD), a toxic coal mining product. The objectives are 1) to develop a non-destructive, in vivo REE detection method for Phytolacca (REE hyperaccumulator) plants utilizing fluorescence spectroscopy, with a primary focus on dysprosium; 2) to characterize the uptake of REEs and heavy metals in Phytolacca americana and Phytolacca acinosa (an REE hyperaccumulator) grown in AMD, for potential implementation in the plant-based system; and 3) to apply the REE detection method to identify REE-binding proteins and peptides that could enhance uptake of and selectivity for targeted REEs in the plants used in the system. The candidates are known REE-binding peptides or proteins, orthologs of known metal-binding proteins from REE hyperaccumulator plants, and novel proteins and peptides identified by comparative plant transcriptomics. Lanmodulin, a high-affinity REE-binding protein from methylotrophic bacteria, is used as a benchmark for the REE-protein binding fluorescence assays and is expressed in A. thaliana to test for changes in REE tolerance and uptake.
Keywords: phytomining, agromining, rare earth elements, pokeweed, phytolacca
Procedia PDF Downloads 18
517 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques
Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee
Abstract:
Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain, and digital image processing can be customized for mineral prospecting. In this study, an attempt has been made to map hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets were processed to generate different Band Ratio Composites (BRCs). ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays, and host rocks. The ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps were produced using the Spectral Angle Mapper (SAM) method. Results were validated against the geological map of the area, which shows positive agreement with the image-processing outputs. This study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
Keywords: ASTER, hyperion, band ratios, alteration zones, SAM
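The SAM method classifies each pixel by the angle between its spectrum and a reference end-member spectrum; smaller angles mean closer matches. A minimal sketch (random stand-in data, not the ASTER/Hyperion scenes):

```python
# Illustrative sketch: Spectral Angle Mapper (SAM) classification of a
# flattened image cube against a few end-member spectra.
import numpy as np

def spectral_angle(pixels: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Angle (radians) between each pixel spectrum (n, bands) and ref (bands,)."""
    cos = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

cube = np.random.rand(100 * 100, 50)        # flattened image: pixels x bands
endmembers = np.random.rand(4, 50)          # e.g., gossan, clay, host rock spectra

angles = np.stack([spectral_angle(cube, e) for e in endmembers], axis=1)
classes = np.argmin(angles, axis=1).reshape(100, 100)   # best-matching end-member
print(classes.shape)
```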
Procedia PDF Downloads 280
516 Towards a Broader Understanding of Journal Impact: Measuring Relationships between Journal Characteristics and Scholarly Impact
Authors: X. Gu, K. L. Blackmore
Abstract:
The impact factor was introduced to measure the quality of journals, and various impact measures exist across multiple bibliographic databases. In this research, we aim to provide a broader understanding of the relationship between scholarly impact and other characteristics of academic journals. The data used for this research were collected from Ulrich's Periodicals Directory (Ulrichs), Cabell's (Cabells), and the SCImago Journal & Country Rank (SJR) from 1999 to 2015. A master journal dataset was consolidated via journal title and ISSN. We adopted a two-step analysis process to study the quantitative relationships between scholarly impact and other journal characteristics. Firstly, we conducted a correlation analysis over the data attributes, with results indicating no correlations between any of the identified journal characteristics. Secondly, we examined the quantitative relationship between scholarly impact and other characteristics using quartile analysis. The results show interesting patterns, some expected and others less anticipated. Higher-quartile journals publish more in both frequency and quantity and charge higher subscription costs; top-quartile journals also have the lowest acceptance rates. Non-English journals are more likely to be categorized in lower quartiles, and lower-quartile journals are more likely to stop publishing than higher-quartile ones. Future work is suggested, including analysis of the relationship between scholars and their publications, based on the quartile ranking of the journals in which they publish.
Keywords: academic journal, acceptance rate, impact factor, journal characteristics
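Quartile analysis of this kind is concise with pandas. A minimal sketch (hypothetical column names and random data, not the consolidated Ulrichs/Cabells/SJR dataset):

```python
# Illustrative sketch: bin journals into quartiles by impact measure and
# compare characteristics across the quartile groups.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
journals = pd.DataFrame({
    "impact": rng.lognormal(0, 1, 1000),
    "issues_per_year": rng.integers(2, 25, 1000),
    "acceptance_rate": rng.uniform(0.05, 0.8, 1000),
})
# Lowest impact -> Q4, highest -> Q1, following journal-ranking convention.
journals["quartile"] = pd.qcut(journals["impact"], 4, labels=["Q4", "Q3", "Q2", "Q1"])
print(journals.groupby("quartile", observed=True)[["issues_per_year", "acceptance_rate"]].mean())
```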
Procedia PDF Downloads 304
515 Corn Production in the Visayas: An Industry Study from 2002-2019
Authors: Julie Ann L. Gadin, Andrearose C. Igano, Carl Joseph S. Ignacio, Christopher C. Bacungan
Abstract:
Corn production has been an important and pervasive industry in the Visayas for many years, and corn's role as a substitute commodity for rice heightens demand among health-conscious consumers. Unfortunately, the corn industry is confronted with several challenges, such as weak institutions. Considering these issues, the paper examined the factors that influence corn production in the three administrative regions of the Visayas, namely Western Visayas, Central Visayas, and Eastern Visayas. The data used were retrieved from a variety of publicly available sources such as the Philippine Statistics Authority, the Department of Agriculture, the Philippine Crop Insurance Corporation, and the International Disaster Database. Utilizing a dataset from 2002 to 2019, the study tested the indicators using three multiple linear regression (MLR) models. Results showed that the land area harvested (p=0.02) and the value of corn production (p=0.00) are statistically significant variables that influence corn production in the Visayas. Given these findings, it is suggested that policies on forest conversion and sustainable land management be enforced effectively, enabling farmworkers to obtain land to grow corn, especially in rural regions. Furthermore, the Biofuels Act of 2006, the Livestock Industry Restructuring and Rationalization Act, and the supporting policy, Senate Bill No. 225 (an Act Establishing the Philippine Corn Research Institute and Appropriating Funds), should be enforced inclusively in order to improve demand in the corn-allied industries, which may lead to an increase in the value and volume of corn production in the Visayas.
Keywords: corn, industry, production, MLR, Visayas
Procedia PDF Downloads 217
514 Attitudinal Change: A Major Therapy for Non–Technical Losses in the Nigerian Power Sector
Authors: Fina O. Faithpraise, Effiong O. Obisung, Azele E. Peter, Chris R. Chatwin
Abstract:
This study investigates and identifies consumer attitude as a major influence resulting in non-technical losses in the Nigerian electricity supply sector. This is revealed through a combination of quantitative and qualitative survey research. The dataset employed is a simple random sample of households using electricity (public power supply), with the number of units chosen based on statistical power analysis. The units were subdivided into two categories (households with and without electrical meters). The hypothesis formulated was tested and analyzed using a chi-square statistical method. The results show that the test statistic for households with an electrical prepared meter (EPM) far exceeded the critical value (9.488 < 427.4), as did that for households without one (EPMn) (9.488 < 436.1), with a p-value of 0.01%. The analysis establishes that the wrong attitude towards handling the electricity supplied (not turning off light bulbs and electrical appliances when not in use, indoors and outdoors, within 12 hours of the day) characterizes the non-technical losses in the power sector. Therefore, the adoption of efficient lighting attitudes in individual households, as recommended by the researcher, is greatly encouraged. The results of this study should serve as a model for energy efficiency and improved electricity consumption as well as a stable economy.
Keywords: attitudinal change, household, non-technical losses, prepared meter
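The quoted 9.488 is the chi-square critical value for 4 degrees of freedom at the 5% significance level. A minimal sketch of such a test (made-up counts, not the study's data):

```python
# Illustrative sketch: chi-square test of observed vs. expected counts,
# compared against the critical value 9.488 (df = 4, alpha = 0.05).
from scipy.stats import chisquare, chi2

observed = [120, 45, 30, 25, 20]          # hypothetical survey response counts
expected = [48, 48, 48, 48, 48]           # uniform expectation (same total)
stat, p = chisquare(observed, expected)

critical = chi2.ppf(0.95, df=4)           # = 9.488 for df = 4
print(f"chi2 = {stat:.1f}, critical = {critical:.3f}, reject H0: {stat > critical}")
```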
Procedia PDF Downloads 180
513 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn
Abstract:
Feature selection and attribute reduction are crucial problems and widely used techniques in machine learning, data mining, and pattern recognition for overcoming the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS), due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers were employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and classification accuracy.
Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct
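To give a flavor of the search strategy, here is a heavily simplified sketch of a binary SFLA update (not the authors' modified algorithm): frogs are binary feature masks, and in each memeplex the worst frog moves toward the best via a sigmoid flip probability. The stub fitness merely stands in for the fuzzy-rough dependency degree.

```python
# Illustrative sketch: core update of a binary shuffled frog leaping
# algorithm for feature selection, with a stand-in fitness function.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_frogs, n_memeplexes = 20, 12, 3

def fitness(frog: np.ndarray) -> float:
    # Stand-in: rewards keeping features 0-4 and dropping the rest.
    return frog[:5].sum() - 0.1 * frog[5:].sum()

frogs = rng.integers(0, 2, size=(n_frogs, n_features))
for _ in range(30):                                   # iterations
    order = np.argsort([-fitness(f) for f in frogs])  # best frogs first
    frogs = frogs[order]
    for m in range(n_memeplexes):                     # round-robin memeplexes
        idx = np.arange(m, n_frogs, n_memeplexes)
        best, worst = idx[0], idx[-1]
        step = frogs[best] - frogs[worst]             # move worst toward best
        prob = 1 / (1 + np.exp(-step))                # sigmoid -> bit-flip probability
        candidate = (rng.random(n_features) < prob).astype(int)
        if fitness(candidate) > fitness(frogs[worst]):
            frogs[worst] = candidate                  # accept improving move

best_frog = max(frogs, key=fitness)
print("selected features:", np.flatnonzero(best_frog))
```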
Procedia PDF Downloads 227
512 Thermal Behaviour of a Low-Cost Passive Solar House in Somerset East, South Africa
Authors: Ochuko K. Overen, Golden Makaka, Edson L. Meyer, Sampson Mamphweli
Abstract:
Low-cost housing provided for people with small incomes in South Africa is characterized by poor thermal performance, due to inferior craftsmanship and a lack of regard for energy-efficient design during the building process. On average, South African households spend 14% of their total monthly income on energy needs, in particular space heating, which is higher than the international benchmark of 10% for energy poverty. Adopting energy-efficient passive solar design strategies and superior thermal building materials can create a stable, thermally comfortable indoor environment, thereby reducing energy consumption for space heating. The aim of this study is to analyse the thermal behaviour of a low-cost house integrated with passive solar design features. A low-cost passive solar house with superstructure fly ash brick walls was designed and constructed in Somerset East, South Africa, and its indoor and outdoor meteorological parameters were monitored for a period of one year. The ASTM E741-11 standard was adopted to perform a ventilation test in the house. In summer, the house was found to be thermally comfortable for 66% of the period monitored, and in winter for about 79%. The ventilation heat flow rates of the windows and doors were found to be 140 J/s and 68 J/s, respectively. Air leakage through cracks and openings in the building envelope was 0.16 m³/m²h, with a corresponding ventilation heat flow rate of 24 J/s. The indoor carbon dioxide concentration monitored overnight was found to be 0.248%, which is less than the maximum limit of 0.500%. The predicted percentage dissatisfied index of the house shows that 86% of the occupants will express thermal satisfaction with the indoor environment. With good operation, the house can create a well-ventilated, thermally comfortable and naturally luminous indoor environment for the occupants. Incorporating passive solar design in low-cost housing can be one of the immediate and long-term solutions to the energy crisis facing South Africa.
Keywords: energy efficiency, low-cost housing, passive solar design, rural development, thermal comfort
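Ventilation heat flow rates of the kind quoted follow from Q = ρ · c_p · V̇ · ΔT. A back-of-envelope sketch (the airflow and indoor-outdoor temperature difference below are assumed values, not the study's measurements):

```python
# Illustrative calculation: ventilation heat flow rate Q = rho * c_p * V_dot * dT.
rho = 1.2      # air density, kg/m^3
c_p = 1005.0   # specific heat of air, J/(kg K)
v_dot = 0.0145 # volumetric airflow, m^3/s (assumed)
d_t = 8.0      # indoor-outdoor temperature difference, K (assumed)

q = rho * c_p * v_dot * d_t
print(f"ventilation heat flow rate: {q:.0f} J/s")  # ~140 J/s at these values
```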
Procedia PDF Downloads 262