Search results for: Fourier neural operator
2224 The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0
Authors: Chen Xi, Lao Xuerui, Li Junjie, Jiang Yike, Wang Hanwei, Zeng Zihao
Abstract:
To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) must be extended and made more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on CIM and CNN technology. It combines 5G networks, architectural and geoinformatics technologies, and convolutional neural networks with deep learning models for human behaviour recognition to provide empirical data such as pedestrian flow and human behavioural characteristics, and ultimately to form spatial performance evaluation criteria and a spatial performance warning system, making the empirical data accurate and intelligent for prediction and decision making.
Keywords: urban planning, urban governance, CIM, artificial intelligence, convolutional neural network
Procedia PDF Downloads 149
2223 Modeling Fertility and Production of Hazelnut Cultivars through the Artificial Neural Network under Climate Change of Karaj
Authors: Marziyeh Khavari
Abstract:
In recent decades, climate change, global warming, and a growing population worldwide have created challenges such as increasing food consumption and shortages of resources. Assessing how climate change could disturb crops, especially hazelnut production, is crucial for sustainable agricultural production. For hazelnut cultivation in mid-warm conditions, such as in Iran, we present an investigation of climate parameters and their effect on the fertility and nut production of hazelnut trees. The climate of the northern zones of Iran was investigated over 1960-2017, revealing an upward trend in temperature. Furthermore, a descriptive analysis performed on six cultivars over seven years shows how this small-scale survey can demonstrate the effects of climate change on hazelnut production and stability. Results showed that some climate parameters, such as solar radiation, soil temperature, relative humidity, and precipitation, are more significant for nut production. Moreover, some cultivars, for instance Negret and Segorbe, produced more stable yields, while Mervill de Boliver recorded the most variation during the study. Another aspect addressed is training a model to simulate nut production through neural network and linear regression simulation. The study developed the ANN model and estimated its generalization capability with criteria such as RMSE, SSE, and accuracy factors for the dependent and independent variables (environmental and yield traits). The models were trained and tested, and their accuracy is adequate for predicting hazelnut production under fluctuations in weather parameters.
Keywords: climate change, neural network, hazelnut, global warming
Procedia PDF Downloads 132
2222 Foot Recognition Using Deep Learning for Knee Rehabilitation
Authors: Rakkrit Duangsoithong, Jermphiphut Jaruenpunyasak, Alba Garcia
Abstract:
The use of foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system is intended to capture a patient image in a controlled room and background to recognize the foot in limited views. However, such a system can be inconvenient for monitoring knee exercises at home. In order to overcome these problems, this paper proposes a deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with a traditional classification method using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy than the traditional classification method in recognizing foot images from online databases, but with higher complexity.
Keywords: foot recognition, deep learning, knee rehabilitation, convolutional neural network
Procedia PDF Downloads 161
2221 Neural Correlates of Attention Bias to Threat during the Emotional Stroop Task in Schizophrenia
Authors: Camellia Al-Ibrahim, Jenny Yiend, Sukhwinder S. Shergill
Abstract:
Background: Attention bias to threat plays a role in the development, maintenance, and exacerbation of delusional beliefs in schizophrenia, in which patients emphasize the threatening characteristics of stimuli and prioritise them for processing. Cognitive control deficits arise when task-irrelevant emotional information elicits attentional bias and obstructs optimal performance. This study investigates the neural correlates of the interference effect of linguistic threat and whether these effects are independent of delusional severity. Methods: Using event-related functional magnetic resonance imaging (fMRI), the neural correlates of the interference effect of linguistic threat during the emotional Stroop task were investigated, comparing patients with schizophrenia with high (N=17) and low (N=16) paranoid symptoms and healthy controls (N=20). Participants were instructed to identify the font colour of each word presented on the screen as quickly and accurately as possible. Stimulus types varied among threat-relevant, positive, and neutral words. Results: Group differences in whole-brain effects indicate decreased amygdala activity in patients with high paranoid symptoms compared with low paranoid patients and healthy controls. Region-of-interest (ROI) analysis validated our results within the amygdala and examined changes within the striatum, showing a pattern of reduced activation within the clinical group compared to healthy controls. Delusional severity was associated with significantly decreased neural activity in the striatum within the clinical group. Conclusion: Our findings suggest that the emotional interference mediated by the amygdala and striatum may reduce responsiveness to threat-related stimuli in schizophrenia and that attenuation of the fMRI blood-oxygen-level-dependent (BOLD) signal within these areas might be influenced by the severity of delusional symptoms.
Keywords: attention bias, fMRI, schizophrenia, Stroop
Procedia PDF Downloads 199
2220 Concrete Mix Design Using Neural Network
Authors: Rama Shanker, Anil Kumar Sachan
Abstract:
The basic ingredients of concrete are cement, fine aggregate, coarse aggregate, and water. To produce a concrete of certain specified properties, optimum proportions of these ingredients are mixed. The important factors which govern the mix design are the grade of concrete, the type of cement, and the size, shape, and grading of aggregates. The concrete mix design method is based on an experimentally evolved empirical relationship between these factors. The basic drawbacks of this method are that it does not always produce the desired strength, the calculations are cumbersome, and a number of tables must be consulted to arrive at a trial mix proportion; moreover, the variation in attainment of the desired strength is uncertain, and the mix may even fall below the target strength. To solve this problem, a large number of cubes of standard grades were prepared and their 28-day strengths determined for different combinations of cement, fine aggregate, coarse aggregate, and water. An artificial neural network (ANN) was prepared using these data. The inputs of the ANN were the grade of concrete, the type of cement, and the size, shape, and grading of aggregates, and the outputs were the proportions of the various ingredients. With these inputs and outputs, the ANN was trained using a feed-forward back-propagation model. Finally, the trained ANN was validated; it gave results with a maximum error of 4 to 5%. Hence, a specific type of concrete can be prepared from given material properties, and the proportions of these materials can be quickly evaluated using the proposed ANN.
Keywords: aggregate proportions, artificial neural network, concrete grade, concrete mix design
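A minimal sketch of the kind of feed-forward network described above, mapping mix-design factors to ingredient proportions. The feature encoding and the tiny training set below are invented placeholders, not the authors' dataset.

```python
# Sketch (not the authors' code): a feed-forward network mapping mix-design factors
# to ingredient proportions, assuming inputs/outputs like those described in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical data: [grade (MPa), cement type code, max aggregate size (mm),
# aggregate shape code, fineness modulus] -> [cement, fine agg., coarse agg., water] (kg/m3)
X = np.array([[20, 0, 20, 1, 2.6],
              [25, 0, 20, 1, 2.8],
              [30, 1, 10, 0, 2.4],
              [40, 1, 20, 0, 2.7]], dtype=float)
y = np.array([[320, 700, 1150, 190],
              [350, 680, 1140, 185],
              [400, 650, 1100, 175],
              [450, 620, 1080, 165]], dtype=float)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Predict ingredient proportions for a new mix specification
print(model.predict(scaler.transform([[35, 0, 20, 1, 2.5]])))
```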
Procedia PDF Downloads 389
2219 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal
Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan
Abstract:
This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a Data Acquisition System (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as the Deep Feed Forward Neural Network (DFFNN) and Deep Belief Networks (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy of each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal
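An illustrative sketch of the statistical feature extraction step described above. The exact feature set is not given in the abstract, so the features below (RMS, kurtosis, skewness, crest factor, etc.) are common assumptions for vibration analysis.

```python
# Illustrative sketch (assumed feature set): statistical features extracted from a
# vibration record before feeding a fault classifier.
import numpy as np
from scipy.stats import kurtosis, skew

def vibration_features(x):
    """Return a small dictionary of statistical features for one vibration record."""
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": rms,
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
        "crest_factor": np.max(np.abs(x)) / rms,
    }

# Example with a synthetic signal standing in for a DAQ record
t = np.linspace(0, 1, 10_000)
signal = np.sin(2 * np.pi * 30 * t) + 0.1 * np.random.randn(t.size)
print(vibration_features(signal))
```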
Procedia PDF Downloads 113
2218 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consisted of three Python scripts that can be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with the outputs of manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were also comparable to manually analyzed results, but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
Keywords: calcium imaging, computer vision, neural activity, neural networks
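A rough sketch of the OpenCV steps the abstract describes (grayscale conversion, binary thresholding, contour detection, and mean fluorescence per contour). The file name, area threshold, and Otsu thresholding choice below are assumptions for illustration, not the pipeline's actual parameters.

```python
# Sketch of contour detection and per-contour mean fluorescence with OpenCV.
import cv2
import numpy as np

frame = cv2.imread("calcium_frame.png")           # one frame of the recording (hypothetical file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # grayscale conversion
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, cnt in enumerate(contours):
    if cv2.contourArea(cnt) < 20:                  # skip specks unlikely to be cell bodies
        continue
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [cnt], -1, 255, thickness=cv2.FILLED)
    mean_fluorescence = cv2.mean(gray, mask=mask)[0]
    print(f"cell {i}: area={cv2.contourArea(cnt):.0f}, mean F={mean_fluorescence:.1f}")
```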
Procedia PDF Downloads 82
2217 Nonlinear Modeling of the PEMFC Based on NNARX Approach
Authors: Shan-Jen Cheng, Te-Jen Chang, Kuang-Hsiung Tan, Shou-Ling Kuo
Abstract:
The Polymer Electrolyte Membrane Fuel Cell (PEMFC) is a time-varying nonlinear dynamic system, and traditional linear modeling approaches struggle to estimate the structure of the PEMFC system correctly. For this reason, this paper presents nonlinear modeling of the PEMFC using the Neural Network Auto-Regressive model with eXogenous inputs (NNARX) approach. A multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accuracy of the NNARX model are tested by one-step-ahead prediction relating output voltage to input current, using measured experimental data from the PEMFC. The results show that the obtained nonlinear NNARX model can efficiently approximate the dynamics of the PEMFC, with the model output and the measured system output in close agreement.
Keywords: PEMFC, neural network, nonlinear modeling, NNARX
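A minimal NNARX-style sketch of the one-step-ahead scheme described above: past outputs (voltages) and past inputs (currents) form the regression vector for an MLP. The model orders and the synthetic data are assumptions and do not reflect the authors' experimental setup.

```python
# NNARX-style one-step-ahead prediction sketch with assumed orders na = nb = 2.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_regressors(u, y, na=2, nb=2):
    """Stack [y(k-1..k-na), u(k-1..k-nb)] as inputs and y(k) as target."""
    start = max(na, nb)
    X, t = [], []
    for k in range(start, len(y)):
        X.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
        t.append(y[k])
    return np.array(X), np.array(t)

# Synthetic current (input) and voltage (output) standing in for measured PEMFC data
u = np.random.uniform(5, 30, 500)
y = 35.0 - 0.05 * np.roll(u, 1) + 0.01 * np.random.randn(500)   # toy dynamics only

X, t = build_regressors(u, y)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, t)
print("one-step-ahead RMSE:", np.sqrt(np.mean((model.predict(X) - t) ** 2)))
```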
Procedia PDF Downloads 381
2216 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and the reaction time can be severely affected. Compared to UAVs, flying birds show similar velocity, radar cross-section, and, in general, similar characteristics. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field, related to different UAVs and birds performing various trajectories, were used to extract specifically designed, target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression…), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
Procedia PDF Downloads 221
2215 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively recent technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it decomposes a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the Empirical Wavelet Transform (EWT), which consists in building a bank of filters from the segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques and using the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained by the EMD, EWT, and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
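A small illustration of the EMD step described above, assuming the third-party PyEMD package (distributed as EMD-signal). The synthetic two-tone signal stands in for a geophysical series; the per-IMF dominant frequency is the kind of spectral content EAWD could use to guide the EWT segmentation.

```python
# Sketch of EMD decomposition and per-IMF spectral content (PyEMD package assumed).
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

t = np.linspace(0, 1, 2000)
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.5 * np.sin(2 * np.pi * 40 * t)
          + 0.1 * np.random.randn(t.size))

imfs = EMD().emd(signal)            # each row is one Intrinsic Mode Function
print("number of IMFs:", imfs.shape[0])

# Dominant frequency of each IMF, e.g. to drive the Fourier-spectrum segmentation
for i, imf in enumerate(imfs):
    spectrum = np.abs(np.fft.rfft(imf))
    freq = np.fft.rfftfreq(imf.size, d=t[1] - t[0])
    print(f"IMF {i}: dominant frequency ~ {freq[np.argmax(spectrum)]:.1f} Hz")
```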
Procedia PDF Downloads 137
2214 Harmonic Mitigation and Total Harmonic Distortion Reduction in Grid-Connected PV Systems: A Case Study Using Real-Time Data and Filtering Techniques
Authors: Atena Tazikeh Lemeski, Ismail Ozdamar
Abstract:
This study presents a detailed analysis of harmonic distortion in a grid-connected photovoltaic (PV) system using real-time data captured from a solar power plant. Harmonics introduced by inverters in PV systems can degrade power quality and lead to increased Total Harmonic Distortion (THD), which poses challenges such as transformer overheating, increased power losses, and potential grid instability. This research addresses these issues by applying Fast Fourier Transform (FFT) to identify significant harmonic components and employing notch filters to target specific frequencies, particularly the 3rd harmonic (150 Hz), which was identified as the largest contributor to THD. Initial analysis of the unfiltered voltage signal revealed a THD of 21.15%, with prominent harmonic peaks at 150 Hz, 250 Hz and 350 Hz, corresponding to the 3rd, 5th, and 7th harmonics, respectively. After implementing the notch filters, the THD was reduced to 5.72%, demonstrating the effectiveness of this approach in mitigating harmonic distortion without affecting the fundamental frequency. This paper provides practical insights into the application of real-time filtering techniques in PV systems and their role in improving overall grid stability and power quality. The results indicate that targeted harmonic mitigation is crucial for the sustainable integration of renewable energy sources into modern electrical grids.
Keywords: grid-connected photovoltaic systems, fast Fourier transform, harmonic filtering, inverter-induced harmonics
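A hedged sketch of the workflow the abstract outlines: estimate THD from the FFT of a voltage signal and suppress the 3rd harmonic (150 Hz) with a notch filter. The sampling rate, harmonic amplitudes, and filter Q below are invented for illustration, not plant data.

```python
# THD estimation via FFT and 3rd-harmonic suppression with a notch filter (synthetic data).
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 10_000                                  # sampling rate (Hz), assumed
t = np.arange(0, 0.2, 1 / fs)
v = (np.sin(2 * np.pi * 50 * t)              # fundamental (50 Hz)
     + 0.20 * np.sin(2 * np.pi * 150 * t)    # 3rd harmonic
     + 0.05 * np.sin(2 * np.pi * 250 * t))   # 5th harmonic

def thd(signal, fs, f0=50, n_harmonics=10):
    """Ratio of the RMS of harmonics 2..n to the fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    harmonics = [amp(f0 * k) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / amp(f0)

print("THD before filtering: %.2f %%" % (100 * thd(v, fs)))

b, a = iirnotch(w0=150, Q=30, fs=fs)         # notch centred on the 3rd harmonic
v_filtered = filtfilt(b, a, v)
print("THD after filtering:  %.2f %%" % (100 * thd(v_filtered, fs)))
```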
Procedia PDF Downloads 33
2213 An Early Detection Type 2 Diabetes Using K-Nearest Neighbor Algorithm
Authors: Ng Liang Shen, Ngahzaifa Abdul Ghani
Abstract:
This research aimed at developing an early warning system for pre-diabetics and diabetics by analyzing simple and easily determinable signs and symptoms of diabetes among people living in Malaysia, using a Particle Swarm Optimized Artificial Neural Network. With the skyrocketing prevalence of Type 2 diabetes in Malaysia, the system can be used to encourage affected people to seek further medical attention to prevent the onset of diabetes or to start managing it early enough to avoid the associated complications. The study sought to find the best predictive variables of Type 2 Diabetes Mellitus, developed a system to diagnose diabetes from these variables using Artificial Neural Networks, and tested the system's accuracy, so that a diabetes diagnosis can be generated by machine learning algorithms even at primary or advanced stages.
Keywords: diabetes diagnosis, Artificial Neural Networks, artificial intelligence, soft computing, medical diagnosis
Procedia PDF Downloads 336
2212 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely Partial Differential Equation (PDE) variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operators, which have been used extensively to model and to solve problems in image processing. The purpose of this paper is to introduce and to study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of the p-harmonious functions introduced as discrete approximations of both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems
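For orientation, one common continuous form of the (game-theoretic) normalized p-Laplacian found in the literature is sketched below; the authors' graph-based definition may use a different notation or normalization.

```latex
% A standard continuum form of the normalized (game) p-Laplacian, for reference only
\Delta^{N}_{p} u
  \;=\; \tfrac{1}{p}\,|\nabla u|^{2-p}\,
        \operatorname{div}\!\bigl(|\nabla u|^{p-2}\,\nabla u\bigr)
  \;=\; \tfrac{1}{p}\,\Delta u \;+\; \tfrac{p-2}{p}\,\Delta^{N}_{\infty} u,
\qquad
\Delta^{N}_{\infty} u \;=\; \frac{\bigl\langle D^{2}u\,\nabla u,\;\nabla u\bigr\rangle}{|\nabla u|^{2}} .
```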
Procedia PDF Downloads 512
2211 Tracking of Linarin from the Ethyl Acetate Fraction of Melinjo (Gnetum gnemon L.) Seeds Using Preparative High Performance Liquid Chromatography
Authors: Asep Sukohar, Ramadhan Triyandi, Muhammad Iqbal, Sahidin, Suharyani
Abstract:
Introduction: Resveratrol belongs to a class of bioactive chemicals found in melinjo that has a wide range of biological actions. The purpose of this study is to determine the linarin content of the melinjo fraction by using preparative high-performance liquid chromatography (prep-HPLC). Method: Extraction used the Soxhlet method with 96% ethanol as solvent. Fractionation used ethyl acetate and ethanol in a ratio of 1:1. Tracing of the linarin compound used prep-HPLC with a mobile phase of distilled water:methanol (55:45, v/v). The presence of linarin was detected at a wavelength of 215 nm. Fourier Transform Infrared (FTIR) spectroscopy was used to identify the functional groups of the compound. Result: The retention time required to elute the ethyl acetate fraction was 2.601 minutes. Identification of the separated compounds using Fourier Transform Infrared Spectroscopy - Quest Attenuated Total Reflectance (FTIR-QATR) gives similarity values with standards ranging from 0 to 1000. The elution results of the ethyl acetate fraction showed similarity values with the standard compounds linarin (668), resveratrol (578), and catechin (455). Conclusion: Tracing of the active compound in the ethyl acetate fraction of Gnetum gnemon L. using prep-HPLC strongly suggested the presence of the linarin compound.
Keywords: Gnetum gnemon L., linarin, prep-HPLC, ethyl acetate fraction
Procedia PDF Downloads 116
2210 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that use hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization when the nature of the elements being classified shows significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great learning capacity and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep convolutional neural networks, for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were produced, in collaboration with agronomists, for the algorithm to classify. Disease level assessments performed by experts provided ground-truth data for the disease scores of the same winter wheat plots from which the RGB images were acquired. The system had an overall accuracy of 84% in discriminating the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
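A compact PyTorch sketch of the residual-CNN idea named in the keywords: convolutional blocks with skip connections feeding a five-way disease-level classifier. The channel widths and depth are assumptions; this is not the authors' architecture.

```python
# Minimal residual CNN classifying images into five disease-level classes (assumed layout).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # skip connection

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
    ResidualBlock(32), ResidualBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),                      # five disease-level classes
)
print(model(torch.randn(1, 3, 224, 224)).shape)   # -> torch.Size([1, 5])
```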
Procedia PDF Downloads 331
2209 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks
Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas
Abstract:
This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface, like potholes or body bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of the camera that is integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, within a distance of 5 to 25 meters. The overall methodology is illustrated within the scope of an integrated application (or system), which can be incorporated into complete Advanced Driver-Assistance Systems (ADAS) that provide a full range of functionalities. The proposed techniques present state-of-the-art detection and classification results and real-time performance running on AI accelerator devices like Intel's Myriad 2/X Vision Processing Unit (VPU).
Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems
Procedia PDF Downloads 134
2208 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making such as selecting members for a game and choosing the game strategy based on analysis of accumulated sports data has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players during the game, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, a task that is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are "whether or not the lineup should be changed" and "whether or not a Small Ball lineup should be adopted". Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data can be accumulated for each play, indicating a player's contribution to the game, and these scoring data can be considered as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, and an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data for the 2019-2020 season. We then apply the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
Procedia PDF Downloads 133
2207 Room Level Indoor Localization Using Relevant Channel Impulse Response Parameters
Authors: Raida Zouari, Iness Ahriz, Rafik Zayani, Ali Dziri, Ridha Bouallegue
Abstract:
This paper proposes a room-level indoor localization algorithm based on the use of Multi-Layer Neural Network (MLNN) classifiers and a one-versus-one strategy. Seven parameters of the Channel Impulse Response (CIR) were used, and Gram-Schmidt orthogonalization was performed to study the relevance of the extracted parameters. Simulation results show that when relevant CIR parameters are used as the position fingerprint and the optimal MLNN architecture is selected, a good room-level localization score can be achieved. The current study also showed that some of the CIR parameters are not correlated with the location and can decrease the localization performance of the system.
Keywords: mobile indoor localization, multi-layer neural network (MLNN), channel impulse response (CIR), Gram-Schmidt orthogonalization
Procedia PDF Downloads 357
2206 An Ensemble-based Method for Vehicle Color Recognition
Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi
Abstract:
The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single Convolutional Neural Network (CNN) for feature extraction or classification, in this paper four CNNs, with different architectures performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stacked generalization algorithm as an ensemble technique. As a result, the final model performs better in vehicle color identification than each CNN individually. The evaluation results, in terms of overall average accuracy and accuracy variance, show that the proposed method outperforms state-of-the-art rivals.
Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network
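A toy sketch of the stacked-generalization step described above: class-probability outputs of several base models become the features of a meta-classifier. The class count, base-model count, and random stand-in probabilities below are assumptions, since training four CNNs is outside the scope of a snippet.

```python
# Stacked generalization over base-model probability outputs (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression

n_samples, n_classes, n_base_models = 200, 8, 4   # e.g. 8 vehicle colours, 4 CNNs
rng = np.random.default_rng(0)

# Stand-ins for the probability vectors each trained CNN would output on a validation set
base_probs = [rng.dirichlet(np.ones(n_classes), size=n_samples) for _ in range(n_base_models)]
y = rng.integers(0, n_classes, size=n_samples)

meta_X = np.hstack(base_probs)                    # concatenate the four probability vectors
meta_model = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("meta-classifier training accuracy:", meta_model.score(meta_X, y))
```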
Procedia PDF Downloads 85
2205 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
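A hedged illustration of the Random Forest part of the study: fitting a regressor on activity-level features and inspecting feature importances to surface cost drivers such as scope changes and material delays. The feature names and synthetic data below are invented examples, not the case-study dataset.

```python
# Random Forest overrun prediction and feature-importance inspection (invented data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "planned_cost": rng.uniform(1e4, 1e6, 300),
    "scope_change_pct": rng.uniform(0, 30, 300),
    "material_delay_days": rng.integers(0, 60, 300),
    "crew_size": rng.integers(2, 40, 300),
})
# Toy target: overrun driven mostly by scope changes and delivery delays
overrun = 0.8 * df["scope_change_pct"] + 0.5 * df["material_delay_days"] + rng.normal(0, 2, 300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(df, overrun)
print(pd.Series(model.feature_importances_, index=df.columns).sort_values(ascending=False))
```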
Procedia PDF Downloads 54
2204 The Use Support Vector Machine and Back Propagation Neural Network for Prediction of Daily Tidal Levels Along The Jeddah Coast, Saudi Arabia
Authors: E. A. Mlybari, M. S. Elbisy, A. H. Alshahri, O. M. Albarakati
Abstract:
Sea level rise threatens to increase the impact of future storms and hurricanes on coastal communities. Accurate prediction of sea level change is an important task in planning construction and human activities in coastal and oceanic areas. In this study, support vector machines (SVM) are proposed to predict daily tidal levels along the Jeddah Coast, Saudi Arabia. The optimal parameter values of the kernel function are determined using a genetic algorithm. The SVM results are compared with the field data and with a back-propagation neural network (BPNN). Among the models, the SVM is superior to the BPNN and has better generalization performance.
Keywords: tides, prediction, support vector machines, genetic algorithm, back-propagation neural network, risk, hazards
Procedia PDF Downloads 468
2203 Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors
Authors: Ayyaz Hussain, Tariq Sadad
Abstract:
Breast cancer (BC) is one of the most widespread ailments among females globally. Early prognosis of BC can decrease the mortality rate, and accurate identification of benign tumors can avoid unnecessary biopsies and further treatment of patients under investigation. However, due to variations in the images, it is difficult to isolate cancerous cases from normal and benign ones. Machine learning techniques are widely employed in BC pattern classification and prognosis. In this research, a deep convolutional neural network (DCNN) based on the AlexNet architecture is employed to obtain more discriminative features from breast tissue. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer of the deep network. The proposed model is tested on a widely used breast image database, the MIAS dataset, and achieved 99% accuracy.
Keywords: breast cancer, DCNN, KNN, mammography
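A schematic example (not the authors' code) of the two-stage idea above: deep features from an AlexNet backbone, then a K-nearest-neighbour classifier in place of the softmax layer. The placeholder tensors stand in for pre-processed mammography patches; in practice pretrained weights and real images would be used.

```python
# AlexNet feature extraction followed by a KNN classifier (placeholder inputs and labels).
import torch
import torchvision.models as models
from sklearn.neighbors import KNeighborsClassifier

alexnet = models.alexnet()        # in practice, pretrained ImageNet weights would be loaded
alexnet.eval()

def deep_features(batch):
    """Return pooled AlexNet convolutional features as flat vectors."""
    with torch.no_grad():
        f = alexnet.avgpool(alexnet.features(batch))
    return torch.flatten(f, 1).numpy()

# Placeholder tensors standing in for pre-processed mammography patches
X_train = deep_features(torch.randn(20, 3, 224, 224))
y_train = [0] * 10 + [1] * 10     # benign vs. malignant labels (toy)
X_test = deep_features(torch.randn(4, 3, 224, 224))

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict(X_test))
```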
Procedia PDF Downloads 136
2202 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on it. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
Keywords: sign language recognition, deep learning, convolution neural network, recurrent neural network
Procedia PDF Downloads 27
2201 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy
Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki
Abstract:
Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varied from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separation of the solvent. It was found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm-1) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggested a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm-1) revealed the presence of ester groups, carboxylic acid, and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of the carboxylic acid and the decrease of esters, released in the form of carbon dioxide, indicated that a decarboxylation reaction occurred during the extraction process.
Keywords: biomass waste, degradative solvent extraction, mechanism, upgrading
Procedia PDF Downloads 285
2200 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris
Abstract:
Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of the palm oil. The conventional procedure includes physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time-consuming, and the results may contain human error. Hence, many researchers try to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigresens, were collected according to three maturity categories: underripe, ripe, and overripe. Each sample was scanned with FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated by image processing in FLIR Tools and the FLIR ThermaCAM Researcher Pro 2.10 software environment. The results show that temperature decreased from immature to overmature oil palm FFBs. An overall analysis-of-variance (ANOVA) test proved that this predictor gave significant differences among the underripe, ripe, and overripe maturity categories. This shows that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as the predictor through Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy was 88.2%, obtained using the Artificial Neural Network. This research proves that thermal imaging and the neural network method can be used as predictors for oil palm maturity classification.
Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging
Procedia PDF Downloads 360
2199 Using Self Organizing Feature Maps for Classification in RGB Images
Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami
Abstract:
Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity and multi-input and output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). This algorithm is based on competitive learning. The method partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system operates on RGB images. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
Keywords: classification, SOFM algorithm, neural network, neighborhood, RGB image
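A simplified sketch of SOFM-based colour classification in the spirit of the abstract, assuming the third-party minisom package: RGB samples are mapped onto a trained map, and each winning node is labelled by the majority class of the training samples it attracts. The map size, training length, and toy data are assumptions.

```python
# SOFM (SOM) trained on RGB samples, with majority-vote labels per map node.
import numpy as np
from minisom import MiniSom        # pip install minisom
from collections import defaultdict, Counter

# Toy RGB training data: reddish vs. bluish pixels
rng = np.random.default_rng(0)
reds = rng.normal([0.8, 0.1, 0.1], 0.05, (100, 3))
blues = rng.normal([0.1, 0.1, 0.8], 0.05, (100, 3))
X = np.vstack([reds, blues])
y = np.array([0] * 100 + [1] * 100)

som = MiniSom(5, 5, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)

# Majority label per winning node (the "local neighbourhood" label)
node_labels = defaultdict(Counter)
for xi, yi in zip(X, y):
    node_labels[som.winner(xi)][yi] += 1

def classify(pixel):
    counts = node_labels.get(som.winner(pixel))
    return counts.most_common(1)[0][0] if counts else None

print(classify(np.array([0.75, 0.15, 0.12])))   # expected: 0 (red class)
```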
Procedia PDF Downloads 478
2198 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir
Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi
Abstract:
Determination of water saturation in sandstone is a vital question for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements is based mainly on Archie's formula; consequently, the accuracy of Archie's formula parameters strongly affects water saturation values. Determination of Archie's parameters a, m, and n is performed by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accurate values of Archie's parameters and, consequently, reliable water saturation values. This work focuses on the Archie's parameter determination techniques (the conventional, CAPE, and 3-D techniques) and then on the calculation of water saturation. Using the same data, the hybrid parallel self-organizing neural network (PSONN) algorithm is used to estimate Archie's parameters and predict water saturation. Results have shown that the estimated Archie's parameters m, a, and n are statistically well accepted, indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques used to determine Archie's parameters and thereby calculate water saturation profiles.
Keywords: water saturation, Archie's parameters, artificial intelligence, PSONN, sandstone reservoir
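For context, the relation these parameters enter is Archie's equation, shown below in its standard form (a tortuosity factor, m cementation exponent, n saturation exponent, phi porosity, R_w formation-water resistivity, R_t true formation resistivity).

```latex
% Archie's equation for water saturation
S_w \;=\; \left( \frac{a\,R_w}{\phi^{\,m}\,R_t} \right)^{1/n}
```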
Procedia PDF Downloads 128
2197 Analysis of Moving Loads on Bridges Using Surrogate Models
Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna
Abstract:
The design of short- to medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator based on an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. Data set generation and training of the surrogate models were conducted using the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated; it showed an error of less than 10% with respect to conventional methods. Additionally, the dependence of the dynamic response of the bridge on various load and bridge parameters has been highlighted through a parametric study.
Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models
Procedia PDF Downloads 100
2196 Radar Signal Detection Using Neural Networks in Log-Normal Clutter for Multiple Targets Situations
Authors: Boudemagh Naime
Abstract:
Automatic radar detection requires methods of adapting to variations in the background clutter in order to control the false alarm rate. The problem becomes more complicated in a non-Gaussian environment. In fact, the conventional approach in real-time applications requires complex statistical modeling and many computational operations. To overcome these constraints, we propose another approach based on an artificial neural network (ANN-CMLD-CFAR) using a Back Propagation (BP) training algorithm. The considered environment follows a log-normal distribution in the presence of multiple Rayleigh targets. To evaluate the performance of the considered detector, several situations, such as the scale parameter and the number of interfering targets, have been investigated. The simulation results show that the ANN-CMLD-CFAR processor outperforms the conventional statistical one.
Keywords: radar detection, ANN-CMLD-CFAR, log-normal clutter, statistical modelling
Procedia PDF Downloads 363
2195 Designing an Intelligent Voltage Instability System in Power Distribution Systems in the Philippines Using IEEE 14 Bus Test System
Authors: Pocholo Rodriguez, Anne Bernadine Ocampo, Ian Benedict Chan, Janric Micah Gray
Abstract:
The state of an electric power system may be classified as either stable or unstable. The borderline of stability is any condition for which a slight change in an unfavourable direction of any pertinent quantity will cause instability. Voltage instability in power distribution systems can lead to voltage collapse and thus power blackouts. The researchers present an intelligent system, using the back-propagation algorithm, that can detect voltage instability and the output voltage of a power distribution system and classify it as stable or unstable. The researchers' work uses the parameters involved in voltage instability as input parameters to the neural network for training and testing, providing faster detection and monitoring of the power distribution system.
Keywords: back-propagation algorithm, load instability, neural network, power distribution system
Procedia PDF Downloads 435