Search results for: Convolutional neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5314

4354 Plant Identification Using Convolutional Neural Network and Vision Transformer-Based Models

Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai

Abstract:

Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species make correct classification difficult. In this paper, we tested custom convolutional neural network (CNN)- and vision transformer (ViT)-based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420, as well as other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than the CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike to identify plants more quickly and with improved accuracy.
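
As a rough illustration of the kind of pipeline described above (and not the authors' code), the following PyTorch sketch fine-tunes a pretrained ViT or, alternatively, a ResNet50 on a folder of plant images with a typical augmentation set; the dataset path, class count, and hyperparameters are placeholders, not the RHS or PlantClef setup.

```python
# Hypothetical sketch: fine-tuning a ViT or ResNet50 on a plant-image folder.
# Paths, class count and hyperparameters are placeholders, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),          # augmentation: random crop/scale
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_ds = datasets.ImageFolder("plants/train", transform=train_tf)  # placeholder path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)
num_classes = len(train_ds.classes)

def build(name):
    if name == "vit":
        m = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
        m.heads.head = nn.Linear(m.heads.head.in_features, num_classes)
    else:
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

model = build("vit")
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                   # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```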

Keywords: plant identification, CNN, image processing, vision transformer, classification

Procedia PDF Downloads 105
4353 Improved Dynamic Bayesian Networks Applied to Arabic Online Character Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work concerns online Arabic character recognition; the principal motivation is to study Arabic manuscript with online technology. The system is Markovian and can be viewed as a Dynamic Bayesian Network (DBN). One of the major interests of these systems resides in the complete training of the model (topology and parameters) from training data. Our approach is based on the dynamic Bayesian network formalism; DBN theory generalizes Bayesian networks to dynamic processes. Among our objectives is finding better parameters to represent the links (dependencies) between the variables of the dynamic network. In pattern recognition applications, the structure is usually fixed, which obliges us to adopt some strong assumptions (for example, independence between certain variables). Our application concerns online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN. The DBN score and mixed DBN approaches reach 70.24% and 62.50%, respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.

Keywords: Arabic online character recognition, dynamic Bayesian network, pattern recognition, computer vision

Procedia PDF Downloads 429
4352 Enabling Non-invasive Diagnosis of Thyroid Nodules with High Specificity and Sensitivity

Authors: Sai Maniveer Adapa, Sai Guptha Perla, Adithya Reddy P.

Abstract:

Thyroid nodules can often be diagnosed with ultrasound imaging, although differentiating between benign and malignant nodules can be challenging for medical professionals. This work suggests a novel approach to increase the precision of thyroid nodule identification by combining machine learning and deep learning. The new approach first extracts information from the ultrasound images using a deep learning method known as a convolutional autoencoder. A support vector machine, a type of machine learning model, is then trained using these features. With an accuracy of 92.52%, the support vector machine can differentiate between benign and malignant nodules. This technique may decrease the need for unnecessary biopsies and increase the accuracy of thyroid nodule detection.
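
A minimal sketch of the two-stage idea (convolutional autoencoder for feature extraction, SVM for classification), written here with PyTorch and scikit-learn on toy tensors; the architecture, shapes, and data are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the two-stage pipeline: a convolutional autoencoder
# learns features from image patches, then an SVM classifies them.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Toy data standing in for pre-processed ultrasound patches and labels.
x = torch.rand(200, 1, 64, 64)
y = torch.randint(0, 2, (200,))             # 0 = benign, 1 = malignant

ae = ConvAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
mse = nn.MSELoss()
for _ in range(5):                          # unsupervised reconstruction training
    recon, _ = ae(x)
    loss = mse(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                       # frozen encoder as feature extractor
    feats = ae.enc(x).flatten(1).numpy()
svm = SVC(kernel="rbf").fit(feats, y.numpy())
print("train accuracy:", svm.score(feats, y.numpy()))
```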

Keywords: thyroid tumor diagnosis, ultrasound images, deep learning, machine learning, convolutional auto-encoder, support vector machine

Procedia PDF Downloads 59
4351 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute to almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, with multiple control points, was created along virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was carried out with two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm coupled with an artificial neural network was used in order to reach the global optimum; successive design iterations were evaluated with the artificial neural network before being passed to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, and the performance was quantified using a multi-objective function. Besides these two classes of optimization methods, four optimization cases were studied: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, with the total pressure loss and entropy reduced. The combination of hub and shroud did not match the results achieved for the individual hub and shroud cases, which may be due to the large number of control variables. The fourth optimization case showed the best result, because the optimized hub was used as the initial geometry when optimizing the shroud; the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
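
The two optimization strategies contrasted above can be illustrated on a toy analytic objective; the sketch below stands in for the NUMECA workflow by training a neural-network surrogate on design-of-experiments samples, searching it with a stochastic optimizer, and separately running a conjugate-gradient search. Every function, bound, and parameter here is an assumed placeholder, not the paper's CFD setup.

```python
# Toy illustration (not the NUMECA workflow) of the two optimization strategies:
# a stochastic search guided by a neural-network surrogate, and a gradient-based
# (conjugate-gradient) search.  cfd_efficiency() is an analytic stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)

def cfd_efficiency(x):                      # placeholder for a flow-solver call
    return -np.sum((x - 0.3) ** 2) + 0.9    # maximum at x = 0.3

# Design of experiments: random samples of the endwall control parameters.
X = rng.uniform(-1, 1, size=(80, 6))        # 6 hypothetical control parameters
y = np.array([cfd_efficiency(x) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# Strategy 1: stochastic global search on the cheap surrogate.
res_stoch = differential_evolution(lambda x: -surrogate.predict(x[None])[0],
                                   bounds=[(-1, 1)] * 6, seed=0)

# Strategy 2: gradient-based (CG) search starting from the best DOE sample.
x0 = X[np.argmax(y)]
res_grad = minimize(lambda x: -cfd_efficiency(x), x0, method="CG")

print("surrogate/stochastic optimum:", res_stoch.x.round(2))
print("gradient-based optimum:     ", res_grad.x.round(2))
```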

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 226
4350 Handwriting Velocity Modeling by Artificial Neural Networks

Authors: Mohamed Aymen Slim, Afef Abdelkrim, Mohamed Benrejeb

Abstract:

Handwriting is a physical demonstration of a complex cognitive process learned since childhood. People with disabilities or various neurological diseases face many difficulties in writing, resulting from problems with the muscle stimuli (EMG) or the signals from the brain (EEG). The handwriting velocity of the same writer, or of different writers, varies according to different criteria: age, attitude, mood, writing surface, etc. It is therefore of interest to build an experimental database of records taking, as the primary reference, the writing speed of different writers, which would allow studying the overall system during the handwriting process. This paper deals with a new approach to modeling the handwriting system based on the velocity criterion using artificial neural networks, specifically Radial Basis Function (RBF) neural networks. The obtained simulation results show a satisfactory agreement between the responses of the developed neural model and the experimental data for various letters and forms, demonstrating the efficiency of the proposed approach.
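
A minimal numerical sketch of an RBF network of the kind described: Gaussian hidden units with fixed centres and a linear output layer solved by least squares, fitted to a toy velocity profile rather than the EMG/EEG handwriting records used in the paper.

```python
# Minimal RBF-network sketch: Gaussian hidden units with fixed centres and a
# linear output layer solved by least squares.  The "velocity" signal is toy data.
import numpy as np

t = np.linspace(0, 1, 200)
velocity = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)   # stand-in velocity profile

centres = np.linspace(0, 1, 15)          # RBF centres spread over the time axis
width = 0.08                             # shared Gaussian width

def rbf_features(x):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

Phi = rbf_features(t)                    # design matrix: one column per RBF unit
w, *_ = np.linalg.lstsq(Phi, velocity, rcond=None)   # linear output weights

reconstruction = Phi @ w
print("max reconstruction error:", np.abs(reconstruction - velocity).max())
```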

Keywords: Electro Myo Graphic (EMG) signals, experimental approach, handwriting process, Radial Basis Functions (RBF) neural networks, velocity modeling

Procedia PDF Downloads 441
4349 An Approach to Control Electric Automotive Water Pumps Deploying Artificial Neural Networks

Authors: Gabriel S. Adesina, Ruixue Cheng, Geetika Aggarwal, Michael Short

Abstract:

With the global shift towards sustainability and technological advancements, electric hybrid vehicles (EHVs) are increasingly being seen as viable alternatives to traditional internal combustion (IC) engine vehicles, and they also require efficient cooling systems. The electric Automotive Water Pump (AWP) has been introduced as an alternative to IC engine belt-driven pump systems. However, current control methods for AWPs typically employ fixed gain settings, which are not ideal for the varying conditions of dynamic vehicle environments and can lead to overheating. To overcome the limitations of fixed gain control, this paper proposes implementing an artificial neural network (ANN) for managing the AWP in EHVs. The proposed ANN provides an intelligent, adaptive control strategy that enhances the AWP's performance, supported by the MATLAB simulation work illustrated in this paper. Comparative analysis demonstrates that the ANN-based controller surpasses conventional PID and fuzzy logic-based controllers (FLC), exhibiting no overshoot, a rapid 0.1 s response, and an IAE of 0.0696. Consequently, the findings suggest that ANNs can be effectively utilized in EHVs.

Keywords: automotive water pump, cooling system, electric hybrid vehicles, artificial neural networks, PID control, fuzzy logic control, IAE, MATLAB

Procedia PDF Downloads 41
4348 A Tutorial on Network Security: Attacks and Controls

Authors: Belbahi Ahlam

Abstract:

With the phenomenal growth in the Internet, network security has become an integral part of computer and information security. In order to come up with measures that make networks more secure, it is important to learn about the vulnerabilities that could exist in a computer network and then have an understanding of the typical attacks that have been carried out in such networks. The first half of this paper will expose the readers to the classical network attacks that have exploited the typical vulnerabilities of computer networks in the past and solutions that have been adopted since then to prevent or reduce the chances of some of these attacks. The second half of the paper will expose the readers to the different network security controls including the network architecture, protocols, standards and software/ hardware tools that have been adopted in modern day computer networks.

Keywords: network security, attacks and controls, computer and information security, solutions

Procedia PDF Downloads 457
4347 Modeling and Prediction of Zinc Extraction Efficiency from Concentrate by Operating Condition and Using Artificial Neural Networks

Authors: S. Mousavian, D. Ashouri, F. Mousavian, V. Nikkhah Rashidabad, N. Ghazinia

Abstract:

The pH, temperature, and extraction time of each stage, the agitation speed, and the delay time between stages affect the efficiency of zinc extraction from concentrate. In this research, the efficiency of zinc extraction was predicted as a function of the mentioned variables using artificial neural networks (ANNs). ANNs with different layer configurations were employed, and the results show that the network with 8 neurons in the hidden layer is in good agreement with the experimental data.
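
For illustration only, the sketch below fits a small ANN with one 8-neuron hidden layer to synthetic data containing the five operating variables named in the abstract; the data ranges and target formula are assumptions, not the experimental measurements.

```python
# Hypothetical sketch: an ANN with a single 8-neuron hidden layer regressing
# zinc-extraction efficiency from the operating variables.  Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: pH, temperature, extraction time, agitation speed, delay time
X = rng.uniform([1, 20, 10, 100, 0], [5, 80, 120, 600, 30], size=(300, 5))
efficiency = 60 + 5 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 1, 300)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, efficiency, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(ann.score(X_te, y_te), 3))
```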

Keywords: zinc extraction, efficiency, neural networks, operating condition

Procedia PDF Downloads 547
4346 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, because of busy lifestyles, the consumption of fast food is increasing, and therefore the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while for the PET/CT method, access to the device is limited due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1,300 MR images were collected from the TCIA portal; in the pre-processing step, histogram equalization was applied to improve image quality, and the images were resized to a uniform size. Two expert radiologists, with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor region. The next step is feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. To validate the proposed method, 10-fold cross-validation was used, with the data randomly divided into three parts: training (70% of the data), validation (10% of the data), and the remainder for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the absence of predefined hand-crafted imaging features for determining the stage of colon cancer patients are among the advantages of this study.
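
A hedged sketch of the classification stage is shown below: a torchvision VGG-16 with its last fully connected layer replaced by a 3-class head (T0N0 / T3N1 / T3N2). The dataset path, resizing, and training settings are placeholders, not the TCIA pipeline described above.

```python
# Hypothetical sketch of the classification stage: a pretrained VGG-16 with its
# final fully connected layer replaced by a 3-class head.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),            # uniform image size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("colon_mri/train", transform=tf)  # placeholder
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)  # 3 stages

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                 # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```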

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 61
4345 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening

Authors: Ksheeraj Sai Vepuri, Nada Attar

Abstract:

We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as this sharpening technique can improve the performance of a CNN model, even when the CNN model is relatively simple.
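
The preprocessing idea can be sketched as follows: an unsharp-mask function plugged into Keras' ImageDataGenerator ahead of a small CNN. The directory layout, filter strength, and network size are assumptions for illustration, not the paper's exact configuration.

```python
# Hypothetical sketch: an unsharp-mask filter applied through Keras'
# ImageDataGenerator before a small CNN (placeholder paths and sizes).
import numpy as np
import cv2
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def unsharp_mask(img):
    # img arrives as a float32 HxWx1 grayscale array from the generator
    img2d = img[:, :, 0]
    blurred = cv2.GaussianBlur(img2d, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(img2d, 1.5, blurred, -0.5, 0)  # emphasize edges
    return sharpened[:, :, np.newaxis]

datagen = ImageDataGenerator(rescale=1.0 / 255,
                             horizontal_flip=True,
                             preprocessing_function=unsharp_mask)
train_gen = datagen.flow_from_directory("fer/train", target_size=(48, 48),
                                        color_mode="grayscale", batch_size=64)

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(7, activation="softmax"),   # 7 facial expressions
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, epochs=1)                     # single epoch for brevity
```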

Keywords: facial expression recognition, image preprocessing, deep learning, CNN

Procedia PDF Downloads 145
4344 Air Quality Assessment for a Hot-Spot Station by Neural Network Modelling of the near-Traffic Emission-Immission Interaction

Authors: Tim Steinhaus, Christian Beidl

Abstract:

Urban air quality and climate protection are two major challenges for future mobility systems. Despite the steady reduction of pollutant emissions from vehicles over past decades, local immission load within cities partially still reaches heights, which are considered hazardous to human health. Although traffic-related emissions account for a major part of the overall urban pollution, modeling the exact interaction remains challenging. In this paper, a novel approach for the determination of the emission-immission interaction on the basis of neural network modeling for traffic induced NO2-immission load within a near-traffic hot-spot scenario is presented. In a detailed sensitivity analysis, the significance of relevant influencing variables on the prevailing NO2 concentration is initially analyzed. Based on this, the generation process of the model is described, in which not only environmental influences but also the vehicle fleet composition including its associated segment- and certification-specific real driving emission factors are derived and used as input quantities. The validity of this approach, which has been presented in the past, is re-examined in this paper using updated data on vehicle emissions and recent immission measurement data. Within the framework of a final scenario analysis, the future development of the immission load is forecast for different developments in the vehicle fleet composition. It is shown that immission levels of less than half of today’s yearly average limit values are technically feasible in hot-spot situations.

Keywords: air quality, emission, emission-immission-interaction, immission, NO2, zero impact

Procedia PDF Downloads 127
4343 Optimal Solutions for Real-Time Scheduling of Reconfigurable Embedded Systems Based on Neural Networks with Minimization of Power Consumption

Authors: Ghofrane Rehaiem, Hamza Gharsellaoui, Samir Benahmed

Abstract:

In this study, Artificial Neural Networks (ANNs) were used to model the parameters that allow the real-time scheduling of embedded systems under resource constraints for running real-time applications. The objective of this work is to implement a neural-network-based approach to real-time scheduling of embedded systems in order to handle real-time constraints in execution scenarios. In our proposed approach, several techniques are combined for both task planning and reducing energy consumption. In particular, a combination of Dynamic Voltage Scaling (DVS) and timing feedback can be used to scale the frequency dynamically while adjusting the operating voltage. We present a hybrid contribution that handles the real-time scheduling of embedded systems with low power consumption, based on combining DVS and Neural Feedback Scheduling (NFS) with the energy-aware Priority Earliest Deadline First (PEDF) algorithm. Experimental results illustrate the efficiency of our proposed approach.

Keywords: optimization, neural networks, real-time scheduling, low-power consumption

Procedia PDF Downloads 372
4342 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are at best, large corpora of human literature and at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that in the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data, and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language having to deal with syntax, semantics, sociolinguistics, and text classification. Computational analysis on such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given its semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules, but also historically patriarchal societies. The progression of society comes hand in hand with not only its language, but how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 122
4341 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduce significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive integrated moving average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long short-term memory (LSTMs) and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from the locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that parallelly integrates 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. 
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
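
An illustrative sketch (not the authors' implementation) of the two 2D views named above, computed for a toy series: a spectrogram for the frequency-domain view and stacked first/second differences as a simple derivative heatmap highlighting sharp fluctuations and turning points.

```python
# Illustrative sketch of the two 2D views described in the abstract, computed
# for a toy series; not the authors' Times2D implementation.
import numpy as np
from scipy.signal import spectrogram

t = np.arange(0, 2000)
series = np.sin(2 * np.pi * t / 24) + 0.3 * np.sin(2 * np.pi * t / 168) \
         + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Frequency-domain view: short-time spectrogram capturing periodicity.
freqs, frames, Sxx = spectrogram(series, fs=1.0, nperseg=128, noverlap=64)

# Time-domain view: first- and second-order differences stacked as a
# 2-row "derivative heatmap" emphasising sharp fluctuations.
d1 = np.gradient(series)
d2 = np.gradient(d1)
derivative_map = np.stack([d1, d2])

print("spectrogram shape:", Sxx.shape)          # (freq bins, time frames)
print("derivative map shape:", derivative_map.shape)
```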

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 44
4340 End-to-End Control and Management of Multi-AS Virtual Service Networks Using SDN and Autonomic Computing Architecture

Authors: Yong Xue, Daniel A. Menascé

Abstract:

Automated and end-to-end network resource management and provisioning for virtual service networks in a multiple autonomous systems (a.k.a multi-AS) environment is a challenging and open problem. This paper proposes a novel, scalable and interoperable high-level architecture that incorporates a number of emerging enabling technologies including Software Defined Network (SDN), Network Function Virtualization (NFV), Service Oriented Architecture (SOA), and Autonomic Computing. The proposed architecture can be used to not only automate network resource management and provisioning for virtual service networks across multiple autonomous substrate networks, but also provide an adaptive capability for achieving optimal network resource management and maintaining network-level end-to-end network performance as well. The paper argues that this SDN and autonomic computing based architecture lays a solid foundation that can facilitate the development of the future Internet based on the pluralistic paradigm.

Keywords: virtual network, software defined network, virtual service network, adaptive resource management, SOA, multi-AS, inter-domain

Procedia PDF Downloads 533
4339 Predicting Oil Spills in Real-Time: A Machine Learning and AIS Data-Driven Approach

Authors: Tanmay Bisen, Aastha Shayla, Susham Biswas

Abstract:

Oil spills from tankers can cause significant harm to the environment and local communities, as well as have economic consequences. Early predictions of oil spills can help to minimize these impacts. Our proposed system uses machine learning and neural networks to predict potential oil spills by monitoring data from ship Automatic Identification Systems (AIS). The model analyzes ship movements, speeds, and changes in direction to identify patterns that deviate from the norm and could indicate a potential spill. Our approach not only identifies anomalies but also predicts spills before they occur, providing early detection and mitigation measures. This can prevent or minimize damage to the reputation of the company responsible and the country where the spill takes place. The model's performance on the MV Wakashio oil spill provides insight into its ability to detect and respond to real-world oil spills, highlighting areas for improvement and further research.

Keywords: anomaly detection, oil spill prediction, machine learning, image processing, graph neural network (GNN)

Procedia PDF Downloads 76
4338 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications

Authors: Farhad Salek, Shahaboddin Resalati

Abstract:

The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of electric vehicle (EV) batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution for such concerns involves extending the operational lifespan of electric vehicle (EV) batteries through their utilization in stationary energy storage systems during secondary applications. Such adoptions, however, require addressing the safety concerns associated with batteries’ knee points and thermal runaways. This paper develops an accurate mathematical model representative of the second-life battery packs from a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
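
A minimal sketch of a first-order RC equivalent-circuit cell model (ECM) of the kind described, with an ageing factor standing in for the neural-network degradation prediction; all parameter values and the OCV curve are illustrative assumptions, not the paper's identified cell parameters.

```python
# Minimal sketch of a first-order RC equivalent-circuit model (ECM) cell, with
# a placeholder "ageing" hook that scales resistance and capacity the way a
# learned degradation model might.  Parameter values are illustrative only.
import numpy as np

def ocv(soc):                                   # toy open-circuit-voltage curve
    return 3.0 + 1.2 * soc

def simulate(current, dt=1.0, capacity_ah=50.0, r0=0.002, r1=0.001, c1=2000.0,
             ageing_factor=1.0):
    """ageing_factor > 1 mimics a degraded second-life cell (higher R, lower Q)."""
    r0, r1 = r0 * ageing_factor, r1 * ageing_factor
    capacity_ah /= ageing_factor
    soc, v_rc = 0.8, 0.0
    voltages = []
    for i in current:                           # i > 0 means discharge
        soc -= i * dt / (capacity_ah * 3600.0)
        # first-order RC branch: dV/dt = -V/(R1*C1) + I/C1  (explicit Euler)
        v_rc += dt * (-v_rc / (r1 * c1) + i / c1)
        voltages.append(ocv(soc) - i * r0 - v_rc)
    return np.array(voltages)

profile = np.full(600, 25.0)                    # 25 A discharge for 10 minutes
fresh = simulate(profile)
aged = simulate(profile, ageing_factor=1.3)     # e.g. predicted by a neural net
print("fresh end voltage: %.3f V, aged end voltage: %.3f V" % (fresh[-1], aged[-1]))
```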

Keywords: second life battery, electric vehicles, degradation, neural network

Procedia PDF Downloads 66
4337 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks

Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.

Abstract:

In the present study RBF neural networks were used for predicting the performance and emission parameters of a biodiesel engine. Engine experiments were carried out in a 4 stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters like BTE, BSEC, Tech and emissions from the engine were measured. These experimental results were used for ANN modeling. RBF center initialization was done by random selection and by using Clustered techniques. Network was trained by using fixed and varying widths for the RBF units. It was observed that RBF results were having a good agreement with the experimental results. Networks trained by using clustering technique gave better results than using random selection of centers in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with the prediction accuracy of 98% and for emissions it was 10.4% with a prediction accuracy of 80%.

Keywords: radial basis function networks, emissions, performance parameters, fuzzy c means

Procedia PDF Downloads 560
4336 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism

Authors: Lizhi Ma, Dan Liu

Abstract:

Pipeline parallelism has been widely used to accelerate distributed deep learning to alleviate GPU memory bottlenecks and to ensure that models can be trained and deployed smoothly under limited graphics memory conditions. However, in highly heterogeneous distributed clusters, traditional model partitioning methods are not able to achieve load balancing. The overlap of communication and computation is also a big challenge. In this paper, HePipe is proposed, an efficient pipeline parallel training method for highly heterogeneous clusters. According to the characteristics of the neural network model pipeline training task, oriented to the 2-level heterogeneous cluster computing topology, a training method based on the 2-level stage division of neural network modeling and partitioning is designed to improve the parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing the computation units in advance to maximize the overlap between the forward propagation communication and backward propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fitness ratio of task and memory, which improves the throughput of the cluster and solves the memory shortfall problem caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves the training speed by 1.6×−2.2× over the existing asynchronous pipeline baselines.
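
For readers unfamiliar with 1F1B scheduling, the toy sketch below prints the per-stage order of forward and backward micro-batch steps in a plain 1F1B schedule; it illustrates the baseline idea only, not HePipe's multi-forward variant or its memory-aware recomputation strategy.

```python
# Toy sketch of a 1F1B (one-forward-one-backward) pipeline schedule: for each
# stage it lists the order of forward (F) and backward (B) micro-batch steps.
def one_f_one_b(num_stages, num_microbatches):
    schedules = []
    for stage in range(num_stages):
        warmup = min(num_stages - 1 - stage, num_microbatches)
        order, fwd, bwd = [], 0, 0
        for _ in range(warmup):                 # warm-up: forwards only
            order.append(("F", fwd)); fwd += 1
        while fwd < num_microbatches:           # steady state: 1F then 1B
            order.append(("F", fwd)); fwd += 1
            order.append(("B", bwd)); bwd += 1
        while bwd < num_microbatches:           # cool-down: drain backwards
            order.append(("B", bwd)); bwd += 1
        schedules.append(order)
    return schedules

for stage, ops in enumerate(one_f_one_b(num_stages=4, num_microbatches=6)):
    print(f"stage {stage}:", " ".join(f"{op}{idx}" for op, idx in ops))
```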

Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning

Procedia PDF Downloads 20
4335 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies are successful or in a downward spiral. A thorough understanding of it is important; many companies have whole divisions dedicated to analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially regarding the stock market, has been a relatively recent development. Predicting how stocks will do considering all external factors and previous data has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. For accuracy on the testing set, looking at four different models - linear regression, neural network, decision tree, and naïve Bayes - on the stocks Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. For the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions. This suggests that the decision tree model likely overfitted the training set when used for the testing set.

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 96
4334 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 272
4333 Implementation of an Associative Memory Using a Restricted Hopfield Network

Authors: Tet H. Yeap

Abstract:

An analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden, connected by directional weighted paths forming a bipartite graph with no intralayer connections. An energy, or Lyapunov, function was derived to show that the proposed network converges to stable states. By introducing hidden nodes, the proposed network can be trained to store patterns and has increased memory capacity. When trained as an associative memory, simulation results show that it performs better than a classical Hopfield network, being able to achieve better memory recall when the input is noisy.
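
A conceptual sketch of a bipartite, restricted Hopfield-style network is given below: visible and hidden units connected only across the two layers, an energy of the form E = -v^T W h, and alternating sign updates used for recall. This is a toy numerical illustration under assumed Hebbian-style weights, not the analog circuit or training procedure proposed in the paper.

```python
# Toy sketch of a restricted (bipartite) Hopfield-style network with visible
# and hidden units connected only across layers, and alternating sign updates.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 12, 6

def sgn(x):
    return np.where(x >= 0, 1.0, -1.0)

# Hebbian-style storage of a few (visible, hidden-code) pattern pairs.
patterns_v = rng.choice([-1.0, 1.0], size=(3, n_visible))
patterns_h = rng.choice([-1.0, 1.0], size=(3, n_hidden))
W = patterns_v.T @ patterns_h / n_hidden

def energy(v, h):                       # E = -v^T W h
    return float(-v @ W @ h)

def recall(v_probe, steps=10):
    v, h = v_probe.copy(), sgn(W.T @ v_probe)
    for _ in range(steps):              # alternate visible/hidden updates
        v = sgn(W @ h)
        h = sgn(W.T @ v)
    return v, h

probe = patterns_v[0].copy()
probe[:2] *= -1                         # corrupt two visible bits
recovered, h = recall(probe)
print("overlap with stored pattern:", int(recovered @ patterns_v[0]), "/", n_visible)
print("energy of probe vs recalled state:",
      round(energy(probe, sgn(W.T @ probe)), 3), round(energy(recovered, h), 3))
```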

Keywords: restricted Hopfield network, Lyapunov function, simultaneous perturbation stochastic approximation

Procedia PDF Downloads 134
4332 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, the physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and grain colour parameters, were determined, and it was found that these properties were statistically significant with respect to variety. Three ANN models, N-1, N-2, and N-3, were constructed and their performances compared. The best-fit model was found to be N-1, whose structure consists of an input layer with 11 inputs, 2 hidden layers, and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and grain colour parameters were used as input parameters, and the varieties as the output parameter. R2, Root Mean Square Error, and Mean Error for the N-1 model were found to be 99.99%, 0.00074, and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 180
4331 Designing Directed Network with Optimal Controllability

Authors: Liang Bai, Yandong Xiao, Haorang Wang, Songyang Lao

Abstract:

The directedness of links is crucial in determining the controllability of complex networks; edge directions alone can determine whether a network is controllable. Obviously, for a given network, we wish to design its edge directions so that the network approaches optimal controllability. In this work, we first introduce two methods to enhance network controllability by assigning edge directions. However, these two methods cannot completely mitigate the negative effects of inaccessibility and dilations. To approach optimal network controllability, the edge directions must mitigate the negative effects of inaccessibility and dilations as much as possible. Finally, we propose an edge-direction assignment for optimal controllability. The proposed method is found to work successfully on real-world and synthetic networks.

Keywords: complex network, dynamics, network control, optimization

Procedia PDF Downloads 188
4330 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Despite the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, employing features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 456
4329 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV

Authors: Maria Pavlova

Abstract:

Nowadays, it is possible to mount a camera on different vehicles such as a quadcopter, train, or airplane. The camera can also be the input sensor in many different systems, which means that object recognition, as an integral part of monitoring and control, can be a key component of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle's movement, the camera takes pictures of the environment without storing them in a database. If the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality will be very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians; if one or more persons approach the road, the traffic lights turn green so they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system is presented; it includes the camera, the Raspberry Pi platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures, and the object recognition is done in real time using the OpenCV library on the Raspberry Pi. An additional feature is the ability to record the GPS coordinates of the captured object's position. The results of this process are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
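
A hypothetical sketch of the on-vehicle loop follows: frames are grabbed from a camera, a stock OpenCV person detector is run, and any frame containing a detection is saved with GPS coordinates and posted to a workstation. The GPS reader and the endpoint URL are invented placeholders; the paper's actual detection model and pipeline may differ.

```python
# Hypothetical sketch: grab frames, run a stock OpenCV person detector, and
# when something is found, save the frame tagged with GPS coordinates and send
# it to a workstation.  The GPS reader and endpoint URL are placeholders.
import time
import cv2
import requests

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def read_gps():                      # placeholder for a real GPS module read
    return 42.6977, 23.3219

cap = cv2.VideoCapture(0)            # on-board camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:               # special object (person) detected
        lat, lon = read_gps()
        fname = f"detection_{int(time.time())}_{lat}_{lon}.jpg"
        cv2.imwrite(fname, frame)    # save only frames containing detections
        with open(fname, "rb") as f: # forward to the remote workstation
            requests.post("http://workstation.local/upload", files={"image": f},
                          data={"lat": lat, "lon": lon}, timeout=5)
    time.sleep(0.1)
```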

Keywords: camera, object recognition, OpenCV, Raspberry

Procedia PDF Downloads 219
4328 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each has its own cellular, electrophysiological and functional properties. During singing, a large subset of motor cortex analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song, a large subset of basal ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time locked to vocalizations, while HVCINT neurons fire tonically at average high frequency throughout song with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and specific glutaminergic and gabaergic pharmacology) via different architecture patterning scenarios with the aim to replicate the in vivo firing patterning behaviors. We are able, through these networks, to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might be contributing to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) mono-synaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, and others. Replication is only a preliminary step that must be followed by model prediction and testing.
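
As a reference point for the conductance-based framework mentioned above, the sketch below integrates a standard single-compartment Hodgkin-Huxley neuron with the classic squid-axon parameters; the actual HVC neuron models add further ionic currents and synaptic coupling, so this is only a minimal illustration of the modeling style, not the authors' models.

```python
# Minimal single-compartment Hodgkin-Huxley sketch (standard squid-axon
# parameters), illustrating the conductance-based modeling framework only.
import numpy as np
from scipy.integrate import solve_ivp

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL, I_ext = 50.0, -77.0, -54.4, 10.0   # mV, uA/cm^2

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

def hh(t, y):
    V, m, h, n = y
    INa = gNa * m**3 * h * (V - ENa)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    dV = (I_ext - INa - IK - IL) / C
    return [dV,
            a_m(V) * (1 - m) - b_m(V) * m,
            a_h(V) * (1 - h) - b_h(V) * h,
            a_n(V) * (1 - n) - b_n(V) * n]

sol = solve_ivp(hh, (0, 100), [-65.0, 0.05, 0.6, 0.32], max_step=0.05)
spikes = np.sum((sol.y[0][1:] >= 0) & (sol.y[0][:-1] < 0))   # upward 0-mV crossings
print("spikes in 100 ms:", int(spikes))
```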

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 72
4327 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to take the predictions of multiple models to try and acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC-Archive, to look for patterns to label unseen scans as either benign or malignant. These models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model through each successive guess to apply weights to their guess. These guesses and weights are then analyzed together to try and obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm received an accuracy of 72.27%. The final conclusion that was drawn was that there was a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The conclusion was made that using a CNN model would be the best option for this problem rather than a Multiplicative Weight Update system. This is due to the possibility that Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two categories, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
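
The weighting scheme can be illustrated with a generic multiplicative-weights sketch over several fitted classifiers; the models and data below are toy stand-ins (not the ISIC-Archive pipeline), and the exact update used by the authors may differ.

```python
# Sketch of a multiplicative-weights update over several fitted classifiers:
# each model is an "expert", its weight is cut by a factor (1 - eta) whenever
# it errs, and the combined prediction is a weighted vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

experts = [LogisticRegression(max_iter=2000).fit(X_tr, y_tr),
           SVC().fit(X_tr, y_tr),
           MLPClassifier(max_iter=2000, random_state=0).fit(X_tr, y_tr)]

eta = 0.3
weights = np.ones(len(experts))
correct = 0
for x_i, y_i in zip(X_te, y_te):
    preds = np.array([e.predict(x_i[None])[0] for e in experts])
    # weighted vote between class 0 (benign) and class 1 (malignant)
    vote = 1 if weights[preds == 1].sum() >= weights[preds == 0].sum() else 0
    correct += int(vote == y_i)
    weights[preds != y_i] *= (1 - eta)      # penalize experts that were wrong
print("MWU combined accuracy:", round(correct / len(y_te), 3))
```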

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 80
4326 System Survivability in Networks

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

We consider the problem of attacks on networks. We define the concept of system survivability in networks in the presence of intelligent threats. Our setting of the problem assumes a flow to be sent from one source node to a destination node. The attacker attempts to disable the network by preventing the flow to reach its destination while the defender attempts to identify the best path-set to use to maximize the chance of arrival of the flow to the destination node. Our concept is shown to be different from the classical concept of network reliability. We distinguish two types of network survivability related to the defender and to the attacker of the network, respectively. We prove that the defender-based-network survivability plays the role of a lower bound while the attacker-based-network survivability plays the role of an upper bound of network reliability. We also prove that both concepts almost never agree nor coincide with network reliability. Moreover, we use the shortest-path problem to determine the defender-based-network survivability and the min-cut problem to determine the attacker-based-network survivability. We extend the problem to a variety of models including the minimum-spanning-tree problem and the multiple source-/destination-network problems.
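
The two graph problems mentioned above can be illustrated with networkx on a toy network: a most-reliable source-destination path for the defender's view and a minimum edge cut for the attacker's view. The survivability definitions themselves are those of the paper; this only shows the underlying computations under assumed edge reliabilities.

```python
# Sketch of the two graph computations the abstract maps survivability onto:
# a best s-t path for the defender (most reliable path via shortest path on
# -log probabilities) and a minimum s-t edge cut for the attacker.
import math
import networkx as nx

G = nx.Graph()
edges = [("s", "a", 0.9), ("a", "t", 0.8), ("s", "b", 0.95),
         ("b", "c", 0.9), ("c", "t", 0.85), ("a", "c", 0.7)]
for u, v, p in edges:
    G.add_edge(u, v, p=p, w=-math.log(p))       # w turns products into sums

# Defender view: most reliable path from source to destination.
path = nx.shortest_path(G, "s", "t", weight="w")
reliability = math.prod(G[u][v]["p"] for u, v in zip(path, path[1:]))
print("best path:", path, "reliability: %.3f" % reliability)

# Attacker view: minimum set of edges whose removal disconnects s from t.
cut = nx.minimum_edge_cut(G, "s", "t")
print("min cut edges:", cut, "size:", len(cut))
```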

Keywords: defense/attack strategies, information, networks, reliability, survivability

Procedia PDF Downloads 397
4325 Impact Assessment of Information Communication, Network Providers, Teledensity, and Consumer Complaints on Gross Domestic Products

Authors: Essang Anwana Onuntuei, Chinyere Blessing Azunwoke

Abstract:

The study used secondary data from foreign and local organizations to explore major challenges and opportunities in information communication. The study aimed to explore the tie between teledensity (network coverage area) and the number of network subscriptions, to probe whether the degree of consumer complaints varies significantly among network providers, and to assess whether network subscriptions significantly influence the sector's GDP contribution. Methods used for data analysis include Pearson product-moment correlation, regression analysis, and analysis of variance (ANOVA). At a two-tailed 0.05 significance level, the findings established that about 85.6% of the variation in network subscriptions was explained by teledensity (network coverage area); that the degree of consumer complaints varied significantly among network providers, since 80.158291 (calculated F) > 3.490295 (critical F) with an associated p-value of 0.000000, which is < 0.05; and finally, that 65% of the nation's GDP was explained by network subscriptions, showing a strong association.
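
For illustration, the sketch below runs the named statistical tests (Pearson correlation, simple regression, one-way ANOVA) on synthetic stand-in data; the numbers and variable ranges are assumptions, not the study's dataset or results.

```python
# Sketch of the statistical tests the study names, run on synthetic data:
# Pearson correlation and simple regression between teledensity and
# subscriptions, and a one-way ANOVA on complaints across network providers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
teledensity = rng.uniform(40, 95, 60)                        # % coverage (toy)
subscriptions = 1.5 * teledensity + rng.normal(0, 6, 60)     # millions (toy)

r, p = stats.pearsonr(teledensity, subscriptions)
slope, intercept, r_value, p_value, _ = stats.linregress(teledensity, subscriptions)
print(f"Pearson r = {r:.3f}, R^2 = {r_value**2:.3f}, p = {p_value:.2e}")

# One-way ANOVA: do complaint levels differ across providers?
complaints = [rng.normal(mu, 5, 24) for mu in (30, 45, 38, 60)]   # 4 providers
f_stat, p_anova = stats.f_oneway(*complaints)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```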

Keywords: tele density, subscription, network coverage, information communication, consumer

Procedia PDF Downloads 51