Search results for: convolutional neural network.
4241 The Use of Liquefied Petroleum Gas (LPG) on a Low Heat Loss SI Engine
Authors: Hanbey Hazar, Hakan Gul
Abstract:
In this study, a Thermal Barrier Coating (TBC) application is performed in order to reduce engine emissions. The piston, exhaust, and intake valves of a single-cylinder four-cycle gasoline engine were coated with chromium carbide (Cr3C2) at a thickness of 300 µm by using the Plasma Spray coating method, which is a TBC method. The gasoline engine was converted to an LPG system. The study was conducted in 4 stages. In the first stage, the piston, exhaust, and intake valves of the gasoline engine were coated with Cr3C2. In the second stage, the gasoline engine was converted to the LPG system and the emission values of this engine were recorded. In the third stage, the experiments were repeated under the same conditions with a standard (uncoated) engine and the results were recorded. In the fourth stage, data obtained from both engines were loaded into Artificial Neural Networks (ANN) and estimated values were produced for every revolution. Thus, mathematical modelling of the coated and uncoated engines was performed by using ANN. While there was a slight increase in the exhaust gas temperature (EGT) of the LPG engine due to TBC, carbon monoxide (CO) values decreased.
Keywords: LPG fuel, thermal barrier coating, artificial neural network, mathematical modelling
Procedia PDF Downloads 425
4240 Prediction of Vapor Liquid Equilibrium for Dilute Solutions of Components in Ionic Liquid by Neural Networks
Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi
Abstract:
Ionic liquids are finding a wide range of applications from reaction media to separations and materials processing. In these applications, vapor–liquid equilibrium (VLE) is the most important one. VLE for six systems at 353 K and activity coefficients at infinite dilution (γ_i^∞) for various solutes (alkanes, alkenes, cycloalkanes, cycloalkenes, aromatics, alcohols, ketones, esters, ethers, and water) in the ionic liquids (1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [EMIM][BTI], 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [HMIM][BTI], 1-octyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [OMIM][BTI], and 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide [BMPYR][BTI]) have been used to train neural networks in the temperature range from 303 to 333 K. Densities of the ionic liquids, Hildebrand constants of the substances, and temperature were selected as inputs of the neural networks. Networks with different hidden layers were examined. Networks with seven neurons in one hidden layer have minimum error and good agreement with experimental data.
Keywords: ionic liquid, neural networks, VLE, dilute solution
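As a rough illustration of the network structure described above, the sketch below fits a regressor with a single hidden layer of seven neurons on the three stated inputs (ionic-liquid density, Hildebrand constant, and temperature). It is not the authors' code: the data, value ranges, and choice of scikit-learn are placeholder assumptions.

```python
# Minimal sketch, assuming scikit-learn and synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(1.2, 1.5, 200),     # ionic-liquid density, g/cm^3 (illustrative range)
    rng.uniform(14.0, 48.0, 200),   # solute Hildebrand parameter, MPa^0.5 (illustrative)
    rng.uniform(303.0, 333.0, 200), # temperature, K
])
y = rng.uniform(0.1, 60.0, 200)     # placeholder gamma_i^inf values

X_scaled = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(7,), activation="tanh",
                     max_iter=5000, random_state=0).fit(X_scaled, y)
print("training R^2:", model.score(X_scaled, y))
```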
Procedia PDF Downloads 300
4239 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant Tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, which are the most important anti-TB drugs. The increase in the occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed for generating a computational model that could predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset involves compounds screened against MTB, categorized as active or inactive based upon the PubChem activity score. PowerMV, a molecular descriptor generation and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds present in the dataset. The 2D molecular descriptors generated from PowerMV will be used as features. We feed these features into three different classifiers, namely, a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on the accuracy of predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
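A minimal sketch of the random-forest branch of the screening pipeline described above, assuming scikit-learn and a random placeholder descriptor matrix in place of real PowerMV output:

```python
# Illustrative only: train a random-forest classifier on 2D molecular
# descriptors labelled active/inactive; descriptors and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((1000, 150))          # placeholder descriptor vectors (width assumed)
y = rng.integers(0, 2, 1000)         # 1 = active, 0 = inactive (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```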
Procedia PDF Downloads 391
4238 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized many fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art models for fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This work examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, which has motivated models with more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification method based on a self-attention mechanism, which makes it more robust because self-attention allows it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has gained the attention of many ML engineers because it combines feature learning and clustering into a single framework, with the main goal of improving clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by utilizing a probabilistic clustering approach.
Keywords: machine learning, deep learning, image classification, image clustering
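One of the conventional baselines mentioned above, k-means applied to CNN embeddings, can be sketched as follows; the pretrained backbone, the number of clusters, and the random placeholder images are assumptions rather than the authors' setup.

```python
# Sketch: cluster images by running k-means on embeddings from a pretrained CNN.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# Pretrained CNN backbone used only as a fixed feature extractor
backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                          weights="imagenet")

images = np.random.rand(32, 224, 224, 3).astype("float32")    # placeholder image batch
x = tf.keras.applications.resnet50.preprocess_input(images * 255.0)
embeddings = backbone.predict(x, verbose=0)                    # shape (32, 2048)

# Conventional clustering on the learned embeddings
cluster_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)
print(cluster_labels)
```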
Procedia PDF Downloads 8
4237 Image Processing-Based Maize Disease Detection Using Mobile Application
Authors: Nathenal Thomas
Abstract:
In the food chain and in many other agricultural products, corn, also known as maize, which goes by the scientific name Zea mays subsp. mays, is a widely produced agricultural product. Corn has the highest adaptability: it comes in many different types, is employed in many different industrial processes, and is adaptable to different agro-climatic situations. In Ethiopia, maize is among the most widely grown crops. Small-scale corn farming may be a household's only source of food in developing nations like Ethiopia. The aforementioned data demonstrate that the country's requirement for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor that greatly contributes to this imbalance between the crop's supply and demand is corn disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors influencing crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases in maize plants, such as northern leaf blight and cercospora leaf spot, have distinct symptoms that are visible. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient and widely used subset of machine learning, combined with image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised), simulating the way the human brain processes data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning class that is widely used for image classification, image detection, face recognition, and other problems. This research will also use this algorithm as the state of the art to detect maize diseases by photographing maize leaves using a mobile phone.
Keywords: CNN, Zea mays subsp., leaf blight, cercospora leaf spot
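A minimal Keras sketch of a small CNN classifier of the kind described above is shown below; the layer sizes and the three illustrative classes (healthy, northern leaf blight, cercospora leaf spot) are assumptions, not the architecture used in this study.

```python
# Sketch of a small image classifier for maize leaf photographs (assumed layout).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),   # healthy / leaf blight / leaf spot
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```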
Procedia PDF Downloads 74
4236 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient in plants, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, generating an image bank composed of 4 folders representing the four classes and labeled as T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model. Efficiency was evaluated by a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
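The study itself was implemented in Matlab R2022b; the sketch below shows an equivalent transfer-learning setup in Python/Keras, with an ImageNet-pretrained ResNet-50 backbone and a new four-class head (T1-T4) for 224x224 RGB leaf images. It illustrates the approach and is not the authors' code.

```python
# Sketch: ResNet-50 transfer learning for the four nitrogen-dose classes.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze the backbone; fine-tune later if desired
model = models.Sequential([
    base,
    layers.Dense(4, activation="softmax"),  # classes T1, T2, T3, T4
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```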
Procedia PDF Downloads 19
4235 Protection Plan of Medium Voltage Distribution Network in Tunisia
Abstract:
Distribution networks are often exposed to harmful incidents which can halt the electricity supply to the customer. In this context, we studied a real case of a critical zone of the Tunisian network, which is currently characterized by the dysfunction of its protection plan. In this paper, we were interested in the harmonization of the protection plan settings in order to ensure perfect selectivity and better continuity of service over the whole network.
Keywords: distribution network Gabes-Tunisia, continuity of service, protection plan settings, selectivity
Procedia PDF Downloads 510
4234 Alteration Quartz-K-feldspar-Apatite-Molybdenite at B Anomaly Prospection with Artificial Neural Network to Determine Molybdenite Economic Deposits in Malala District, Western Sulawesi
Authors: Ahmad Lutfi, Nikolas Dhega
Abstract:
The Malala deposit in northwest Sulawesi is the only known porphyry molybdenum occurrence, and the only source of rhenium, in Indonesia. The neural network method produces results that correspond very closely to those of the knowledge-based fuzzy logic method and the weights-of-evidence method. This method required data on solid geology, regional faults, airborne magnetics, gamma-ray surveys, and GIS layers. This interpretation of the network output fits with the intuitive notion that a prospective area has characteristics that closely resemble areas known to contain mineral deposits. This contrasts with the weights-of-evidence and fuzzy logic methods, where, for a given grid location, each input-parameter value automatically results in an increase in the estimated prospectivity. In the Malala District, molybdenum anomalies in stream sediments were obtained over an area in excess of 15 km2, with the Takudan Fault as the most prominent structure, striking 40° to 60° over a distance of about 30 km. Mineralization is developed most weakly at anomaly B, over an area of 4 km2, with a 'shell' up to 50 m thick at the intrusive contact and minor mineralization occurring in the Tinombo Formation. A series of NW-trending, steeply dipping fracture zones, named the East Zone, has an estimated resource of 100 Mt at 0.14% MoS2 and a minimum target of 150 Mt at 0.25%. The Malala porphyries occur as stocks and dykes, are predominantly granitic, belong to the fluorine-poor class of molybdenum deposits, and are of the plutonic sub-type. Unidirectional solidification textures consist of subparallel, crenulated layers of quartz that are separated by layers of intrusive material. The molybdenum mineralization is deuteric in nature, with a dominance of carbonate alteration. Two alteration stages are distinguished: Stage I, with barren quartz-K-feldspar alteration; and Stage II, with quartz-K-feldspar-apatite-molybdenite alteration veins, combined with the presence of disseminated molybdenite with primary biotite in the host intrusive.
Keywords: molybdenite, Malala, porphyries, anomaly B
Procedia PDF Downloads 153
4233 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM), given source characteristics, source-to-site distance, and local site conditions, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the exceedance probability of damage for pre-defined limit states and, therefore, control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
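As a hedged illustration of the random-forest ground-motion modelling idea discussed above, the sketch below fits a regressor for ln(PGA) on magnitude, distance, and a site parameter; the synthetic training data and the specific predictor set are placeholder assumptions, not the study's dataset or model.

```python
# Sketch: a random-forest ground-motion model on synthetic placeholder records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
M = rng.uniform(3.0, 6.5, 2000)              # moment magnitude
R = rng.uniform(5.0, 300.0, 2000)            # source-to-site distance, km
vs30 = rng.uniform(180.0, 760.0, 2000)       # site stiffness proxy, m/s
# Synthetic ln(PGA) with magnitude scaling and distance attenuation, plus noise
ln_pga = 1.2 * M - 1.5 * np.log(R) - 0.002 * vs30 + rng.normal(0.0, 0.5, 2000)

X = np.column_stack([M, R, vs30])
gmm = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, ln_pga)
print(gmm.predict([[5.5, 20.0, 400.0]]))     # ln(PGA) estimate for one scenario
```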
Procedia PDF Downloads 106
4232 Plant Leaf Recognition Using Deep Learning
Authors: Aadhya Kaul, Gautam Manocha, Preeti Nagrath
Abstract:
Our environment comprises a wide variety of plants that are similar to each other, and sometimes the similarity between the plants makes the identification process tedious, thus increasing the workload of botanists all over the world. Botanists cannot be accessible at all times for such laborious plant identification; therefore, there is a need for a quick classification model. Along with the identification of plants, it is also necessary to classify a plant as healthy or not, since for a good lifestyle humans require good food, and this food comes from healthy plants. A large number of techniques have been applied to classify plants as healthy or diseased in order to provide a solution. This paper proposes one such method, anomaly detection using autoencoders, on a collection of leaf images. In this method, an autoencoder model is built using Keras, the original images of the leaves are reconstructed, and a threshold loss is found in order to classify the plant leaves as healthy or diseased. A dataset of plant leaves is used to judge the reconstruction performance of the convolutional autoencoders, and the average accuracy obtained is 71.55% for this purpose.
Keywords: convolutional autoencoder, anomaly detection, web application, FLASK
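A minimal sketch of the thresholding idea described above, assuming a small convolutional autoencoder in Keras: the model is trained on healthy-leaf images only, and a leaf whose reconstruction error exceeds the threshold is flagged as diseased. The architecture, image size, and threshold rule are assumptions, not the authors' exact choices.

```python
# Sketch: convolutional autoencoder anomaly detection with a reconstruction-error threshold.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

autoencoder = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"), layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu", padding="same"), layers.MaxPooling2D(),
    layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2DTranspose(32, 3, strides=2, activation="relu", padding="same"),
    layers.Conv2D(3, 3, activation="sigmoid", padding="same"),
])
autoencoder.compile(optimizer="adam", loss="mse")

healthy = np.random.rand(64, 128, 128, 3).astype("float32")   # placeholder healthy-leaf images
autoencoder.fit(healthy, healthy, epochs=5, batch_size=16, verbose=0)

# Per-image reconstruction error; images above the threshold would be flagged as diseased
errors = np.mean((autoencoder.predict(healthy, verbose=0) - healthy) ** 2, axis=(1, 2, 3))
threshold = errors.mean() + 3 * errors.std()                   # one possible threshold rule
print("reconstruction-error threshold:", threshold)
```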
Procedia PDF Downloads 163
4231 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity at different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, a Kaggle dataset was used, which consists of 15135 total images; 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system has the function of detecting and classifying leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed to provide auxiliary features and leads to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of intermediate features and also to overcome the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
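A simplified sketch of the hybrid idea is given below, with ImageNet-pretrained VGG19 and ResNet50 acting as frozen feature extractors whose pooled features are fused by concatenation before a small dense head; the specific layers, fusion point, and head size are assumptions rather than the authors' exact architecture.

```python
# Sketch: fuse VGG19 and ResNet50 features for abnormal-vs-normal cell classification.
import tensorflow as tf
from tensorflow.keras import layers, Model

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet", pooling="avg")
res = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = False        # both backbones act as fixed feature extractors
res.trainable = False

inp = layers.Input(shape=(224, 224, 3))          # cell image, assumed already preprocessed
f_vgg = vgg(inp)                                 # pooled VGG19 features, width 512
f_res = res(inp)                                 # pooled ResNet50 features, width 2048
fused = layers.Concatenate()([f_vgg, f_res])     # feature fusion
x = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(x)   # abnormal vs. normal cell

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```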
Procedia PDF Downloads 187
4230 Quality and Quantity in the Strategic Network of Higher Education Institutions
Authors: Juha Kettunen
Abstract:
The study analyzes the quality and the size of the strategic network of higher education institutions and the concept of fitness for purpose in quality assurance. It also analyzes the transaction costs of networking that have consequences for the number of members in the network. Empirical evidence is presented from the Consortium on Applied Research and Professional Education, which is a European strategic network of six higher education institutions. The results of the study support the argument that the number of members in the strategic network should be relatively small to provide high-quality results. The practical importance is that networking has been able to promote international research and development projects. The results of this study are important for those who want to design and improve international networks in higher education.
Keywords: higher education, network, research and development, strategic management
Procedia PDF Downloads 348
4229 Energy Efficient Firefly Algorithm in Wireless Sensor Network
Authors: Wafa’ Alsharafat, Khalid Batiha, Alaa Kassab
Abstract:
A wireless sensor network (WSN) is comprised of a huge number of small and cheap devices known as sensor nodes. Usually, these sensor nodes are massively and randomly deployed, in an ad-hoc manner, over hostile and harsh environments to sense, collect, and transmit data to the needed locations (i.e., the base station). One of the main advantages of a WSN is the ability to work in unattended and scattered environments regardless of the presence of humans, such as remote active volcano environments or earthquakes. In an expanding WSN, network lifetime is a major concern. Clustering techniques are important for maximizing network lifetime. Nature-inspired algorithms are developed and optimized to find solutions for various optimization problems. We propose an Energy Efficient Firefly Algorithm to improve network lifetime as far as possible.
Keywords: wireless network, SN, Firefly, energy efficiency
Procedia PDF Downloads 389
4228 Turbulent Channel Flow Synthesis using Generative Adversarial Networks
Authors: John M. Lyne, K. Andrea Scott
Abstract:
In fluid dynamics, direct numerical simulations (DNS) of turbulent flows require large numbers of nodes to appropriately resolve all scales of energy transfer. Due to the size of these databases, sharing these datasets amongst the academic community is a challenge. Recent work has investigated the use of super-resolution to enable database sharing, where a low-resolution flow field is super-resolved to high resolution using a neural network. Recently, Generative Adversarial Networks (GAN) have grown in popularity, with impressive results in the generation of faces, landscapes, and more. This work investigates the generation of unique high-resolution channel flow velocity fields from a low-dimensional latent space using a GAN. The training objective of the GAN is to generate samples such that the distribution of the generated samples is ideally indistinguishable from the distribution of the training data. In this study, the network is trained using samples drawn from a statistically stationary channel flow at a Reynolds number of 560. Results show that the turbulent statistics and energy spectra of the generated flow fields are in reasonable agreement with those of the DNS data, demonstrating that GANs can produce the intricate multi-scale phenomena of turbulence.
Keywords: computational fluid dynamics, channel flow, turbulence, generative adversarial network
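A generic GAN sketch in Keras is shown below for orientation only: a generator maps a latent vector to a 64x64 three-component velocity field and a discriminator scores fields as real (DNS) or generated. The architecture and field size are assumptions and do not reflect the network used in this work.

```python
# Sketch of a generator/discriminator pair for synthesizing 2D velocity-field slices.
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 128
generator = models.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(8 * 8 * 256, activation="relu"),
    layers.Reshape((8, 8, 256)),
    layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(3, 4, strides=2, padding="same"),   # 64x64x3 (u, v, w) field
])
discriminator = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(64, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1),                                            # real/fake logit
])
noise = tf.random.normal([4, latent_dim])
print(discriminator(generator(noise)).shape)                    # (4, 1)
```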
Procedia PDF Downloads 206
4227 How to Modernise the European Competition Network (ECN)
Authors: Dorota Galeza
Abstract:
This paper argues that networks, such as the ECN and the American network, are affected by certain small events which are inherent to path dependence and preclude the full evolution towards efficiency. It is advocated that the American network is superior to the ECN in many respects due to its greater flexibility and longer history. This stems in particular from the creation of the American network, which was based on a small number of cases. Such a structure encourages further changes and modifications which are not necessarily radical. The ECN, by contrast, was established by legislative action, which explains its rigid structure and resistance to change. This paper is an attempt to transpose the superiority of the American network on to the ECN. It looks at concepts such as judicial cooperation, harmonisation of procedure, peer review and regulatory impact assessments (RIAs), and dispute resolution procedures.
Keywords: antitrust, competition, networks, path dependence
Procedia PDF Downloads 315
4226 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset
Authors: Gabriele Borg, Alexei Debono, Charlie Abela
Abstract:
There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobilization demand. This research takes advantage of data which is continuously being sourced and converts it into useful information related to the traffic problem on the Maltese roads. The scope of this paper is to provide a methodology to create a custom dataset (MalTra - Malta Traffic) compiled from multiple participants at various locations across the island, in order to identify the most common routes taken and expose the main areas of activity. Such use of big data is found in various technologies referred to as ITSs (Intelligent Transportation Systems), and it has been concluded that there is significant potential in utilising such sources of data on a nationwide scale. Furthermore, a series of traffic prediction graph neural network models are evaluated to compare MalTra to large-scale traffic datasets.
Keywords: graph neural networks, traffic management, big data, mobile data patterns
Procedia PDF Downloads 130
4225 A Mobile Application for Analyzing and Forecasting Crime Using Autoregressive Integrated Moving Average with Artificial Neural Network
Authors: Gajaanuja Megalathan, Banuka Athuraliya
Abstract:
Crime is one of our society's most intimidating and threatening challenges. With the majority of the population residing in cities, many experts, and data provided by local authorities, suggest a rapid increase in the number of crimes committed in these cities in recent years. Crime rates have been on an increasing trend. People living in Sri Lanka have the right to know the exact crime rates, and the future crime rates, of the place they are living in. Due to the current economic crisis, crime rates have spiked, and many thefts and murders have been recorded within the last 6-10 months. Although there are many sources of information, there is no solid way of searching for and assessing the safety of a place. For all these reasons, there is a need for the public to feel safe when they are introduced to new places. Through this research, the author aims to develop a mobile application that will be a solution to this problem. It is mainly targeted at tourists, and people who have recently relocated will also gain an advantage from this application. Moreover, the ARIMA model combined with an ANN is to be used to predict crime rates. From past researchers' works, it is evident that the ARIMA model combined with Artificial Neural Networks has not previously been used to forecast crimes.
Keywords: arima model, ANN, crime prediction, data analysis
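One common way to combine ARIMA with an ANN is sketched below: ARIMA captures the linear structure of a monthly crime-count series, an MLP is fitted to the ARIMA residuals, and the two forecasts are summed. The exact hybrid used in this work is not specified in the abstract, so the data, model orders, and lag choices here are placeholders.

```python
# Sketch: hybrid ARIMA + ANN forecast on a synthetic crime-count series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
counts = 50 + np.cumsum(rng.normal(0, 3, 120))        # placeholder monthly crime counts

arima = ARIMA(counts, order=(1, 1, 1)).fit()          # linear component
residuals = arima.resid

lag = 3                                               # lagged residuals as ANN inputs
X = np.column_stack([residuals[i:len(residuals) - lag + i] for i in range(lag)])
y = residuals[lag:]
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=7).fit(X, y)

next_linear = arima.forecast(1)[0]                    # ARIMA one-step-ahead forecast
next_nonlinear = ann.predict(residuals[-lag:].reshape(1, -1))[0]
print("hybrid forecast:", next_linear + next_nonlinear)
```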
Procedia PDF Downloads 131
4224 Neuroplasticity: A Fresh Beginning for Life
Authors: Leila Maleki, Ezatollah Ahmadi
Abstract:
Neuroplasticity, or the flexibility of the neural system, is the ability of the brain to adapt to the lack or deterioration of sense and the capability of the neural system to modify itself through changing shape and function. Not only have studies revealed that neuroplasticity does not end in childhood, but they have also proven that it continues until the end of life, is not limited to the neural system, and covers the cognitive system as well. In the field of cognition, neuroplasticity is defined as the ability to change old thoughts according to new conditions and the individuals' differences in using various styles of cognitive regulation, inducing several social, emotional, and cognitive outcomes. On the other hand, the complexities of daily life necessitate cognitive neuroplasticity in order to adapt to different circumstances. The present paper attempts to discuss and define major theories and principles of neuroplasticity and elaborate on nature or nurture.
Keywords: neuroplasticity, cognitive plasticity, plasticity theories, plasticity mechanisms
Procedia PDF Downloads 495
4223 Neuroplasticity: A Fresh Beginning for Life
Authors: Leila Maleki, Ezatollah Ahmadi
Abstract:
Neuroplasticity, or the flexibility of the neural system, is the ability of the brain to adapt to the lack or deterioration of sense and the capability of the neural system to modify itself through changing shape and function. Not only have studies revealed that neuroplasticity does not end in childhood, but they have also proven that it continues until the end of life, is not limited to the neural system, and covers the cognitive system as well. In the field of cognition, neuroplasticity is defined as the ability to change old thoughts according to new conditions and the individuals' differences in using various styles of cognitive regulation, inducing several social, emotional, and cognitive outcomes. On the other hand, the complexities of daily life necessitate cognitive neuroplasticity in order to adapt to different circumstances. The present paper attempts to discuss and define major theories and principles of neuroplasticity and elaborate on nature or nurture.
Keywords: neuroplasticity, cognitive plasticity, plasticity theories, plasticity mechanisms
Procedia PDF Downloads 452
4222 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger
Authors: Hany Elsaid Fawaz Abdallah
Abstract:
This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds numbers from 10000 to 40000, pitch-to-height ratios from 1 to 4, and Prandtl numbers from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used at the output layer as the activation function, while for the hidden layers the rectified linear unit activation function was utilized. In order to accelerate ANN training, loss function minimization may be achieved by the adaptive moment estimation algorithm (ADAM). The "MinMax" normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique is applied to the ANN network using the new data. This procedure was repeated until loss function convergence was achieved or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3.0 using open-source libraries such as scikit-learn, TensorFlow, and Keras. Mean average percentage error values of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved for the ANN model. Therefore, higher accuracy compared to the generalized correlations was achieved. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations
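The abstract gives enough detail to sketch the network structure: four inputs (corrugation angle, Reynolds number, pitch-to-height ratio, Prandtl number), hidden layers of 12, 8, and 6 ReLU neurons, a linear output, the Adam optimizer, MinMax-scaled inputs, and a batch size of 200. The Keras sketch below follows that description with placeholder data and a single output target, both of which are assumptions.

```python
# Sketch of the {12-8-6} ANN described above, trained on synthetic placeholder data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 60, 500),        # corrugation angle, deg
                     rng.uniform(1e4, 4e4, 500),     # Reynolds number
                     rng.uniform(1, 4, 500),         # pitch-to-height ratio
                     rng.uniform(0.7, 200, 500)])    # Prandtl number
y = rng.uniform(50, 400, 500)                        # placeholder Nusselt numbers

X_scaled = MinMaxScaler().fit_transform(X)           # "MinMax" normalization of inputs
model = models.Sequential([
    layers.Input(shape=(4,)),
    layers.Dense(12, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(6, activation="relu"),
    layers.Dense(1, activation="linear"),            # linear output layer
])
model.compile(optimizer="adam", loss="mse")          # ADAM minimizes the loss
model.fit(X_scaled, y, batch_size=200, epochs=10, verbose=0)
```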
Procedia PDF Downloads 87
4221 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics
Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez
Abstract:
In recent years, Mexico's population has seen a rise in different negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person's physical, mental, and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person's emotional state, can be applied in an effort to decrease these negative effects. With the use of different non-invasive physiological sensors, such as EEG, luminosity, and face recognition, we gather information on the subject's current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject's emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level, and noise. Finally, a neural network based control algorithm is given the data obtained in order to provide feedback to the system and automate the modification of the environment variables and the audiovisual content shown, in the hope that these changes can positively alter the subject's emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject's emotional state. Red illumination increased the impact of violent images, and green illumination along with relaxing images decreased the subject's levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact on either gender. The population sample was mainly constituted by college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people's lives. The objective of this research is to create a positive impact with the application of technology to everyday activities; nonetheless, an ethical problem arises, since this can also be applied to control a person's emotions and shift their decision making.
Keywords: data analysis, emotional domotics, performance improvement, neural network
Procedia PDF Downloads 140
4220 Enhanced Constraint-Based Optical Network (ECON) for Enhancing OSNR
Authors: G. R. Kavitha, T. S. Indumathi
Abstract:
With the constantly rising demands of multimedia services, the requirements of long-haul transport networks are constantly changing in the area of optical networking. Maximum data transmission using optimization of the communication channel poses the biggest challenge. Although there has been a constant focus on this area over the past decade, there is little evidence of significant results having been accomplished. Hence, after reviewing potential optical network designs from the literature, it was understood that the optical signal-to-noise ratio is one of the elementary attributes that can define the performance of an optical network. In this paper, we propose a framework termed ECON (Enhanced Constraint-based Optical Network) that primarily optimizes the optical signal-to-noise ratio using a ROADM. The simulation is performed in Matlab, and the optical signal-to-noise ratio is extracted considering the system matrix. The outcome of the proposed study shows an optimized OSNR as compared to the existing studies.
Keywords: component, optical network, reconfigurable optical add-drop multiplexer, optical signal-to-noise ratio
Procedia PDF Downloads 488
4219 A Proposed Algorithm for Obtaining the Map of Subscribers' Density Distribution for a Mobile Wireless Communication Network
Authors: C. Temaneh-Nyah, F. A. Phiri, D. Karegeya
Abstract:
This paper presents an algorithm for obtaining the map of subscribers' density distribution for a mobile wireless communication network, based on actual subscriber traffic data obtained from the base stations. This is useful in the statistical characterization of the mobile wireless network.
Keywords: electromagnetic compatibility, statistical analysis, simulation of communication network, subscriber density
Procedia PDF Downloads 309
4218 Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM
Authors: Hadeer R. M. Tawfik, Rania A. K. Birry, Amani A. Saad
Abstract:
Eyes are considered to be the most sensitive and important organ for human beings. Thus, any eye disorder will affect the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that classified the image set into three classes: normal, early, and advanced stage. A comparison between the two classifiers used, the support vector machine (SVM) and the artificial neural network (ANN), was done for the same dataset of 120 eye images. It was concluded that the SVM gave better results than the ANN: the SVM achieved 96.8% accuracy, whereas the ANN achieved 92.3% accuracy.
Keywords: cataract, classification, detection, feature extraction, grading, log-gabor, neural networks, support vector machines, wavelet
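A simplified sketch of the feature-extraction-plus-SVM stage is given below, assuming PyWavelets and scikit-learn; the 2D Log Gabor filtering step is omitted for brevity, and the images and labels are placeholders rather than the study's dataset.

```python
# Sketch: DWT sub-band features of eye images fed to an SVM with three grade classes.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(img):
    """Mean absolute value of each 2D DWT sub-band as a small feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
    return np.array([np.mean(np.abs(band)) for band in (cA, cH, cV, cD)])

rng = np.random.default_rng(3)
images = rng.random((120, 128, 128))          # placeholder grayscale eye images
labels = rng.integers(0, 3, 120)              # 0 normal, 1 early, 2 advanced (placeholder)

X = np.array([dwt_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```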
Procedia PDF Downloads 332
4217 Twitter Ego Networks and the Capital Markets: A Social Network Analysis Perspective of Market Reactions to Earnings Announcement Events
Authors: Gregory D. Saxton
Abstract:
Networks are everywhere: lunch ties among co-workers, golfing partnerships among employees, interlocking board-of-director connections, Facebook friendship ties, etc. Each network varies in terms of its structure: its size, how inter-connected network members are, and the prevalence of sub-groups and cliques. At the same time, within any given network, some network members will have a more important, more central position on account of their greater number of connections or their capacity as "bridges" connecting members of different network cliques. The logic of network structure and position is at the heart of what is known as social network analysis, and this paper applies this logic to the study of the stock market. Using an array of data analytics and machine learning tools, this study will examine 17 million Twitter messages discussing the stocks of the firms in the S&P 1,500 index in 2018. Each of these 1,500 stocks has a distinct Twitter discussion network that varies in terms of core network characteristics such as size, density, influence, norms and values, level of activity, and embedded resources. The study's core proposition is that the ultimate effect of any market-relevant information is contingent on the characteristics of the network through which it flows. To test this proposition, this study operationalizes each of the core network characteristics and examines their influence on market reactions to 2018 quarterly earnings announcement events.
Keywords: data analytics, investor-to-investor communication, social network analysis, Twitter
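A small illustration (not the study's code) of how a few of the named network characteristics, namely size, density, and member centrality, could be computed for a single ticker's Twitter discussion network with networkx; the edge list is a placeholder.

```python
# Sketch: core structural measures of one ticker's discussion network.
import networkx as nx

# Placeholder reply/mention ties among users discussing one ticker
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]
G = nx.Graph(edges)

size = G.number_of_nodes()
density = nx.density(G)                         # how inter-connected members are
degree = nx.degree_centrality(G)                # importance by number of connections
bridges = nx.betweenness_centrality(G)          # "bridge" positions between cliques
print(size, round(density, 2),
      max(degree, key=degree.get), max(bridges, key=bridges.get))
```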
Procedia PDF Downloads 121
4216 New Neuroplasmonic Sensor Based on Soft Nanolithography
Authors: Seyedeh Mehri Hamidi, Nasrin Asgari, Foozieh Sohrabi, Mohammad Ali Ansari
Abstract:
A new neuroplasmonic sensor based on a one-dimensional plasmonic nano-grating has been prepared. To record neural activity, the sample was exposed to different infrared lasers and then characterized by ellipsometry parameters. Our results show efficient sensitivity to the different laser excitations.
Keywords: neural activity, Plasmonic sensor, Nanograting, Gold thin film
Procedia PDF Downloads 398
4215 CERD: Cost Effective Route Discovery in Mobile Ad Hoc Networks
Authors: Anuradha Banerjee
Abstract:
A mobile ad hoc network is an infrastructure-less network, where nodes are free to move independently in any direction. The nodes have limited battery power; hence, we require an energy-efficient route discovery technique to enhance their lifetime and the network performance. In this paper, we propose an energy-efficient route discovery technique, CERD, that greatly reduces the number of route requests flooded into the network and also gives priority to the route request packets sent from the routers that have communicated with the destination very recently, in single or multi-hop paths. This not only enhances the lifetime of nodes but also decreases the delay in tracking the destination.
Keywords: ad hoc network, energy efficiency, flooding, node lifetime, route discovery
Procedia PDF Downloads 347
4214 Optimisation of the Hydrometeorological-Hydrometric Network: A Case Study in Greece
Authors: E. Baltas, E. Feloni, G. Bariamis
Abstract:
The operation of a network of hydrometeorological-hydrometric stations is basic infrastructure for the management of water resources, as well as for flood protection. The assessment of water resources potential led to the necessity of adopting management practices, including a multi-criteria analysis, for the optimum design of the region's station network. This research work aims at the optimisation of a new/existing network, using GIS methods. The planning of optimum network stations is based on the guidelines of international organizations such as the World Meteorological Organization (WMO). The uniform spatial distribution of the stations, the drainage basins for the hydrometric stations, and criteria concerning low terrain slope, accessibility to the stations, and proximity to sites of hydrological interest were taken into consideration for its development. The abovementioned methodology has been implemented for two different areas, the Florina municipality and the Argolis area in Greece, and a comparison of the results has been conducted.
Keywords: GIS, hydrometeorological, hydrometric, network, optimisation
Procedia PDF Downloads 287
4213 Improved Predictive Models for the IRMA Network Using Nonlinear Optimisation
Authors: Vishwesh Kulkarni, Nikhil Bellarykar
Abstract:
Cellular complexity stems from the interactions among thousands of different molecular species. Thanks to the emerging fields of systems and synthetic biology, scientists are beginning to unravel these regulatory, signaling, and metabolic interactions and to understand their coordinated action. Reverse engineering of biological networks has several benefits, but a poor quality of data combined with the difficulty in reproducing it limits the applicability of these methods. A few years back, many of the commonly used predictive algorithms were tested on a network constructed in the yeast Saccharomyces cerevisiae (S. cerevisiae) to resolve this issue. The network was a synthetic network of five genes regulating each other, built for the so-called in vivo reverse-engineering and modeling assessment (IRMA). The network was constructed in S. cerevisiae since it is a simple and well-characterized organism. The synthetic network included a variety of regulatory interactions, thus capturing the behaviour of larger eukaryotic gene networks on a smaller scale. We derive a new set of algorithms by solving a nonlinear optimization problem and show how these algorithms outperform other algorithms on these datasets.
Keywords: synthetic gene network, network identification, optimization, nonlinear modeling
Procedia PDF Downloads 156
4212 Evaluation of Robust Feature Descriptors for Texture Classification
Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo
Abstract:
Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scaling-invariant image features such as SIFT, SURF, and ORB have been successfully used in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors, combined with k-means clustering, for texture classification. Different classifiers, including k-NN, Naive Bayes, Back Propagation Neural Network, Decision Tree, and KStar, were applied to three texture image sets - UIUCTex, KTH-TIPS, and Brodatz, respectively. Experimental results reveal SIFT as the holder of the best average accuracy rate in UIUCTex and KTH-TIPS, while SURF is advantaged in the Brodatz texture set. The BP neural network works best in test set classification among all the classifiers used.
Keywords: texture classification, texture descriptor, SIFT, SURF, ORB
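The general pipeline evaluated here can be sketched as a bag-of-visual-words classifier: local SIFT descriptors are quantized with k-means and a k-NN classifier is trained on the resulting histograms. The OpenCV/scikit-learn sketch below uses placeholder images and assumed parameter choices, not the paper's datasets or settings.

```python
# Sketch: SIFT + k-means bag-of-visual-words + k-NN texture classification.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
# Placeholder "texture" images: random bright blocks on a dark background
images = np.zeros((40, 128, 128), dtype=np.uint8)
for img in images:
    for _ in range(10):
        x, y = rng.integers(0, 100, 2)
        img[y:y + 28, x:x + 28] = rng.integers(50, 255)
labels = rng.integers(0, 4, 40)                    # placeholder class labels

sift = cv2.SIFT_create()                           # requires OpenCV >= 4.4

def descriptors(img):
    _, des = sift.detectAndCompute(img, None)
    return des if des is not None else np.zeros((1, 128), np.float32)

all_des = np.vstack([descriptors(im) for im in images])
codebook = KMeans(n_clusters=20, n_init=5, random_state=5).fit(all_des)

def bovw_histogram(img):
    words = codebook.predict(descriptors(img))
    return np.bincount(words, minlength=20).astype(float)

X = np.array([bovw_histogram(im) for im in images])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```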
Procedia PDF Downloads 369