Search results for: siamese networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2710

1900 Feasibility Study on the Application of Waste Materials for Production of Sustainable Asphalt Mixtures

Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman

Abstract:

Road networks have been expanding all over the world during the past few decades to meet the increasing freight volumes created by population growth and industrial development. At the same time, the rate of generation of solid wastes in society is increasing with population growth, technological development, and changes in people's lifestyles. Thus, the management of solid wastes has become an acute problem. Accordingly, there is a need for greater efficiency in the construction and maintenance of road networks, in particular by reducing the overall cost and the consumption of natural materials such as aggregates. An efficient means to reduce the construction and maintenance costs of road networks is to replace natural (virgin) materials with secondary, recycled materials. Recycling also helps to reduce pressure on landfills and the demand for extraction of natural virgin materials, thus ensuring sustainability. The application of solid wastes in the asphalt layer reduces not only the environmental issues associated with waste disposal but also the demand for virgin materials, which subsequently contributes to sustainability. Therefore, this research aims to investigate the feasibility of applying waste materials such as glass and construction and demolition wastes as alternative materials in pavement construction, particularly flexible pavements. To this end, various combinations of different waste materials in certain percentages are considered in designing the asphalt mixture. One of the goals of this research is to determine the optimum percentage of each of these materials in the mixture. This is done through a series of tests to evaluate the volumetric properties and resilient modulus of the mixture. The information and data collected from these tests are used to select adequate samples for further assessment through advanced tests, such as the triaxial dynamic test and the fatigue test, in order to investigate the asphalt mixture's resistance to permanent deformation and cracking. This paper presents the results of these investigations on the application of waste materials in asphalt mixtures for the production of a sustainable asphalt mix.

Keywords: asphalt, glass, pavement, recycled aggregate, sustainability

Procedia PDF Downloads 218
1899 Economic Life of Iranians on Instagram and the Disturbance in Politics

Authors: Mohammad Zaeimzade

Abstract:

The development of communication technologies is clearly and rapidly reducing the distance between the virtual and real worlds. The notion of living in a two-spatial or doubly globalized world, or any other formulation that describes the mixing of real and virtual life, remains relevant and debatable. In the present age of communication, social networks have transformed the message equation and turned the audience from passive recipients into users. Platforms have penetrated many aspects of human life, from culture and education to the economy. Among them, Instagram, one of the most extensive image-based interactive networks, plays a significant role in this new economic life. It needs little explanation that the era of treating every messenger as a neutral conduit that merely carries content has passed. Every messenger has its own economic, political and, of course, security background; Instagram is no exception to this rule, and it leaves its mark on economic life as well. Iran, as the 19th largest economy in the world, has not been immune to new platforms, including Instagram, and their consequences for the economy. Generally, in the policy-making space, there are two simple and inflexible views, pessimistic or optimistic, on this issue, and the holders of each view usually have their own one-dimensional policy recommendations regarding how to deal with Instagram: prescriptions that are usually very different and sometimes contradictory. In this article, we show that this confusion among policymakers is the result of not accurately describing the reality of Instagram's effect, and that the reason for this inaccurate description is a conflict of interest on the part of describers and researchers. We first take a look at the main indicators of the Iranian economy and estimate the role of the digital economy in Iran's economic growth; we then study the conflicting descriptions of the Instagram-based digital economy, including statistics whose estimates of the number of economic users of Instagram in Iran range from 300 thousand to 9 million. Finally, we examine the government's actions in this matter, especially in the context of the street riots of October and November 2022, and we suggest an intermediate approach.

Keywords: digital economy, instagram, conflict of interest, social networks

Procedia PDF Downloads 57
1898 Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Authors: Megha Gupta, Nupur Prakash

Abstract:

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study carries out an evaluation of several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the accuracy achieved by these models. The highest test accuracy and F1-score, 99.59% and 0.996, respectively, were achieved by GoogLeNet trained with mini-batch gradient descent with momentum.
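
To make the comparison concrete, the following is a minimal sketch (not the authors' code) of how pretrained torchvision CNNs can be fine-tuned and compared in PyTorch. The dataset path, the subset of architectures (ZFNet has no torchvision builder and is omitted), and the training settings are illustrative assumptions.

```python
# Hypothetical sketch of comparing pretrained CNNs on a leaf-disease dataset (not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("new_plant_diseases/train", transform=tfm)  # assumed folder layout
valid_ds = datasets.ImageFolder("new_plant_diseases/valid", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)
valid_dl = DataLoader(valid_ds, batch_size=64)
num_classes = len(train_ds.classes)  # 38 classes per the abstract's counts (26 diseased + 12 healthy)

def build(name):
    # Replace the final layer so the pretrained ImageNet backbone predicts the leaf classes.
    if name == "alexnet":
        m = models.alexnet(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    elif name == "vgg16":
        m = models.vgg16(weights="IMAGENET1K_V1")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    elif name == "googlenet":
        m = models.googlenet(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    else:  # resnet50
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m.to(device)

for name in ["alexnet", "vgg16", "googlenet", "resnet50"]:
    model = build(name)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # mini-batch SGD with momentum
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        out = model(x)
        if hasattr(out, "logits"):   # GoogLeNet returns a namedtuple with aux logits in train mode
            out = out.logits
        loss_fn(out, y).backward()
        opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in valid_dl:
            pred = model(x.to(device)).argmax(1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    print(f"{name}: validation accuracy {correct / total:.4f}")
```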

Keywords: comparative analysis, convolutional neural networks, deep learning, plant disease identification

Procedia PDF Downloads 174
1897 Free and Open Source Licences, Software Programmers, and the Social Norm of Reciprocity

Authors: Luke McDonagh

Abstract:

Over the past three decades, free and open source software (FOSS) programmers have developed new, innovative and legally binding licences that have in turn enabled the creation of innumerable pieces of everyday software, including Linux, Mozilla Firefox and Open Office. That FOSS has been highly successful in competing with 'closed source software' (e.g. Microsoft Office) is now undeniable, but in noting this success, it is important to examine in detail why this system of FOSS has been so successful. One key reason is the existence of networks or communities of programmers, who are bound together by a key shared social norm of 'reciprocity'. At the same time, these FOSS networks are not unitary – they are highly diverse and there are large divergences of opinion between members regarding which licences are generally preferable: some members favour the flexible ‘free’ or 'no copyleft' licences, such as BSD and MIT, while other members favour the ‘strong open’ or 'strong copyleft' licences such as GPL. This paper argues that without both the existence of the shared norm of reciprocity and the diversity of licences, it is unlikely that the innovative legal framework provided by FOSS would have succeeded to the extent that it has.

Keywords: open source, copyright, licensing, copyleft

Procedia PDF Downloads 351
1896 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teenagers is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety or a lack of positive social skills. Teenage smoking should not be minimized, because it can act as a 'gateway' to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and providing many technologies that can help track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. These technologies may enable early discovery of drug abuse in order to prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. Multiple experimental models were built using the log-sigmoid transfer function. In particular, a 10-fold cross-validation scheme is used to assess the generalization of the proposed system. The experiments achieved 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan according to a set of questions compiled by specialists, and were obtained through the families of drug abusers.
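
As a rough illustration of such a screening model, the sketch below builds an MLP with a logistic-sigmoid activation and evaluates it with 10-fold cross-validation in scikit-learn. The hidden-layer topology and the placeholder data are assumptions, not the study's questionnaire data.

```python
# Hypothetical sketch of the kind of MLP-based screening model described above
# (50 questionnaire inputs, one binary output, log-sigmoid activation, 10-fold CV).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((184, 50))        # placeholder for the 184 questionnaire cases
y = rng.integers(0, 2, 184)      # placeholder labels: 1 = drug addict, 0 = not

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20, 10),  # assumed topology; the paper tunes this iteratively
                  activation="logistic",        # log-sigmoid transfer function
                  max_iter=2000, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```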

Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system

Procedia PDF Downloads 277
1895 Human Performance Evaluation of Advanced Cardiac Life Support Procedure Using Fault Tree and Bayesian Network

Authors: Shokoufeh Abrisham, Seyed Mahmoud Hossieni, Elham Pishbin

Abstract:

In this paper, a hybrid method based on fault tree analysis (FTA) and Bayesian networks (BNs) is employed to evaluate the team performance quality of advanced cardiac life support (ACLS) procedures in the emergency department. Based on American Heart Association (AHA) guidelines, a categorization of staff actions leading to clinical incidents, and discussions with emergency medicine experts, a fault tree model of the ACLS procedure is obtained from a human performance perspective. The obtained FTA model is converted into BNs, and several scenarios are defined to demonstrate the efficiency and flexibility of the presented BN model. A sensitivity analysis is also conducted to indicate the effects of team leader presence and of uncertainty in expert knowledge on the quality of ACLS. The proposed BN-based model shows how the results of risk analysis for medical procedures can be brought closer to reality compared to results obtained from FTA alone.
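
For readers unfamiliar with the FTA-to-BN conversion, the following toy sketch turns a single OR-gate fault-tree fragment into a discrete Bayesian network with pgmpy and queries one scenario. The node names, probabilities and library choice are illustrative assumptions rather than the paper's ACLS model.

```python
# Illustrative conversion of a tiny OR-gate fault-tree fragment into a Bayesian network
# (node names and probabilities are made up; the paper's ACLS model is far larger).
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Two basic human-performance events feed one top event through an OR gate.
bn = BayesianNetwork([("DelayedDefibrillation", "ACLSFailure"),
                      ("WrongDrugDose", "ACLSFailure")])

cpd_delay = TabularCPD("DelayedDefibrillation", 2, [[0.95], [0.05]])   # P(no), P(yes)
cpd_dose = TabularCPD("WrongDrugDose", 2, [[0.97], [0.03]])
# OR gate: the top event occurs if either basic event occurs.
cpd_fail = TabularCPD("ACLSFailure", 2,
                      values=[[1.0, 0.0, 0.0, 0.0],   # P(failure = no)
                              [0.0, 1.0, 1.0, 1.0]],  # P(failure = yes)
                      evidence=["DelayedDefibrillation", "WrongDrugDose"],
                      evidence_card=[2, 2])
bn.add_cpds(cpd_delay, cpd_dose, cpd_fail)
assert bn.check_model()

# Scenario analysis, e.g. probability of failure given that the drug dose was wrong.
infer = VariableElimination(bn)
print(infer.query(["ACLSFailure"], evidence={"WrongDrugDose": 1}))
```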

Keywords: advanced cardiac life support, fault tree analysis, Bayesian belief networks, human performance, healthcare systems

Procedia PDF Downloads 129
1894 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study

Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh

Abstract:

Younger and younger children now use a smartphone, a device which has become 'a must have' and without which the life of children would be almost 'unthinkable'. Devices are becoming lighter and lighter while offering an array of options and applications as well as the unavoidable access to the Internet, without which they would be almost unusable. Taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chat and messaging services are only some of the numerous features offered by 'smart' devices. They have replaced the alarm clock, home phone, camera, tablet and other devices. Their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has some downsides. For instance, free time used to be spent in nature, playing, doing sports or other activities that enable children adequate psychophysiological growth and development. The increasing use of smartphones during classes to check statuses on social networks, message friends or play online games is just one of the possible negative aspects of their application. Considering that the age of the population using smartphones is decreasing and that smartphones are no longer 'foreign' to children of pre-school age (smartphones are used at home or in coffee shops or shopping centers while waiting for parents, often to play video games inappropriate for their age), particular attention must be paid to a very sensitive group, the teenagers, who are almost never separated from their 'pets'. This paper is divided into two sections, a theoretical and an empirical one. The theoretical section gives an overview of the pros and cons of smartphone use, while the empirical section presents the results of research conducted in three elementary schools on the use of smartphones and, specifically, their use during classes and breaks and for searching information on the Internet and checking status updates and 'likes' on the Facebook social network.

Keywords: education, smartphone, social networks, teenagers

Procedia PDF Downloads 437
1893 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution aims to extract features that identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features with regression or other transparent machine learning methods gives a portrait of the authors' writing style, but such methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks; however, few works take them together. Besides, predictions made by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize statistical style and content features but also take advantage of both syntactic and semantic features. Different from an end-to-end neural model, feature selection and prediction are two separate steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give the prediction and an understandable representation of writing style. Experiments show that our extracted features can improve on the state-of-the-art methods on three benchmark datasets.
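
The two-step idea (feature selection followed by an interpretable classifier) can be sketched as below. A chi-square filter over character n-grams stands in for the attentive n-gram network, so the selector, the toy corpus and the parameter choices are assumptions, not the authors' implementation.

```python
# Two-step sketch: select useful n-gram features, then fit an interpretable logistic regression.
# A chi-square filter stands in for the paper's attentive n-gram network (an assumption).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

docs = ["the quick brown fox jumps over the lazy dog, as usual.",
        "meanwhile, an entirely different author favours longer clauses; punctuation differs too.",
        "the quick grey fox naps under the old oak, as usual."]
authors = ["author_a", "author_b", "author_a"]          # placeholder corpus

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))   # character n-grams capture style
X = vec.fit_transform(docs)
selector = SelectKBest(chi2, k=min(500, X.shape[1])).fit(X, authors)
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X), authors)

# The logistic-regression weights give an understandable portrait of each author's style.
kept = vec.get_feature_names_out()[selector.get_support()]
for ngram, weight in sorted(zip(kept, clf.coef_[0]), key=lambda t: abs(t[1]), reverse=True)[:10]:
    print(repr(ngram), round(weight, 3))
```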

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 116
1892 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an "interaction map", a graphical representation of the patterns of interaction between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classification of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore "what if" scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 399
1891 [Keynote Talk]: Knowledge Codification and Innovation Success within Digital Platforms

Authors: Wissal Ben Arfi, Lubica Hikkerova, Jean-Michel Sahut

Abstract:

This study examines interfirm networks in the digital transformation era and, in particular, how tacit knowledge codification affects innovation success within digital platforms. One of the most important features of digital transformation and of innovation process outcomes is the emergence of digital platforms, as an interfirm network, at the heart of open innovation. This research aims to illuminate how digital platforms influence inter-organizational innovation through virtual team interactions and knowledge sharing practices within an interfirm network. Consequently, it contributes to the strategic management literature on new product development (NPD), open innovation, industrial management, and the management of emerging interfirm networks. The empirical findings show, on the one hand, that knowledge conversion may be enhanced, especially by socialization, which appears to be the most important phase as it plays a crucial role in holding the virtual team members together. On the other hand, in the process of socialization, tacit knowledge codification is crucial because it provides the structure needed for the interfirm network actors to interact and act to reach common goals, which favors the emergence of open innovation. Finally, our results identify several conditions that are necessary, but not always sufficient, for interfirm managers involved in NPD and innovation strategies seeking to shape increasingly interconnected and borderless markets and business collaborations. In the digital transformation era, the need for adaptive and innovative business models as well as new and flexible network forms is becoming more significant than ever. Supported by technological advancements and digital platforms, companies can benefit from increased market opportunities and create new markets for their innovations through alliances and collaborative strategies, as a way of reducing or eliminating environmental uncertainty and entry barriers. Consequently, an efficient and well-structured interfirm network is essential to create network capabilities, ensure tacit knowledge sharing, enhance organizational learning, and foster open innovation success within digital platforms.

Keywords: interfirm networks, digital platform, virtual teams, open innovation, knowledge sharing

Procedia PDF Downloads 109
1890 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

Failure of tailings management facilities (TMF) for radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is likely to lead to international conflict. This risk exists in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites and, especially, the impact on river basins after natural hazards (e.g., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailing material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material input. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Of the 23 TMFs, 19 ponds are close to the river network. The spatiotemporal distributions of uranium along the river network were then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as an example, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% of material input, respectively. In comparison to the guideline value for uranium in drinking water (30 µg/L) provided by the World Health Organization, the observed concentrations at the border were 550‒1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies is proposed. Among these, the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMFs efficiently based on the benefit-sharing concept, and the long-term strategy aims to rehabilitate the site through the relocation of all TMFs.
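
For orientation, the sketch below implements classic Muskingum channel routing of an inflow wave through one reach. The Muskingum-Cunge variant used in the study derives the routing parameters from channel geometry; the values of K, X, dt and the inflow series here are illustrative assumptions.

```python
# Minimal sketch of Muskingum channel routing of a pollutant/flood wave between river nodes.
# (The Muskingum-Cunge variant used in the paper derives K and X from channel geometry;
#  the K, X, dt and inflow hydrograph below are illustrative assumptions.)
def muskingum_route(inflow, K=0.5, X=0.2, dt=0.25):
    """Route an inflow hydrograph through one reach; units of K and dt must match (here hours)."""
    D = 2.0 * K * (1.0 - X) + dt
    c1 = (dt - 2.0 * K * X) / D
    c2 = (dt + 2.0 * K * X) / D
    c3 = (2.0 * K * (1.0 - X) - dt) / D
    outflow = [inflow[0]]                      # assume initial steady state O1 = I1
    for i in range(1, len(inflow)):
        o = c1 * inflow[i] + c2 * inflow[i - 1] + c3 * outflow[-1]
        outflow.append(o)
    return outflow

# A short pulse of contaminated flow entering the reach (m^3/s), attenuating downstream.
inflow = [10, 10, 60, 180, 120, 60, 30, 15, 10, 10]
print([round(q, 1) for q in muskingum_route(inflow)])
```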

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 100
1889 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they can provide a methodology to model many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance behave differently from that of regular applications. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of several example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect performance in order to identify possible changes that would improve it. The results are discussed in relation to the factors that contribute to performance degradation.

Keywords: graph computation, graph500 benchmark, parallel architectures, parallel programming, workload characterization

Procedia PDF Downloads 127
1888 A Bayesian Network Approach to Customer Loyalty Analysis: A Case Study of Home Appliances Industry in Iran

Authors: Azam Abkhiz, Abolghasem Nasir

Abstract:

To achieve a sustainable competitive advantage in the market, it is necessary to provide and improve customer satisfaction and loyalty. To reach this objective, companies need to identify and analyze their customers, and it is therefore critical to measure the level of customer satisfaction and loyalty very carefully. This study attempts to build a conceptual model that provides clear insights into customer loyalty. Using Bayesian networks (BNs), a model is proposed to evaluate customer loyalty and its consequences, such as repurchase and positive word-of-mouth. A BN is a probabilistic approach that predicts the behavior of a system based on observed stochastic events. The most relevant determinants of customer loyalty are identified through a literature review. Perceived value, service quality, trust, corporate image, satisfaction, and switching costs are the most important variables that explain customer loyalty. The data are collected through a questionnaire-based survey of 1430 customers of a home appliances manufacturer in Iran. Four scenarios and sensitivity analyses are performed to analyze the impact of the different determinants on customer loyalty. The proposed model allows businesses not only to set their targets but also to proactively manage customer behavior.

Keywords: customer satisfaction, customer loyalty, Bayesian networks, home appliances industry

Procedia PDF Downloads 117
1887 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposes to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive. Therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization of the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) are explored in the present work for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations, with their corresponding land use patterns, are ranked and identified from 1 km × 1 km grids.
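
A minimal sketch of the Inverse Distance Weighted interpolation step is given below; the station coordinates, measured values and grid size are placeholders rather than Chennai data.

```python
# Sketch of Inverse Distance Weighted (IDW) interpolation of pollutant concentrations onto a
# 1 km x 1 km grid, the kind of spatial screening used to rank candidate monitoring locations.
# Station coordinates and concentrations below are placeholders, not Chennai data.
import numpy as np

stations = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 9.0], [1.0, 7.0]])   # km easting/northing
pm10 = np.array([82.0, 64.0, 110.0, 95.0])                              # measured values

def idw(points, values, query, power=2.0, eps=1e-12):
    """Estimate a value at `query` as a distance-weighted average of the station values."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                       # query coincides with a station
        return float(values[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Evaluate on a coarse grid and print the cells with the highest interpolated concentration.
grid = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]       # 1 km cell centres
ranked = sorted(grid, key=lambda cell: idw(stations, pm10, np.array(cell)), reverse=True)
for cell in ranked[:5]:
    print(cell, round(idw(stations, pm10, np.array(cell)), 1))
```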

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 164
1886 Explainable Graph Attention Networks

Authors: David Pham, Yongfeng Zhang

Abstract:

Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs such as Graph Neural Networks (GNN) on various data mining and machine learning tasks. However, most of the deep learning models on graphs cannot easily explain their predictions and are thus often labelled as “black boxes.” For example, Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select the neighborhood nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not and how the selected neighbors contribute to the final classification result. In this paper, we present a graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. We use a single model to target both the accuracy and explainability of problem spaces and show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability.
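
To show what the inspected attention signal looks like, the following is a minimal single-head graph attention layer (after the standard GAT formulation) that returns its attention coefficients alongside the node embeddings. The toy graph, dimensions and code are assumptions, not the authors' XGAT implementation.

```python
# Minimal single-head graph attention layer exposing the attention coefficients alpha that an
# explainability approach such as the one described above would inspect.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scoring vector

    def forward(self, x, adj):
        h = self.W(x)                                     # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)  # raw scores e_ij
        e = e.masked_fill(adj == 0, float("-inf"))        # only attend to graph neighbours
        alpha = torch.softmax(e, dim=-1)                  # attention coefficients per node
        return alpha @ h, alpha                           # new features and the explanation signal

# Toy 4-node graph with self-loops; alpha[i, j] says how much node i relied on neighbour j.
adj = torch.tensor([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=torch.float32)
x = torch.randn(4, 8)
out, alpha = GATLayer(8, 16)(x, adj)
print(out.shape)
print(alpha.detach().numpy().round(2))
```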

Keywords: explainable AI, graph attention network, graph neural network, node classification

Procedia PDF Downloads 164
1885 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study had three neural networks, one for number segmentation, one for number detection and one for number recognition all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a window(7x7) over that pixel as focus, the eight neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels. To this end the segmentation network had 16 outputs. They were arranged as “go east”, ”don’t go east ”, “go south east”, “don’t go south east”, “go south”, “don’t go south” and so on w.r.t focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time stitched to the existing partial image of the number one at a time and trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered black pixels. During testing the network scans and looks for the first dark pixel. From here on the network predicts which neighborhoods to consider and segments the image. After this step the group of neighborhoods are passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training the detection network outputted in favor of number not found until the bounds were not met and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of numbers from 0 to 9. This network was activated only when the detection network votes in favor of number detected. The above methodology could segment connected and overlapping numbers. Additionally the recognition unit was only invoked when a number was detected which minimized false positives. It also eliminated the need for rules of thumb as segmentation is learned. The strategy can also be extended to other characters as well.

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 138
1884 Ordinary Differential Equations (ODE) Reconstruction of High-Dimensional Genetic Networks through Game Theory with Application to Dissecting Tree Salt Tolerance

Authors: Libo Jiang, Huan Li, Rongling Wu

Abstract:

Ordinary differential equations (ODEs) have proven to be powerful for reconstructing precise and informative gene regulatory networks (GRNs) from dynamic gene expression data. However, joint modeling and analysis of all genes, essential for the systematic characterization of genetic interactions, is challenging due to high dimensionality and a complex pattern of genetic regulation including activation, repression, and antitermination. Here, we address these challenges by unifying variable selection and game theory through ODEs. Each gene within a GRN is co-expressed with its partner genes in a manner resembling a game of multiple players, each of which tends to choose an optimal strategy to maximize its 'fitness' across the whole network. Based on this unifying theory, we designed and conducted a real experiment to infer salt tolerance-related GRNs for Euphrates poplar, a hero tree that can grow in the saline desert. The pattern and magnitude of the interactions between several hub genes within these GRNs were found to determine the capacity of Euphrates poplar to resist saline stress.
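
A hedged reconstruction of the standard formulation behind such models (not taken verbatim from the paper) is the ODE system with an L1-penalized estimator:

```latex
% Generic ODE formulation of a gene regulatory network with LASSO-type sparsity,
% written as a standard reconstruction rather than the paper's exact model.
\begin{align}
  \frac{dx_i(t)}{dt} &= \sum_{j=1}^{p} \theta_{ij}\, g\bigl(x_j(t)\bigr), \qquad i = 1,\dots,p,\\
  \hat{\theta}_i &= \arg\min_{\theta_i}
      \sum_{t} \Bigl( \dot{x}_i(t) - \sum_{j=1}^{p} \theta_{ij}\, g\bigl(x_j(t)\bigr) \Bigr)^{2}
      + \lambda \sum_{j=1}^{p} \lvert \theta_{ij} \rvert
\end{align}
```

Here $x_i(t)$ is the expression of gene $i$, $g(\cdot)$ is a basis (e.g., spline) representation of each partner gene's regulatory effect, $\theta_{ij}$ quantifies how gene $j$ activates ($\theta_{ij}>0$) or represses ($\theta_{ij}<0$) gene $i$, and the $\ell_1$ penalty performs the variable selection so that each gene retains only a sparse set of "players" in the network game.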

Keywords: gene regulatory network, ordinary differential equation, game theory, LASSO, saline resistance

Procedia PDF Downloads 622
1883 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
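
The feature-fusion idea can be sketched as follows: frozen VGG16 and ResNet50 backbones produce feature vectors that are concatenated and passed to a small classification head. The class count follows the abstract; the layer sizes, freezing strategy and weight values are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of the hybrid idea described above: concatenate features from pretrained
# VGG16 and ResNet50 backbones and classify the nine skin conditions with a small head.
import torch
import torch.nn as nn
from torchvision import models

class HybridSkinNet(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        resnet = models.resnet50(weights="IMAGENET1K_V1")
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))   # -> 512
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])        # -> 2048
        for p in list(self.vgg_features.parameters()) + list(self.resnet_features.parameters()):
            p.requires_grad = False                      # keep the ImageNet backbones frozen
        self.classifier = nn.Sequential(
            nn.Linear(512 + 2048, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes))

    def forward(self, x):
        f1 = torch.flatten(self.vgg_features(x), 1)
        f2 = torch.flatten(self.resnet_features(x), 1)
        return self.classifier(torch.cat([f1, f2], dim=1))

model = HybridSkinNet()
# Class weights counteract the class imbalance mentioned in the abstract (toy values here).
class_weights = torch.ones(9)
loss_fn = nn.CrossEntropyLoss(weight=class_weights)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 9])
```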

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 61
1882 A Review of Current Trends in Grid Balancing Technologies

Authors: Kulkarni Rohini D.

Abstract:

While emerging as plausible sources of energy generation, new technologies, including photovoltaic (PV) solar panels, home battery energy storage systems, and electric vehicles (EVs), are complicating the operation of power distribution networks for distribution network operators (DNOs). Renewable energy production fluctuates, resulting in over- and under-generation of energy and further complicating the issue of storing excess power and using it when necessary. Though renewable sources are inexhaustible and recurring, storing the generated energy is almost as important as producing it. Hence, to ensure smooth and efficient power storage at different levels, grid balancing technologies are the next theme to address in the sustainability and growth sector. While hydrogen batteries were used in earlier days to achieve this balance in power grids, recent advancements are more efficient and capable per unit of storage space, and also distinctive in terms of their underlying operating principles. The underlying technologies of flow batteries, gravity solutions, and graphene batteries have already entered the market and are leading the race for efficient storage solutions that will improve and stabilize grid networks and grid balancing.

Keywords: flow batteries, grid balancing, hydrogen batteries, power storage, solar

Procedia PDF Downloads 45
1881 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear due to prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms, with the aid of these vibration signals, to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a Data Acquisition System (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as Deep Feed Forward Neural Networks (DFFNN) and Deep Belief Networks (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was designed to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
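
The statistical feature extraction step can be illustrated with a short sketch; the synthetic signal, sampling rate and exact feature list below are assumptions.

```python
# Sketch of the statistical feature extraction step described above: a vibration record is
# summarised by time-domain statistics before being fed to the classifiers.
import numpy as np
from scipy.stats import kurtosis, skew

fs = 12_000                                   # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.randn(t.size)   # stand-in for a gearbox record

def vibration_features(x):
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "mean": np.mean(x),
        "std": np.std(x),
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,           # impulsiveness indicator, sensitive to gear faults
        "kurtosis": kurtosis(x),
        "skewness": skew(x),
    }

features = vibration_features(signal)
print({k: round(float(v), 4) for k, v in features.items()})
# A matrix of such feature vectors (one row per record, labelled by fault type) is what the
# SVM, DFFNN and DBN classifiers in the study would be trained on.
```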

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 94
1880 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for various sequence learning tasks in domains such as machine translation and speech recognition. In recent years, several studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies due to their memory function, which traditional neural networks fail to capture. In this study, a simple LSTM, a stacked LSTM and a masked LSTM-based model are discussed with respect to varying input sequences (three days, seven days and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
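
A minimal sketch of such an LSTM forecaster is shown below: a sliding window of past values predicts the next one. The window length echoes the 14-day input discussed above, while the network size, training settings and synthetic series are assumptions.

```python
# Minimal sketch of an LSTM price forecaster of the kind discussed above: a sliding window of
# past prices (optionally EMD components and technical indicators) predicts the next price.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=32, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next value from the last hidden state

# Build (window -> next value) pairs from a synthetic price series.
series = torch.cumsum(torch.randn(500), dim=0) + 100.0
window = 14                                    # cf. the 3/7/14-day inputs in the study
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```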

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 118
1879 Network Based Molecular Profiling of Intracranial Ependymoma over Spinal Ependymoma

Authors: Hyeon Su Kim, Sungjin Park, Hae Ryung Chang, Hae Rim Jung, Young Zoo Ahn, Yon Hui Kim, Seungyoon Nam

Abstract:

Ependymoma, one of the most common parenchymal spinal cord tumors, represents 3-6% of all CNS tumors. Intracranial ependymomas in particular, which are more frequent in childhood, have a poorer prognosis and are more malignant than spinal ependymomas. Although there is a growing need to understand their pathogenesis, a detailed molecular understanding remains to be established. A cancer cell involves complex signaling pathway networks, and identifying the interactions between genes and/or proteins is crucial for understanding these pathways. Therefore, we explored each ependymoma in terms of differentially expressed genes and signaling networks. We used Microsoft Excel™ to manipulate microarray data gathered from NCBI's GEO database. To analyze and visualize the signaling networks, we used the web-based PATHOME algorithm and Cytoscape. We show that the HOX family and NEFL are down-regulated but the SCL family is up-regulated in cerebrum and posterior fossa cancers compared to spinal cancer, and that the JAK/STAT and chemokine signaling pathways differ significantly in both intracranial ependymomas compared to spinal ependymoma. We consider that an age-dependent mechanism may underlie the different histological pathogenesis. We subsequently annotated mutation data for each gene in order to find potential target genes.

Keywords: systems biology, ependymoma, DEG, network analysis

Procedia PDF Downloads 278
1878 Neighbour Cell List Reduction in Multi-Tier Heterogeneous Networks

Authors: Mohanad Alhabo, Naveed Nawaz

Abstract:

The ongoing call or data session must be maintained to ensure a good quality of service. This can be accomplished by performing the handover procedure while the user is on the move. However, the dense deployment of small cells in 5G networks is a challenging issue due to the extensive number of handovers. In this paper, a neighbour cell list method is proposed to reduce the number of target small cells and hence minimize the number of handovers. The neighbour cell list is built by omitting cells that could cause an unnecessary handover or a handover failure because of the user's short time of stay in those cells. A multi-attribute decision making technique, simple additive weighting, is then applied to the optimized neighbour cell list. A multi-tier small cell network is considered in this work. The performance of the proposed method is analysed and compared with that of existing methods. Results show that our method decreases the candidate small cell list, unnecessary handovers, handover failures, and short time-of-stay cells compared to the competing method.
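
The simple additive weighting step can be sketched as follows; the attributes, weights and candidate-cell values are illustrative assumptions.

```python
# Sketch of the simple additive weighting (SAW) step applied to an already-reduced neighbour
# cell list: normalise each attribute, weight it, and rank the candidate small cells.
import numpy as np

cells = ["SC1", "SC2", "SC3", "SC4"]
# columns: SINR (dB, benefit), expected time-of-stay (s, benefit), load (%, cost)
A = np.array([[12.0, 14.0, 60.0],
              [18.0,  6.0, 35.0],
              [22.0, 11.0, 80.0],
              [15.0, 20.0, 20.0]])
benefit = np.array([True, True, False])
weights = np.array([0.4, 0.4, 0.2])

# Linear max/min normalisation, then a weighted sum per candidate cell.
norm = np.where(benefit, A / A.max(axis=0), A.min(axis=0) / A)
scores = norm @ weights
for name, score in sorted(zip(cells, scores), key=lambda t: t[1], reverse=True):
    print(name, round(float(score), 3))
```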

Keywords: handover, HetNets, multi-attribute decision making, small cells

Procedia PDF Downloads 92
1877 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, including genetic algorithms, genetic programming, artificial neural networks and reinforcement learning. The most common statistical properties (stylized facts) of stocks that are used for the calibration and validation of ASMs are also discussed. We then review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed. Future ASM research may focus on the macro level by analyzing market dynamics, or on the micro level by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 335
1876 A New DIDS Design Based on a Combination Feature Selection Approach

Authors: Adel Sabry Eesa, Adnan Mohsin Abdulazeez Brifcani, Zeynep Orman

Abstract:

Feature selection has been used in many fields, such as classification, data mining and object recognition, and has proven to be effective for removing irrelevant and redundant features from the original data set. In this paper, a new design of a distributed intrusion detection system is proposed, using a combination feature selection model based on the bees algorithm and a decision tree. The bees algorithm is used as the search strategy to find the optimal subset of features, whereas the decision tree is used to judge the selected features. Both the produced features and the generated rules are used by a Decision Making Mobile Agent to decide whether or not there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from one node to another; if it finds an attack on one of the nodes, it alerts the user through the User Interface Agent or takes some action through the Action Mobile Agent. The KDD Cup 99 data set is used to test the effectiveness of the proposed system. The results show that even if only four features are used, the proposed system gives better performance than the results obtained using all 41 features.
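
A greatly simplified sketch of the wrapper idea, i.e. candidate feature subsets searched bee-style and judged by a decision tree, is shown below; it is not the full bees algorithm of the paper and it runs on a stand-in scikit-learn dataset instead of KDD Cup 99.

```python
# Greatly simplified bees-style wrapper for feature selection, judged by a decision tree.
# Scout subsets plus local neighbourhood search only; a sketch of the idea, not the paper's method.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

random.seed(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(subset):
    if not subset:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, sorted(subset)], y, cv=5).mean()

def neighbour(subset):
    s = set(subset)
    s.symmetric_difference_update({random.randrange(n_features)})   # flip one feature in or out
    return frozenset(s)

# Scout bees: random subsets of four features (cf. the four features reported in the paper).
sites = [frozenset(random.sample(range(n_features), 4)) for _ in range(10)]
best = max(sites, key=fitness)
for _ in range(20):                            # local search around the best site
    candidate = neighbour(best)
    if fitness(candidate) > fitness(best):
        best = candidate
print(sorted(best), round(fitness(best), 4))
```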

Keywords: distributed intrusion detection system, mobile agent, feature selection, bees algorithm, decision tree

Procedia PDF Downloads 385
1875 MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks

Authors: Siddhant Rao

Abstract:

Studies estimate that there will be 266,120 new cases of invasive breast cancer and 40,920 breast cancer induced deaths in the year 2018 alone. Despite the pervasiveness of this affliction, the current process for obtaining an accurate breast cancer prognosis is tedious and time consuming. It usually requires a trained pathologist to manually examine histopathological images and identify the features that characterize various cancer severity levels. We propose MITOS-RCNN: a region based convolutional neural network (RCNN) geared for small object detection, used to accurately grade one of the three factors that characterize tumor belligerence described by the Nottingham Grading System: mitotic count. Other computational approaches to mitotic figure counting and detection do not demonstrate sufficient recall or precision to be clinically viable. Our models outperformed all previous participants in the ICPR 2012 challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14 challenge, along with recently published works. Our model achieved an F-measure score of 0.955, a 6.11% improvement in accuracy over the most accurate of the previously proposed models.

Keywords: breast cancer, mitotic count, machine learning, convolutional neural networks

Procedia PDF Downloads 202
1874 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks

Authors: Tripatjot S. Panag, J. S. Dhillon

Abstract:

The lifetime of a wireless sensor network can be effectively increased by using scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every sensor set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space, and a set of heuristics is applied to guide the members towards a possible solution in their neighborhood. The heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications which require sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outperforms the existing algorithms: it always finds the optimum solution, and does so with fewer fitness function evaluations than the existing approaches.
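
To clarify the underlying scheduling problem, the sketch below greedily constructs disjoint covers of a set of target points from randomly deployed sensors; the greedy construction merely stands in for the proposed heuristic search algorithm, and the deployment parameters are assumptions.

```python
# Sketch of the scheduling problem itself: greedily build disjoint sensor covers of a set of
# target points. A greedy construction stands in for the paper's heuristic search algorithm.
import random

random.seed(1)
targets = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10)]
sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(120)]
R = 30.0                                             # sensing radius (assumed)

def covers(sensor, target):
    return (sensor[0] - target[0]) ** 2 + (sensor[1] - target[1]) ** 2 <= R ** 2

unused = set(range(len(sensors)))
disjoint_sets = []
while True:
    chosen, uncovered = [], set(range(len(targets)))
    while uncovered:
        # pick the unused sensor covering the most still-uncovered targets
        best = max(unused - set(chosen),
                   key=lambda s: sum(covers(sensors[s], targets[t]) for t in uncovered),
                   default=None)
        if best is None or not any(covers(sensors[best], targets[t]) for t in uncovered):
            break
        chosen.append(best)
        uncovered -= {t for t in uncovered if covers(sensors[best], targets[t])}
    if uncovered:                                    # remaining sensors cannot cover every target
        break
    disjoint_sets.append(chosen)
    unused -= set(chosen)

print(f"{len(disjoint_sets)} disjoint covers found; network lifetime scales with this count")
```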

Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN

Procedia PDF Downloads 433
1873 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after some fermentation process. Defects that occur naturally or as a result of an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. Therefore, it is critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness and inconsistency. The quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes and rot. This study aimed to classify fresh table olives using different classifiers and NIR spectroscopy readings and to compare the classifiers. For this purpose, green olives (Ayvalik variety) were classified based on their surface properties, namely defect-free, with bruise defect and with fly defect, using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (internal TE-InGaAs for reflectance and external RT-InGaAs for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780–2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using the OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks and the ident and cluster classification algorithms. For these classification applications, the Neural Network Toolbox in MATLAB and the ident and cluster modules in the OPUS software were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives achieved success rates changing between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 229
1872 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products

Authors: Maciej Jedrzejczyk, Karolina Marzantowicz

Abstract:

The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply and a longer grid infrastructure life cycle. The methods used in this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient and scalable Smart Grid network which bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grids achieving measurable efficiencies will allow the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.

Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids

Procedia PDF Downloads 280
1871 Online Social Network Vital to Hospitality and Tourism Marketing and Management

Authors: Nureni Asafe Yekini, Olawale Nasiru Lawal, Bola Dada, Gabriel Adeyemi Okunlola

Abstract:

This study focuses on the strengths and challenges associated with using online social networks as a rapidly evolving medium for marketing tourism services and businesses among youths in Nigeria. The paper examines Nigerian tourists' attitudes towards three main aspects: the application of the Internet for travel and tourism; the use of online social networks for sharing travel and tourism experiences; and trust in electronic media for marketing tourism businesses and services. The aim of this research is to determine the level of application of Internet tools in marketing tourism businesses and services in Nigeria. The study reports an empirical analysis based on data obtained from a survey among 1004 Nigerian tourists. The outcome confirms the research hypothesis and points to the crucial importance of introducing online social network sites for marketing tourism businesses and services in Nigeria and of increasing awareness of Nigeria as a tourist destination. Moreover, the paper strongly recommends the use of online social networks as a tool for marketing tourism businesses and services, and identifies the need for an effective framework for the application of ICT tools in marketing tourism businesses and services in Nigeria at large.

Keywords: tourism business, internet, online social networks, tourism services, ICT

Procedia PDF Downloads 333