Search results for: deep deterministic policy gradient (DDPG)
6477 Mixed Method Analysis to Propose a Policy Action against Racism and Xenophobia in India
Authors: Anwesha Das
Abstract:
There are numerous cases of racism and discriminatory practices in India against northeastern citizens and African migrants. The right-wing extremism of the currently ruling political party in India has resulted in increased cases of xenophobia and Afrophobia, and the rigid Indian caste system contributes to such practices of racism. The promotion of a 'Hindu race' by the present right-wing government, instilling pride among Hindus as members of a supposedly superior race, has resulted in more atrocious racist practices. This paper argues that policy action is required against racist and discriminatory practices: policy actors in India do not ask the right questions and fail to give the needed redirection. It critically analyses Articles 14 and 15 of the Indian constitution in order to examine the case for policy action. In proposing the need for policy action, this paper places its arguments as a vital extension of the existing scholarship on public policy studies in India. It uses mixed-method analysis to examine the factors responsible for the policy problem and aims to suggest specific points of intervention in a policy progression. The study finds that despite the anti-discrimination provisions of the mentioned Articles, cases of racism remain rampant owing to religious and cultural factors. The major findings show how the present right-wing government violated the constitution in aggravating xenophobia. This paper proposes the policy action required to stop such practices.
Keywords: India, migrants, policy action, racism, xenophobia
Procedia PDF Downloads 47
6476 Optimal Production and Maintenance Policy for a Partially Observable Production System with Stochastic Demand
Authors: Leila Jafari, Viliam Makis
Abstract:
In this paper, the joint optimization of the economic manufacturing quantity (EMQ), safety stock level, and condition-based maintenance (CBM) is presented for a partially observable, deteriorating system subject to random failure. The demand is stochastic and is described by a Poisson process. The stochastic model is developed and the optimization problem is formulated in the semi-Markov decision process framework. A modification of the policy iteration algorithm is developed to find the optimal policy. A numerical example is presented to compare the optimal policy with the policy considering zero safety stock.
Keywords: condition-based maintenance, economic manufacturing quantity, safety stock, stochastic demand
Procedia PDF Downloads 464
6475 Monitoring the Effect of Deep Frying and the Type of Food on the Quality of Oil
Authors: Omar Masaud Almrhag, Frage Lhadi Abookleesh
Abstract:
Different types of food, such as banana, potato, and chicken, affect the quality of oil during deep fat frying. The changes in oil quality were evaluated and compared. Four types of edible oil, namely corn, soybean, canola, and palm oil, were used for deep fat frying at 180°C ± 5°C for 5 h/d for six consecutive days. Potatoes were sliced into 7-8 cm wedges and chicken was cut into uniform pieces of 100 g each. The parameters used to assess oil quality were total polar compounds (TPC), iodine value (IV), specific extinction E1% at 233 nm and 269 nm, fatty acid composition (FAC), free fatty acids (FFA), viscosity (cP), and changes in thermal properties. Results showed that TPC, IV, FAC, viscosity, and FFA composition changed significantly (P < 0.05) with time and type of food. Significant differences (P < 0.05) were noted in these parameters during frying of the three products mentioned above.
Keywords: frying potato, chicken, frying deterioration, quality of oil
Procedia PDF Downloads 420
6474 A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images
Authors: Firas Gerges, Frank Y. Shih
Abstract:
Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death. When detected early, melanoma is curable. In this paper, we propose a deep learning model (a convolutional neural network) to automatically classify skin lesion images as malignant or benign. Images underwent pre-processing steps to diminish the effect of the normal skin region on the model. The proposed model showed a significant improvement over previous work, achieving an accuracy of 97%.
Keywords: deep learning, skin cancer, image processing, melanoma
Procedia PDF Downloads 148
6473 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images
Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam
Abstract:
The COVID-19 disease is a highly contagious viral infection with major worldwide health implications, and the global economy suffers as a result. The spread of this pandemic disease can be slowed if positive patients are found early, and COVID-19 prediction is beneficial for identifying patients at risk. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognizing COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognize such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy
Procedia PDF Downloads 79
6472 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer; these techniques have been shown to be effective in predicting and diagnosing the disease and have become a research hotspot. In this study, we consider two deep learning approaches, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), together with five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We evaluated and compared the classifiers by selecting appropriate performance metrics and an appropriate tool to quantify that performance. The main purpose of the study is to predict and diagnose breast cancer with the mentioned algorithms and to identify the most effective one with respect to the confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN
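As a concrete illustration of how accuracy and precision fall out of a confusion matrix (the comparison criterion used above), the following sketch computes them for a hypothetical binary classifier; the matrix values are invented for illustration and are not the study's results.

```python
import numpy as np

# Hypothetical binary confusion matrix for a breast-cancer classifier:
# rows = actual class, columns = predicted class, order: [benign, malignant]
cm = np.array([[350, 7],
               [3, 209]])

tn, fp = cm[0]
fn, tp = cm[1]

accuracy = (tp + tn) / cm.sum()
precision = tp / (tp + fp)   # of the cases predicted malignant, how many truly are
recall = tp / (tp + fn)      # of the truly malignant cases, how many were found

print(round(accuracy, 4), round(precision, 4), round(recall, 4))
```

The same three quantities can be read off any of the classifiers compared in the study once its confusion matrix is known.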
Procedia PDF Downloads 75
6471 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models
Authors: Haya Salah, Srinivas Sharan
Abstract:
Healthcare facilities use appointment systems to schedule appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction; appointment durations that are longer than required lead to doctor idle time and fewer patient visits. A good estimate of consultation duration therefore has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (here, consultation duration) from potential features that influence it. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four outpatient clinics located in central Pennsylvania, USA; publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of their predictive performance.
The findings indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the clinic's current experience-based estimation of appointment duration resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented here can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time
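The MAPE figures quoted above follow the standard definition, the mean of |actual − predicted| / actual expressed as a percentage. A minimal sketch, with invented consultation durations rather than the clinics' data:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical consultation durations (minutes): actual vs. model estimates
actual = [20, 35, 15, 40]
predicted = [22, 30, 18, 38]
print(round(mape(actual, predicted), 2))
```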
Procedia PDF Downloads 121
6470 Measuring the Full Impact of Culture: Social Indicators and Canadian Cultural Policy
Authors: Steven Wright
Abstract:
This paper argues that there is an opportunity for PCH (the Department of Canadian Heritage) to further expand its relevance within the Canadian policy context by taking advantage of the growing international trend of using social indicators for public policy evaluation. Within the mandate and vision of PCH, there is an incomplete understanding of the value that arts and culture provide for Canadians, specifically with regard to four social indicators: community development, civic engagement, life satisfaction, and work-life balance. As will be shown, culture and the arts have a unique role to play in such quality-of-life indicators, and there is an opportunity for PCH to aid in the development of a comprehensive national framework that includes them. This paper lays out an approach to understanding how social indicators may be included in the Canadian context by first illustrating recent trends in policy evaluation on national and international scales, followed by a theoretical analysis of the connection between cultural policy and social indicators. The second half of the paper explains the shortcomings of Canadian cultural policy evaluation, namely its tendency to justify expenditures on arts and cultural activities in purely economic terms, and surveys how other governments worldwide are leading the charge in this regard.
Keywords: social indicators, evaluation, cultural policy, arts
Procedia PDF Downloads 296
6469 Sintering of Functionally Graded WC-TiC-Co Cemented Carbides
Authors: Stella Sten, Peter Hedström, Joakim Odqvist, Susanne Norgren
Abstract:
Two functionally graded cemented carbide samples have been produced by local addition of titanium carbide (TiC) to a pressed tungsten carbide-cobalt (WC-10 wt% Co) green body prior to sintering, with the aim of creating a gradient in both composition and grain size in the as-sintered component. The two samples differ only in the in-going WC particle size, with one sub-micron and one coarse WC particle size chosen for comparison. The sintered samples had a gradient, thus a non-homogeneous structure. The titanium (Ti), cobalt (Co), and carbon (C) concentration profiles have been investigated using SEM-EDS and WDS, and the Vickers hardness profile has been measured. Moreover, the Ti concentration profile has been simulated using the DICTRA software and compared with experimental results. The concentration and hardness profiles show a similar trend for both samples: Ti and C levels decrease away from the area of TiC application, as expected, whereas Co increases towards the edge of the samples. The non-homogeneous composition affects the number of stable phases and the WC grain size evolution. The sample with the finer in-going WC grain size shows a shorter gamma (γ) phase zone and a larger difference in WC grain size compared to the coarse-grained sample. Both samples show, independent of composition, the presence of abnormally large grains.
Keywords: cemented carbide, functional gradient material, grain growth, sintering
Procedia PDF Downloads 93
6468 Document-Level Sentiment Analysis: An Exploratory Case Study of the Low-Resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), CNN with BiLSTM (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). As part of this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support dataset and the IMDB Urdu movie review dataset using pre-trained Urdu word embeddings suitable for document-level sentiment analysis. The results were evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis: BiLSTM-SLMFCNN achieved 83%, 79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie review datasets, respectively, and 94% on the Urdu Customer Support dataset.
Keywords: Urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
Procedia PDF Downloads 72
6467 The Development Stages of Transformation of Water Policy Management in Victoria
Authors: Ratri Werdiningtyas, Yongping Wei, Andrew Western
Abstract:
The status quo of social-ecological systems is the result not only of natural processes but also of the accumulated consequences of policies applied in the past. Water management objectives are often challenging and only achieved to a limited degree on the ground. In choosing water management approaches, it is important to account for current conditions and for important differences due to varied histories. Since the mid-nineteenth century, Victorian water management has evolved through a series of policy regime shifts. The main goal of this research is to explore and identify the stages of the evolution of the water policy instruments practiced in Victoria from 1890 to 2016. This comparative historical analysis has identified four stages in Victorian policy instrument development. In the first stage, policy instruments aimed to match the demand and supply of the resource (reserve condition). The second stage began after the natural system alone failed to balance supply and demand; the focus of the policy instrument shifted to an authority perspective. Later, the increasing number of actors interested in water led to another change in policy instrument, and this third stage focused on the significant role of information from different relevant actors. The fourth and current stage is the most advanced, involving the creation of a policy instrument that synergizes the previous three focal factors: reserve, authority, and information. When considering policy in other jurisdictions, these findings suggest that a key priority should be to reflect on the jurisdiction's current position among these four evolutionary stages and to improve progressively, rather than directly adopting approaches from elsewhere without understanding that position.
Keywords: policy instrument, policy transformation, socio-ecological system, water management
Procedia PDF Downloads 145
6466 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
Authors: Arian Hosseini, Mahmudul Hasan
Abstract:
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios: transforming a single, large, monolithic deep model into a verification-based step ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can lead to predictions with higher accuracy.
Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing
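The "think small, think many" idea can be sketched as a verification cascade in which each lightweight stage must confirm the previous stage's positive before the content is flagged. The stages, features, and thresholds below are hypothetical stand-ins for the trained models described above:

```python
# Illustrative verification cascade: each lightweight stage looks at one
# narrowed-down color statistic and must confirm the previous stage's
# positive before content is flagged. All features and thresholds here are
# invented for illustration, not the authors' trained models.

def stage_brightness(feat):   # explosion frames tend to be bright
    return feat["mean_brightness"] > 0.6

def stage_warm_ratio(feat):   # dominated by warm (red/orange) pixels
    return feat["warm_pixel_ratio"] > 0.4

def stage_saturation(feat):   # highly saturated flame colors
    return feat["mean_saturation"] > 0.5

CASCADE = [stage_brightness, stage_warm_ratio, stage_saturation]

def moderate(feat):
    """Flag content only if every verification stage agrees."""
    return all(stage(feat) for stage in CASCADE)

blast_like = {"mean_brightness": 0.8, "warm_pixel_ratio": 0.6, "mean_saturation": 0.7}
ordinary   = {"mean_brightness": 0.8, "warm_pixel_ratio": 0.1, "mean_saturation": 0.7}
print(moderate(blast_like), moderate(ordinary))  # True False
```

A practical benefit of the cascade layout is that cheap early stages reject most negatives, so the later stages (and any larger model behind them) run only on a small fraction of inputs.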
Procedia PDF Downloads 55
6465 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories
Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan
Abstract:
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach, as recent work in deep learning has shown, is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use "fading." We find that this approach is superior to a traditional feed-forward network (FFN) model. Using gradient ascent, we were able to discover what the CNN filters look for during training. Finally, we find that a combined FFN+CNN is the best performing network, with an error rate of 39%.
Keywords: basketball, computer vision, image processing, convolutional neural network
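The "fading" encoding can be sketched as follows: each trajectory is rasterized into an image channel with older positions drawn dimmer, so the CNN can infer direction of motion. The grid size and decay rate here are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def trajectory_to_image(track, size=32, decay=0.8):
    """Rasterize a track of (x, y) points in [0, 1) into a size x size image.

    The newest point gets intensity 1.0; older points fade geometrically,
    encoding direction of motion in a single static channel.
    """
    img = np.zeros((size, size))
    n = len(track)
    for t, (x, y) in enumerate(track):
        i, j = int(y * size), int(x * size)
        img[i, j] = max(img[i, j], decay ** (n - 1 - t))
    return img

# A player moving left to right along the middle of the frame
track = [(0.1, 0.5), (0.3, 0.5), (0.5, 0.5), (0.7, 0.5)]
img = trajectory_to_image(track)
print(img[16, 3], img[16, 22])  # earliest position is dimmer than the latest
```

Stacking one such channel per player group (e.g. offense, defense, ball) yields the multichannel adversarial image fed to the CNN.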
Procedia PDF Downloads 153
6464 Research on Evaluation of Renewable Energy Technology Innovation Strategy Based on PMC Index Model
Abstract:
Renewable energy technology innovation is an important way to realize the energy transformation, and our government has issued a series of policies to guide and support the development of renewable energy. The implementation of these policies will affect the further development, utilization, and technological innovation of renewable energy. In this context, it is of great significance to systematically review and evaluate renewable energy technology innovation policy in order to improve the existing policy system. Taking 190 renewable energy technology innovation policies issued during 2005-2021 as a sample, this study uses text mining and content analysis, from the perspectives of issuing departments and policy keywords, to analyze the current state of the policies, and conducts a semantic network analysis to identify the core issuing departments and core policy topic words. A PMC (Policy Modeling Consistency) index model is built to quantitatively evaluate the selected policies: the PMC index reflects the overall pros and cons of each policy, while the model's secondary indices reflect the performance of the policies issued by the core departments along each dimension related to the core topics. The results show that renewable energy technology innovation policies emphasize synergy between multiple departments, while the distribution of issuers is uneven over time; policies on different topics have their own emphases in terms of policy types, fields, functions, and support measures, but room for improvement remains, such as the lack of policy forecasting and supervision functions, insufficient attention to product promotion, and relatively uniform support measures.
Finally, this research puts forward policy optimization suggestions: promoting joint policy release, strengthening policy coherence and timeliness, enhancing the comprehensiveness of policy functions, and enriching incentive measures for renewable energy technology innovation.
Keywords: renewable energy technology innovation, content analysis, policy evaluation, PMC index model
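A minimal sketch of the PMC scoring step, assuming the common convention that secondary indicators are scored 0/1, each primary dimension is the mean of its secondary scores, and the PMC index is the sum over primary dimensions. The dimensions and scores below are hypothetical, not drawn from the 190 policies in the study:

```python
# Hypothetical 0/1 scores for three primary dimensions of one policy;
# each list holds that dimension's secondary-indicator scores.
policy_scores = {
    "policy nature":    [1, 1, 0, 1],   # e.g. guidance, support, forecast, supervision
    "policy function":  [1, 0, 1],
    "support measures": [1, 1, 0, 0],
}

# Primary-dimension value = mean of its secondary scores
primary = {dim: sum(s) / len(s) for dim, s in policy_scores.items()}

# PMC index = sum of the primary-dimension values
pmc_index = sum(primary.values())
print({d: round(v, 2) for d, v in primary.items()}, round(pmc_index, 2))
```

A higher PMC index indicates better internal consistency of the policy; comparing the per-dimension values shows where a given policy falls short (here, the hypothetical policy scores lowest on support measures).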
Procedia PDF Downloads 64
6463 Malaria Parasite Detection Using Deep Learning Methods
Authors: Kaustubh Chakradeo, Michael Delves, Sofya Titarenko
Abstract:
Malaria is a serious disease which affects hundreds of millions of people around the world each year. If not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, microscopy remains the most common method to detect malaria. Unfortunately, the accuracy of microscopic diagnostics depends on the skill of the microscopist and limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools, and Deep Learning techniques in particular, it is possible to lower the cost while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models in a range of accuracy metrics, and has the advantage of being constructed from a relatively small number of layers, which reduces the computer resources and computational time required. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells; a more precise study of suspicious regions is required.
Keywords: convolutional neural network, deep learning, malaria, thin blood smears
Procedia PDF Downloads 130
6462 Prediction on Housing Price Based on Deep Learning
Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang
Abstract:
In order to study the impact of various factors on housing prices, we propose building different prediction models based on deep learning, using existing real estate data, to predict the housing price or its future trend more accurately. Considering that the factors which affect the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate: we built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among the three. The second is a time series model: based on deep learning, we proposed an LSTM-1 model built purely on the time series, then implemented and compared the LSTM model and the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best model was identified, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
Keywords: deep learning, convolutional neural network, LSTM, housing prediction
Procedia PDF Downloads 306
6461 The Value of Job Security across Various Welfare Policies
Authors: Eithan Hourie, Miki Malul, Raphael Bar-El
Abstract:
To investigate the relationship between various welfare policies and the value of job security, we conducted a study in which 201 people assessed the value of job security with respect to three elements: income stability, assurance of continuity of employment, and security in the job. The experiment simulated different welfare policy scenarios, such as the amount and duration of unemployment benefits, workfare, and basic income, and the participants evaluated the value of job security in each situation. We found that the value of job security is approximately 22% of the starting salary, distributed as follows: 13% reflects income security, 8.7% reflects job security, and about 0.3% reflects being able to keep one's current employment in the future. To the best of our knowledge, this article is among the first to try to quantify the value of job security under different market scenarios and at varying levels of welfare policy. Our conclusions may help decision-makers when deciding on welfare policy.
Keywords: job security value, employment protection legislation, status quo bias, expanding welfare policy
Procedia PDF Downloads 106
6460 Comparative Study of Deep Reinforcement Learning Algorithm Against Evolutionary Algorithms for Finding the Optimal Values in a Simulated Environment Space
Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt
Abstract:
Traditional optimization methods like evolutionary algorithms are widely used in production processes to find an optimal or near-optimal solution for control parameters based on the simulated environment space of a process. These algorithms are computationally intensive and therefore do not provide the opportunity for real-time optimization. This paper utilizes the Deep Reinforcement Learning (DRL) framework to find an optimal or near-optimal solution for control parameters. A model based on Maximum a Posteriori Policy Optimization (Hybrid-MPO) that can handle both numerical and categorical parameters is used as a benchmark for comparison. A comparative study shows that DRL can find optimal solutions of similar quality to evolutionary algorithms while requiring significantly less time, making DRL preferable for real-time optimization. The results are confirmed in a large-scale validation study on datasets from production and other fields, with a trained XGBoost model used as a surrogate for process simulation. Finally, multiple ways to improve the model are discussed.
Keywords: reinforcement learning, evolutionary algorithms, production process optimization, real-time optimization, Hybrid-MPO
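For contrast with DRL, the evolutionary baseline idea can be sketched as a toy (1+1) evolution strategy that repeatedly mutates the best-known control parameters and keeps the mutant if the simulated process quality improves. The quadratic "simulator" and all hyperparameters below are illustrative stand-ins for the real environment space and the Hybrid-MPO benchmark:

```python
import random

def simulate(params):
    """Hypothetical process-quality score; higher is better, optimum at (2, -1)."""
    x, y = params
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def evolve(steps=2000, sigma=0.1, seed=0):
    """(1+1) evolution strategy: mutate the incumbent, keep improvements."""
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_score = simulate(best)
    for _ in range(steps):
        cand = [p + rng.gauss(0, sigma) for p in best]
        score = simulate(cand)
        if score > best_score:          # greedy selection
            best, best_score = cand, score
    return best, best_score

best, score = evolve()
print([round(p, 2) for p in best], round(score, 4))
```

Note that every candidate requires a full call to the simulator, which is exactly the per-iteration cost that makes evolutionary search too slow for real-time use and motivates a trained DRL policy (or a surrogate such as the XGBoost model mentioned above).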
Procedia PDF Downloads 112
6459 An Eco-Friendly Preparation of Isonicotinamide Quaternary Salts in Deep Eutectic Solvents
Authors: Dajana Gašo-Sokač, Valentina Bušić
Abstract:
Deep eutectic solvents (DES) are liquids composed of two or three safe, inexpensive components, often interconnected by noncovalent hydrogen bonds, which produce a eutectic mixture whose melting point is lower than that of each component. No data on the quaternization reaction in DES have been found in the literature. The use of DES has several advantages: they are environmentally benign and biodegradable, easy to purify, and simple to prepare. An environmentally sustainable method for preparing quaternary salts of isonicotinamide and substituted 2-bromoacetophenones was demonstrated here using choline chloride-based DES. The quaternization reaction was carried out by three synthetic approaches: a conventional method, microwave irradiation, and ultrasonic irradiation. We showed that the highest yields were obtained by the microwave method.
Keywords: deep eutectic solvents, isonicotinamide salts, microwave synthesis, ultrasonic irradiation
Procedia PDF Downloads 130
6458 Studies of Zooplankton in Gdańsk Basin (2010-2011)
Authors: Lidia Dzierzbicka-Glowacka, Anna Lemieszek, Mariusz Figiela
Abstract:
In 2010-2011, research on zooplankton was conducted in the southern part of the Baltic Sea to determine the seasonal variability in changes occurring throughout the zooplankton in 2010 and 2011, both in the region of the Gdańsk Deep and in the western part of Gdańsk Bay. The research showed that the taxonomic composition of holoplankton in the southern Baltic Sea was similar to that recorded in this region for many years. The maximum abundance and biomass of zooplankton, both in the Gdańsk Deep and in Gdańsk Bay, were observed in the summer season. Copepoda dominated the zooplankton composition for almost the entire study period, while rotifers occurred in larger numbers only in the summer of 2010 in the Gdańsk Deep, as well as in May and July 2010 in the western part of Gdańsk Bay, and meroplankton in April 2011.
Keywords: Baltic Sea, composition, Gdańsk Bay, zooplankton
Procedia PDF Downloads 433
6457 Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images
Authors: Khitem Amiri, Mohamed Farah
Abstract:
Image classification is a challenging task that is gaining much interest since it helps us understand the content of images. Recently, Deep Learning (DL) based methods have given very interesting results on several benchmarks. For hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches have given fair classification accuracy. In this paper, we address the curse of dimensionality by reducing the number of bands and replacing the HSI channels with channels representing radiometric indices. Instead of using all the HSI bands, we compute radiometric indices such as the NDVI (Normalized Difference Vegetation Index) and NDWI (Normalized Difference Water Index) and use the combination of these indices as input to the DBN-based classification model. Thus, we keep almost all the pertinent spectral information while considerably reducing the size of the image. To test our image representation, we applied our method to several HSI datasets, including the Indian Pines and Jasper Ridge datasets, and it gave results comparable to state-of-the-art methods while considerably reducing training and testing time.
Keywords: hyperspectral images, deep belief network, radiometric indices, image classification
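The radiometric indices used as DBN inputs are simple normalized band differences. A sketch with toy reflectance values (the band choices are sensor-dependent and assumed here):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

# Toy per-pixel reflectances for two pixels (real values come from HSI bands)
nir   = np.array([0.6, 0.5])
red   = np.array([0.2, 0.3])
green = np.array([0.3, 0.1])

print(ndvi(nir, red))    # high values suggest vegetation
print(ndwi(green, nir))  # negative values here: neither pixel looks like water

# Stacking the index maps gives the compact input cube used instead of all bands
features = np.stack([ndvi(nir, red), ndwi(green, nir)], axis=-1)
```

Each index collapses two (or more) of the original bands into one channel, which is how the representation keeps the discriminative spectral structure while shrinking the input the DBN must process.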
Procedia PDF Downloads 280
6456 Public Policy and Institutional Reforms in Ethiopian Experience: A Retrospective Policy Analysis
Authors: Tewele Gerlase Haile
Abstract:
Like any other country, Ethiopia has arrived at its present state by undergoing many political changes. Until the last quarter of the 19th century, the aristocratic regimes of Ethiopia were using their infinite mystical power to shape the traditional public administrative institutions of the country. Mystical, feudal, social, and revolutionary political systems were used as sources of ruling power for the long-lasting monarchical, military, and dictatorial regimes. For a country that is struggling to escape the vicious cycle of poverty, famine, and civil war, understanding how political regimes reform public policies and institutions is necessary for several reasons. A retrospective policy analysis approach is employed to determine how public policies are shaped by institutional factors and why the traditional public administration paradigm of Ethiopia continues to date despite regime changes. Using the experiences of political reforms practiced under four successive regimes (1916-2023), this retrospective analysis reveals a causal relationship among policy, institutional, and political failures. Moreover, Ethiopia's law-making and policy-making background significantly reflects the behavior of governments and their institutions. With a macro-level policy analysis in mind, the paper analyzes why the recent policy and institutional reforms plunged the country into unresolved military catastrophes.
Keywords: public administration, public policy, institutional reform, political structure
Procedia PDF Downloads 23
6455 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization
Authors: Taha Benarbia
Abstract:
The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize the overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency.
The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.
Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network
Procedia PDF Downloads 79
6454 Optimizing Machine Learning Through Python Based Image Processing Techniques
Authors: Srinidhi. A, Naveed Ahmed, Twinkle Hareendran, Vriksha Prakash
Abstract:
This work reviews some of the advanced image processing techniques for deep learning applications. Object detection by template matching, image denoising, edge detection, and super-resolution modelling are but a few of the tasks. The paper examines these in great detail, given that such tasks are crucial preprocessing steps that increase the quality and usability of image datasets in subsequent deep learning tasks. We review some of the methods for the assessment of image quality, more specifically sharpness, which is crucial to ensure robust model performance. Further, we discuss the development of deep learning models for facial emotion detection, age classification, and gender classification, examining how the preprocessing techniques are interrelated with model performance. Conclusions from this study pinpoint best practices in the preparation of image datasets, targeting the best trade-off between computational efficiency and the retention of important image features critical for effective training of deep learning models.
Keywords: image processing, machine learning applications, template matching, emotion detection
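One common sharpness measure of the kind this abstract alludes to is the variance of the Laplacian; the sketch below is a generic illustration for ranking images while curating a dataset (the function name and the checkerboard test image are illustrative, not from the paper):

```python
import numpy as np

def laplacian_sharpness(gray):
    """Variance-of-Laplacian sharpness score for a 2-D grayscale array:
    a standard heuristic where higher values indicate sharper edges.
    A generic sketch, not necessarily the paper's exact metric."""
    # 5-point Laplacian via shifted copies of the image
    lap = (-4.0 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap[1:-1, 1:-1].var())  # drop the wrap-around border

# a crisp checkerboard should outscore its locally averaged (blurred) version
x = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
blurred = (x + np.roll(x, 1, 0) + np.roll(x, 1, 1) + np.roll(x, (1, 1), (0, 1))) / 4.0
sharp_score, blur_score = laplacian_sharpness(x), laplacian_sharpness(blurred)
```

Thresholding such a score is one way to filter out defocused images before training.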
Procedia PDF Downloads 15
6453 Vehicle Detection and Tracking Using Deep Learning Techniques in Surveillance Image
Authors: Abe D. Desta
Abstract:
This study suggests a deep learning-based method for identifying and following moving objects in surveillance video. The proposed method uses a fast regional convolutional neural network (F-RCNN) trained on a substantial dataset of vehicle images to first detect vehicles. A Kalman filter and a data association technique based on the Hungarian algorithm are then used to track the detected vehicles over time. The F-RCNN algorithm proved effective in achieving high detection accuracy and robustness in this study: the vehicle detection and tracking system was able to achieve an accuracy of 97.4%. The F-RCNN algorithm was also compared to other popular object detection algorithms and was found to outperform them in terms of both detection accuracy and speed. The presented system, which has application potential in actual surveillance systems, shows the usefulness of deep learning approaches in vehicle detection and tracking.
Keywords: artificial intelligence, computer vision, deep learning, fast-regional convolutional neural networks, feature extraction, vehicle tracking
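The data-association step the abstract names (Hungarian algorithm matching Kalman-predicted track positions to new detections) can be sketched as follows; a hedged illustration using SciPy's assignment solver, with the distance gate as an assumed parameter rather than one from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=50.0):
    """Match predicted track positions to detections by minimising total
    Euclidean distance (Hungarian algorithm), then drop any pair farther
    apart than `gate` pixels. Positions are (x, y) centroids; in a full
    tracker the track positions would come from the Kalman predict step."""
    tracks = np.asarray(tracks, float)
    detections = np.asarray(detections, float)
    # pairwise distance matrix: rows = tracks, columns = detections
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# two tracks and two detections that moved slightly; identities should be kept
matches = associate([[10, 10], [100, 100]], [[102, 98], [12, 9]])
```

Unmatched tracks and detections (pairs rejected by the gate) would then trigger track deletion or new-track creation in the surrounding tracking loop.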
Procedia PDF Downloads 126
6452 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained
Authors: Homa Ghave, Parmis Shahmaleki
Abstract:
This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches. For each batch, all of the jobs or a quantity of them can be outsourced. The jobs have stochastic processing times and lead times, deterministic due dates, and arrive randomly. Because of the stochastic nature of the processing times and lead times, we use chance constrained programming to model the problem. First, the problem is formulated as a stochastic program and then transformed into a deterministic mixed integer linear program. The objectives considered in the model are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with multi-criteria problems; in this paper, we utilize the concept of satisfaction functions to incorporate the manager's preferences. The proposed approach is tested on instances where the random variables are normally distributed.
Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function
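The chance-constrained modelling step, turning a probabilistic constraint into a deterministic one, can be illustrated for a single normally distributed processing time; this is a generic textbook sketch of the transformation, not the paper's full batch model:

```python
from statistics import NormalDist

def deterministic_due_date_bound(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
        P(processing time <= d) >= alpha
    when the processing time is N(mu, sigma^2): the smallest feasible
    due date is d = mu + z_alpha * sigma, where z_alpha is the
    alpha-quantile of the standard normal. A generic sketch only."""
    z = NormalDist().inv_cdf(alpha)
    return mu + z * sigma

# a job with mean 10h and sd 2h is on time 95% of the time only if d >= ~13.29h
d = deterministic_due_date_bound(mu=10.0, sigma=2.0, alpha=0.95)
```

Constraints of this linear form are what allow the stochastic model to be handed to a standard mixed integer linear programming solver.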
Procedia PDF Downloads 264
6451 Correlation between Speech Emotion Recognition Deep Learning Models and Noises
Authors: Leah Lee
Abstract:
This paper examines the correlation between deep learning models and emotions with noises to see whether or not noises mask emotions. The deep learning models used are plain convolutional neural networks (CNN), auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), the Toronto Emotional Speech Set (TESS), and the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset. To make the data four times bigger, the original audio files are combined with stretch and pitch augmentations. From the augmented datasets, five different features are extracted as inputs to the models. There are eight different emotions to be classified. The noise variations are white noise, dog barking, and cough sounds, and the signal-to-noise ratio (SNR) varies over 0, 20, and 40. In summary, per deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, the accuracy and the receiver operating characteristic (ROC) are checked.
Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16
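The noise injection at a fixed SNR can be sketched as below; a generic mixing routine under the usual power-ratio definition of SNR, with illustrative stand-in signals rather than the paper's recordings:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db` (in dB), then add it to the clean signal. This mirrors the
    0/20/40 SNR conditions described; names are illustrative."""
    speech = np.asarray(speech, float)
    noise = np.asarray(noise, float)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))  # desired noise power
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 200 * np.pi, 16000))  # stand-in for a speech clip
noisy = mix_at_snr(clean, rng.normal(size=16000), snr_db=20)
```

At SNR 0 the noise power equals the speech power, which is where emotion masking would be expected to be strongest.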
Procedia PDF Downloads 75
6450 Trade Policy and Economic Growth of Turkey in Global Economy: New Empirical Evidence
Authors: Pınar Yardımcı
Abstract:
This paper tries to answer the questions of whether or not trade openness causes economic growth and whether the trade policy changes were good for Turkey as a developing country in the global economy before and after 1980. We employ Johansen cointegration and Granger causality tests with error correction modelling based on a vector autoregressive model. Using WDI data from the pre-1980 and post-1980 periods, we find that trade openness and economic growth are cointegrated in the second period only. The results also suggest a lack of long-run causality between our two variables. These findings may imply that Turkey's trade policy should concentrate more on complementary economic reforms.
Keywords: globalization, trade policy, economic growth, openness, cointegration, Turkey
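The Granger-causality idea behind these tests can be sketched with a plain OLS F-test; this is the textbook bivariate version on synthetic data, not the Johansen/VECM machinery the paper actually employs:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """Bivariate Granger-causality F statistic ("does x help predict y?"):
    compare the OLS residual sum of squares of y regressed on its own
    lags (restricted) versus its own lags plus lags of x (unrestricted).
    A large F means the lags of x add predictive power."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y) - lags
    Y = y[lags:]
    own = np.column_stack([np.ones(n)] + [y[lags - k:-k] for k in range(1, lags + 1)])
    full = np.column_stack([own] + [x[lags - k:-k] for k in range(1, lags + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(own), rss(full)
    df = n - full.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

# synthetic example: x leads y by one step, so F should be very large
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.roll(x, 1) * 0.8 + rng.normal(scale=0.1, size=300)
F = granger_f(y, x, lags=2)
```

The reverse regression, `granger_f(x, y)`, would stay near its null distribution here, since x is not predictable from past y beyond its own history.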
Procedia PDF Downloads 359
6449 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of childhood leukemia diagnoses. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 images in total, of which 8491 show abnormal cells and 5398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be deemed auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and to overcome the problem of network gradients vanishing or exploding. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
Procedia PDF Downloads 187
6448 Dynamics of Soil Carbon and Nitrogen Contents and Stocks along a Salinity Gradient
Authors: Qingqing Zhao, Junhong Bai
Abstract:
To investigate the effects of salinity on the dynamics of soil carbon and nitrogen contents and stocks, soil samples were collected to a depth of 30 cm at four sampling sites (Sites B, T, S, and P) along a salinity gradient in a drained coastal wetland, the Yellow River Delta, China. The salinity of these four sites ranked in the order: B (8.68±4.25 ms/cm) > T (5.89±3.17 ms/cm) > S (3.19±1.01 ms/cm) > P (2.26±0.39 ms/cm). Soil total carbon (TC), soil organic carbon (SOC), soil microbial biomass carbon (MBC), soil total nitrogen (TN), and soil microbial biomass nitrogen (MBN) were measured. Based on these data, soil organic carbon density (SOCD), soil microbial biomass carbon density (MBCD), soil total nitrogen density (TND), and soil microbial biomass nitrogen density (MBND) were calculated for the four sites. The results showed that the mean concentrations of TC, SOC, MBC, TN, and MBN exhibited a generally decreasing tendency with increasing salinity in the top 30 cm of soil. The values of SOCD, MBCD, TND, and MBND exhibited a similar tendency along the salinity gradient. As for the profile distribution pattern, the C/N ratios ranged from 8.28 to 56.51, with higher C/N ratios found in samples with high salinity. Correlation analysis showed that the concentrations of TC, SOC, and MBC at the four sampling sites were significantly negatively correlated with salinity (P < 0.01 or P < 0.05), indicating that salinity could inhibit soil carbon accumulation. However, no significant relationship was observed between TN, MBN, and salinity (P > 0.05).
Keywords: carbon content and stock, nitrogen content and stock, salinity, coastal wetland
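The density (stock) values this abstract computes can be sketched with the standard soil organic carbon stock formula; the bulk-density and SOC inputs below are illustrative, since the abstract does not report them:

```python
def soc_density(soc_g_per_kg, bulk_density_g_cm3, depth_cm):
    """Soil organic carbon density (stock) over a soil layer, in kg C per
    square metre, using the standard conversion
        SOCD = SOC (g/kg) * bulk density (g/cm^3) * depth (cm) / 100.
    A generic formula sketch; the paper's exact inputs are not given here."""
    return soc_g_per_kg * bulk_density_g_cm3 * depth_cm / 100.0

# illustrative values: 12 g/kg SOC, bulk density 1.4 g/cm^3, the 0-30 cm layer
socd = soc_density(12.0, 1.4, 30.0)  # kg C per square metre
```

The nitrogen densities (TND, MBND) follow the same conversion with the corresponding nitrogen concentrations.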
Procedia PDF Downloads 316