Search results for: deep packet inspection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2509

1879 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
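
The clustering step described above can be illustrated with a minimal, self-contained sketch of the Markov clustering (MCL) algorithm applied to a toy drug-synergy graph. This is not the authors' implementation; the graph, weights, and parameter values are illustrative assumptions.

```python
import numpy as np

def markov_cluster(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Minimal Markov clustering (MCL) on a weighted drug-synergy graph.

    adjacency: symmetric matrix of non-negative synergy-derived edge weights.
    Returns the converged column-stochastic matrix whose rows define clusters.
    """
    m = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
    m /= m.sum(axis=0, keepdims=True)                      # column-normalise
    for _ in range(iterations):
        m = np.linalg.matrix_power(m, expansion)           # expansion step
        m = np.power(m, inflation)                         # inflation step
        m /= m.sum(axis=0, keepdims=True)
    return m

def clusters_from_matrix(m, tol=1e-6):
    """Read clusters off the converged MCL matrix: each attractor row lists its members."""
    clusters = []
    for row in np.argwhere(m.diagonal() > tol).ravel():
        members = tuple(np.argwhere(m[row] > tol).ravel())
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# toy synergy graph: drugs 0-2 and drugs 3-5 form two strongly synergistic groups,
# with only weak cross-group synergy (0.1)
synergy = np.full((6, 6), 0.1)
synergy[:3, :3], synergy[3:, 3:] = 5.0, 4.0
np.fill_diagonal(synergy, 0.0)
print(clusters_from_matrix(markov_cluster(synergy)))   # -> [(0, 1, 2), (3, 4, 5)]
```

In the full framework, the resulting clusters would be turned into per-pair similarity vectors and fed to the fully connected network.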

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 57
1878 Review on Effective Texture Classification Techniques

Authors: Sujata S. Kulkarni

Abstract:

Effective and efficient texture feature extraction and classification is an important problem in image understanding and recognition. This paper gives a review of effective texture classification methods. The objective of texture representation is to reduce the amount of raw data presented by the image while preserving the information needed for the task. Texture analysis is important in many applications of computer image analysis for classification, including industrial and biomedical surface inspection (for example, for defects and disease), ground classification of satellite or aerial imagery, and content-based access to image databases.

Keywords: compressed sensing, feature extraction, image classification, texture analysis

Procedia PDF Downloads 412
1877 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL, while the top three performing models in this article are Logistic Regression, Deep Neural Network, and AdaBoost. The K-means algorithm is applied to establish and analyze four different customer clusters. This study effectively identifies customers that are at risk of churn, and its findings may be utilized to develop and execute strategies that lower customer attrition.
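
A minimal sketch of the comparison protocol described above, using scikit-learn and placeholder data; the feature matrix, split, and the subset of models shown are assumptions, not the study's dataset or its full list of 11 algorithms.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, ExtraTreesClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# placeholder customer data: rows = customers, columns = features such as tenure and
# one-hot encoded internet service type; y = churn labels
X, y = np.random.rand(1000, 12), np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "AdaBoost": AdaBoostClassifier(),
    "Extra Trees": ExtraTreesClassifier(n_estimators=200),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: F1={f1_score(y_te, pred):.4f}  "
          f"acc={accuracy_score(y_te, pred):.4f}  "
          f"prec={precision_score(y_te, pred, zero_division=0):.4f}  "
          f"rec={recall_score(y_te, pred):.4f}")
```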

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 37
1876 KCBA, A Method for Feature Extraction of Colonoscopy Images

Authors: Vahid Bayrami Rad

Abstract:

In recent years, the use of artificial intelligence techniques, tools, and methods in processing medical images and health-related applications has been highlighted, and a lot of research has been done in this regard. For example, colonoscopy and the diagnosis of colon lesions are cases in which the diagnostic process can be improved by using image processing and artificial intelligence algorithms, which greatly assist doctors. Due to the lack of accurate measurements and the variety of lesions in colonoscopy images, diagnosing the type of lesion is difficult even for expert doctors. Therefore, different software and image processing techniques can help doctors increase the accuracy of their observations and ultimately improve their diagnoses. Automatic methods can likewise improve the process of diagnosing the type of disease. Therefore, in this paper, a deep learning framework called KCBA, composed of several methods such as K-means clustering, bag of features, and a deep auto-encoder, is proposed to classify colonoscopy lesions. Finally, the experimental results report the proposed method's performance in classifying colonoscopy images in terms of the accuracy criterion.
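
The bag-of-features component of such a framework can be sketched as follows; the descriptor dimensionality, vocabulary size, and the use of scikit-learn's KMeans are illustrative assumptions rather than the authors' exact KCBA configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def bag_of_features(descriptor_sets, n_words=64, random_state=0):
    """Bag-of-visual-words representation for a set of colonoscopy images.

    descriptor_sets: list of (n_i, d) arrays of local patch descriptors, one per image.
    Returns (per-image word histograms, fitted KMeans codebook).
    """
    codebook = KMeans(n_clusters=n_words, random_state=random_state, n_init=10)
    codebook.fit(np.vstack(descriptor_sets))             # learn the visual vocabulary
    histograms = []
    for desc in descriptor_sets:
        words = codebook.predict(desc)                    # assign patches to visual words
        hist, _ = np.histogram(words, bins=np.arange(n_words + 1))
        histograms.append(hist / max(hist.sum(), 1))      # normalised word frequencies
    return np.array(histograms), codebook

# toy example: 10 images, each described by 200 random 32-dimensional patch descriptors
images = [np.random.rand(200, 32) for _ in range(10)]
feature_vectors, vocab = bag_of_features(images)
print(feature_vectors.shape)   # (10, 64) feature vectors for the later stages
```

The resulting normalised word-frequency vectors would then be passed on to the auto-encoder and classification stages.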

Keywords: colorectal cancer, colonoscopy, region of interest, narrow band imaging, texture analysis, bag of features

Procedia PDF Downloads 36
1875 CO₂ Storage Capacity Assessment of Deep Saline Aquifers in Malaysia

Authors: Radzuan Junin, Dayang Zulaika A. Hasbollah

Abstract:

The increasing amount of greenhouse gases in the atmosphere has recently become one of the most discussed topics in relation to the world's concern about climate change. Developing countries' emissions (such as Malaysia's) are now seen to surpass developed countries' emissions due to rapid economic growth in recent decades. This paper presents the storage site suitability and storage capacity assessment for CO2 sequestration in the sedimentary basins of Malaysia. This study is the first of its kind to identify potential storage sites and assess CO2 storage capacity within the deep saline aquifers of the country. The CO2 storage capacity assessment in saline formations was conducted based on the method for quick assessment of CO2 storage capacity in closed and semi-closed saline formations, modified to suit the geological setting of Malaysia. Then, an integrated approach involving geographic information systems (GIS) analysis and field data assessment was adopted to identify the potential storage sites and their capacity for CO2 sequestration. This study concentrated on the assessment of the major sedimentary basins in Malaysia, both onshore and offshore, where potential geological formations in which CO2 could be stored exist below 800 meters and where suitable sealing formations are present. Based on the regional study and the amount of data available, 14 sedimentary basins around Malaysia have been identified as potential CO2 storage sites. Meanwhile, from the screening and ranking exercises, the Malay Basin, Central Luconia Province, West Baram Delta and Balingian Province are ranked as the top four for CO2 storage. 27% of the sedimentary basins in Malaysia were evaluated as high-potential areas for CO2 storage. This study should provide a basis for further work to reduce the uncertainty in these estimates and also provide support to policy makers in future planning of carbon capture and sequestration (CCS) projects in Malaysia.
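
The quick-assessment method referred to above is, in its closed-system case, commonly written as a compressibility-based estimate of the storable CO2 mass. The form below is a sketch of that commonly cited expression; the exact equation and parameter values used by the authors are not given in the abstract.

```latex
% Closed-system quick-assessment estimate of the storable CO2 mass (illustrative form).
% rho_CO2: CO2 density at reservoir conditions; beta_p, beta_w: pore and brine
% compressibility; Delta p: allowable pressure build-up; A, h, phi: area, thickness, porosity.
\[
  M_{\mathrm{CO_2}} \;=\; \rho_{\mathrm{CO_2}}\,(\beta_p + \beta_w)\,\Delta p\, V_{\mathrm{pore}},
  \qquad V_{\mathrm{pore}} = A\, h\, \phi
\]
```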

Keywords: CO₂ storage, deep saline aquifer, GIS, sedimentary basin

Procedia PDF Downloads 335
1874 SEM Image Classification Using CNN Architectures

Authors: Güzi̇n Ti̇rkeş, Özge Teki̇n, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran

Abstract:

A scanning electron microscope (SEM) is a type of electron microscope mainly used in nanoscience and nanotechnology areas. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-V2 model was used with a fine-tuning approach. By using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results since it contains other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy up to 96.5%.
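
A minimal Keras sketch of the fine-tuning setup described above; the directory names, input size, head layers, and training hyper-parameters are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 8   # the nine SEM categories minus the removed coated-surface class

# Inception-ResNet-V2 backbone pre-trained on ImageNet, fine-tuned on the SEM images
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = True   # fine-tuning: unfreeze the backbone (or only its top blocks)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # map [0, 255] pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# placeholder directories standing in for the 80/20 split of the SEM data set
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/train", image_size=(299, 299), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "sem_images/val", image_size=(299, 299), batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```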

Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope

Procedia PDF Downloads 103
1873 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model that aims at replacing the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than which network entities are exchanging information, which allows control plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technologies to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as the growth of the routing table and scalability issues, it is vulnerable to brute force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end user nodes and used to perform a security test on maliciously injected random packets at the ingress of the path, preventing brute force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId for authentication purposes, to be included in the packet. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. We optimize the FId to the minimum possible filling factor where ρ ≤ ρm, while it still supports longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
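
The ingress test can be illustrated with a small sketch of an in-packet Bloom-filter FId carrying a 64-bit authentication hash, checked by an Edge-FW node against a maximum filling factor. The FId length, bits per link, key handling, and the ρm value are illustrative assumptions, not the PURSUIT specification.

```python
import hashlib, hmac, secrets

FID_BITS = 256                     # in-packet Bloom filter (FId) length, illustrative
MAX_FILL = 0.5                     # rho_m: maximum allowed filling factor, illustrative
TM_KEY = secrets.token_bytes(32)   # key assumed shared by topology manager and Edge-FW

def link_bits(link_id: str, k: int = 3):
    """k Bloom-filter bit positions for one link identifier."""
    digest = hashlib.sha256(link_id.encode()).digest()
    return {int.from_bytes(digest[2 * i:2 * i + 2], "big") % FID_BITS for i in range(k)}

def build_fid(path_links):
    """Topology manager: OR the link bits of the delivery path into one FId and sign it."""
    bits = set().union(*(link_bits(l) for l in path_links))
    fid = sum(1 << b for b in bits).to_bytes(FID_BITS // 8, "big")
    tag = hmac.new(TM_KEY, fid, hashlib.sha256).digest()[:8]   # 64-bit authentication hash h
    return fid, tag

def edge_fw_check(fid: bytes, tag: bytes) -> bool:
    """Edge-FW ingress test: verify the hash and reject over-filled (brute-force) FIds."""
    expected = hmac.new(TM_KEY, fid, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(expected, tag):
        return False
    filling_factor = bin(int.from_bytes(fid, "big")).count("1") / FID_BITS
    return filling_factor <= MAX_FILL

fid, tag = build_fid(["linkA", "linkB", "linkC"])
print(edge_fw_check(fid, tag))                      # legitimate packet -> True
print(edge_fw_check(secrets.token_bytes(32), tag))  # randomly injected FId -> False
```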

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 134
1872 Feature of Employment Injuries and Maintenance Works of Construction Machinery

Authors: Naoko Kanazawa, Tran Thi Bich Nguyet, Yoshiyuki Higuchi, Hideki Hamada

Abstract:

The condition of construction machines is maintained with regular inspections, preventive maintenance, and repairs by skillful and qualified engineers. If an accident occurs, there will be enormous consequences, such as human injuries and delays in the construction schedule. In this paper, we reveal the characteristics of inspection, maintenance, and repair works for construction machines, clarify the trends of employment injuries based on actual data using simple and cross-tabulation methods, and investigate the relation between these works, injured body parts, and accident types.

Keywords: construction machines, employment injuries, maintenance and repair, safety and health

Procedia PDF Downloads 288
1871 Meta Mask Correction for Nuclei Segmentation in Histopathological Image

Authors: Jiangbo Shi, Zeyu Gao, Chen Li

Abstract:

Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks, which is hard to obtain. Training with weakly labeled data is a popular solution for reducing the workload of annotation. In this paper, we propose a novel meta-learning-based nuclei segmentation method which follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks by using a small amount of clean meta-data. Then the corrected masks are used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results. In particular, in some noise scenarios, it even exceeds the performance of training on fully supervised data.

Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations

Procedia PDF Downloads 126
1870 Construction of Strain Distribution Profiles of EDD Steel at Elevated Temperatures

Authors: K. Eshwara Prasad, R. Raman Goud, Swadesh Kumar Singh, N. Sateesh

Abstract:

In the present work, forming limit diagrams and strain distribution profile diagrams for extra deep drawing steel at room and elevated temperatures have been determined experimentally by conducting stretch forming experiments using a designed and fabricated warm stretch forming tooling setup. With the help of forming limit diagrams (FLDs) and strain distribution profile diagrams, the formability of extra deep drawing steel has been analyzed and correlated with mechanical properties like the strain hardening coefficient (n) and normal anisotropy (r-). Mechanical properties of EDD steel from room temperature to 450 °C were determined, and the impact of temperature on properties like the work hardening exponent (n), anisotropy (r-), and the strength coefficient of the material is discussed. Also, the fractured surfaces after stretching have undergone some metallurgical investigations, and an attempt has been made to correlate them with the formability of EDD steel sheets. They correlate well and show good agreement with the FLDs at various temperatures.

Keywords: FLD, microhardness, strain distribution profile, stretch forming

Procedia PDF Downloads 307
1869 The Genesis of the Anomalous Sernio Fan (Valtellina, Northern Italy)

Authors: Erika De Finis, Paola Gattinoni, Laura Scesi

Abstract:

Massive rock avalanches formed some of the largest landslide deposits on Earth, and they represent one of the major geohazards in high-relief mountains. This paper interprets a very large sedimentary fan (the Sernio fan, Valtellina, Northern Italy), located 20 km SW of the Val Pola rock avalanche (1987), as the deposit of a partial collapse of a deep-seated gravitational slope deformation (DSGSD), afterwards eroded and buried by debris flows. The proposed emplacement sequence has been reconstructed based on geomorphological, structural and mechanical evidence. The Sernio fan is actually considered anomalous with reference to the very high ratio between the fan area (about 4.5 km²) and the basin area (about 3 km²). The morphology of the fan area is characterised by steep slopes (dip about 20%), and the fan apex extends for 1.8 km inside the small catchment basin. This sedimentary fan originated from a landslide that involved part of a large deep-seated gravitational slope deformation covering a wide area of about 55 km². The main controlling factor is tectonic and is related to the proximity to regional fault systems and the consequent occurrence of weak fault rocks (GSI locally lower than 10, with compressive strength lower than 20 MPa). Moreover, the fan deposit shows sedimentary evidence of recent debris flow events. The best current explanation of the Sernio fan involves an initial failure of some hundreds of Mm³. The run-out was quite limited because of the morphology of Valtellina's valley floor, and the deposit filled the main valley, forming a landslide dam, as confirmed by the lacustrine deposits detected upstream of the fan. Nowadays, debris flow events represent the main hazard in the study area.

Keywords: anomalous sedimentary fans, deep seated gravitational slope deformation, Italy, rock avalanche

Procedia PDF Downloads 460
1868 PEG@GdF3:Tb3+ – RB Nanocomposites for Deep-Seated X-Ray Induced Photodynamic Therapy in Oncology

Authors: E.A. Kuchma

Abstract:

Photodynamic therapy (PDT) is considered an alternative and minimally invasive cancer treatment modality compared to chemotherapy and radiation therapy. PDT includes three main components: a photosensitizer (PS), oxygen, and a light source. PS is injected into the patient's body and then selectively accumulates in the tumor. However, the light used in PDT (spectral range 400–700 nm) is limited to superficial lesions, and the light penetration depth does not exceed a few cm. The problem of PDT (poor visible light transmission) can be solved by using X-rays. The penetration depth of X-rays is ten times greater than that of visible light. Therefore, X-ray radiation easily penetrates through the tissues of the body. The aim of this work is to develop universal nanocomposites for X-ray photodynamic therapy of deep and superficial tumors using scintillation nanoparticles of gadolinium fluoride (GdF3), doped with Tb3+, coated with a biocompatible coating (PEG) and photosensitizer RB (Rose Bengal). PEG@GdF3:Tb3+(15%) – RB could be used as an effective X-ray, UV, and photoluminescent mediator to excite a photosensitizer for generating reactive oxygen species (ROS) to kill tumor cells via photodynamic therapy. GdF3 nanoparticles can also be used as contrast agents for computed tomography (CT) and magnetic resonance imaging (MRI).

Keywords: X-ray induced photodynamic therapy, scintillating nanoparticle, radiosensitizer, photosensitizer

Procedia PDF Downloads 62
1867 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received being the optimization variable. The methodology is evaluated as a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution analysis, and degradation analysis. Within the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems and compared with the values estimated in the simulation. The pollution analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical pollution values. The soiling rate is calculated with data collected over two years from a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the alternative of assessing the temperature degradation of the PV modules is evaluated by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed with the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision making related to the maintenance of photovoltaic systems, which is relevant given the projected increase in the installation of solar photovoltaic systems in power systems, associated with the commitments made in the Paris Agreement for the reduction of CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
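
The trade-off between soiling losses and cleaning cost can be illustrated with a small sketch; every numeric value below (soiling constant, tariff, cleaning cost, specific yield) is an illustrative assumption, not a figure from the study.

```python
import numpy as np

P_RATED_KW = 5.6            # plant size (kWp), from the case study
YIELD_KWH_PER_KWP_DAY = 4.0  # assumed daily specific yield
TARIFF = 0.12                # assumed energy feed-in tariff (currency/kWh)
CLEANING_COST = 40.0         # assumed cost of one cleaning of the array
SOILING_RATE = 0.004         # assumed exponential soiling constant per day

def annual_net_revenue(cleaning_interval_days):
    """Net yearly revenue when the array is cleaned every `cleaning_interval_days` days."""
    days = np.arange(cleaning_interval_days)
    soiling_factor = np.exp(-SOILING_RATE * days)          # exponential soiling model
    energy_per_cycle = (P_RATED_KW * YIELD_KWH_PER_KWP_DAY * soiling_factor).sum()
    cycles_per_year = 365 / cleaning_interval_days
    return cycles_per_year * (energy_per_cycle * TARIFF - CLEANING_COST)

intervals = np.arange(7, 181, 7)
best = max(intervals, key=annual_net_revenue)
print(f"best cleaning interval: {best} days, "
      f"net yearly revenue: {annual_net_revenue(best):.0f}")
```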

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 21
1866 A Comparison of Convolutional Neural Network Architectures for the Classification of Alzheimer’s Disease Patients Using MRI Scans

Authors: Tomas Premoli, Sareh Rowlands

Abstract:

In this study, we investigate the impact of various convolutional neural network (CNN) architectures on the accuracy of diagnosing Alzheimer’s disease (AD) using patient MRI scans. Alzheimer’s disease is a debilitating neurodegenerative disorder that affects millions worldwide. Early, accurate, and non-invasive diagnostic methods are required for providing optimal care and symptom management. Deep learning techniques, particularly CNNs, have shown great promise in enhancing this diagnostic process. We aim to contribute to the ongoing research in this field by comparing the effectiveness of different CNN architectures and providing insights for future studies. Our methodology involved preprocessing MRI data, implementing multiple CNN architectures, and evaluating the performance of each model. We employed intensity normalization, linear registration, and skull stripping for our preprocessing. The selected architectures included VGG, ResNet, and DenseNet models, all implemented using the Keras library. We employed transfer learning and trained models from scratch to compare their effectiveness. Our findings demonstrated significant differences in performance among the tested architectures, with DenseNet201 achieving the highest accuracy of 86.4%. Transfer learning proved to be helpful in improving model performance. We also identified potential areas for future research, such as experimenting with other architectures, optimizing hyperparameters, and employing fine-tuning strategies. By providing a comprehensive analysis of the selected CNN architectures, we offer a solid foundation for future research in Alzheimer’s disease diagnosis using deep learning techniques. Our study highlights the potential of CNNs as a valuable diagnostic tool and emphasizes the importance of ongoing research to develop more accurate and effective models.

Keywords: Alzheimer’s disease, convolutional neural networks, deep learning, medical imaging, MRI

Procedia PDF Downloads 54
1865 Strain Distribution Profiles of EDD Steel at Elevated Temperatures

Authors: Eshwara Prasad Koorapati, R. Raman Goud, Swadesh Kumar Singh

Abstract:

In the present work, forming limit diagrams and strain distribution profile diagrams for extra deep drawing steel at room and elevated temperatures have been determined experimentally by conducting stretch forming experiments using a designed and fabricated warm stretch forming tooling setup. With the help of forming limit diagrams (FLDs) and strain distribution profile diagrams, the formability of extra deep drawing steel has been analyzed and correlated with mechanical properties like the strain hardening coefficient (n) and normal anisotropy (r-). Mechanical properties of EDD steel from room temperature to 450 °C were determined, and the impact of temperature on properties like the work hardening exponent (n), anisotropy (r-), and the strength coefficient of the material is discussed. Also, the fractured surfaces after stretching have undergone some metallurgical investigations, and an attempt has been made to correlate them with the formability of EDD steel sheets. They correlate well and show good agreement with the FLDs at various temperatures.

Keywords: FLD, micro hardness, strain distribution profile, stretch forming

Procedia PDF Downloads 405
1864 Reinforcement Learning for Self Driving Racing Car Games

Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh

Abstract:

This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. The environment used in this research is based on open-source code on GitHub recreating the 1995 racing game WipeOut. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining a low race completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps quicker than the single-car model but has a higher collision rate with the track wall.
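
A minimal sketch of the DQN temporal-difference update used by one of the two agents; the state and action dimensions, network shape, and hyper-parameters are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 16, 5, 0.99   # e.g. track sensors; steer/accelerate actions

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on a replay-buffer minibatch."""
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_q * (1.0 - dones)   # bootstrap unless episode ended
    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# dummy minibatch standing in for replay-buffer samples (a crash would give a negative reward)
batch = (torch.randn(32, STATE_DIM), torch.randint(0, N_ACTIONS, (32,)),
         torch.randn(32), torch.randn(32, STATE_DIM), torch.zeros(32))
print(dqn_update(*batch))
```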

Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming

Procedia PDF Downloads 23
1863 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining the dense connectivity strategy and instance normalization. This approach makes use of their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. With the use of 4,062 CXR images, the COVID-19 GAN model successfully produces 8,124 COVID-19 CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those of the initial dataset and other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can enhance identification accuracy effectively.

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 64
1862 Groundwater Level Prediction Using Hybrid Particle Swarm Optimization-Long Short-Term Memory Model and Performance Evaluation

Authors: Sneha Thakur, Sanjeev Karmakar

Abstract:

This paper proposes a hybrid Particle Swarm Optimization (PSO) – Long Short-Term Memory (LSTM) model for groundwater level prediction. The evaluation of performance is realized using the parameters root mean square error (RMSE) and mean absolute error (MAE). Groundwater level forecasting will be very effective for planning water harvesting, and proper groundwater level forecasting can overcome the problems of drought and flood to some extent. The objective of this work is to develop a groundwater level forecasting model using a deep learning technique integrated with the optimization technique PSO, applying 29 years of data from Chhattisgarh state, India. It is important to obtain precise groundwater level forecasts so that water resource planning and water harvesting can be managed effectively.
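
A minimal sketch of how PSO can drive the LSTM hyper-parameter search; the fitness function below is a placeholder (in the real framework it would be the validation RMSE of an LSTM trained on the groundwater series), and all bounds and PSO constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder objective: in the real model this would train an LSTM with the encoded
    hyper-parameters (e.g. hidden units, learning rate) and return its validation RMSE."""
    hidden_units, learning_rate = params
    return (hidden_units - 64) ** 2 / 1e3 + (np.log10(learning_rate) + 3) ** 2  # toy surface

# Particle Swarm Optimisation over (hidden_units, learning_rate)
LOW, HIGH = np.array([8, 1e-4]), np.array([256, 1e-1])
N_PARTICLES, N_ITERS, W, C1, C2 = 20, 50, 0.7, 1.5, 1.5

pos = rng.uniform(LOW, HIGH, size=(N_PARTICLES, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(N_ITERS):
    r1, r2 = rng.random((N_PARTICLES, 2)), rng.random((N_PARTICLES, 2))
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)   # velocity update
    pos = np.clip(pos + vel, LOW, HIGH)                                  # keep within bounds
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best hidden units ≈ {gbest[0]:.0f}, best learning rate ≈ {gbest[1]:.4f}")
```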

Keywords: long short-term memory, particle swarm optimization, prediction, deep learning, groundwater level

Procedia PDF Downloads 58
1861 Learning from Small Amount of Medical Data with Noisy Labels: A Meta-Learning Approach

Authors: Gorkem Algan, Ilkay Ulusoy, Saban Gonul, Banu Turgut, Berker Bakbak

Abstract:

Computer vision systems recently made a big leap thanks to deep neural networks. However, these systems require correctly labeled large datasets in order to be trained properly, which is very difficult to obtain for medical applications. Two main reasons for label noise in medical applications are the high complexity of the data and conflicting opinions of experts. Moreover, medical imaging datasets are commonly tiny, which makes each data very important in learning. As a result, if not handled properly, label noise significantly degrades the performance. Therefore, a label-noise-robust learning algorithm that makes use of the meta-learning paradigm is proposed in this article. The proposed solution is tested on retinopathy of prematurity (ROP) dataset with a very high label noise of 68%. Results show that the proposed algorithm significantly improves the classification algorithm's performance in the presence of noisy labels.

Keywords: deep learning, label noise, robust learning, meta-learning, retinopathy of prematurity

Procedia PDF Downloads 137
1860 Infrared Thermography Applications for Building Investigation

Authors: Hamid Yazdani, Raheleh Akbar

Abstract:

Infrared thermography is a modern non-destructive measuring method for the examination of redeveloped and non-renovated buildings. Infrared cameras provide a means for temperature measurement in building constructions from the inside as well as from the outside, so heat bridges can be detected. It has been shown that infrared thermography is applicable for insulation inspection, identifying air leakage and heat loss sources, finding the exact position of heating tubes, or discovering the reasons why mold and moisture are growing in a particular area; it is also used in the conservation field to detect hidden characteristics and degradations of building structures. The paper gives a brief description of the theoretical background of infrared thermography.

Keywords: infrared thermography, examination of buildings, emissivity, heat losses sources

Procedia PDF Downloads 500
1859 Deep Learning Based Fall Detection Using Simplified Human Posture

Authors: Kripesh Adhikari, Hamid Bouchachia, Hammadi Nait-Charif

Abstract:

Falls are one of the major causes of injury and death among elderly people aged 65 and above. A support system to identify such abnormal activities has become extremely important with the increase in the ageing population. Pose estimation is a challenging task in itself, and it is even more challenging when it must be performed on the difficult poses that may occur during a fall. The location of the body provides a clue to where the person is at the time of the fall. This paper presents a vision-based tracking strategy where the available joints are grouped into three different feature points depending upon the section of the body in which they are located. The three feature points derived from different joint combinations represent the upper or head region, the mid or torso region, and the lower or leg region. Tracking is always challenging when motion is involved; hence, the idea is to locate these regions of the body in every frame and treat that as the tracking strategy. Grouping the joints in this way helps achieve a stable region for tracking. The location of the body parts provides crucial information to distinguish normal activities from falls.
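
A small sketch of the joint-grouping idea; the joint indices follow a COCO-style layout and are an assumption, not the authors' exact joint-to-region mapping.

```python
import numpy as np

# Illustrative grouping of 2D pose joints into the three tracked feature points
# (upper/head, mid/torso, lower/leg regions)
REGIONS = {
    "upper": [0, 1, 2, 3, 4],    # nose, eyes, ears
    "mid":   [5, 6, 11, 12],     # shoulders and hips
    "lower": [13, 14, 15, 16],   # knees and ankles
}

def region_feature_points(joints_xy):
    """Collapse a (17, 2) array of joint coordinates into three region centroids.

    Missing joints (rows of NaN) are ignored, so the tracker still gets one point
    per region when the pose estimator drops detections during a fall.
    """
    points = {}
    for name, idx in REGIONS.items():
        points[name] = np.nanmean(joints_xy[idx], axis=0)   # centroid of available joints
    return points

pose = np.random.rand(17, 2) * [640, 480]   # dummy pose in image coordinates
pose[15] = np.nan                            # e.g. one ankle not detected
print(region_feature_points(pose))
```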

Keywords: fall detection, machine learning, deep learning, pose estimation, tracking

Procedia PDF Downloads 172
1858 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme

Authors: Shahram Jamali, Samira Hamed

Abstract:

One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm tune RED's parameters and provide better efficiency and performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization, and dropped packet count.
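
For reference, the classic RED drop decision that Markov-RED retunes can be sketched as follows; the threshold and weight values are illustrative, and in the proposed scheme they would be adjusted from the Markov model's queue prediction rather than kept fixed.

```python
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 5.0, 15.0, 0.1, 0.002   # illustrative RED parameters

avg_queue = 0.0
count_since_drop = -1   # packets since the last drop while avg is between the thresholds

def red_should_drop(current_queue_len):
    """Return True if the arriving packet should be dropped/marked."""
    global avg_queue, count_since_drop
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len   # EWMA of queue length
    if avg_queue < MIN_TH:
        count_since_drop = -1
        return False
    if avg_queue >= MAX_TH:
        count_since_drop = 0
        return True
    count_since_drop += 1
    p_b = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)      # base drop probability
    p_a = p_b / max(1 - count_since_drop * p_b, 1e-9)           # spread drops out evenly
    if random.random() < p_a:
        count_since_drop = 0
        return True
    return False

# feed a synthetic growing queue through the gateway
print([red_should_drop(q) for q in range(0, 25)])
```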

Keywords: active queue management, RED, Markov model, random early detection algorithm

Procedia PDF Downloads 523
1857 Investigation of Buddhology Reflected from Wall Paintings in Sri Lanka

Authors: R. G. D Jayawardena

Abstract:

The Buddha has been known by great wise men from the 6th century B.C. to the present day as a superhuman being born into the world, beyond the omnipotent. The Buddha's doctrinal descriptions reflect his deep enlightenment concerning empirical and metaphysical knowledge. The Buddhology undertaken for this study is an unexplored subject in its metaphysical aspects. Buddhist wall paintings in Sri Lanka convey deep metaphysical meaning beyond their simple esthetic perspective. Buddhology, in some perspectives, has been interpreted as a complete natural science discovered by the Buddha to teach the way of honorable living in perfect happiness and peace of mind until death. Such interpretations are based on textual studies. The Buddhology conveyed through the literary tradition is depicted in Sri Lankan wall paintings as visual art following specific technical rules. The Buddhology investigated in the wall paintings portrays the Buddha in the form of a superhuman being and as an unparalleled person among the Devas, Brahmas, Yakshas, Maras, and humans. The Buddha is known to Sri Lankan Buddhists as a person who attained the full awakening of wisdom. In personality, the Buddha is depicted as a supernormal person in the world and a rare birth. In brief, the paper will discuss and illustrate the Buddha's transcendental position, the reality of what he experienced, and its authenticity.

Keywords: Buddhology, Metaphysic, Sri Lanka, paintings

Procedia PDF Downloads 184
1856 Study on Network-Based Technology for Detecting Potentially Malicious Websites

Authors: Byung-Ik Kim, Hong-Koo Kang, Tae-Jin Lee, Hae-Ryong Park

Abstract:

Cyber terrors against specific enterprises or countries have been increasing recently. Such attacks against specific targets are called advanced persistent threat (APT), and they are giving rise to serious social problems. The malicious behaviors of APT attacks mostly affect websites and penetrate enterprise networks to perform malevolent acts. Although many enterprises invest heavily in security to defend against such APT threats, they recognize the APT attacks only after the latter are already in action. This paper discusses the characteristics of APT attacks at each step as well as the strengths and weaknesses of existing malicious code detection technologies to check their suitability for detecting APT attacks. It then proposes a network-based malicious behavior detection algorithm to protect the enterprise or national networks.

Keywords: Advanced Persistent Threat (APT), malware, network security, network packet, exploit kits

Procedia PDF Downloads 345
1855 Morphological Processing of Punjabi Text for Sentiment Analysis of Farmer Suicides

Authors: Jaspreet Singh, Gurvinder Singh, Prabhsimran Singh, Rajinder Singh, Prithvipal Singh, Karanjeet Singh Kahlon, Ravinder Singh Sawhney

Abstract:

Morphological evaluation of Indian languages is one of the burgeoning fields in the area of Natural Language Processing (NLP). The evaluation of a language is an eminent task in the era of information retrieval and text mining. The extraction and classification of knowledge from text can be exploited for sentiment analysis and morphological evaluation. This study coalesces morphological evaluation and sentiment analysis for the task of classifying farmer suicide cases reported in the Punjab state of India. The pre-processing of Punjabi text involves morphological evaluation and normalization of Punjabi word tokens, followed by the training of the proposed model using deep learning classification on Punjabi language text extracted from online Punjabi news reports. The class-wise accuracies of sentiment prediction for the four negatively oriented classes of farmer suicide cases are 93.85%, 88.53%, 83.3%, and 95.45%, respectively. The overall accuracy of sentiment classification obtained using the proposed framework on 275 Punjabi text documents is found to be 90.29%.

Keywords: deep neural network, farmer suicides, morphological processing, punjabi text, sentiment analysis

Procedia PDF Downloads 300
1854 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In intelligent transportation systems (ITS), information on link characteristics is an essential factor for planning and operation. In practical cases, however, not every link has sensors installed on it. A link that does not have data on it is called a "missing link". The purpose of this study is to impute the data of these missing links. To obtain these data, this study applies machine learning methods. With the machine learning process, especially the deep learning process, missing link data can be estimated from present link data. For the deep learning process, this study uses a recurrent neural network to handle the time-series data of the road. As input data, Dedicated Short-Range Communications (DSRC) data of Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The neural network structure takes 17 links with present data as input and uses 2 hidden layers to estimate the data of 1 missing link. As a result, the forecasted data of the target link show about 94% accuracy compared with the actual data.

Keywords: data estimation, link data, machine learning, road network

Procedia PDF Downloads 497
1853 Comparison of Deep Brain Stimulation Targets in Parkinson's Disease: A Systematic Review

Authors: Hushyar Azari

Abstract:

Aim and background: Deep brain stimulation (DBS) is regarded as an important therapeutic choice for Parkinson's disease (PD). The two most common targets for DBS are the subthalamic nucleus (STN) and the globus pallidus internus (GPi). This review was conducted to compare the clinical effectiveness of these two targets. Methods: A systematic literature search in the electronic databases Embase, Cochrane Library, and PubMed was restricted to English-language publications from 2010 to 2021. Specified MeSH terms were searched in all databases. Studies which evaluated the Unified Parkinson's Disease Rating Scale (UPDRS) III were selected by meeting the following criteria: (1) compared both GPi and STN DBS; (2) had at least a three-month follow-up period; (3) had at least five participants in each group; (4) were conducted after 2010. Study quality assessment was performed using the Modified Jadad Scale. Results: 3577 potentially relevant articles were identified; of these, 3569 were excluded based on title and abstract screening, duplicate removal, and unsuitable article removal. Eight articles satisfied the inclusion criteria and were scrutinized (458 PD patients). According to the Modified Jadad Scale, the majority of included studies had low evidence quality, which was a limitation of this review. Five studies reported no statistically significant between-group difference in improvements in UPDRS III scores. At the same time, there were some results in terms of pain, action tremor, rigidity, and urinary symptoms which indicated that STN DBS might be a better choice. Regarding adverse effects, GPi was superior. Conclusion: It is clear that other larger randomized clinical trials with longer follow-up periods and control groups are needed to decide which target is more efficient for deep brain stimulation in Parkinson's disease and imposes fewer adverse effects on the patients. Meanwhile, STN seems more reasonable according to the results of this systematic review.

Keywords: brain stimulation, globus pallidus, Parkinson's disease, subthalamic nucleus

Procedia PDF Downloads 170
1852 Thermosonic Devulcanization of Waste Ground Rubber Tires by Quaternary Ammonium-Based Ternary Deep Eutectic Solvents and the Effect of α-Hydrogen

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid

Abstract:

Landfills, water contamination, and toxic gas emissions are a few of the impacts faced by the environment due to the increasing number of waste rubber tires (WRT). In spite of such a concerning issue, only minimal efforts are made to reclaim or recycle these wastes, as their products are generally not profitable for companies. Unlike the typical reclamation process, devulcanization is a method to selectively cleave sulfidic bonds within vulcanizates to avoid the polymeric scissions that compromise the elastomer's mechanical and tensile properties. The process also produces devulcanizates that are re-processable, similar to virgin rubber. Often, a devulcanizing agent is needed. In the current study, novel and sustainable ammonium chloride-based ternary deep eutectic solvents (TDES), with different numbers of α-hydrogens, were utilised to devulcanize ground rubber tire (GRT) as an effort to apply green chemistry to this issue. 40-mesh GRT was soaked for 1 day with different TDESs, sonicated at 37-80 kHz for 60-120 min, and heated at 100-140 °C for 30-90 min. Devulcanizates were then filtered, dried, and evaluated based on the percentage of devulcanization by means of the Flory-Rehner calculation and the swelling index. The result shows that an increasing number of α-Hs increases the degree of devulcanization, and the value achieved was around eighty percent, thirty percent higher than that of the typical industrial autoclave method. The resulting bonds of the devulcanizates were also analysed by Fourier transform infrared spectrometry (FTIR), Horikx fitting, and a thermogravimetric analyser (TGA). The former two confirm that only sulfidic scissions were experienced by the GRT through the treatment, while the latter proves the absence or negligibility of carbon-chain scission.

Keywords: ammonium, sustainable, deep eutectic solvent, α-hydrogen, waste rubber tire

Procedia PDF Downloads 108
1851 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about "user-friendly" and "easy-to-use" birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) The lack of a definition of correctly edited, formatted documents. Consequently, end-users do not know whether their methods and results are correct or not. They are not aware of their ignorance; they are so ignorant that their ignorance does not allow them to realize their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep approach methods. Based on these findings, we have developed deep approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical document, the text-based document. We have provided the definition of correctly edited text and, based on this definition, adapted the debugging method known in programming. According to the method, before real text editing takes place, a thorough debugging of already existing texts and the categorization of errors are carried out. With this method, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface approach methods. The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources

Procedia PDF Downloads 368
1850 Analysis of Cooperative Hybrid ARQ with Adaptive Modulation and Coding on a Correlated Fading Channel Environment

Authors: Ibrahim Ozkan

Abstract:

In this study, a cross-layer design which combines adaptive modulation and coding (AMC) and hybrid automatic repeat request (HARQ) techniques for a cooperative wireless network is investigated analytically. Previous analyses of such systems in the literature are confined to the case where the fading channel is independent at each retransmission, which can be unrealistic unless the channel is varying very fast. On the other hand, temporal channel correlation can have a significant impact on the performance of HARQ systems. In this study, utilizing a Markov channel model which accounts for the temporal correlation, the performance of non-cooperative and cooperative networks is investigated in terms of packet loss rate and throughput metrics for the Chase combining HARQ strategy.
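
A minimal simulation sketch of a correlated (Gilbert-Elliott style) Markov channel with truncated HARQ retransmissions; the transition probabilities, per-state error rates, and retransmission limit are illustrative assumptions, and the Chase-combining gain (which would reduce the effective error probability with each round) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

P_G2B, P_B2G = 0.1, 0.3          # Good->Bad and Bad->Good transition probabilities
PER = {"G": 0.05, "B": 0.6}      # packet error rate in each channel state
MAX_TX = 4                       # first transmission plus up to three HARQ retransmissions

def step(state):
    """One Markov transition of the channel state between (re)transmissions."""
    if state == "G":
        return "B" if rng.random() < P_G2B else "G"
    return "G" if rng.random() < P_B2G else "B"

def simulate_packet(state):
    """Return (delivered?, transmissions used, channel state after the last round)."""
    for tx in range(1, MAX_TX + 1):
        success = rng.random() >= PER[state]
        state = step(state)
        if success:
            return True, tx, state
    return False, MAX_TX, state

state, delivered, total_tx = "G", 0, 0
for _ in range(100_000):
    ok, tx, state = simulate_packet(state)
    delivered += ok
    total_tx += tx
print(f"packet loss rate: {1 - delivered / 100_000:.4f}, "
      f"throughput: {delivered / total_tx:.4f} packets per transmission")
```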

Keywords: cooperative network, adaptive modulation and coding, hybrid ARQ, correlated fading

Procedia PDF Downloads 123