Search results for: image dataset
3248 Maximum Entropy Based Image Segmentation of Human Skin Lesion
Authors: Sheema Shuja Khattak, Gule Saman, Imran Khan, Abdus Salam
Abstract:
Image segmentation plays an important role in medical imaging applications. Accurate methods are therefore needed for the successful segmentation of medical images for the diagnosis and detection of various diseases. In this paper, we use maximum entropy to achieve image segmentation, with the maximum entropy calculated using the Shannon, Renyi, and Tsallis entropies. The novelty of this work lies in the detection of skin lesions caused by the bite of the sand fly, which transmits the parasite responsible for cutaneous leishmaniasis.
Keywords: Shannon, maximum entropy, Renyi, Tsallis entropy
Procedia PDF Downloads 465
3247 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer
Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin
Abstract:
The tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs), which are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an “eyeballing” method on the H&E-stained slide for sTILs assessment. This method has low precision, poor interobserver reproducibility, and is time-consuming for a comprehensive evaluation, and it only counts sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus provide highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained and IHC (CD4+ and CD8+) stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021 with no history of other cancer and with available tissue blocks were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol. Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with a rabbit monoclonal CD8 antibody (SP16) and a rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitize the tissue image at 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014; the results are expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aimed to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data, for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring that correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications.
Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes
Procedia PDF Downloads 120
3246 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, which makes such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. A working model of the internal generative mechanism based on structure-based spatial masking is used to assess subjective distortion visibility thresholds that are more consistent with human visual perception. The estimation of structure-based distortion visibility thresholds for the color components is further formulated in a locally adaptive way to design the quantization process in the wavelet color image compression scheme. Since the lowest subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated from predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained by the proposed IGM-based color image compression scheme are lower than those obtained by an existing color image compression method at perceptually lossless visual quality.
Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
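A minimal sketch of the kind of threshold-driven wavelet quantization described above, assuming a single constant visibility threshold per subband (the paper derives locally adaptive, per-coefficient thresholds for each color component):

```python
# A minimal sketch (not the authors' method): quantize wavelet subband
# coefficients against a visibility threshold. A single constant T per
# subband stands in for the locally adaptive thresholds, purely for
# illustration.
import numpy as np
import pywt

def perceptual_quantize(channel, wavelet="db4", level=3, T=4.0):
    coeffs = pywt.wavedec2(channel.astype(float), wavelet, level=level)
    quantized = [coeffs[0]]  # keep the lowest subband unquantized in this sketch
    for (cH, cV, cD) in coeffs[1:]:
        # Uniform quantization with step T keeps the distortion of each
        # coefficient within roughly T/2, i.e. below the assumed threshold.
        quantized.append(tuple(T * np.round(band / T) for band in (cH, cV, cD)))
    return pywt.waverec2(quantized, wavelet)

# Example on a synthetic 256x256 "luminance" channel.
img = np.random.rand(256, 256) * 255
rec = perceptual_quantize(img)
print(np.abs(rec[:img.shape[0], :img.shape[1]] - img).max())
```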
Procedia PDF Downloads 251
3245 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation
Authors: Hangsik Shin
Abstract:
The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation; the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error was also greatly reduced; however, it remained above 10 ms.
Keywords: peak detection, photoplethysmography, sampling, signal reconstruction
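A minimal sketch of the restoration experiment, assuming a synthetic pulse-like waveform in place of the authors' 10 kHz photoplethysmogram and a single decimation factor:

```python
# Decimate a reference signal, restore it with a cubic spline, and compare
# upper-peak locations between the reference and the restored signal.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

fs_ref, fs_low = 10_000, 250            # reference and under-sampled rates (Hz)
t_ref = np.arange(0, 10, 1 / fs_ref)    # 10 s of signal
ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref)

# Decimate by simple sub-sampling, then restore with a cubic spline.
step = fs_ref // fs_low
t_low, ppg_low = t_ref[::step], ppg[::step]
restored = CubicSpline(t_low, ppg_low)(t_ref)

# Compare upper-peak locations between reference and restored signals.
p_ref, _ = find_peaks(ppg)
p_res, _ = find_peaks(restored)
n = min(len(p_ref), len(p_res))
err_ms = np.abs(t_ref[p_ref[:n]] - t_ref[p_res[:n]]) * 1000
print(f"mean location error: {err_ms.mean():.3f} ms")
```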
Procedia PDF Downloads 370
3244 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. A mouse is needed to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using the eye-detector and the measurement of the time required for image magnification using the simple EEG scanner. The eye-detector validation showed that the mean eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm across observers. These results show that the eye-detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well on some subjects; a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers
Procedia PDF Downloads 216
3243 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset
Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba
Abstract:
We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process these data. We analyze the involvement and relationship of the different variables for the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver undertaken (overtaking or falling back).
Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process
Procedia PDF Downloads 263
3242 Using Electrical Impedance Tomography to Control a Robot
Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi
Abstract:
Electrical impedance tomography is a non-invasive medical imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to reach the exact location of the target object. The dataset used to form the impedance image is obtained through repeated current injections and voltage measurements between all electrode pairs. After the necessary calculations to obtain the impedance, the information is transmitted to the computer. These data are then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the robotic arm moves to the desired tissue on the user's command.
Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography
Procedia PDF Downloads 276
3241 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning
Authors: Ofira Zloto, Ofir Fogel, Eyal Klang
Abstract:
Purpose: The aim was to develop an automated machine learning framework to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before model training, we pre-trained our models on the ISIC 2019 dataset, consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions versus their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions: 373 images in the benign category and 186 images in the malignant category. Based on the accuracy values, the model trained for 3 epochs with a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model had a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping. Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises a high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and overall healthcare.
Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques
Procedia PDF Downloads 8
3240 Frequent Item Set Mining for Big Data Using MapReduce Framework
Authors: Tamanna Jethava, Rahul Joshi
Abstract:
Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set refers to a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called "big data". Big data is a collection of large datasets. Our approach is to perform frequent item set mining over large datasets in a scalable and speedy way, using MapReduce along with HDFS to find frequent item sets from big data on a large cluster. This paper focuses on using a pre-processing step together with a mining algorithm as a hybrid approach for big data on the Hadoop platform.
Keywords: frequent item set mining, big data, Hadoop, MapReduce
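A minimal single-process sketch of the map/reduce idea described above (an assumed toy example, not the authors' Hadoop job): the map phase emits candidate itemsets per transaction, and the reduce phase sums their counts against a minimum support threshold:

```python
# Toy map/reduce-style frequent itemset counting on in-memory transactions.
from collections import defaultdict
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
MIN_SUPPORT = 2

def map_phase(transaction):
    # Emit (itemset, 1) pairs for all 1- and 2-item candidates.
    for k in (1, 2):
        for itemset in combinations(sorted(transaction), k):
            yield itemset, 1

def reduce_phase(pairs):
    # Sum the emitted counts and keep the itemsets above minimum support.
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= MIN_SUPPORT}

emitted = (pair for t in transactions for pair in map_phase(t))
print(reduce_phase(emitted))
```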
Procedia PDF Downloads 440
3239 Difference Expansion Based Reversible Data Hiding Scheme Using Edge Directions
Authors: Toshanlal Meenpal, Ankita Meenpal
Abstract:
Difference expansion is a very important technique in the field of reversible data hiding. Owing to the reversibility feature, both the secret message and the cover image can be completely recovered, without any distortion, after the data extraction process. In general, in any difference expansion scheme, embedding is performed by an integer transform on the difference image obtained by grouping two neighboring pixel values. This paper proposes an improved reversible difference expansion embedding scheme. We mainly consider the edge direction when embedding by modifying the difference between two neighboring pixel values; in general, a larger difference tends to produce a more degraded stego image than a smaller one. The experimental results show that the proposed scheme achieves an improvement in image quality of 0.5 to 3.7 dB on average, while achieving a payload capacity similar to that of previous methods.
Keywords: information hiding, edge direction, difference expansion, integer transform
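A minimal sketch of classic difference-expansion embedding for a single pixel pair (Tian's integer transform), shown for illustration only; the proposed edge-direction-based pair selection is not reproduced here:

```python
# Embed one bit into a neighboring pixel pair and recover both the bit and
# the original pair. Overflow/expandability checks of a full scheme are
# omitted in this sketch.
def de_embed(x, y, bit):
    l = (x + y) // 2          # integer average of the pair
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append the secret bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 % 2, h2 // 2  # recover the bit (LSB) and the original difference
    return bit, (l + (h + 1) // 2, l - h // 2)

x2, y2 = de_embed(100, 98, 1)   # embed bit 1 into the pair (100, 98)
print((x2, y2))                 # stego pair
print(de_extract(x2, y2))       # -> (1, (100, 98))
```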
Procedia PDF Downloads 486
3238 Associations Between Positive Body Image, Physical Activity and Dietary Habits in Young Adults
Authors: Samrah Saeed
Abstract:
Introduction: This study considers a measure of positive body image and the associations between body appreciation, beauty-ideal internalization, dietary habits, and physical activity in young adults. Positive body image is assessed with the Body Appreciation Scale-2, which measures a person's acceptance of the body, the degree of positivity, and respect for the body. Regular physical activity and healthy eating are fundamentally important for the body, and they play an important role in creating a positive body image. Objectives: To identify the associations between body appreciation and beauty-ideal internalization; to compare body appreciation and body-ideal internalization among students with different levels of physical activity; and to explore the associations between dietary habits (unhealthy, healthy), body appreciation and body-ideal internalization. Research methods and organization: Study participants were young adult students, aged 18-35, both male and female. The research questionnaire consisted of four areas: body appreciation, beauty-ideal internalization, dietary habits, and physical activity. The questionnaire was created in the Google Forms online survey platform and was filled out anonymously. Result and discussion: Body dissatisfaction, dieting, eating disorders and exercise disorders are found in young adults all over the world. Proper nutrition helps people understand who they are by reassuring them that they are okay and helping them accept themselves without judgment. Social media can positively influence body image in many ways. A healthy body image is important because it affects self-esteem, self-acceptance, and one's attitude towards food and exercise.
Keywords: physical activity, dietary habits, body image, beauty ideals internalization, body appreciation
Procedia PDF Downloads 100
3237 The Use of Sustainable Tourism, Decrease Performance Levels, and Change Management for Image Branding as a Contemporary Tool of Foreign Policy
Authors: Mehtab Alam
Abstract:
Sustainable tourism practices require improving decreased performance levels during the phases of change management for image branding. This paper addresses the innovative approach of using sustainable tourism for image branding as a contemporary tool of foreign policy. A sustainable-tourism-based foreign policy promotes cultural values, green tourism, the economy, and image management for the avoidance of rising global conflict. A mixed-method approach (382 quantitative surveys; 11 qualitative interviews, taken to the saturation point) was applied for the data analysis. The research findings show the potential of using sustainable tourism, through the skills and knowledge, capacity, and personal factors of change management, to improve tourism-based performance levels. This includes the valuable role of tourism performance in the success of a foreign policy through sustainable tourism. Change management in tourism-based foreign policy provides destination readiness for international engagement and the curbing of climate issues through green tourism. The research highlights the impact of change management in improving the tourism-based performance levels of image branding for a coercive foreign policy. The paper's future direction for the immediate implementation of tourism-based foreign policy is to overcome the contemporary issues of travel marketing management, green infrastructure, and cross-border regulation.
Keywords: decrease performance levels, change management, sustainable tourism, image branding, foreign policy
Procedia PDF Downloads 126
3236 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the current COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated electronic component error classification and detection system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification: the convolution layers extract image features, which are then classified by the SVM. By adequately labeling and classifying the training data, the model will predict, categorize, and assess whether students place components correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. The system constantly checks whether students position components on the breadboard appropriately and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN and SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
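A minimal sketch of the hybrid CNN + SVM idea (an assumed example, not the authors' trained pipeline): a small convolutional network acts as a feature extractor and an SVM classifies the resulting feature vectors, with dummy tensors standing in for breadboard image crops:

```python
# Untrained CNN feature extractor feeding an SVM classifier; in real use the
# CNN (or a pretrained backbone) would be trained on labeled breadboard images.
import torch
import torch.nn as nn
from sklearn.svm import SVC

cnn = nn.Sequential(                       # convolution layers -> feature vector
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

def extract_features(images):
    with torch.no_grad():
        return cnn(images).numpy()

# Dummy data standing in for 64x64 component crops with binary labels
# (0 = correctly placed, 1 = misplaced).
images = torch.rand(40, 3, 64, 64)
labels = torch.randint(0, 2, (40,)).numpy()

svm = SVC(kernel="rbf")
svm.fit(extract_features(images), labels)          # SVM trained on CNN features
print(svm.predict(extract_features(torch.rand(4, 3, 64, 64))))
```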
Procedia PDF Downloads 138
3235 Deep Learning-Based Automated Structure Deterioration Detection for Building Structures: A Technological Advancement for Ensuring Structural Integrity
Authors: Kavita Bodke
Abstract:
Structural health monitoring (SHM) is a growing field, necessitating the development of distinct methodologies to address its expanding scope effectively. In this study, we developed automatic structure damage identification covering three distinct aspects of a building's structural integrity: the presence of cracks in the structure, dampness within the structure, and corrosion inside the structure. This study employs image classification techniques to discern between intact and impaired structures within structural image data. The aim of this research is automatic damage detection that outputs the probability of each damage class being present in an image; based on these probabilities, we know which class is more likely, or which damage type affects the structure more than the others. Photographs captured by a mobile camera serve as the input to the image classification system. Image classification was employed in our study to perform both multi-class and multi-label classification, with the objective of categorizing structural data based on the presence of cracks, moisture, and corrosion. For multi-class image classification, our study employed three distinct methodologies: Random Forest, Multilayer Perceptron, and CNN. For the task of multi-label image classification, the models employed were ResNet, Xception, and Inception.
Keywords: SHM, CNN, deep learning, multi-class classification, multi-label classification
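A minimal sketch contrasting the two output heads mentioned above (an assumed toy model, not the study's networks): multi-class classification uses a softmax so exactly one damage type wins, while multi-label classification uses independent sigmoids so crack, dampness and corrosion each receive their own probability:

```python
# Compare a softmax (multi-class) head with a sigmoid (multi-label) head on
# the same stand-in feature vector.
import torch
import torch.nn as nn

features = torch.rand(1, 128)                 # stand-in for CNN features
classes = ["crack", "dampness", "corrosion"]

multi_class_head = nn.Linear(128, 3)
multi_label_head = nn.Linear(128, 3)

p_class = torch.softmax(multi_class_head(features), dim=1)   # probabilities sum to 1
p_label = torch.sigmoid(multi_label_head(features))          # each probability in [0, 1]

print(dict(zip(classes, p_class.squeeze().tolist())))
print(dict(zip(classes, p_label.squeeze().tolist())))
```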
Procedia PDF Downloads 40
3234 Barrier Lowering in Contacts between Graphene and Semiconductor Materials
Authors: Zhipeng Dong, Jing Guo
Abstract:
Graphene-semiconductor contacts have been extensively studied recently, both as stand-alone diode devices for potential applications in photodetectors and solar cells, and as building blocks for vertical transistors. Graphene is a two-dimensional nanomaterial with a vanishing density of states at the Dirac point, which differs from a conventional metal. In this work, image-charge-induced barrier lowering (BL) in graphene-semiconductor contacts is studied and compared to that in metal Schottky contacts. The results show that, despite graphene being a semimetal with a vanishing density of states at the Dirac point, the image-charge-induced BL is significant. The BL value can be over 50% of that of metal contacts, even for intrinsic graphene contacted to an organic semiconductor, and it increases as the graphene doping increases. The dependence of the BL on the electric field and the semiconductor dielectric constant is examined, and an empirical expression for estimating the image-charge-induced BL in graphene-semiconductor contacts is provided.
Keywords: graphene, semiconductor materials, schottky barrier, image charge, contacts
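For reference, the image-force barrier lowering at a conventional metal Schottky contact, against which the graphene result is compared, takes the standard textbook form below (the paper's empirical expression for graphene contacts is not reproduced here):

```latex
% Classical image-force barrier lowering at a metal Schottky contact.
\Delta\phi_B = \sqrt{\frac{qE}{4\pi\varepsilon_s}}
% q: elementary charge, E: electric field at the interface,
% \varepsilon_s: semiconductor permittivity.
```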
Procedia PDF Downloads 305
3233 Intelligent System for Diagnosis Heart Attack Using Neural Network
Authors: Oluwaponmile David Alao
Abstract:
Misdiagnosis has been a major problem in the health sector, and heart attack is one of the diseases with a high level of misdiagnosis on the part of physicians. In this paper, an intelligent system has been developed for the diagnosis of heart attack. A heart attack dataset obtained from the UCI repository has been used; this dataset is made up of thirteen attributes that are very important in the diagnosis of heart disease. The system is based on a multilayer perceptron trained with the backpropagation algorithm and then simulated as a feed-forward neural network, and a recognition rate of 87% was obtained, which is a good result for the diagnosis of heart attack in the medical field.
Keywords: heart attack, artificial neural network, diagnosis, intelligent system
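A minimal sketch of the described setup (an assumed example, not the authors' exact network): a multilayer perceptron trained with backpropagation on records with thirteen attributes, with synthetic data standing in for the UCI heart disease dataset:

```python
# Train an MLP classifier on 13-feature records and report its accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))                 # 13 clinical attributes
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic diagnosis label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)          # backpropagation training
print(f"recognition rate: {mlp.score(scaler.transform(X_te), y_te):.2f}")
```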
Procedia PDF Downloads 657
3232 Branding Tourism Destinations; The Trending Initiatives for Edifice Image Choices of Foreign Policy
Authors: Mehtab Alam, Mudiarasan Kuppusamy, Puvaneswaran Kunaserkaran
Abstract:
The purpose of this paper is to bridge the gap and establish the relationship between tourism destinations and image branding as a choice of edifice foreign policy. Such options have become a crucial component for individuals interested in leisure and travel activities. The destination management factors were evaluated and analyzed using primary and secondary data in a mixed-methods approach (a quantitative sample of 384 and 8 qualitative semi-structured interviews, taken to the saturation point). The study chose the Environmental Management Accounting (EMA) and Image Restoration (IR) theories, along with a schematic diagram and an analytical framework supported by NVivo 12 software, for two locations in Abbottabad, KPK, Pakistan: Shimla Hill and Thandiani. This incorporates the use of a PLS-SEM model for assessing the validity of the data, while SPSS was used for data screening and descriptive statistics. The results show that destination management's promotion of tourism has significantly improved Pakistan's state image. The use of institutional setup, environmental drivers, immigration, security, and hospitality, as well as recreational initiatives, in destination management is encouraged. The practical ramifications direct the heads of tourism projects, diplomats, directors, and policymakers to complete destination projects before inviting people to Pakistan. The paper provides a body of knowledge for academic tourism circles on using tourism destinations as brand ambassadors.
Keywords: tourism, management, state image, foreign policy, image branding
Procedia PDF Downloads 71
3231 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network
Authors: Hui Wei, Zheng Dong
Abstract:
Object recognition has long been a challenging task in computer vision. Yet the human brain, with its ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some of the neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism of the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image datasets. The results provide strong support for the potential of this model.
Keywords: biological model, feature extraction, multi-layer neural network, object recognition
Procedia PDF Downloads 545
3230 Transmogrification of the Danse Macabre Image: Capturing the Journey towards Creativity
Authors: Javaria Farooqui
Abstract:
This study, “Transmogrification of the Danse Macabre Image: Capturing the Journey towards Creativity,” traces the evolution of the concept of the Danse Macabre. In Everyman, death takes away the sinful when they least expect it; in Solyman and Perseda, everyone falls prey to death irrespective of their deeds; and in Tauba-tun-Nasuh, the sinner is plagued. The climactic point in this brief research comes with the modern texts The Moon and Sixpence, Rooh-e-Insani and Amédée, ou Comment s'en débarrasser, in which the Danse Macabre extends its boundaries, uniting the idea of creativity with death. Similarly, in the visual context, the Danse Macabre image, initially a horrifying idea, becomes part of present-day comics and serves an entertaining rather than a cathartic purpose.
Keywords: Danse Macabre, transmogrification, medieval, death, character
Procedia PDF Downloads 523
3229 An Image Based Visual Servoing (IBVS) Approach Using a Linear-Quadratic Regulator (LQR) for Quadcopters
Authors: C. Gebauer, C. Henke, R. Vossen
Abstract:
Within the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020, a team of unmanned aerial vehicles (UAVs) is used to capture intruder drones by physical interaction. The challenge is motivated by UAV safety. The purpose of this work is to investigate the agility of a visually controlled quadcopter. The aim is to track and follow a highly dynamic target, e.g., an intruder quadcopter. The following is performed at close range, with the opponent moving at velocities of up to 10 m/s. Additional limitations are imposed by the hardware itself: only monocular vision is available, and no additional knowledge about the target's state is given. An image based visual servoing (IBVS) approach is applied in combination with a linear-quadratic regulator (LQR). The IBVS is integrated into the LQR, and an optimal trajectory is computed within the projected three-dimensional image space. The approach has been evaluated on real quadcopter systems in different flight scenarios to demonstrate the system's stability.
Keywords: image based visual servoing, quadcopter, dynamic object tracking, linear-quadratic regulator
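A minimal sketch of computing an LQR state-feedback gain (an assumed example; the coupling with IBVS image-space errors is not reproduced here), using double-integrator dynamics per image axis as a stand-in for the quadcopter model:

```python
# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # state: [image error, error rate]
B = np.array([[0.0],
              [1.0]])             # control: commanded acceleration
Q = np.diag([10.0, 1.0])          # penalize tracking error more than its rate
R = np.array([[0.1]])             # control effort penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P    # state feedback u = -K x
print("LQR gain K:", K)
```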
Procedia PDF Downloads 157
3228 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for future development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
Procedia PDF Downloads 397
3227 A Survey of Feature-Based Steganalysis for JPEG Images
Authors: Syeda Mainaaz Unnisa, Deepa Suresh
Abstract:
Due to the increased usage of public-domain channels, such as the internet, and of communication technology, there is concern about the protection of intellectual property and about security threats. This interest has led to growth in researching and implementing techniques for information hiding. Steganography is the art and science of hiding information in a private manner such that its existence cannot be recognized. Communication using steganographic techniques makes not only the secret message but also the presence of hidden communication invisible. Steganalysis is the art of detecting the presence of this hidden communication. Parallel to steganography, steganalysis is also gaining prominence, since the detection of hidden messages can prevent catastrophic security incidents from occurring. Steganalysis can also be incredibly helpful in identifying and revealing holes in current steganographic techniques, which make them vulnerable to attacks. Through the formulation of new, effective steganalysis methods, further research to improve the resistance of tested steganography techniques can be developed. The feature-based steganalysis method for JPEG images calculates the features of an image using the L1 norm of the difference between a stego image and a calibrated version of the image. This calibration can help retrieve some of the parameters of the cover image, revealing the variations between the cover and stego image and enabling more accurate detection. Applying this method to various steganographic schemes, experimental results were compared and evaluated to derive conclusions and principles for more protected JPEG steganography.
Keywords: cover image, feature-based steganalysis, information hiding, steganalysis, steganography
Procedia PDF Downloads 217
3226 Sentiment Classification Using Enhanced Contextual Valence Shifters
Authors: Vo Ngoc Phu, Phan Thi Tuoi
Abstract:
We have explored different methods of improving the accuracy of sentiment classification. The sentiment orientation of a document can be positive (+), negative (-), or neutral (0). We combine five dictionaries from [2, 3, 4, 5, 6] into a new one with 21,137 entries. The new dictionary has many verbs, adverbs, phrases and idioms that are not in the original five. This paper shows that our proposed method, based on the combination of the term-counting method and the enhanced contextual valence shifters method, has improved the accuracy of sentiment classification. The combined method has an accuracy of 68.984% on the testing dataset and 69.224% on the training dataset. All of these methods are implemented to classify reviews based on our new dictionary and the Internet Movie data set.
Keywords: sentiment classification, sentiment orientation, contextual valence shifters, term counting
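A minimal sketch of term counting with simple contextual valence shifters, using an assumed toy lexicon rather than the combined 21,137-entry dictionary:

```python
# Negators flip the polarity of a nearby sentiment word, intensifiers boost it;
# the final sign of the score gives the document's sentiment orientation.
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2}
NEGATORS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0}

def sentiment_orientation(text):
    score, tokens = 0.0, text.lower().split()
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        value = float(LEXICON[tok])
        for prev in tokens[max(0, i - 2):i]:      # look back two tokens
            if prev in INTENSIFIERS:
                value *= INTENSIFIERS[prev]
            if prev in NEGATORS:
                value *= -1                       # contextual valence shift
        score += value
    return "+" if score > 0 else "-" if score < 0 else "0"

print(sentiment_orientation("the movie was not very good"))   # -> "-"
```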
Procedia PDF Downloads 507
3225 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative approach to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the bag-of-words approach, and the score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences) and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
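A minimal sketch of Smith-Waterman local alignment over token sequences, illustrating the algorithm itself with assumed scoring values; the authors' feature extraction built on top of it is not reproduced:

```python
# Dynamic-programming local alignment between two token sequences; the
# returned value is the best local alignment score.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

doc1 = "patient reports weight gain and high bmi".split()
doc2 = "weight gain with elevated bmi reported".split()
print(smith_waterman(doc1, doc2))
```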
Procedia PDF Downloads 299
3224 The Use of Appeals in Green Printed Advertisements: A Case of Product Orientation and Organizational Image Orientation Ads
Authors: Chutima Ruanguttamanun
Abstract:
Despite the relatively large number of studies that have examined the use of appeals in advertisements, research on the use of appeals in green advertisements is still underdeveloped and needs to be investigated further, as appeals are definitely a tool for marketers to create illustrious ads. In this study, content analysis was employed to examine the nature of green advertising appeals and to match the appeals with the green advertisements. Two different types of green print advertising, product orientation and organizational image orientation, were used. Thirty highly educated participants with different backgrounds were asked individually to ascertain three appeals, out of thirty-four given appeals, found among forty real green advertisements. To analyze participant responses and to group them based on common appeals, two-step k-means clustering was used. The clustering solution indicates that eye-catching graphics and imaginative appeals are highly notable in both types of green ads. Depressed, meaningful and sad appeals are found to be highly used in organizational image orientation ads, whereas corporate image, informative and natural appeals are found to be essential for product orientation ads.
Keywords: advertising appeals, green marketing, green advertisement, printed advertisement
Procedia PDF Downloads 280
3223 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario
Authors: Sarita Agarwal, Deepika Delsa Dean
Abstract:
Like the task of transferring photo images to artistic images, image-to-image translation aims to translate data into imitated data belonging to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-many domain translation. Our study investigates deep learning architectures that can be controlled to yield multiple artistic style translations simply by adding a conditional vector. We expanded CycleGAN and constructed a Conditional CycleGAN for translation into five categories. Our study found that an architecture inserting the conditional vector into the middle layer of the generator could output multiple artistic images.
Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation
Procedia PDF Downloads 133
3222 A Robust Hybrid Blind Digital Image Watermarking System Using Discrete Wavelet Transform and Contourlet Transform
Authors: Nidal F. Shilbayeh, Belal AbuHaija, Zainab N. Al-Qudsy
Abstract:
In this paper, a hybrid blind digital watermarking system using the Discrete Wavelet Transform (DWT) and the Contourlet Transform (CT) has been implemented and tested. The combined digital watermarking system was tested against five common types of image attacks. The performance evaluation shows improved results in terms of imperceptibility, robustness, and high tolerance against these attacks; accordingly, the system is very effective and applicable.
Keywords: discrete wavelet transform (DWT), contourlet transform (CT), digital image watermarking, copyright protection, geometric attack
Procedia PDF Downloads 398
3221 An Application-Driven Procedure for Optimal Signal Digitization of Automotive-Grade Ultrasonic Sensors
Authors: Mohamed Shawki Elamir, Heinrich Gotzig, Raoul Zoellner, Patrick Maeder
Abstract:
In this work, a methodology is presented for identifying the optimal digitization parameters for the analog signal of ultrasonic sensors. These digitization parameters are the resolution of the analog-to-digital conversion and the sampling rate. This is accomplished through the derivation of characteristic curves based on the Fano inequality and the calculation of the mutual information content over a given dataset. The mutual information is calculated between the examples in the dataset and the corresponding variation in the feature that needs to be estimated. The optimal parameters are identified in a manner that ensures optimal estimation performance while preventing the inefficiency of using unnecessarily powerful analog-to-digital converters.
Keywords: analog to digital conversion, digitization, sampling rate, ultrasonic
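A minimal sketch of the underlying measurement (an assumed example): estimate the mutual information between a digitized sensor reading and the quantity to be estimated, sweeping the ADC resolution to see where additional bits stop adding information:

```python
# Quantize a noisy analog feature at several bit depths and compute the
# mutual information with the (binned) target quantity.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
distance = rng.uniform(0.2, 4.0, 20_000)                        # target quantity (m)
analog = distance + rng.normal(scale=0.05, size=distance.size)  # noisy echo feature

target_bins = np.digitize(distance, np.linspace(0.2, 4.0, 64))
for bits in (2, 4, 6, 8, 10, 12):
    levels = 2 ** bits
    digitized = np.digitize(analog, np.linspace(analog.min(), analog.max(), levels))
    mi = mutual_info_score(target_bins, digitized)               # in nats
    print(f"{bits:2d}-bit ADC: I = {mi:.3f} nats")
```

The mutual information plateaus once the quantization step falls below the measurement noise, which is exactly the point beyond which a more powerful converter would be wasted.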
Procedia PDF Downloads 208
3220 TACTICAL: Ram Image Retrieval in Linux Using Protected Mode Architecture’s Paging Technique
Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim
Abstract:
This article explains how to obtain a RAM image from a computer running the Linux operating system and what steps should be followed while obtaining it. By taking a RAM image, we mean the process of dumping the physical memory instantly and writing it to a file; this can be likened to taking a picture of everything in the computer's memory at that moment. The process is very important for tools that analyze RAM images. Volatility can be given as an example, because before such tools can analyze RAM, an image must be taken. These tools are used extensively in the forensics world. Forensics, in turn, is a set of processes for digitally examining the information on any computer or server on behalf of official authorities. In this article, the protected mode architecture of the Linux operating system is examined, and the way to save an image sample of the kernel driver and system memory to disk is followed. The tables and access methods to be used in the operating system are examined based on the basic architecture of the operating system, and the most appropriate methods and application approaches are presented. Since there is no article in the literature directly related to this study on Linux, this study aims to contribute to the literature on obtaining RAM images. LiME can be mentioned as a similar tool, but there is no explanation of that tool's memory dumping method. Considering the frequency of use of these tools, the contribution of the study to the field of forensics has been the main motivation, due to the intense research on RAM images in forensics.
Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic
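A minimal userspace sketch (not the article's kernel-driver approach): listing the physical "System RAM" ranges from /proc/iomem on Linux, which is the map a memory-acquisition tool walks when dumping physical memory; reading the actual address values requires root:

```python
# Parse /proc/iomem and print the physical address ranges marked "System RAM".
def system_ram_ranges(path="/proc/iomem"):
    ranges = []
    with open(path) as f:
        for line in f:
            if line.strip().endswith("System RAM"):
                span = line.split(":")[0].strip()      # e.g. "00100000-3fffffff"
                start, end = (int(x, 16) for x in span.split("-"))
                ranges.append((start, end))
    return ranges

for start, end in system_ram_ranges():
    print(f"{start:#014x}-{end:#014x}  ({(end - start + 1) >> 20} MiB)")
```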
Procedia PDF Downloads 121
3219 The Relationship of the Marketing Mix, Brand Image and Consumer Behavior of the Low-Cost Airline Service
Authors: Bundit Pungnirund
Abstract:
This research aimed to investigate the relationship between attitudes towards the marketing mix, brand image and the consumer behavior of passengers of low-cost airline services. The study employed quantitative research, and a questionnaire was used to collect data from a sample of 400 passengers who have used the low-cost airline services based in Bangkok, Thailand. Descriptive statistics and Pearson's correlation analysis were used to analyze the data. The research results revealed that attitudes towards the marketing mix of the low-cost airline services, including product, price, place, promotion and process, were related to consumer behavior in terms of duration of service and frequency of service, while the brand image of the low-cost airline, including the characteristics of the organization, service quality and company identity, was related to consumer behavior in terms of duration of service, frequency of service and cost of service at statistically significant levels.
Keywords: brand image, consumer behavior, low-cost airline, marketing mix
Procedia PDF Downloads 318