Search results for: hyperspectral image classification using tree search algorithm
8578 A User Interface for Easiest Way Image Encryption with Chaos
Authors: D. López-Mancilla, J. M. Roblero-Villa
Abstract:
Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach for signal encoding that differs from conventional methods, which use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need to secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Chaos-based encryption offers a new and efficient way to achieve fast and highly secure image encryption. In this work, a user interface for image encryption and a novel, simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector is hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is re-formed. The security of the encryption is analyzed statistically, and an optimization stage is used to improve the encryption security while still allowing the image to be recovered accurately. The user interface runs the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device and to encrypt it in one of three modes, given by three different chaotic systems the user can choose from. Once the image is encrypted, the security analysis can be inspected and the result saved to the hard disk. The main results of this study show that this simple encryption method, using the optimization stage, achieves an encryption security competitive with the more complicated methods used in other works. In addition, the user interface allows an image encrypted with chaos to be submitted through any public communication channel, including the internet.
Keywords: image encryption, chaos, secure communications, user interface
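The abstract does not spell out the exact rule for combining the image vector with the chaotic vector, so the following is a minimal sketch of the general scheme under one common choice: a logistic-map keystream mixed into the flattened image by XOR, which is trivially reversible. The map parameters serve as the secret key; all names and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def logistic_sequence(n, x0=0.631, r=3.99):
    """Keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        seq[k] = x
    return seq

def encrypt(image, key=(0.631, 3.99)):
    """Reshape the image to a 1-D vector and hide it in a chaotic keystream."""
    flat = image.astype(np.uint8).ravel()
    chaos = (logistic_sequence(flat.size, *key) * 255).astype(np.uint8)
    cipher = np.bitwise_xor(flat, chaos)      # reversible combination
    return cipher.reshape(image.shape)        # back to the original dimensions

def decrypt(cipher, key=(0.631, 3.99)):
    return encrypt(cipher, key)               # XOR is its own inverse

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert np.array_equal(img, decrypt(encrypt(img)))   # lossless recovery
```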
Procedia PDF Downloads 493
8577 An Automated R-Peak Detection Method Using Common Vector Approach
Authors: Ali Kirkbas
Abstract:
R-peaks in an electrocardiogram (ECG) are signs of cardiac activity that reveal valuable information about cardiac abnormalities, which can lead to mortality in some cases. This paper examines the problem of detecting R-peaks in ECG signals, which is in fact a two-class pattern classification problem. To handle this problem with reliably high accuracy, we propose to use the common vector approach, a successful machine learning algorithm. The dataset used in the proposed method is the publicly available MIT-BIH database. The results are compared with other popular methods under standard performance metrics. The obtained results show that the proposed method performs better than the compared methods in terms of diagnostic accuracy and simplicity, and it can be operated on wearable devices.
Keywords: ECG, R-peak classification, common vector approach, machine learning
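The abstract does not detail the common vector computation, but the standard formulation is well known: each class's common vector is the component of any training sample lying in the orthogonal complement of the class's difference subspace. A minimal sketch, assuming fixed-length feature vectors have already been extracted from the ECG and that each class has fewer samples than feature dimensions:

```python
import numpy as np

def class_model(samples):
    """Common vector of one class: project a sample onto the orthogonal
    complement of the difference subspace spanned by (a_i - a_1)."""
    diffs = samples[1:] - samples[0]
    u, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    basis = u[:, s > 1e-10]                  # orthonormal difference subspace
    common = samples[0] - basis @ (basis.T @ samples[0])
    return basis, common

def classify(x, models):
    """Assign x to the class whose common vector is closest to the
    component of x outside that class's difference subspace."""
    scores = []
    for basis, common in models:
        residual = x - basis @ (basis.T @ x)
        scores.append(np.linalg.norm(residual - common))
    return int(np.argmin(scores))

# models = [class_model(non_peak_samples), class_model(peak_samples)]
# label = classify(test_vector, models)     # 0 = non-peak, 1 = R-peak
```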
Procedia PDF Downloads 65
8576 WSN System Warns Atta Cephalotes Climbing in Mango Fruit Trees
Authors: Federico Hahn Schlam, Fermín Martínez Solís
Abstract:
Leaf-cutting ants (Atta cephalotes) forage mango tree leaves and flowers to feed their colony. Farmers find it difficult to control the ants due to the great number of trees grown in commercial orchards. IoT can support farmers with real-time ant detection, as production losses can reach an estimated 324 USD per tree. A wireless sensor network (WSN) was developed to warn the farmer of ant presence in the trees during the night. Mango trees were gathered into groups of nine, where the central tree holds the master microcontroller and the other eight trees hold slave microcontrollers (nodes). At each node, an emitter diode-photodiode unit detects ants climbing up; a capacitor is charged and discharged after being sampled every ten minutes. The nodes communicate with the master microcontroller via BLE (Bluetooth Low Energy). When ants were detected, the number of the tree was transmitted via LoRa from the master to the producer's smartphone as a warning. In this paper, BLE, LoRa, and energy consumption were studied under variable vegetation in the orchard. During 2018, 19 trees were attacked by ants, which fed on 26.3% of flowers and 73.7% of leaves.
Keywords: BLE, Atta cephalotes, LoRa, WSN-smartphone, energy consumption
Procedia PDF Downloads 159
8575 Active Contours for Image Segmentation Based on Complex Domain Approach
Authors: Sajid Hussain
Abstract:
A complex-domain approach for active-contour image segmentation is presented, in which the contour deforms step by step to partition an image into numerous expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour, and the contour stops accurately on the exact edges of the object. The proposed model makes the level set function binary and uses a Gaussian smoothing kernel to regularize it and avoid the re-initialization procedure. The working principle of the proposed model is as follows: the real image data are transformed into complex data by adding i (iota) times the image data, and the average of i times the horizontal and vertical components of the image gradient is inserted into the proposed model to capture the complex gradient of the image data. A simple finite-difference scheme has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models.
Keywords: image segmentation, active contour, level set, Mumford and Shah model
Procedia PDF Downloads 114
8574 Endocardial Ultrasound Segmentation Using Level Set Method
Authors: Daoudi Abdelaziz, Mahmoudi Saïd, Chikh Mohamed Amine
Abstract:
This paper presents a fully automatic segmentation method for the left ventricle at end-systole (ES) and end-diastole (ED) in ultrasound images, by means of an implicit deformable model (level set) based on the geodesic active contour model. A pre-processing Gaussian smoothing stage is applied to the image, which is essential for a good segmentation. Before the segmentation phase, the area of the left ventricle is located automatically using a detection approach based on the Hough transform. The result is then used to automate the initialization of the level set model: this initial curve (zero level set) deforms to search for the endocardial border in the image. Finally, a quantitative evaluation was performed on a data set of 15 subjects, with comparison to ground truth (manual segmentation).
Keywords: level set method, Hough transform, Gaussian smoothing, left ventricle, ultrasound images
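The paper's pipeline is only named stage by stage, so the following is a minimal sketch of the same idea using scikit-image: Gaussian smoothing, a circular Hough transform to locate the ventricle, and a morphological geodesic active contour initialized from the detected circle. The radius range, smoothing parameters, and the use of a circular Hough variant are assumptions, not the authors' settings.

```python
import numpy as np
from skimage import draw, feature, filters, segmentation, transform

def segment_lv(frame, radii=np.arange(20, 60, 2)):
    """Locate the left ventricle with a circular Hough transform, then
    refine the boundary with a geodesic active contour (level set)."""
    smooth = filters.gaussian(frame, sigma=2)            # pre-processing stage
    edges = feature.canny(smooth)
    hough = transform.hough_circle(edges, radii)
    _, cx, cy, r = transform.hough_circle_peaks(hough, radii, total_num_peaks=1)

    init = np.zeros(frame.shape, dtype=np.int8)          # zero level set seed
    rr, cc = draw.disk((cy[0], cx[0]), r[0], shape=frame.shape)
    init[rr, cc] = 1

    gimage = segmentation.inverse_gaussian_gradient(smooth)
    return segmentation.morphological_geodesic_active_contour(
        gimage, 200, init_level_set=init, smoothing=2, balloon=-1)
```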
Procedia PDF Downloads 465
8573 Structural Analysis of Kamaluddin Behzad's Works Based on Roland Barthes' Theory of Communication, 'Text and Image'
Authors: Mahsa Khani Oushani, Mohammad Kazem Hasanvand
Abstract:
Text and image have always been two important components of Iranian page layout, and the interactive connection between them has shaped the art of book design in multiple patterns. In this research, the structure and visual elements of the research data were analyzed first; then the position of the text element and the image element in relation to each other was studied based on Roland Barthes' theory of text and image, and the results were compared and interpreted. The purpose of this study is to investigate the pattern of text and image in the works of Kamaluddin Behzad based on three of Roland Barthes' communication theories: 1. descriptive communication, 2. reference communication, 3. matched communication. The questions of this research are: what is the relationship between text and image in Behzad's works, and how is it defined according to Barthes' theory? The research takes a structuralist approach with a descriptive-analytical method; the information was collected from documentary (library) sources and online databases. Findings show that the dominant element in Behzad's drawings is the image, which creates a reference relationship in the layout of the drawings; in some cases, however, a different relationship is achieved in which, despite the prominence of the image on the page, the text is dispersed proportionally across the page and plays a more active role within the image. Text and image then support each other equally on the page, a connection Barthes describes as matched.
Keywords: text, image, Kamaluddin Behzad, Roland Barthes, communication theory
Procedia PDF Downloads 193
8572 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, while the stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled within a limited range. The distortion is non-linear, particularly in a complex imaging acquisition system, so the distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate with conventional models in complex imaging acquisition systems where microscopes and other complex lenses are involved, and the errors of the distortion correction propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior process to convert the distorted coordinates to ideal positions, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
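The fitting procedure is not detailed in the abstract; one straightforward way to realize a 2D B-spline mapping from distorted to ideal coordinates is to fit a bivariate smoothing spline per output coordinate from calibration-target correspondences, as sketched below with SciPy. The spline degree and the use of SmoothBivariateSpline are assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

def fit_undistortion(xd, yd, xi, yi, k=3):
    """Fit 2-D B-spline mapping functions from distorted calibration-target
    coordinates (xd, yd) to their ideal, distortion-free positions (xi, yi).
    Needs comfortably more correspondences than (k+1)**2 control points."""
    fx = SmoothBivariateSpline(xd, yd, xi, kx=k, ky=k)
    fy = SmoothBivariateSpline(xd, yd, yi, kx=k, ky=k)
    return fx, fy

def undistort(fx, fy, x, y):
    """Map measured image points onto the ideal (pin-hole) plane."""
    return fx.ev(x, y), fy.ev(x, y)
```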
Procedia PDF Downloads 500
8571 A Pilot Study of Influences of Scan Speed on Image Quality for Digital Tomosynthesis
Authors: Li-Ting Huang, Yu-Hsiang Shen, Cing-Ciao Ke, Sheng-Pin Tseng, Fan-Pin Tseng, Yu-Ching Ni, Chia-Yu Lin
Abstract:
Chest radiography is the most common technique for the diagnosis and follow-up of pulmonary diseases. However, lesions superimposed on normal structures are difficult to detect in chest radiography. Chest tomosynthesis is a relatively new technique that obtains 3D section images from a set of low-dose projections acquired over a limited angular range. It has limitations, however: patients undergoing tomosynthesis have to be able to hold their breath firmly for 10 seconds. A digital tomosynthesis system with an advanced reconstruction algorithm and a high-stability motion mechanism was developed by our research group, and the system is expected to perform a bidirectional chest scan within 10 seconds. The purpose of this study is to understand the influence of scan speed on image quality for our digital tomosynthesis system. The major factors that lead to image blurring are the motion of the X-ray source and the motion of the patient. For the former, a chest phantom was imaged at three different scan speeds (6 cm/s, 8 cm/s, and 15 cm/s) to understand the influence of scan speed on image quality. For the latter, a normal SD (Sprague-Dawley) rat was imaged alive and after sacrifice to assess the impact of breathing motion on image quality. In both experiments, the profiles of the ROIs (regions of interest) and the CNRs (contrast-to-noise ratios) of the ROIs relative to normal tissue were examined in the reconstructed images to quantify the degradation of image quality. The preliminary results show no obvious degradation of image quality with increasing scan speed, possibly due to the advanced hardware and software designs of the system. This implies that a higher speed (15 cm/s) than that of the commercialized tomosynthesis system (12 cm/s) is achieved with the proposed system, and therefore a complete chest scan within 10 seconds is expected.
Keywords: chest radiography, digital tomosynthesis, image quality, scan speed
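The abstract does not state which CNR definition was used; a common one, sketched below, compares the mean ROI signal with the mean and standard deviation of a normal-tissue background region. The study's exact definition may differ.

```python
import numpy as np

def cnr(image, roi_mask, background_mask):
    """Contrast-to-noise ratio of a lesion ROI against normal tissue:
    CNR = |mean_roi - mean_bg| / std_bg."""
    roi = image[roi_mask]
    bg = image[background_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()
```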
Procedia PDF Downloads 332
8570 Crater Detection Using PCA from Captured CMOS Camera Data
Authors: Tatsuya Takino, Izuru Nomura, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method for detecting craters in images of the lunar surface. This proposal assumes application within the SLIM (Smart Lander for Investigating Moon) working group, which aims at pinpoint landing on the lunar surface for scientific investigation. It is difficult to equip a small space probe with a high-performance computer, so it is necessary to use a small computer with dedicated hardware such as an FPGA. We have studied crater detection using principal component analysis (PCA). In this paper, we implement the detection algorithm on the FPGA and perform the detection on data captured from the CMOS camera.
Keywords: crater detection, PCA, FPGA, image processing
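The abstract does not describe how PCA is applied, so the sketch below shows one common pattern for PCA-based detection: learn an "eigen-crater" subspace from labeled crater patches and score candidate windows by reconstruction error. The subspace size and threshold are placeholders, and the FPGA port is of course not shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_crater_subspace(crater_patches, n_components=8):
    """Learn a low-dimensional 'eigen-crater' subspace from training patches
    (each patch flattened to a vector)."""
    X = np.asarray([p.ravel() for p in crater_patches], dtype=float)
    return PCA(n_components=n_components).fit(X)

def is_crater(patch, pca, threshold):
    """Score a candidate patch by its reconstruction error in the crater
    subspace; a small error means a crater-like appearance."""
    v = patch.ravel().astype(float)[None, :]
    recon = pca.inverse_transform(pca.transform(v))
    return np.linalg.norm(v - recon) < threshold
```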
Procedia PDF Downloads 550
8569 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses, and different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of the total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to good fat segmentation, making the visual quantification of the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
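As a rough illustration of the image-analysis side (not the authors' multi-stage algorithm), the sketch below converts an RGB scan to grey scale, separates the slice from the scanner background, thresholds the brighter fat pixels with Otsu's method, and computes a Canny edge map that could refine the fat-region borders. The background cutoff and Canny sigma are assumptions.

```python
import numpy as np
from skimage import color, feature, filters, io

def fat_fractions(path):
    """Estimate total fat area as a percentage of the slice area from an
    RGB scan of a dry-cured ham slice."""
    gray = color.rgb2gray(io.imread(path))          # RGB -> grey scale
    slice_mask = gray > 0.05                        # drop the dark scanner background
    fat_mask = (gray > filters.threshold_otsu(gray[slice_mask])) & slice_mask
    edges = feature.canny(gray, sigma=2)            # edge map to refine borders
    total_fat_pct = 100.0 * fat_mask.sum() / slice_mask.sum()
    return total_fat_pct, fat_mask, edges
```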
Procedia PDF Downloads 177
8568 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis
Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin
Abstract:
Diagnosing male infertility through laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods (naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier) in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We have shown that using fuzzy c-means clustering as a classifier gives good performance according to the ROC curves, comparable to other classification methods such as logistic regression.
Keywords: classification, fuzzy c-means, logistic regression, naive Bayesian, neural network, ROC curve
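Fuzzy c-means itself is unsupervised; to use it as a classifier, one typically labels each learned cluster by the majority class of its training members and then assigns a test sample to the class of its highest-membership cluster. A minimal self-contained sketch of the standard FCM updates, not the paper's exact setup:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the fuzzy
    membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        U = 1.0 / (d ** (2 / (m - 1)))      # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# As a classifier: label each cluster by the majority class of its training
# members, then assign test samples to their highest-membership cluster.
```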
Procedia PDF Downloads 340
8567 Lossless Secret Image Sharing Based on Integer Discrete Cosine Transform
Authors: Li Li, Ahmed A. Abd El-Latif, Aya El-Fatyany, Mohamed Amin
Abstract:
This paper proposes a new secret image sharing method based on the integer discrete cosine transform (IntDCT). It first transforms the original image into the frequency domain (DCT coefficients) using the IntDCT, applied to each block of size 8×8. It then generates shares among the DCT coefficients at the same position in each block; that is, all the DC components are used to generate the DC shares, the i-th AC component of each block is used to generate the i-th AC shares, and so on. The DC and AC share components with the same index are combined to generate the DCT shadows. Experimental results and analyses show that the proposed method can recover the original image losslessly, unlike methods based on the traditional DCT, and is more sensitive to tiny changes in both the coefficients and the content of the image.
Keywords: secret image sharing, integer DCT, lossless recovery, sensitivity
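The sketch below shows only the coefficient-grouping step described above: block the image, apply a 2-D DCT per block (SciPy's floating-point DCT standing in for the paper's integer DCT), and gather same-position coefficients across blocks. These groups are then the inputs to the share-generation step (e.g., polynomial secret sharing), which is omitted here.

```python
import numpy as np
from scipy.fft import dctn

def block_dct(image, b=8):
    """Split the image into b x b blocks and apply a 2-D DCT to each block;
    returns an array of shape (n_blocks, b, b) of coefficients."""
    h, w = image.shape
    blocks = image[:h - h % b, :w - w % b].reshape(h // b, b, -1, b).swapaxes(1, 2)
    blocks = blocks.reshape(-1, b, b).astype(float)
    return np.stack([dctn(blk, norm="ortho") for blk in blocks])

coeffs = block_dct(np.random.randint(0, 256, (64, 64)))
dc_components = coeffs[:, 0, 0]   # all DC terms -> input to the DC shares
first_ac = coeffs[:, 0, 1]        # first AC position across blocks, and so on
```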
Procedia PDF Downloads 400
8566 A New 3D Shape Descriptor Based on Multi-Resolution and Multi-Block CS-LBP
Authors: Nihad Karim Chowdhury, Mohammad Sanaullah Chowdhury, Muhammed Jamshed Alam Patwary, Rubel Biswas
Abstract:
In content-based 3D shape retrieval systems, achieving high search performance has become an important research problem. A challenging aspect of this problem is finding an effective shape descriptor that can adequately discriminate similar shapes. To address this problem, we propose a new descriptor for 3D shape models that combines multi-resolution analysis with a multi-block center-symmetric local binary pattern (CS-LBP) operator. Given an arbitrary 3D shape, we first apply pose normalization and generate a set of multi-view 2D rendered images. Second, we apply a Gaussian multi-resolution filter to generate several image levels from each 2D rendered image. Then, overlapped sub-images are computed for each level of a multi-resolution image. Our multi-block CS-LBP comes next: it allows the center to be composed of m-by-n rectangular pixels instead of a single pixel. This process is repeated for all the 2D rendered images, derived from both depth-buffer and silhouette rendering. Finally, we concatenate all the feature vectors into a single one-dimensional histogram as our proposed 3D shape descriptor. Through several experiments on a benchmark dataset, we demonstrate that our proposed 3D shape descriptor outperforms previous methods.
Keywords: 3D shape retrieval, 3D shape descriptor, CS-LBP, overlapped sub-images
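The basic (single-pixel-center) CS-LBP operator compares the four center-symmetric pairs among a pixel's eight neighbours, yielding a 4-bit code per pixel whose histogram forms a block's feature vector; the multi-block variant in the paper replaces the single center with an m-by-n region. A sketch of the basic operator, with the threshold t an assumption:

```python
import numpy as np

def cs_lbp(gray, t=0.01):
    """Center-symmetric LBP with 8 neighbours: compare the 4 center-symmetric
    pixel pairs; each comparison contributes one bit, giving codes 0..15."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                       # valid interior region
    pairs = [
        (g[:-2, 1:-1], g[2:, 1:-1]),        # N  vs S
        (g[:-2, 2:],   g[2:, :-2]),         # NE vs SW
        (g[1:-1, 2:],  g[1:-1, :-2]),       # E  vs W
        (g[2:, 2:],    g[:-2, :-2]),        # SE vs NW
    ]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= (a - b > t).astype(np.uint8) << bit
    return code

def block_histogram(code, bins=16):
    h, _ = np.histogram(code, bins=bins, range=(0, bins))
    return h / h.sum()   # one block's feature vector; blocks are concatenated
```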
Procedia PDF Downloads 448
8565 Graphical Theoretical Construction of Discrete time Share Price Paths from Matroid
Authors: Min Wang, Sergey Utev
Abstract:
The lessons of the 2007-09 global financial crisis have driven scientific research toward new methodologies and financial models for the global market, and the quantum mechanics approach has been introduced into the modeling of the unpredictable stock market. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on a path-dependent calculation method motivated by the Feynman path integral method. Path calculation can be studied in two ways: by labeling or computationally. Labeling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graphical-theoretical construction of individual share price paths via matroids is presented. First, the theory of matroids and the relationship between lattice path matroids and Tutte polynomials are reviewed, and ways to connect points in a lattice path matroid with Tutte polynomials are suggested. Second, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, it is shown that share price paths can be represented via general binary trees, and an algorithm is developed to construct share price paths from them. A relationship is also provided between lattice integer points and Tutte polynomials of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial
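The matroid-based construction itself is beyond a short sketch, but the end product (discrete-time share price paths read off a binary tree of up/down moves) can be illustrated with a plain binomial enumeration. The up/down factors and step count below are arbitrary placeholders, not values from the paper.

```python
from itertools import product

def share_price_paths(s0, u, d, n):
    """Enumerate all discrete-time share-price paths of length n in a
    binomial (up/down) model; each path is one root-to-leaf route in a
    binary tree of price moves."""
    paths = []
    for moves in product((u, d), repeat=n):
        prices = [s0]
        for m in moves:
            prices.append(prices[-1] * m)
        paths.append(prices)
    return paths

# 2**3 = 8 paths for a 3-step tree with 10% up / down moves
for p in share_price_paths(100.0, 1.1, 0.9, 3):
    print([round(x, 2) for x in p])
```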
Procedia PDF Downloads 140
8564 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators
Authors: Wei Ji
Abstract:
This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawl in recent decades. The remote sensing of land cover changes in this area spanned four decades, from 1972 through 2010. The project was implemented in two stages: the first stage focused on detecting long-term trends of urban land cover change, while the second examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first stage, six Landsat images were used, with a time interval of about five years, for the period from 1972 through 2001. Four major land cover types (built-up land, forestland, non-forest vegetation land, and surface water) were mapped using supervised image classification techniques. The study found that over these three decades the built-up lands in the study area more than doubled, mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing the coupled effects of human impact and precipitation variation on them. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely defined urban wetlands) as a new indicator for remote sensing detection and examined whether urban wetscape dynamics were a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected by a traditional image classification based on their spectral differentiability. Three SPOT images, for the years 1992, 2008, and 2010, were classified with this technique to generate the four land cover types described above. The spatial analyses of remotely sensed wetscape changes were implemented at the metropolitan, watershed, and sub-watershed scales, as well as by the size of surface water bodies, in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude across the metropolitan, watershed, and sub-watershed scales in response to human impacts at different scales. The study also found that increased precipitation in the region over the past decades swelled the larger wetlands in particular, while the smaller wetlands generally shrank, mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes; as such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.
Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis
Procedia PDF Downloads 309
8563 Hybrid Reliability-Similarity-Based Approach for Supervised Machine Learning
Authors: Walid Cherif
Abstract:
Data mining has seen big advances over recent years because of the spread of the internet, which generates a tremendous volume of data every day, and because of the immense advances in technologies that facilitate the analysis of these data. In particular, classification techniques are a subdomain of data mining that determines the group to which each data instance belongs within a given dataset. They are used to classify data into different classes according to desired criteria. Generally, a classification technique is either statistical or machine-learning based, and each type has its own limits. Nowadays, data are becoming increasingly heterogeneous; consequently, current classification techniques encounter many difficulties. This paper defines new measure functions to quantify the resemblance between instances and then combines them in a new approach that differs from existing algorithms in its reliability computations. Results of the proposed approach exceeded most common classification techniques, with an F-measure above 97% on the Iris dataset.
Keywords: data mining, knowledge discovery, machine learning, similarity measurement, supervised classification
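The paper's specific measure functions and reliability computation are not given in the abstract, so the sketch below substitutes a generic similarity-based classifier on the same Iris data: each test sample is assigned to the class with the highest mean feature-wise similarity to its training instances. It is a stand-in for the idea, not the proposed algorithm.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def similarity(a, b, scale):
    """A simple resemblance measure: feature-wise closeness in (0, 1]."""
    return np.mean(1.0 / (1.0 + np.abs(a - b) / scale))

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scale = Xtr.std(axis=0)

preds = []
for x in Xte:
    sims = np.array([similarity(x, t, scale) for t in Xtr])
    # pick the class with the highest mean similarity to x
    preds.append(max(set(ytr), key=lambda c: sims[ytr == c].mean()))

print(f1_score(yte, preds, average="macro"))
```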
Procedia PDF Downloads 465
8562 Decision-Tree-Based Foot Disorders Classification Using Demographic Variable
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi
Abstract:
Background: Due to the essential role of the foot in movement, foot disorders (FDs) have significant impacts on activity and quality of life. Many studies have confirmed the association between FDs and demographic characteristics. On the other hand, recent advances in data collection and statistical analysis have led to an increase in the volume of databases, and analysis of patient data through decision trees can be used to explore the relationship between demographic characteristics and FDs. Significance of the study: This study aimed to investigate the relationship between demographic characteristics and common FDs and, to better inform foot interventions, to classify FDs based on demographic variables. Methodology: We analyzed 2323 subjects with pes planus (PP), pes cavus (PC), hallux valgus (HV), and plantar fasciitis (PF) who were referred to a foot therapy clinic between 2015 and 2021. Subjects had to fulfill the following inclusion criteria: (1) weight between 14 and 150 kg, (2) height between 30 and 220 cm, (3) age between 3 and 100 years, and (4) BMI between 12 and 35. The medical archives of the 2323 subjects were recorded retrospectively, and all subjects were examined by an experienced physician. Age and BMI were classified into five and four groups, respectively. 80% of the data were randomly selected as training data and 20% as test data, and we built a decision tree model to classify FDs using demographic characteristics. Findings: Of the 2323 people referred to the clinic with FDs, 981 (41.9%) were diagnosed with PP, 657 (28.2%) with PC, 628 (27%) with HV, and 213 (9%) with PF. The results revealed that the prevalence of PP decreases in people over 18 years of age and in children over 7 years. In adults, the prevalence depends first on BMI and then on gender: about 10% of adults and 81% of children with low BMI have PP, and there is no relationship between gender and PP. PC depends more on age and gender: in children under 7 years, the prevalence was twice as high in girls (10%) as in boys (5%), and in adults over 18 years it was slightly higher in men (62% vs. 57%). HV increased with age in women and decreased in men. Aging and obesity increased the prevalence of PF. We conclude that the accuracy of our approach is sufficient for most research applications in FDs. Conclusion: The increased prevalence of PP in children is probably due to the formation of the arch of the foot at this age, and an increasing BMI, by applying high pressure to the foot, can increase the prevalence of this disorder. In PC, the increasing prevalence in men with age may be due to genetics and an innate susceptibility of men to this disorder. HV is more common in adult women, which may be due to environmental factors such as footwear, and the prevalence of PF in obese adult women may likewise be due to higher foot pressure and housekeeping activities.
Keywords: decision tree, demographic characteristics, foot disorders, machine learning
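A minimal sketch of such a decision tree on tabular demographic data, using scikit-learn; the file name, column names, and tree depth are placeholders, not the study's actual data or settings.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# hypothetical columns: age_group, bmi_group, gender, disorder (PP/PC/HV/PF)
df = pd.read_csv("foot_disorders.csv")
X = pd.get_dummies(df[["age_group", "bmi_group", "gender"]])
y = df["disorder"]

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(Xtr, ytr)

print(f"test accuracy: {tree.score(Xte, yte):.2f}")
print(export_text(tree, feature_names=list(X.columns)))  # readable split rules
```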
Procedia PDF Downloads 263
8561 Presenting a Job Scheduling Algorithm Based on Learning Automata in Computational Grid
Authors: Roshanak Khodabakhsh Jolfaei, Javad Akbari Torkestani
Abstract:
As a cooperative environment for problem-solving, Grids must develop efficient job scheduling patterns with regard to their goals, domains, and structure. Since Grid environments facilitate distributed calculations, job scheduling is a critical problem for the management of Grid resources, one that severely influences the efficiency of the whole Grid environment. Due to characteristics of the Grid such as resource dynamicity and network conditions, a scheduling algorithm should be adjustable and scalable as the network grows. For this purpose, this paper presents a job scheduling algorithm for computational Grids based on learning automata, whose performance is compared with the FPSO (Fuzzy Particle Swarm Optimization) and GJS (Grid Job Scheduling) algorithms. The obtained numerical results indicate the superiority of the suggested algorithm over FPSO and GJS, ranking FPSO and GJS second and third, respectively.
Keywords: computational grid, job scheduling, learning automata, dynamic scheduling
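The abstract does not specify the automaton's update scheme; a common choice is the linear reward-inaction (L_R-I) rule sketched below, where each action is a candidate grid node and the action probabilities shift toward nodes whose scheduling outcomes are rewarded. The reward condition in the demo is a toy assumption.

```python
import numpy as np

class LearningAutomaton:
    """Linear reward-inaction (L_R-I) automaton: each action is a candidate
    grid node; probabilities rise for nodes whose job completions are rewarded."""
    def __init__(self, n_actions, lr=0.1, seed=0):
        self.p = np.full(n_actions, 1.0 / n_actions)
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def choose(self):
        return self.rng.choice(len(self.p), p=self.p)

    def reward(self, action):
        # shift probability mass toward the rewarded action; on penalty, do nothing
        self.p *= (1.0 - self.lr)
        self.p[action] += self.lr

la = LearningAutomaton(n_actions=4)
for _ in range(200):
    node = la.choose()
    if node == 2:          # pretend node 2 finishes jobs fastest
        la.reward(node)
print(la.p)                # probability concentrates on node 2
```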
Procedia PDF Downloads 344
8560 The Artificial Intelligence Technologies Used in PhotoMath Application
Authors: Tala Toonsi, Marah Alagha, Lina Alnowaiser, Hala Rajab
Abstract:
This report is about the Photomath app, an AI application that uses image recognition technology, specifically optical character recognition (OCR) algorithms. The OCR algorithm translates the image into a mathematical equation, and the app automatically provides a step-by-step solution. The application supports decimals, basic arithmetic, fractions, linear equations, and multiple functions such as logarithms. Testing was conducted to examine the usage of this app; results were collected by surveying ten participants and then analyzed. This paper seeks to answer the question: how accurate are the artificial intelligence features of this app, and how fast is its processing? It is hoped this study will inform users about the efficiency of the AI in Photomath.
Keywords: Photomath, image recognition, app, OCR, artificial intelligence, mathematical equations
Procedia PDF Downloads 172
8559 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for determining the particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model, in which the reflectance is a function of the optical properties of the atmosphere that can be related to its concentrations. PM10 concentrations were measured using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI image date, and the measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands of the Landsat 8 OLI image (coastal aerosol, blue, green, and red) were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and the root-mean-square error (RMSE) against the PM10 ground measurement data; the proposed multispectral model was chosen for its highest R and lowest RMSE. The outcomes of this research show that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
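The abstract does not give the functional form of the empirical model; a common choice is a multiple linear regression of PM10 on the band reflectances, fitted by least squares and scored with R and RMSE, as sketched below. The linear form is an assumption.

```python
import numpy as np

def fit_pm10_model(reflectance, pm10):
    """Least-squares fit of PM10 = a0 + a1*B1 + ... + a4*B4 over the ground
    stations; reflectance has one row per station, one column per band."""
    A = np.column_stack([np.ones(len(pm10)), reflectance])
    coeffs, *_ = np.linalg.lstsq(A, pm10, rcond=None)
    predicted = A @ coeffs
    r = np.corrcoef(predicted, pm10)[0, 1]
    rmse = np.sqrt(np.mean((predicted - pm10) ** 2))
    return coeffs, r, rmse
```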
Procedia PDF Downloads 442
8558 Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification
Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh
Abstract:
The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach that classifies an EEG signal into two classes: epileptic seizure or not. We start by extracting features with the Discrete Wavelet Transform (DWT), which decomposes the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a Support Vector Machine (SVM). The experiments are performed on a real, standard dataset, and a very high level of classification accuracy is obtained.
Keywords: discrete wavelet transform, electroencephalogram, pattern recognition, principal component analysis, support vector machine
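A compact sketch of the DWT-then-PCA-then-SVM pipeline using PyWavelets and scikit-learn; the wavelet (db4), decomposition level, the particular sub-band statistics, and the PCA dimension are assumptions, since the abstract does not fix them.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (mean, std, energy) from each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                      # approximation + detail coefficients
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]
    return feats

def build_classifier(X_raw, y):
    """X_raw: (n_epochs, n_samples) EEG segments; y: seizure labels."""
    X = np.array([dwt_features(s) for s in X_raw])
    clf = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
    return clf.fit(X, y)
```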
Procedia PDF Downloads 639
8557 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in DTM construction; at present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, Airborne Light Detection and Ranging (LiDAR) in particular has seen increased use in DTM applications because of its advantages: LiDAR creates a 3D point cloud by acquiring a vast number of point measurements. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25, and 5% of the original data set. Using the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
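A sketch of the evaluation loop: randomly thin the cloud, interpolate a surface from the subset, and measure the RMSE at the original points. For brevity, SciPy's linear interpolation stands in for the Kriging used in the study.

```python
import numpy as np
from scipy.interpolate import griddata

def random_reduce(points, fraction, seed=0):
    """Keep a random subset of the (x, y, z) point cloud rows."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=int(len(points) * fraction), replace=False)
    return points[idx]

def dtm_rmse(full_cloud, fraction):
    """Interpolate elevations from the reduced cloud at all original (x, y)
    locations and report the RMSE against the full cloud's elevations."""
    reduced = random_reduce(full_cloud, fraction)
    z_hat = griddata(reduced[:, :2], reduced[:, 2],
                     full_cloud[:, :2], method="linear")
    ok = ~np.isnan(z_hat)                 # drop points outside the convex hull
    return np.sqrt(np.mean((z_hat[ok] - full_cloud[ok, 2]) ** 2))
```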
Procedia PDF Downloads 158
8556 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses that contains the object of classification with a probability of not less than a specified value; the classification is thus represented by a set of hypothetical classes. In this case, the smaller the cardinality of the discrete set of hypothetical classes, the higher the classification accuracy. Experiments have shown that increasing the cardinality of the classifier ensemble reduces the cardinality of this set of hypothetical classes. The problem of guaranteed accuracy estimation for an ensemble of Lipschitz classifiers is relevant to the multichannel classification of target events in C-OTDR monitoring systems, and results of the practical use of the suggested approach for accuracy control in such systems are presented.
Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble
Procedia PDF Downloads 493
8555 Heuristic of Style Transfer for Real-Time Detection or Classification of Weather Conditions from Camera Images
Authors: Hamed Ouattara, Pierre Duthon, Frédéric Bernardin, Omar Ait Aider, Pascal Salmane
Abstract:
In this article, we present three neural network architectures for real-time classification of weather conditions (sunny, rainy, snowy, foggy) from images. Inspired by recent advances in style transfer, two of these architectures, Truncated ResNet50 and Truncated ResNet50 with Gram Matrix and Attention, surpass the state of the art and demonstrate remarkable generalization capability on several public databases, including Kaggle (2000 images), Kaggle (850 images), MWI (1996 images) [1], and Image2Weather [2]. Although developed for weather detection, these architectures are also suitable for other appearance-based classification tasks, such as animal species recognition, texture classification, disease detection in medical images, and industrial defect identification. We illustrate these applications in the section “Applications of Our Models to Other Tasks” with the SIIM-ISIC Melanoma Classification Challenge 2020 [3].
Keywords: weather simulation, weather measurement, weather classification, weather detection, style transfer, Pix2Pix, CycleGAN, CUT, neural style transfer
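The architectural details are not given in the abstract; the sketch below shows the general idea behind a "truncated ResNet50 with Gram matrix" classifier as used in style-transfer work: keep only the shallow layers, summarize their feature maps with a Gram matrix (second-order texture statistics), and classify that. The truncation point and classification head are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
from torchvision import models

class TruncatedResNet50(nn.Module):
    """ResNet50 cut after an early stage; style-like statistics of the
    shallow features are summarized with a Gram matrix, then classified."""
    def __init__(self, n_classes=4):
        super().__init__()
        backbone = models.resnet50(weights=None)    # load pretrained weights if desired
        # keep stem + layer1 only (truncation point is an assumption)
        self.features = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu,
            backbone.maxpool, backbone.layer1)      # -> (B, 256, H, W)
        self.fc = nn.Linear(256 * 256, n_classes)

    def forward(self, x):
        f = self.features(x)
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        gram = torch.bmm(f, f.transpose(1, 2)) / (c * h * w)   # (B, 256, 256)
        return self.fc(gram.flatten(1))

model = TruncatedResNet50()
logits = model(torch.randn(2, 3, 224, 224))   # 2 images -> 4 weather classes
```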
Procedia PDF Downloads 13
8554 Land Use Change Detection Using Remote Sensing and GIS
Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi
Abstract:
In recent decades, rapid and improper changes in land use have been associated with consequences such as natural resource degradation and environmental pollution, and detecting land-use changes is one of the tools for natural resource management and for assessing changes in ecosystems. The target of this research is to study the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was first evaluated, and various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image, and the images of each period were then classified into 9 classes using the supervised classification method with the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are an alarming sign for the future status of the Haraz basin: 27% of the area has changed, mainly through conversion of rangelands to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland, and residential areas.
Keywords: Haraz basin, change detection, land-use, satellite data
Procedia PDF Downloads 415
8553 Human Performance Evaluating of Advanced Cardiac Life Support Procedure Using Fault Tree and Bayesian Network
Authors: Shokoufeh Abrisham, Seyed Mahmoud Hossieni, Elham Pishbin
Abstract:
In this paper, a hybrid method based on fault tree analysis (FTA) and Bayesian networks (BNs) is employed to evaluate the quality of team performance in advanced cardiac life support (ACLS) procedures in the emergency department. Based on the American Heart Association (AHA) guidelines, a categorization of staff actions leading to clinical incidents, and discussions with emergency medicine experts, a fault tree model of the ACLS procedure is obtained from the human-performance perspective. The obtained FTA model is converted into BNs, and several scenarios are defined to demonstrate the efficiency and flexibility of the presented BN model. A sensitivity analysis is also conducted to indicate the effects of team leader presence and of uncertainty in expert knowledge on the quality of ACLS. The proposed BN-based model shows how the results of risk analysis in medical procedures can be brought closer to reality compared with results based on FTA alone.
Keywords: advanced cardiac life support, fault tree analysis, Bayesian belief networks, human performance, healthcare systems
Procedia PDF Downloads 148
8552 Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Authors: Z. Mortezaie, H. Hassanpour, S. Asadi Amiri
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by appending the scaled high-frequency components of the image to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper, in which the gain factor is computed for individual pixels of the image by considering the gradient variations. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
Keywords: unsharp masking, blur image, sub-region gradient, image enhancement
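The paper's exact per-pixel gain rule is not given in the abstract; the sketch below implements one plausible variant in which the gain is proportional to the normalized local gradient magnitude, so strong edges receive a larger boost than flat regions.

```python
import numpy as np
from scipy import ndimage

def adaptive_unsharp(image, sigma=2.0, base_gain=1.5):
    """Unsharp masking with a per-pixel gain: the scaled high-frequency
    component is added back, weighted by the local gradient magnitude."""
    img = image.astype(float)
    blurred = ndimage.gaussian_filter(img, sigma)
    highpass = img - blurred                        # high-frequency components
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    gain = base_gain * grad / (grad.max() + 1e-9)   # strong edges get more boost
    sharpened = img + gain * highpass
    return np.clip(sharpened, 0, 255).astype(image.dtype)
```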
Procedia PDF Downloads 216
8551 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line
Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez
Abstract:
Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system for an assembly line. The proposed shape-color recognition system is based on deep learning theory, using a specially designed convolutional network architecture. The methodology involves stages such as image capture, color filtering, location of object mass centers, detection of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device, and the proposal is used in an assembly course of the bachelor's degree in industrial engineering. The results presented include the recognition efficiency and the processing time.
Keywords: deep learning, image classification, image identification, industrial engineering
Procedia PDF Downloads 162
8550 Cloud Shield: Model to Secure User Data While Using Content Delivery Network Services
Authors: Rachna Jain, Sushila Madan, Bindu Garg
Abstract:
Cloud computing is the key powerhouse in numerous organizations due to the shifting of their data to the cloud environment. In recent years, cloud-based services have been used on a large scale for content storage, distribution, and processing. Various issues observed in the cloud computing environment need to be addressed, and security and privacy are the topmost concerns. In this paper, a novel security model is proposed to secure data by utilizing CDN services such as image-to-icon conversion. A CDN service is a content delivery service that converts, for example, an image to an icon, Word to PDF, or LaTeX to PDF. The presented model converts an image into an icon while keeping the image secret: security is imparted so that the image can be encrypted and decrypted by the data owners only. The paper also discusses how the server performs multiplication and selection on encrypted data without decryption. The data can be an image, Word, audio, or video file. Moreover, the proposed model is capable of multiplying images, encrypting them, and sending them to a server application for conversion. The prime objective is to encrypt an image and convert the encrypted image to an image icon by utilizing homomorphic encryption.
Keywords: cloud computing, user data security, homomorphic encryption, image multiplication, CDN service
Procedia PDF Downloads 336
8549 Breast Cancer Diagnosing Based on Online Sequential Extreme Learning Machine Approach
Authors: Musatafa Abbas Abbood Albadr, Masri Ayob, Sabrina Tiun, Fahad Taha Al-Dhief, Mohammad Kamrul Hasan
Abstract:
Breast cancer (BC) is considered one of the most frequent causes of cancer death in women between the ages of 40 and 55. BC is diagnosed using digital images of fine needle aspirates (FNA) of breast masses, for both benign and malignant tumors. This work therefore proposes the Online Sequential Extreme Learning Machine (OSELM) algorithm for diagnosing BC from the tumor features of the breast mass. The current work used the Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which contains 569 samples (357 samples of the benign class and 212 samples of the malignant class). Numerous assessment metrics were used to evaluate the proposed OSELM algorithm, namely specificity, precision, F-measure, accuracy, G-mean, MCC, and recall. According to the outcomes of the experiment, the best performance of the proposed OSELM was 97.66% accuracy, 98.39% recall, 95.31% precision, 97.25% specificity, 96.83% F-measure, 95.00% MCC, and 96.84% G-mean. The proposed OSELM algorithm demonstrates promising results in diagnosing BC, and its performance was superior to all comparative methods with respect to the classification rate.
Keywords: breast cancer, machine learning, online sequential extreme learning machine, artificial intelligence
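A minimal sketch of the standard OSELM formulation: a random sigmoid hidden layer, output weights initialized by a regularized least-squares fit, then updated chunk by chunk with the recursive least-squares identity. The hidden-layer size, activation, and regularization below are assumptions, not the authors' configuration.

```python
import numpy as np

class OSELM:
    """Online Sequential Extreme Learning Machine: fixed random hidden layer,
    output weights beta updated recursively as data arrive in chunks."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = None
        self.P = None

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def init_fit(self, X0, T0):
        """Initial batch: regularized least-squares solution."""
        H = self._h(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T0

    def partial_fit(self, X, T):
        """Sequential update via the recursive least-squares identity."""
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta   # threshold or argmax for class labels
```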
Procedia PDF Downloads 113