Search results for: image dataset
2626 Seawater Changes' Estimation at Tidal Flat in Korean Peninsula Using Drone Stereo Images
Authors: Hyoseong Lee, Duk-jin Kim, Jaehong Oh, Jungil Shin
Abstract:
The tidal flat on the Korean peninsula is one of the most biodiverse tidal flats in the world. Digital elevation models (DEMs) are therefore continuously in demand for monitoring it. In this study, DEMs of the tidal flat at different times were produced by means of a drone and commercial software in order to measure the seawater change during high tide at a water channel in the tidal flat. To correct the produced DEMs of the tidal flat, which is inaccessible for collecting control points, a DEM matching method was applied using a reference DEM instead of a field survey. After an ortho-image was made from the corrected DEM, a land-cover classified image was produced. The changes in the amount of seawater over time were analyzed using the classified images and DEMs. As a result, it was confirmed that the amount of water increased rapidly as time passed during high tide.
Keywords: tidal flat, drone, DEM, seawater change
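The core quantity in this workflow, the change in seawater volume between two DEM epochs, can be sketched by differencing the corrected DEMs over water-classified cells. A minimal illustration with toy values; the function name, grid, and cell size are assumptions, not the authors' implementation:

```python
import numpy as np

def water_volume_change(dem_t1, dem_t2, water_mask, cell_area_m2):
    """Estimate seawater volume change (m^3) between two DEM epochs.

    dem_t1, dem_t2 : 2-D arrays of surface elevation (m) at two times
    water_mask     : boolean array marking cells classified as water
    cell_area_m2   : ground area covered by one DEM cell
    """
    dh = np.where(water_mask, dem_t2 - dem_t1, 0.0)  # elevation change on water cells only
    return float(dh.sum() * cell_area_m2)

# Toy 2x2 DEMs: the water level rises by 0.5 m over the two water cells.
t1 = np.array([[1.0, 1.0], [2.0, 2.0]])
t2 = np.array([[1.5, 1.5], [2.0, 2.0]])
mask = np.array([[True, True], [False, False]])
print(water_volume_change(t1, t2, mask, cell_area_m2=4.0))  # 0.5 m x 2 cells x 4 m^2 = 4.0
```

With real data, the mask would come from the land-cover classification of the ortho-image at each epoch.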
Procedia PDF Downloads 205
2625 Tinder, Image Merchandise and Desire: The Configuration of Social Ties in Today's Neoliberalism
Authors: Daniel Alvarado Valencia
Abstract:
Nowadays, the market offers us solutions for everything, creating the idea of the immediate availability of anything we could desire, and the Internet is the means through which to obtain all this. The proposal of this paper is that this logic puts subjects in a situation of self-exploitation and treats the psyche as a productive force by configuring affection and desire from a neoliberal value perspective. It uses Tinder, drawing on ethnographic data from Mexico City users, as an example of this. Tinder is an application created for getting dates, having sexual encounters, and finding a partner. It works through the creation and management of a digital profile. It is an example of how futuristic and lonely the current era can be, since we have grown used to interacting with other people through screens and images. At the same time, however, it provides solutions to loneliness, since technology transgresses, invades, and alters social practices in different ways. Tinder fits into this contemporary context: it is a concrete example of the processes of technification in which social bonds develop through certain devices offered by neoliberalism, through consumption, and where the search for love and courtship proceeds through images and their consumption.
Keywords: desire, image, merchandise, neoliberalism
Procedia PDF Downloads 125
2624 A Comparative Assessment of Industrial Composites Using Thermography and Ultrasound
Authors: Mosab Alrashed, Wei Xu, Stephen Abineri, Yifan Zhao, Jörn Mehnen
Abstract:
Thermographic inspection is a relatively new technique for Non-Destructive Testing (NDT) which has been attracting increasing interest due to its relatively low-cost hardware and extremely fast data acquisition. The technique is especially promising in the area of rapid automated damage detection and quantification. In collaboration with a major industry partner from the aerospace sector, advanced thermography-based NDT software for impact-damaged composites is introduced. The software is based on correlation analysis of time-temperature profiles in combination with an image enhancement process. The prototype software aims to a) visualise the damage in a relatively easy-to-use way and b) automatically and quantitatively measure the properties of the degradation. Since degradation properties play an important role in the identification of degradation types, tests were performed on artificially damaged specimens and the results analysed.
Keywords: NDT, correlation analysis, image processing, damage, inspection
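The correlation analysis of time-temperature profiles can be sketched as follows: each pixel's cooling curve is compared, via the Pearson coefficient, against a reference curve from a sound region, and low correlation flags possible damage. A minimal sketch on synthetic profiles; the data layout and decision logic are assumptions, not the partner software:

```python
import numpy as np

def correlation_map(profiles, reference):
    """Pearson correlation of each pixel's time-temperature profile
    against a reference (sound-region) profile.

    profiles  : array of shape (n_pixels, n_frames)
    reference : array of shape (n_frames,)
    Damaged regions are expected to correlate poorly with the reference.
    """
    p = profiles - profiles.mean(axis=1, keepdims=True)   # center each pixel profile
    r = reference - reference.mean()                      # center the reference
    num = p @ r
    den = np.sqrt((p ** 2).sum(axis=1) * (r ** 2).sum())
    return num / den

ref = np.array([10.0, 8.0, 6.0, 5.0])           # reference cooling curve
pix = np.array([[10.0, 8.0, 6.0, 5.0],          # identical profile -> correlation 1
                [5.0, 6.0, 8.0, 10.0]])         # reversed profile  -> strongly negative
c = correlation_map(pix, ref)
print(np.round(c, 3))
```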
Procedia PDF Downloads 550
2623 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy
Authors: Okhee Woo, Hye Seon Shin
Abstract:
Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in the evaluation of breast microcalcifications, and its usefulness as a pre-procedural study for stereotactic biopsy, in comparison with full-field digital mammography (FFDM) and FFDM plus magnification images (FFDM+MAG). Methods and Materials: An IRB-approved retrospective observer performance study on DBT, FFDM, and FFDM+MAG was done. Image quality was rated on a 5-point scale for lesion clarity (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by the Wilcoxon test. Diagnostic power was compared by diagnostic values and AUC with 95% confidence intervals. Additionally, the procedural reports of the biopsies were analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001) and no statistically significant difference from FFDM+MAG (4, 4-5, p-value=0.3345). Diagnostic sensitivity and specificity were 86.4% and 92.5% for DBT; 70.4% and 66.7% for FFDM; and 93.8% and 89.6% for FFDM+MAG. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than that of FFDM (0.59, p-values < 0.0001), but there was no statistically significant difference between DBT and FFDM+MAG (p-value=0.878). In 2 cases with DBT, a petite needle could be appropriately prepared; in the other 3, without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM and was equivalent to FFDM+MAG in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy and can also waive the need for additional magnification images.
Keywords: DBT, breast cancer, stereotactic biopsy, mammography
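For reference, the reported sensitivity and specificity follow from a standard 2x2 contingency table. The counts below are illustrative only, chosen so the rates reproduce the DBT figures quoted above; the study's actual case counts are not given here:

```python
def diagnostic_values(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all healthy
    return sensitivity, specificity

# Illustrative counts reproducing DBT's reported 86.4% / 92.5%:
sens, spec = diagnostic_values(tp=70, fn=11, tn=62, fp=5)
print(round(sens * 100, 1), round(spec * 100, 1))  # 86.4 92.5
```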
Procedia PDF Downloads 305
2622 Investigating the Impact of Super Bowl Participation on Local Economy: A Perspective of Stock Market
Authors: Rui Du
Abstract:
This paper attempts to assess the impact of a major sporting event, the Super Bowl, on local economies. The identification strategy is to compare the winning and losing cities at the National Football League (NFL) conference finals under the assumption of similar pre-treatment trends. The stock market performance of companies headquartered in these cities is used to capture sudden changes in local economic activity over a short time span. The exogenous variation in the football game outcome allows a straightforward difference-in-differences approach to identify the effect. This study finds that the post-event trends in winning and losing cities diverge despite the fact that both sets of cities have economically and statistically similar pre-event trends. The empirical analysis provides suggestive evidence of a positive, significant local economic impact of conference-final wins, possibly through city image enhancement. Further empirical evidence shows the presence of heterogeneous effects across industrial sectors, suggesting that the city-image-enhancing effect of Super Bowl participation is empirically relevant for changes in the composition of local industries. This study also adopts a similar strategy to examine the local economic impact of Super Bowl successes but finds no statistically significant effect.
Keywords: Super Bowl participation, local economies, city image enhancement, difference-in-differences, industrial sectors
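In its simplest form, the difference-in-differences estimator behind this identification strategy reduces to a double subtraction of group means. A sketch with hypothetical mean cumulative stock returns for winning-city (treated) and losing-city (control) firms; all numbers are invented for illustration:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: (treated post - pre) minus (control post - pre)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean cumulative returns (%) around the conference final:
print(did_estimate(treat_pre=1.0, treat_post=3.5, ctrl_pre=1.0, ctrl_post=1.5))  # 2.0
```

The parallel pre-trends assumption stated in the abstract is exactly what licenses attributing this double difference to the win itself.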
Procedia PDF Downloads 242
2621 Dogs Chest Homogeneous Phantom for Image Optimization
Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano
Abstract:
In veterinary as well as human medicine, radiological study is essential for a safe diagnosis in clinical practice. Thus, the quality of the radiographic image is crucial. In recent years, there has been an increasing substitution of screen-film image acquisition systems by computed radiography (CR) equipment without adequate technical charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance for restraint, which can compromise image quality and increase the dose to the animal and to occupationally exposed staff, as well as the cost to the institution. Image optimization procedures and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To build the simulator, a database was created from retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ / Botucatu). Images were divided into four groups according to animal weight, employing the classification by size proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 according to their weights: (S) Small, equal to or less than 9.0 kg; (M) Medium, between 9.0 and 23.0 kg; (L) Large, between 23.1 and 40.0 kg; and (G) Giant, over 40.1 kg. Mean weights were 6.5±2.0 kg for group (S), 15.0±5.0 kg for (M), 32.0±5.5 kg for (L) and 50.0±12.0 kg for (G). An algorithm was developed in Matlab to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue).
After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, bone tissues simulated by aluminum, and air for the lung), four different homogeneous phantoms were obtained: (S) 5 cm of acrylic, 0.14 cm of aluminum and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool which will be employed in future work to optimize veterinary X-ray procedures.
Keywords: radiation protection, phantom, veterinary radiology, computed radiography
Procedia PDF Downloads 419
2620 Optimized Road Lane Detection Through a Combined Canny Edge Detection, Hough Transform, and Scaleable Region Masking Toward Autonomous Driving
Authors: Samane Sharifi Monfared, Lavdie Rada
Abstract:
Nowadays, autonomous vehicles are developing rapidly toward facilitating human driving. One of the main issues is road lane detection, for suitable guidance and car accident prevention. This paper aims to improve and optimize road lane detection based on a combination of camera calibration, the Hough transform, and Canny edge detection. The video processing is implemented using the OpenCV library, with the novelty of a scalable region mask. The aim of the study is to introduce automatic road lane detection techniques with minimal manual intervention from the user.
Keywords: Hough transform, Canny edge detection, optimisation, scalable masking, camera calibration, improving the quality of image, image processing, video processing
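The scalable region mask highlighted above can be sketched as a trapezoidal region of interest whose geometry is expressed in fractions of the frame size, so the same lane-detection pipeline works at any resolution before the Canny and Hough stages are applied inside it. A minimal NumPy sketch; the trapezoid parameters are assumptions, and the paper's actual mask may differ:

```python
import numpy as np

def scalable_road_mask(h, w, top_frac=0.6, top_width_frac=0.2):
    """Trapezoidal region-of-interest mask that scales with frame size.

    Rows above h*top_frac are excluded (sky, horizon); the kept band
    widens linearly from top_width_frac of the width down to the full
    width at the bottom row, roughly matching road perspective.
    """
    mask = np.zeros((h, w), dtype=bool)
    top = int(h * top_frac)
    for y in range(top, h):
        t = (y - top) / (h - 1 - top)                      # 0 at band top, 1 at bottom row
        half = (top_width_frac + (1.0 - top_width_frac) * t) * w / 2.0
        cx = w / 2.0
        mask[y, int(cx - half):int(np.ceil(cx + half))] = True
    return mask

m = scalable_road_mask(100, 200)
print(m.shape, m[:60].any(), m[99].all())  # nothing kept above row 60, full width at bottom
```

The same fractions then produce a consistent mask for a 720p or 4K frame, which is the point of making the mask scalable.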
Procedia PDF Downloads 101
2619 X-Ray Detector Technology Optimization In CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is improved because of the reduction in load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 278
2618 Anti-Forensic Countermeasure: An Examination and Analysis Extended Procedure for Information Hiding of Android SMS Encryption Applications
Authors: Ariq Bani Hardi
Abstract:
Smartphone technology is advancing very rapidly in various fields of science. One of the mobile operating systems that dominates the smartphone market today is Android, by Google. Unfortunately, the expansion of mobile technology is misused by criminals to hide the information that they store or exchange with each other, making it more difficult for law enforcement to prove crimes in the judicial process (anti-forensics). One technique used to hide information is encryption, such as the use of SMS encryption applications. A mobile forensic examiner or investigator should have a countermeasure technique prepared for when such things are found during the investigation process. This paper discusses an extended procedure for when the investigator finds unreadable SMS messages in Android evidence because of encryption. To define the extended procedure, we created and analyzed a dataset of Android SMS encryption applications. The dataset was grouped by application characteristics related to communication permissions, as well as the availability of source code and documentation of the encryption scheme. Permissions indicate how applications may exchange data and keys. Availability of the source code and the encryption scheme documentation can show which cryptographic algorithm specification is used, the key length, how key generation, key exchange, and encryption/decryption are done, and other related information. The output of this paper is an extended, alternative procedure for the examination and analysis process in Android digital forensics. It can be used to help investigators when they are confronted with SMS encryption during examination and analysis: what steps should the investigator take so that they still have a chance to recover the encrypted SMS in Android evidence?
Keywords: anti-forensic countermeasure, SMS encryption android, examination and analysis, digital forensic
Procedia PDF Downloads 130
2617 An Improved Circulating Tumor Cells Analysis Method for Identifying Tumorous Blood Cells
Authors: Salvador Garcia Bernal, Chi Zheng, Keqi Zhang, Lei Mao
Abstract:
Circulating Tumor Cell (CTC) analysis is used to detect tumor cell metastases in blood samples from patients with cancer (lung, breast, etc.). Using an immunofluorescence method, a three-channel image (red, green, and blue) is obtained. These sets of images usually exceed 11 x 30 M pixels in size. An aided tool is designed for cell image analysis to segment and identify the tumorous cells based on the three marker signals. Our method is cell-based (area and cell shape), considering the information in each channel, extracting features and deciding whether a candidate is a valid CTC. The system also reports the number and size of tumor cells found in the sample. We present results on real-life samples, achieving acceptable performance in identifying CTCs in a short time.
Keywords: Circulating Tumor Cells (CTC), cell analysis, immunofluorescent, medical image analysis
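A cell-based decision of the kind described can be sketched as a rule over a segmented cell's area and its mean intensity in each channel. All channel roles and threshold values below are illustrative assumptions, not the tool's actual criteria:

```python
def identify_ctc(red, green, blue, area_px, min_area=50, max_area=400,
                 pos_thresh=100, neg_thresh=50):
    """Toy decision rule for one segmented cell, in the spirit of
    marker-based CTC identification: a candidate must be positive on
    the assumed tumor-marker (red) and nucleus (blue) channels,
    negative on the assumed leukocyte-marker (green) channel, and of
    plausible size. Inputs are mean channel intensities and cell area.
    """
    size_ok = min_area <= area_px <= max_area
    return bool(size_ok and red >= pos_thresh and blue >= pos_thresh
                and green < neg_thresh)

print(identify_ctc(red=180, green=20, blue=150, area_px=120))   # candidate CTC -> True
print(identify_ctc(red=30, green=160, blue=140, area_px=120))   # leukocyte-like -> False
```

In practice such a rule runs on every connected component from segmentation, which also yields the count and size statistics mentioned above.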
Procedia PDF Downloads 216
2616 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm
Authors: Monojit Manna, Arpan Adhikary
Abstract:
Mobile crowdsensing is an emerging paradigm, adopted across the globe, in which users are recruited to carry out sensing tasks. In today's urban cities, mobile vehicles are well suited to data sensing and collection because of their ubiquity and mobility. In this work, we focus on which mobile nodes can optimally be selected in order to collect the maximum amount of data from urban areas and fulfill the data requirement for a future period within a couple of minutes. We formulate the vehicle-selection requirement as a data-maximization problem under a budget constraint. The implementation is set up to generalize a realistic online platform in which vehicles move continuously in real time, and the data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. To anticipate the future period, this work proposes a deep learning-based offline algorithm to predict mobility. We then propose a greedy approach that applies an online algorithm step to select a subset of vehicles for this NP-complete problem with a limited budget. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The results not only confirm the efficiency of our proposed solution but also prove the validity of DLMV, which improves the quantity of collected sensing data compared with other algorithms.
Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection
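The greedy step for this budget-limited selection problem can be sketched as repeatedly recruiting the vehicle with the best ratio of predicted newly covered cells to cost. The data layout and names below are assumptions; in the paper, the per-vehicle coverage prediction would come from the deep learning mobility model rather than being given:

```python
def greedy_recruit(vehicles, budget):
    """Greedy recruitment for budgeted coverage.

    vehicles : {name: (cost, set_of_predicted_covered_cells)}
    Repeatedly picks the affordable vehicle maximizing
    (newly covered cells) / cost until nothing useful fits the budget.
    """
    chosen, covered, spent = [], set(), 0
    remaining = dict(vehicles)
    while remaining:
        best = max(
            (v for v, (c, cells) in remaining.items()
             if spent + c <= budget and cells - covered),
            key=lambda v: len(remaining[v][1] - covered) / remaining[v][0],
            default=None)
        if best is None:          # nothing affordable adds new coverage
            break
        cost, cells = remaining.pop(best)
        chosen.append(best)
        covered |= cells
        spent += cost
    return chosen, covered

veh = {"a": (2, {1, 2, 3}), "b": (1, {3, 4}), "c": (3, {5})}
sel, cov = greedy_recruit(veh, budget=3)
print(sel, sorted(cov))  # ['b', 'a'] [1, 2, 3, 4]
```

This marginal-gain-per-cost rule is the standard greedy heuristic for such NP-complete budgeted coverage problems.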
Procedia PDF Downloads 81
2615 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variations of surface waters are much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal analysis tools, such as Fourier analysis, in that it unfolds a time series (signal) in both the frequency and time domains. This makes it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
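The wavelet machinery used here can be illustrated with a minimal Morlet power computation: project the series onto Morlet wavelets at trial Fourier periods and pick the period of maximum mean power. A sketch on a synthetic pentadal (60-month) oscillation; it omits the cone of influence and significance testing of a full analysis, and is not the authors' code:

```python
import numpy as np

def morlet_power(x, periods, dt=1.0, w0=6.0):
    """Mean wavelet power of series x at the given trial Fourier
    periods, using a complex Morlet mother wavelet."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    power = []
    for p in periods:
        s = p * (w0 + np.sqrt(2 + w0 ** 2)) / (4 * np.pi)   # scale for this Fourier period
        t = (np.arange(n) - n // 2) * dt
        psi = np.pi ** -0.25 * np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        # correlate the series with the (conjugated) wavelet at this scale
        coef = np.convolve(x, np.conj(psi[::-1]), mode="same") * np.sqrt(dt / s)
        power.append(np.mean(np.abs(coef) ** 2))
    return np.array(power)

# Synthetic monthly SST anomaly with a 60-month (pentadal) oscillation:
months = np.arange(600)
sst = np.sin(2 * np.pi * months / 60)
periods = np.array([24, 36, 60, 120])
pw = morlet_power(sst, periods)
print(periods[int(np.argmax(pw))])  # dominant period: 60
```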
Procedia PDF Downloads 278
2614 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic-aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and reflectivity of an underground object. The main challenge of GPR imaging is thus to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaged region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
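The standard BP step that the paper starts from is a delay-and-sum over antenna positions: each image pixel accumulates the trace samples at its predicted two-way travel time, which collapses the hyperbola onto the target. A minimal monostatic sketch with a synthetic point target; the geometry, wave speed, and the absence of the cross-correlation weighting are simplifying assumptions:

```python
import numpy as np

def back_projection(traces, rx_x, t_axis, xs, zs, v=0.1):
    """Delay-and-sum back-projection for monostatic GPR.

    traces : (n_rx, n_t) A-scans; rx_x : antenna positions (m)
    t_axis : two-way time axis (ns); xs, zs : image grid (m)
    v      : wave speed in the medium (m/ns)
    """
    img = np.zeros((len(zs), len(xs)))
    dt = t_axis[1] - t_axis[0]
    for i, x in enumerate(rx_x):
        r = np.sqrt((xs[None, :] - x) ** 2 + zs[:, None] ** 2)   # pixel-to-antenna range
        idx = np.round(2 * r / v / dt).astype(int)               # two-way time -> sample index
        img += traces[i][np.clip(idx, 0, traces.shape[1] - 1)]
    return img

# Synthetic scene: point target at x=0.5 m, z=0.3 m under 5 antenna positions.
rx = np.linspace(0.0, 1.0, 5)
t = np.arange(0, 40, 0.1)                                        # ns
tr = np.zeros((5, len(t)))
for i, x in enumerate(rx):
    tr[i, int(round(2 * np.hypot(0.5 - x, 0.3) / 0.1 / 0.1))] = 1.0   # hyperbolic arrivals
xs, zs = np.linspace(0, 1, 21), np.linspace(0.1, 0.6, 11)
img = back_projection(tr, rx, t, xs, zs)
peak = np.unravel_index(img.argmax(), img.shape)
print(round(float(xs[peak[1]]), 2), round(float(zs[peak[0]]), 2))  # focuses at 0.5 0.3
```

The paper's improvement would then multiply each pixel's sum by a cross-correlation-derived weight to suppress pixels where the per-antenna contributions disagree.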
Procedia PDF Downloads 456
2613 Assessment of Sleep Disorders in Moroccan Women with Gynecological Cancer: Cross-Sectional Study
Authors: Amina Aquil, Abdeljalil El Got
Abstract:
Background: Sleep quality is one of the most important indicators of the quality of life of patients suffering from cancer. Many factors can affect this quality of sleep and can thus be considered associated predictors. Methods: The aim of this study was to assess the prevalence of sleep disorders and the factors associated with impaired sleep quality in Moroccan women with gynecological cancer. A cross-sectional study was carried out in the oncology department of the Ibn Rochd University Hospital, Casablanca, on Moroccan women who had undergone radical surgery for gynecological cancer (n=100). Translated and validated Arabic versions of the following international scales were used: the Pittsburgh Sleep Quality Index (PSQI), the Hospital Anxiety and Depression Scale (HADS), Rosenberg's Self-Esteem Scale (RSES), and the Body Image Scale (BIS). Results: 78% of participants were considered poor sleepers. Most of the patients exhibited very poor subjective quality, long sleep latency, a short period of sleep, and a low rate of usual sleep efficiency. The vast majority of these patients were in poor shape during the day and did not use sleep medication. Waking up in the middle of the night or early in the morning and getting up to use the bathroom were the main reasons for poor sleep quality. PSQI scores were positively correlated with anxiety, depression, body image dissatisfaction, and lower self-esteem (p < 0.001). Conclusion: Sleep quality and its predictors require systematic evaluation and adequate management to prevent sleep disturbances and mental distress as well as to improve the quality of life of these patients.
Keywords: body image, gynecological cancer, self-esteem, sleep quality
Procedia PDF Downloads 127
2612 Intelligent Grading System of Apple Using Neural Network Arbitration
Authors: Ebenezer Obaloluwa Olaniyi
Abstract:
In this paper, an intelligent system has been designed to grade apples as either defective or healthy for production in food processing. The paper is organized in two phases. In the first phase, image processing techniques are employed to extract the necessary features of the apple. These techniques include grayscale conversion and segmentation, where a threshold value is chosen to separate the foreground of the image from the background. Edge detection is then employed to bring out the features in the images. The extracted features are fed into the neural network in the second phase, a classification phase in which a neural network is employed to separate defective apples from healthy ones. In this phase, the network is trained with backpropagation and tested in feed-forward operation. The recognition rate obtained shows that our system is more accurate and faster compared with previous work.
Keywords: image processing, neural network, apple, intelligent system
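The first-phase preprocessing can be sketched in a few lines: grayscale conversion, global thresholding for segmentation, and a simple gradient-based edge map. The threshold values and the crude edge operator below are illustrative assumptions; the paper does not specify its parameters:

```python
import numpy as np

def preprocess(rgb, thresh=128):
    """Grayscale conversion, global thresholding, and a simple
    gradient-based edge map, mirroring the described pipeline."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    fg = gray > thresh                                    # foreground/background split
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    edges = (gx + gy) > 30                                # crude edge detection
    return gray, fg, edges

img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 200                                       # bright "apple" on dark background
gray, fg, edges = preprocess(img)
print(int(fg.sum()), bool(edges.any()))                   # 16 foreground pixels, edges found
```

Features derived from such masks and edge maps would then form the input vector to the neural network classifier.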
Procedia PDF Downloads 402
2611 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is improved because of the reduction in load capacitance due to the reduced routing. This is translated into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 196
2610 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification
Authors: Jianhong Xiang, Rui Sun, Linyu Wang
Abstract:
In recent years, owing to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown greatly superior performance in research on Hyperspectral Image (HSI) classification. Since HSI carries rich spatial-spectral information, utilizing only a single-dimension or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy of HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose using multiple 3D convolutional kernels to extract the packet feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy due to network degradation. Second, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and reduce the complexity of the 3D model. Owing to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification step. Experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently.
Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification
Procedia PDF Downloads 85
2609 Digital Watermarking Based on Visual Cryptography and Histogram
Authors: R. Rama Kishore, Sunesh
Abstract:
Nowadays, a robust and secure watermarking algorithm and its optimization are the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. Use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, the histogram, and entropy makes the algorithm more robust and imperceptible while protecting the owner's copyright.
Keywords: digital watermarking, visual cryptography, histogram, Butterworth filter
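As a sketch of the (2, 2) visual cryptography step the abstract relies on (not necessarily the authors' exact scheme), the following pure-Python example splits a binary watermark into two random-looking shares; stacking the shares reveals the secret. The pixel encoding and function names are illustrative assumptions.

```python
import random

def make_shares(secret, seed=None):
    """(2,2) visual cryptography on a binary image (1 = black, 0 = white).

    Each secret pixel expands to a pair of subpixels per share. Share 1 is
    random; share 2 copies it for white pixels and complements it for black
    pixels, so stacking the shares (pixel-wise OR) reveals the secret.
    """
    rng = random.Random(seed)
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for px in row:
            pattern = rng.choice([(0, 1), (1, 0)])
            r1.extend(pattern)
            # White pixel: identical subpixels. Black pixel: complementary.
            r2.extend(pattern if px == 0 else (1 - pattern[0], 1 - pattern[1]))
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(s1, s2):
    """Physically overlaying transparencies acts as a pixel-wise OR."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

secret = [[1, 0],
          [0, 1]]
s1, s2 = make_shares(secret, seed=42)
overlay = stack(s1, s2)
# Black secret pixels become fully black (1, 1); white ones stay half-black,
# which the eye perceives as gray, so the secret outline is visible.
```

Either share alone is uniformly random, which is what makes one share usable as dispute evidence held by a trusted authority.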
Procedia PDF Downloads 360
2608 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is the most common cancer worldwide in men and women, and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues/organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, so developing a classifier for ultrasound liver lesion images is a challenging task. We approach it as a fully automatic machine learning system. First, we segment the liver image by calculating textural features from the co-occurrence matrix and the run length method. For classification, a Support Vector Machine is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help the physician to identify liver cancer by a non-invasive method.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
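The co-occurrence matrix mentioned above can be sketched as follows; this is a minimal pure-Python illustration of GLCM-based texture features (energy and contrast chosen as examples), not the paper's feature set or implementation, and the image values are illustrative.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for offset (dx, dy), normalized
    to a joint probability distribution over gray-level pairs."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                counts[image[y][x]][image[ny][nx]] += 1
    total = sum(map(sum, counts)) or 1
    return [[c / total for c in row] for row in counts]

def haralick_features(p):
    """Two classic Haralick texture descriptors from a normalized GLCM."""
    levels = len(p)
    energy = sum(p[i][j] ** 2 for i in range(levels) for j in range(levels))
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(levels) for j in range(levels))
    return {"energy": energy, "contrast": contrast}

# A toy 4-level quantized image patch (illustrative, not ultrasound data)
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
features = haralick_features(glcm(img))
```

Such scalar descriptors, computed for several offsets, would then form the feature vector fed to the SVM.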
Procedia PDF Downloads 414
2607 Damage Micromechanisms of Coconut Fibers and Chopped Strand Mats of Coconut Fibers
Authors: Rios A. S., Hild F., Deus E. P., Aimedieu P., Benallal A.
Abstract:
This study investigates the damage micromechanisms of chopped strand mats manufactured by compression of Brazilian coconut fibers, and of coconut fibers subjected to different external conditions (chemical treatment). Mechanical uniaxial tensile tests were analyzed with Digital Image Correlation (DIC). The images captured during the tensile tests of the coconut fibers and coconut fiber mats showed a measurement uncertainty on the order of centipixels. The initial modulus (modulus of elasticity) and tensile strength decreased with increasing diameter for the four conditions of coconut fibers. The DIC showed heterogeneous deformation fields for the coconut fibers and mats, and the displacement fields showed the rupture process of the coconut fiber. Poisson's ratio of the mat was determined from the transverse and longitudinal strains measured in the elastic region.
Keywords: coconut fiber, mechanical behavior, digital image correlation, micromechanism
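The final step, determining Poisson's ratio from DIC strains, reduces to a one-line formula; the sketch below uses illustrative strain values, not the paper's measurements.

```python
def poissons_ratio(transverse_strain, longitudinal_strain):
    """Poisson's ratio: nu = -eps_transverse / eps_longitudinal,
    valid in the linear-elastic region of the tensile test."""
    return -transverse_strain / longitudinal_strain

# Strains taken from DIC displacement fields (illustrative values):
# the specimen contracts laterally while extending along the load axis.
nu = poissons_ratio(-0.0012, 0.004)
```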
Procedia PDF Downloads 461
2606 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of MOOC (Massive Open Online Course) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN), and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We implemented these methods on our MOOC dataset, on which they were the most performant, as well as on a public sentiment analysis dataset that is further used to cross-examine the models' results.
Keywords: deep learning, data mining, gender prediction, MOOCs
Procedia PDF Downloads 151
2605 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network
Authors: Harshit Mittal, Neeraj Garg
Abstract:
Hand symbol recognition is a pivotal component of the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper discusses an approach integrating the Canny Edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment described, the data is manually collected by the authors from a webcam using Python code; to enlarge the dataset, augmentation is applied to the original images, which makes the model more robust. The dataset of about 6000 colour images, distributed equally among 5 classes (i.e., 1, 2, 3, 4, 5), is first pre-processed to grayscale and then by the Canny Edge algorithm, with thresholds 1 and 2 set to 150 each. After the data is successfully built, it is trained on the convolutional neural network model, giving accuracy: 0.97834, precision: 0.97841, recall: 0.9783, and F1 score: 0.97832. For user purposes, a block of Python code opens a window for hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny Edge algorithm and convolutional neural networks, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network
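The reported metrics are related: F1 is the harmonic mean of precision and recall, both computed from confusion counts. A minimal sketch with toy counts (not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1) from raw
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy confusion counts for one hand-symbol class (illustrative)
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
```

With the paper's precision (0.97841) and recall (0.9783), this formula reproduces an F1 of about 0.9783, consistent with the reported 0.97832 up to rounding.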
Procedia PDF Downloads 70
2604 Retina Registration for Biometrics Based on Characterization of Retinal Feature Points
Authors: Nougrara Zineb
Abstract:
The unique structure of the blood vessels in the retina has been used for biometric identification. The retinal blood vessel pattern is unique in each individual, and it is almost impossible to forge that pattern in a false individual. The advantages of retina biometrics include the high distinctiveness, universality, and stability over time of the blood vessel pattern. Once the creases have been extracted from the images, a registration stage is necessary, since the position of the retinal vessel structure can change between acquisitions due to movements of the eye. Image registration consists of the following steps: feature detection, feature matching, transform model estimation, and image resampling and transformation. In this paper, we present a registration algorithm based on the characterization of retinal feature points. For the experiments, retinal images from the DRIVE database were tested. The proposed methodology achieves good results for registration in general.
Keywords: fovea, optic disc, registration, retinal images
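The transform model estimation step can be sketched as a least-squares fit of a 2D similarity transform (scale, rotation, translation) to matched feature points. This closed-form solution is a standard technique, not necessarily the one used in the paper, and the point coordinates below are illustrative.

```python
def fit_similarity(src, dst):
    """Least-squares 2D similarity transform mapping matched feature
    points src -> dst. Returns (a, b, tx, ty) for the model
    x' = a*x - b*y + tx,  y' = b*x + a*y + ty."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    ux = sum(q[0] for q in dst) / n
    uy = sum(q[1] for q in dst) / n
    sa = sb = norm = 0.0
    for (x, y), (u, v) in zip(src, dst):
        cx, cy, cu, cv = x - mx, y - my, u - ux, v - uy
        norm += cx * cx + cy * cy
        sa += cx * cu + cy * cv
        sb += cx * cv - cy * cu
    a, b = sa / norm, sb / norm
    tx = ux - (a * mx - b * my)
    ty = uy - (b * mx + a * my)
    return a, b, tx, ty

# Matched vessel bifurcation points (illustrative); dst was generated
# from src with a = 1.0, b = 0.5, translation (3, -2).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(3, -2), (4, -1.5), (2.5, -1), (3.5, -0.5)]
a, b, tx, ty = fit_similarity(src, dst)
```

Resampling the moving image through the recovered transform would then align the two acquisitions.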
Procedia PDF Downloads 269
2603 An Advanced Automated Brain Tumor Diagnostics Approach
Authors: Berkan Ural, Arif Eser, Sinan Apaydin
Abstract:
Medical image processing has generally become a challenging task nowadays. Indeed, the processing of brain MRI images is one of the difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, commonly used for early prediction of brain tumors using advanced image processing and probabilistic neural network methods, respectively. In this approach, advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a specific colored circle by the computer-aided diagnostics program. Then, the tumor is segmented, and some morphological processes are applied to increase the visibility of the tumor area. Meanwhile, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and some advanced classification steps, the tumor area and the type of the tumor are clearly obtained. A future aim of this study is to detect the severity of lesions through brain tumor classes, achieved through advanced multi-classification and neural network stages, and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the whole set of results.
The preliminary results demonstrate high classification accuracy for the neural network structure. Finally, these results also motivate us to extend this framework to detect and localize tumors in other organs.
Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition
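The probabilistic neural network step amounts to a Parzen-window density estimate per class: each class score is a sum of Gaussian kernels centred on that class's training vectors. A minimal pure-Python sketch (the feature vectors, labels, and smoothing value are illustrative assumptions, not the study's data):

```python
import math

def pnn_classify(sample, training, sigma=1.0):
    """Probabilistic neural network: score each class by a Parzen-window
    Gaussian kernel density over its training feature vectors and return
    the highest-scoring class label."""
    scores = {}
    for label, vectors in training.items():
        s = 0.0
        for v in vectors:
            d2 = sum((a - b) ** 2 for a, b in zip(sample, v))
            s += math.exp(-d2 / (2 * sigma ** 2))
        scores[label] = s / len(vectors)
    return max(scores, key=scores.get)

# Hypothetical normalized shape-based tumor features (area, perimeter)
train = {"benign": [(0.2, 0.3), (0.25, 0.35)],
         "malignant": [(0.8, 0.9), (0.85, 0.8)]}
label = pnn_classify((0.22, 0.33), train, sigma=0.5)
```

The smoothing parameter sigma plays the role of the PNN's spread and would normally be tuned on held-out data.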
Procedia PDF Downloads 422
2602 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa
Authors: Samy A. Khalil, U. Ali Rahoma
Abstract:
Satellite data have routinely been utilized, alongside ground measurements of solar radiation, to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa across the ten-year period from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: the data's monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. In the present research, the average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R²) varies from 93% to 99% over the study period. This research's objective is to provide a reliable representation of the world's solar radiation to aid the use of solar energy in all sectors.
Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa
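The validation statistics quoted above are straightforward to compute; a minimal sketch with illustrative GSR values (not the study's data):

```python
import math

def error_stats(estimated, observed):
    """Mean bias error, root mean square error, and mean absolute error
    between reanalysis estimates and ground observations."""
    n = len(observed)
    diffs = [e - o for e, o in zip(estimated, observed)]
    mbe = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mae = sum(abs(d) for d in diffs) / n
    return mbe, rmse, mae

# Illustrative daily GSR values in kWh/m^2 (hypothetical, not ERA-5 data)
est = [5.1, 6.0, 4.8, 5.5]
obs = [5.0, 6.2, 4.9, 5.4]
mbe, rmse, mae = error_stats(est, obs)
```

MBE signals systematic over- or under-estimation, while RMSE penalizes large individual errors more strongly than MAE does.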
Procedia PDF Downloads 104
2601 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor that mimics the experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the texture. Experimental results show the efficiency of the proposed method.
Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images
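The fuzzy pore management step presumably grades pore measurements with membership functions; the sketch below uses a generic triangular membership function with hypothetical pore-size fuzzy sets. The breakpoints and set names are assumptions, not the paper's values.

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises linearly from a to a peak of 1
    at b, then falls linearly to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over pore diameter in micrometres (illustrative)
small = lambda d: triangular_membership(d, 0, 50, 120)
large = lambda d: triangular_membership(d, 80, 200, 320)
grades = {"small": small(100), "large": large(100)}
```

A pore of 100 µm then belongs partially to both sets, and such graded memberships, aggregated over all detected pores, can feed the 38-feature vector given to the neural network.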
Procedia PDF Downloads 429
2600 Late Roman-Byzantine Glass Bracelet Finds at Amorium and Comparison with Other Cultures
Authors: Atilla Tekin
Abstract:
Amorium was one of the biggest cities of the Byzantine Empire, located under and around the modern village of Hisarköy, Emirdağ, Afyonkarahisar Province, Turkey. It was situated on trade routes and on the Byzantine military road from Constantinople to Cilicia, and it was also a center of a bishopric. After the Arab invasion, Amorium gradually lost importance. The research covers 1372 glass bracelet finds, mostly from the 1998-2009 excavations. Most of them were found as glass bracelet fragments. The fragments are of various sizes, forms, colors, and decorations. During the research, they were first measured and grouped according to their cross-sections. After being photographed, they were drawn in Adobe Illustrator and cut out in Photoshop. All forms, colors, and decorations were specified and compared to each other. In this way, an attempt was made to date the finds and to uncover their place of manufacture. The importance of the research lies in presenting the perception of image and admiration, and in the comparison with other cultures.
Keywords: Amorium, glass bracelets, image, Byzantine empire, jewelry
Procedia PDF Downloads 203
2599 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Gastric cancer is predominantly caused by demographic and diet factors, as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected, who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results obtained were evaluated using the metric parameters: minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. Data analysis results showed Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27, with respect to accuracy (in percent) and Brier score. The Naive Bayes algorithm outperforms the others, with very low false positive rates as well as a low Brier score and good accuracy. The Naive Bayes classification results in predicting EGC are very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public; thereby, mortality from gastric cancer can be reduced or avoided with this knowledge mining work.
Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics
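The Brier score used to rank the classifiers is the mean squared difference between the predicted probability of the positive class and the actual 0/1 outcome (lower is better). A minimal sketch with illustrative predictions (not the study's data):

```python
def brier_score(probabilities, outcomes):
    """Brier score: mean of (predicted probability - actual outcome)^2
    over all samples. 0 is perfect; 0.25 matches always predicting 0.5."""
    n = len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / n

# Hypothetical predicted EGC probabilities vs. true labels (1 = case)
probs = [0.9, 0.2, 0.8, 0.1]
truth = [1, 0, 1, 0]
score = brier_score(probs, truth)
```

Unlike accuracy, the Brier score rewards well-calibrated probabilities, which is why it complements the accuracy figures reported above.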
Procedia PDF Downloads 166
2598 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi
Abstract:
One of the most important challenging factors in medical images is noise. Image denoising refers to the improvement of a digital medical image that has been contaminated by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients, in such a way that an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. In this manner, noise reduction techniques aimed at enhancing image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transforms, i.e., the curvelet and the contourlet, and discrete wavelet transform (DWT) thresholding methods, BayesShrink and AdaptShrink, compared to each other. We also propose a new threshold in the wavelet domain for not only noise reduction but also edge retention; consequently, the proposed method retains the significant modified coefficients, which results in good visual quality. Evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
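Wavelet thresholding of the kind BayesShrink performs rests on soft thresholding of detail coefficients. The sketch below fixes the threshold by hand rather than estimating it from the noise variance, as BayesShrink would, and the coefficient values are illustrative.

```python
def soft_threshold(coeffs, t):
    """Soft thresholding of wavelet detail coefficients: shrink each
    coefficient toward zero by t, zeroing anything with magnitude <= t.
    Large (edge-carrying) coefficients survive; small (noise) ones vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

# Noisy detail coefficients from one subband (illustrative values).
# BayesShrink would set t = sigma_noise^2 / sigma_signal per subband.
denoised = soft_threshold([4.0, -0.5, 1.2, -3.0, 0.1], t=1.0)
```

Reconstructing the image from the thresholded subbands yields the denoised result; the choice of t per subband is what distinguishes BayesShrink, AdaptShrink, and the proposed threshold.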
Procedia PDF Downloads 445
2597 Traffic Sign Recognition System Using Convolutional Neural Network
Authors: Devineni Vijay Bhaskar, Yendluri Raja
Abstract:
We recommend a model for traffic sign detection based on Convolutional Neural Networks (CNN). We first convert the original image into a grayscale image with support vector machines, then use convolutional neural networks with fixed and learnable layers for detection and recognition. The fixed layer can reduce the number of candidate regions to examine and crop the limits very close to the boundaries of traffic signs. The learnable layers can raise detection accuracy significantly. Besides, we use bootstrap procedures to improve accuracy and avoid the overfitting problem. On the German Traffic Sign Detection Benchmark, we obtained competitive results, with an area under the precision-recall curve (AUC) of 99.49% in the group “Risk”, and an AUC of 96.62% in the group “Obligatory”.
Keywords: convolutional neural network, support vector machine, detection, traffic signs, bootstrap procedures, precision-recall curve
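The bootstrap procedure mentioned above is commonly realized as hard-negative mining: background windows the detector scores too highly are fed back as negatives in the next training round. A minimal sketch under that assumption (the scores, labels, and threshold are illustrative):

```python
def bootstrap_hard_negatives(scores, labels, threshold=0.5):
    """Bootstrapping step for detector training: collect false positives
    (background windows scored at or above the threshold) so they can be
    re-added to the negative set for the next training round."""
    return [i for i, (s, y) in enumerate(zip(scores, labels))
            if y == 0 and s >= threshold]

# Detector scores on candidate windows; label 1 = true sign, 0 = background
scores = [0.9, 0.7, 0.4, 0.8, 0.2]
labels = [1, 0, 0, 1, 0]
hard_negatives = bootstrap_hard_negatives(scores, labels)
```

Retraining on these confusing windows sharpens the decision boundary and, because negatives are sampled where the model actually fails, helps limit overfitting to easy background patches.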
Procedia PDF Downloads 126