Search results for: pixel normalization
350 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors
Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin
Abstract:
Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next generation High Energy Physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high event rate experiments. We have developed a novel method for the investigation of the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure and that the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test conducted under high gain conditions to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large area (400 $cm^2$) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron stacked MCPs. We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments. Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight
Procedia PDF Downloads 180
349 Suppression Subtractive Hybridization Technique for Identification of the Differentially Expressed Genes
Authors: Tuhina-khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Aktar-uz-Zaman, Mahbod Sahebi
Abstract:
The suppression subtractive hybridization (SSH) method is a valuable tool for identifying differentially regulated genes, such as disease-specific or tissue-specific genes important for cellular growth and differentiation. It is a widely used method for separating DNA molecules that distinguish two closely related DNA samples. SSH is one of the most powerful and popular methods for generating subtracted cDNA or genomic DNA libraries. It is based primarily on a suppression polymerase chain reaction (PCR) technique and combines normalization and subtraction in a solitary procedure. The normalization step equalizes the abundance of DNA fragments within the target population, and the subtraction step excludes sequences that are common to the populations being compared. This dramatically increases the probability of obtaining low-abundance differentially expressed cDNAs or genomic DNA fragments and simplifies analysis of the subtracted library. The SSH technique is applicable to many comparative and functional genetic studies for the identification of disease-specific, developmental, tissue-specific, or other differentially expressed genes, as well as for the recovery of genomic DNA fragments distinguishing the samples under comparison. Keywords: suppression subtractive hybridization, differentially expressed genes, disease specific genes, tissue specific genes
Procedia PDF Downloads 433
348 An Improved Convolution Deep Learning Model for Predicting Trip Mode Scheduling
Authors: Amin Nezarat, Naeime Seifadini
Abstract:
Trip mode selection is a behavioral characteristic of passengers with immense importance for travel demand analysis, transportation planning, and traffic management. Identification of trip mode distribution will allow transportation authorities to adopt appropriate strategies to reduce travel time, traffic and air pollution. The majority of existing trip mode inference models operate based on human selected features and traditional machine learning algorithms. However, human selected features are sensitive to changes in traffic and environmental conditions and susceptible to personal biases, which can make them inefficient. One way to overcome these problems is to use neural networks capable of extracting high-level features from raw input. In this study, the convolutional neural network (CNN) architecture is used to predict the trip mode distribution based on raw GPS trajectory data. The key innovation of this paper is the design of the layout of the input layer of CNN as well as normalization operation, in a way that is not only compatible with the CNN architecture but can also represent the fundamental features of motion including speed, acceleration, jerk, and Bearing rate. The highest prediction accuracy achieved with the proposed configuration for the convolutional neural network with batch normalization is 85.26%.Keywords: predicting, deep learning, neural network, urban trip
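As a rough illustration of the kind of motion features described above (speed, acceleration, jerk, and bearing rate derived from raw GPS points), the following Python sketch computes them with NumPy; the planar distance approximation, sampling assumptions, and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def motion_features(lat, lon, t):
    """Rough per-point motion descriptors from a GPS trajectory (illustrative only)."""
    R = 6_371_000.0  # Earth radius in metres; equirectangular approximation for short segments
    dlat = np.radians(np.diff(lat))
    dlon = np.radians(np.diff(lon)) * np.cos(np.radians(lat[:-1]))
    dist = R * np.hypot(dlat, dlon)
    dt = np.diff(t).astype(float)

    speed = dist / dt                                    # m/s
    accel = np.diff(speed) / dt[1:]                      # m/s^2
    jerk = np.diff(accel) / dt[2:]                       # m/s^3
    bearing = np.degrees(np.arctan2(dlon, dlat)) % 360.0
    bearing_rate = np.abs(np.diff(bearing)) / dt[1:]     # deg/s
    return speed, accel, jerk, bearing_rate
```

Channels like these can then be arranged into a fixed-size input grid and normalized (for example with batch normalization inside the network) before being fed to the CNN.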
Procedia PDF Downloads 139
347 Dynamics of Follicle Vascular Perfusion, Dimensions, Antrum Growth, Circulating Angiogenic Mediators from Deviation to Ovulation
Authors: Elshymaa A. Abdelnaby, Amal M. Abo El-Maaty
Abstract:
This study aimed to investigate dynamics of dominant and subordinate follicles change in dimensions, vascularity and angiogenic hormones after completing deviation till ovulation. Five cyclic mares were subjected to daily blood sampling and rectal Doppler ultrasonographic examination along two estrous cycles. Using electronic calipers, three diameters were recorded for each follicle to estimate area and volume. Leptin, Insulin-like growth factor-I (IGF-1), nitric oxide (NO) and estradiol (E2) were measured. Area of color- and power- Doppler modes with area and circumference of the first (preovulatory) and subordinate follicles were measured in pixels. Follicles were classified into F1O (preovulatory), F2O (subordinate), F3O (third ovulatory) on the dominant ovary and F1C (first contra) and F2C (second contra) on the contralateral ovary. Days before ovulation significantly (P < 0.0001) affected diameter, circumference, area, volume, area/pixel and antrum area of the preovulatory follicle. With the increase of diameter, area, volume area/pixel, antrum area/pixel and circumference of F1O, those of all subordinates were decreasing. The blue blood flow area, power and power minus red blood flow area of F1O increased from day -6 till day of ovulation (day 0), but red blood flow area significantly decreased. F1O had the lowest percent of colored pixels and percent of the colored pixels without antrum. Estradiol and leptin increased from day -6 till day 0 but IGF-1 decreased till day -1 but NO achieved a peak on day -3 then decreased till day 0. In conclusion, antrum growth, blood flow and angiogenic hormones play a role in maturation and ovulation of the dominant follicle in mares.Keywords: angiogenic hormones, blood flow, mare, preovulatory follicle
Procedia PDF Downloads 313
346 Surface Hole Defect Detection of Rolled Sheets Based on Pixel Classification Approach
Authors: Samira Taleb, Sakina Aoun, Slimane Ziani, Zoheir Mentouri, Adel Boudiaf
Abstract:
Rolling is a pressure treatment technique that modifies the shape of steel ingots or billets between rotating rollers. During this process, defects may form on the surface of the rolled sheets and are likely to affect the performance and quality of the finished product. In our study, we developed a method for detecting surface hole defects using a pixel classification approach. This work includes several steps. First, we performed image preprocessing to delimit areas with and without hole defects on the sheet image. Then, we developed the histograms of each area to generate the gray level membership intervals of the pixels that characterize each area. As we noticed an intersection between the characteristics of the gray level intervals of the images of the two areas, we finally performed a learning step based on a series of detection tests to refine the membership intervals of each area, and to choose the defect detection criterion in order to optimize the recognition of the surface hole.Keywords: classification, defect, surface, detection, hole
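A minimal sketch of the pixel-classification idea described above: build gray-level histograms for a defect region and a defect-free region, derive a membership interval from them, and label pixels accordingly. Deriving the interval from percentiles is an assumption for illustration, not the authors' refined, learning-based criterion.

```python
import numpy as np

def gray_level_interval(region_pixels, lo_pct=1, hi_pct=99):
    # Membership interval of a region, taken here as a percentile range of its gray levels
    return np.percentile(region_pixels, lo_pct), np.percentile(region_pixels, hi_pct)

def classify_pixels(image, defect_interval):
    lo, hi = defect_interval
    # Pixels whose gray level falls inside the defect interval are flagged as hole candidates
    return (image >= lo) & (image <= hi)
```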
Procedia PDF Downloads 20
345 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning
Authors: Yanwen Li, Shuguo Xie
Abstract:
In electromagnetic imaging, because of the diffraction-limited system, the pixel values can change slowly near the edges of image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore result in many errors because of this change in pixel values. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is detected before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, and the final segmentation results, which are very close to the gradient target in the original image, are obtained. Both the experimental results and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method. Keywords: gradient image, segmentation and extraction, mean-shift algorithm, dictionary learning
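For reference, the Dice coefficient used to report the 70% to 80% improvement can be computed as below; the sketch also shows scikit-learn's MeanShift applied to pixel intensities as a generic stand-in for the paper's adaptive-bandwidth Mean-Shift step, with illustrative parameters.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def dice(seg, gt):
    """Dice overlap between a binary segmentation and a binary ground-truth mask."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def mean_shift_labels(image):
    """Cluster pixel intensities with Mean-Shift and return a label image."""
    x = image.reshape(-1, 1).astype(float)
    bw = estimate_bandwidth(x, quantile=0.1, n_samples=500)
    return MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(x).reshape(image.shape)
```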
Procedia PDF Downloads 267
344 Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique
Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef
Abstract:
X-ray images are very popular as a first tool for diagnosis. Automating the analysis of such images is important in order to support physicians' procedures. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to investigate correction preprocessing of X-ray images using local adaptive filters, in order to evaluate the contrast enhancement pattern in different X-ray images, such as grey-scale images, and to evaluate the usage of a new nonlinear approach for contrast enhancement of soft tissues in X-ray images. The data were analyzed using the MATLAB program to enhance the contrast within the soft tissues and to evaluate the gray levels in both enhanced and unenhanced images and the noise variance. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring of images using the blind deconvolution algorithm. In this paper, the prominent constraints are, firstly, preservation of the image's overall look; secondly, preservation of the diagnostic content in the image; and thirdly, detection of small low-contrast details in the diagnostic content of the image. Keywords: enhancement, X-rays, pixel intensity values, MATLAB
Procedia PDF Downloads 486
343 Digital Image Steganography with Multilayer Security
Authors: Amar Partap Singh Pharwaha, Balkrishan Jindal
Abstract:
In this paper, a new method is developed for hiding image in a digital image with multilayer security. In the proposed method, the secret image is encrypted in the first instance using a flexible matrix based symmetric key to add first layer of security. Then another layer of security is added to the secret data by encrypting the ciphered data using Pythagorean Theorem method. The ciphered data bits (4 bits) produced after double encryption are then embedded within digital image in the spatial domain using Least Significant Bits (LSBs) substitution. To improve the image quality of the stego-image, an improved form of pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value and Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness. In fact, the results of this study are quite promising.Keywords: Pythagorean theorem, pixel adjustment, ciphered data, image hiding, least significant bit, flexible matrix
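The final embedding step (hiding the doubly encrypted 4-bit groups in the least significant bits of the cover image) can be sketched as follows; the pixel-adjustment refinement and the two encryption layers are omitted, and the bit layout is an assumption, not the authors' exact scheme.

```python
import numpy as np

def embed_lsb(cover, secret_bits, n_lsb=4):
    """Hide a bit string in the n_lsb least significant bits of a grayscale cover image."""
    flat = cover.astype(np.uint8).flatten()          # copy, so the cover image is untouched
    groups = [secret_bits[i:i + n_lsb] for i in range(0, len(secret_bits), n_lsb)]
    assert len(groups) <= flat.size, "cover image too small for the payload"
    mask = np.uint8((0xFF << n_lsb) & 0xFF)          # keep the high bits of each pixel
    for i, g in enumerate(groups):
        flat[i] = (flat[i] & mask) | int(g.ljust(n_lsb, '0'), 2)
    return flat.reshape(cover.shape)
```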
Procedia PDF Downloads 337
342 Individualized Emotion Recognition Through Dual-Representations and Ground-Established Ground Truth
Authors: Valentina Zhang
Abstract:
While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide different, sometimes complementing views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading which is highly desirable in autism care and other applications context sensitive to errors. In this paper, we first study FER using pixel-based DL vs semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better with articulated facial feature changes, the pixel-trained model outperforms on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system learning from both types of models for dyadic or small interacting groups and further leveraging the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach.Keywords: neurodivergence care, facial emotion recognition, deep learning, ground truth for supervised learning
Procedia PDF Downloads 147
341 Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences
Authors: T. Hari Prasath, P. Ithaya Rani
Abstract:
In the world of visual technology, recognizing emotions from face images is a challenging task. Several related methods have not utilized the dynamic facial features effectively for high performance. This paper proposes a method for emotion recognition using dynamic facial features with high performance. Initially, local features are captured by Gabor filters with different scales and orientations in each frame, in order to find the position and scale of the face part against different backgrounds. The Gabor features are sent to an ensemble classifier for detecting Gabor facial features. The region of dynamic features is captured from the Gabor facial features in consecutive frames, which represent the dynamic variations of facial appearance. Each region of dynamic features is normalized using the Z-score normalization method and further encoded into binary pattern features with the help of threshold values. The binary features are passed to a multi-class AdaBoost classifier algorithm, with a well-trained database containing happiness, sadness, surprise, fear, anger, disgust, and neutral expressions, to classify the discriminative dynamic features for emotion recognition. The developed method is evaluated on the Ryerson Multimedia Research Lab and Cohn-Kanade databases and shows significant performance improvement, owing to its dynamic features, when compared with existing methods. Keywords: detecting face, Gabor filter, multi-class AdaBoost classifier, Z-score normalization
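The Z-score normalization and thresholding into binary pattern features mentioned above can be illustrated with the short sketch below; the threshold of zero (the mean after normalization) is an assumed choice rather than the paper's tuned value.

```python
import numpy as np

def zscore_to_binary(region, threshold=0.0):
    """Z-score normalize a feature region and encode it as a binary pattern."""
    z = (region - region.mean()) / (region.std() + 1e-8)   # Z-score normalization
    return (z > threshold).astype(np.uint8)                # binary pattern feature
```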
Procedia PDF Downloads 279
340 A Case Study for User Rating Prediction on Automobile Recommendation System Using Mapreduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles. What's more, computational speed is a major weakness of collaborative filtering technology. Therefore, using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, we present a recommendation approach for users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comments, and provide recommendations that help automobile providers predict users' comments on automobiles with newly introduced properties. Firstly, we address the sparseness of the matrix using a previously constructed score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since different dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed has been improved considerably. UV decomposition, used in this paper, is a commonly used matrix factorization technique in CF that does not require calculating the interpolation weights of neighbors, which makes it more convenient in industry. Keywords: collaborative filtering, recommendation, data normalization, MapReduce
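A compact, single-machine sketch of the normalization and UV-decomposition ideas described above (without the MapReduce layer): ratings are mean-centred to remove global bias, and the sparse matrix is factorized into U and V by gradient descent. The hyperparameters and update rule are illustrative assumptions.

```python
import numpy as np

def uv_decompose(R, k=10, steps=200, lr=0.01, reg=0.05):
    """Factorize a rating matrix R (0 = missing) into U @ V.T after mean-centring."""
    mask = R > 0
    mu = R[mask].mean()
    Rn = np.where(mask, R - mu, 0.0)              # normalization: remove the global mean
    m, n = R.shape
    U = np.random.rand(m, k) * 0.1
    V = np.random.rand(n, k) * 0.1
    for _ in range(steps):
        err = mask * (Rn - U @ V.T)               # error only on observed entries
        U += lr * (err @ V - reg * U)
        V += lr * (err.T @ U - reg * V)
    return U, V, mu                               # predicted rating: mu + U[i] @ V[j]
```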
Procedia PDF Downloads 217
339 Training a Neural Network to Segment, Detect and Recognize Numbers
Authors: Abhisek Dash
Abstract:
This study had three neural networks, one for number segmentation, one for number detection and one for number recognition all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had lighter background and darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a window(7x7) over that pixel as focus, the eight neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels. To this end the segmentation network had 16 outputs. They were arranged as “go east”, ”don’t go east ”, “go south east”, “don’t go south east”, “go south”, “don’t go south” and so on w.r.t focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time stitched to the existing partial image of the number one at a time and trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered black pixels. During testing the network scans and looks for the first dark pixel. From here on the network predicts which neighborhoods to consider and segments the image. After this step the group of neighborhoods are passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training the detection network outputted in favor of number not found until the bounds were not met and vice versa. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of numbers from 0 to 9. This network was activated only when the detection network votes in favor of number detected. The above methodology could segment connected and overlapping numbers. Additionally the recognition unit was only invoked when a number was detected which minimized false positives. It also eliminated the need for rules of thumb as segmentation is learned. The strategy can also be extended to other characters as well.Keywords: convolutional neural networks, OCR, text detection, text segmentation
Procedia PDF Downloads 163
338 Image Enhancement of Histological Slides by Using Nonlinear Transfer Function
Authors: D. Suman, B. Nikitha, J. Sarvani, V. Archana
Abstract:
Histological slides have provided clinical diagnostic information about subjects since ancient times. Even with the advent of high-resolution imaging cameras, the images tend to have some background noise, which makes the analysis complex. A study of the histological slides is carried out using a nonlinear transfer function based image enhancement method. The method processes the raw, color images acquired from the biological microscope, which, in general, are associated with background noise. The images usually appear blurred and do not convey the intended information. In this regard, an enhancement method is proposed and implemented on 50 histological slides of human tissue using the nonlinear transfer function method. The histological image is converted into an HSV color image. The luminance value (V component) of the image is enhanced, because changes in the H and S components could alter the color balance between the HSV components. The HSV image is divided into smaller blocks for carrying out dynamic range compression using a linear transformation function. Each pixel in a block is enhanced based on the contrast of the center pixel and its neighborhood. After processing the V component, the HSV image is transformed back into a color image. The study has shown improvement in the characteristics of the images, so that the significant details of the histological images were enhanced. Keywords: HSV space, histology, enhancement, image
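A minimal sketch of the HSV-based route described above, using scikit-image: only the V (luminance) channel is stretched, so the H and S components, and hence the colour balance, are left untouched. The global percentile stretch here is an assumed stand-in for the paper's block-wise dynamic-range compression.

```python
import numpy as np
from skimage import color, exposure

def enhance_v_channel(rgb):
    """Enhance the luminance channel of an RGB image while preserving hue and saturation."""
    hsv = color.rgb2hsv(rgb)
    v = hsv[..., 2]
    hsv[..., 2] = exposure.rescale_intensity(
        v,
        in_range=(np.percentile(v, 2), np.percentile(v, 98)),
        out_range=(0.0, 1.0),
    )
    return color.hsv2rgb(hsv)
```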
Procedia PDF Downloads 329
337 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research
Authors: Saif Al-Qaisi, Alif Saba
Abstract:
Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants’ physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data is reported as a percentage of the muscle’s isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing EMG activity of back muscles. This research tests and compares the recommended MVC techniques in the literature for three back muscles commonly used in ergonomics research, which are the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Tringo wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same, which included trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat-pull down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and then the averages were computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants was needed to detect significant differences in the Tukey tests. The arch test was associated with the highest EMG average at the LES, and also it resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average, and it resulted in the maximum EMG activity the most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although all the aforementioned techniques were superior in their averages, they did not always result in the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics
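The normalization itself is a simple ratio; a sketch of expressing a task EMG recording as %MVC, given the maximum signal recorded across the candidate MVC trials, is shown below. Filtering and envelope smoothing steps are omitted, and the function names are placeholders.

```python
import numpy as np

def percent_mvc(task_emg, mvc_trials):
    """Normalize rectified EMG to the maximum value found across MVC trials (%MVC)."""
    mvc_max = max(np.max(np.abs(trial)) for trial in mvc_trials)
    return 100.0 * np.abs(task_emg) / mvc_max
```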
Procedia PDF Downloads 194
336 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
A computer-aided decision making system is used to enhance the diagnosis and prognosis of diseases and also to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision making systems. A fuzzy rule based inference technique can be used for classification in order to incorporate human reasoning into the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making. Keywords: decision making, data mining, normalization, fuzzy rule, classification
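The preprocessing described above (mean or mode imputation followed by min-max normalization) can be sketched with pandas as follows; the column grouping and function name are placeholders, not taken from the paper.

```python
import pandas as pd

def preprocess(df, numeric_cols, categorical_cols):
    df = df.copy()
    for c in numeric_cols:                      # impute numeric attributes with the mean
        df[c] = df[c].fillna(df[c].mean())
    for c in categorical_cols:                  # impute categorical attributes with the mode
        df[c] = df[c].fillna(df[c].mode()[0])
    for c in numeric_cols:                      # min-max normalization to [0, 1]
        lo, hi = df[c].min(), df[c].max()
        df[c] = (df[c] - lo) / (hi - lo) if hi > lo else 0.0
    return df
```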
Procedia PDF Downloads 518
335 Correlation of Building Density toward Land Surface Temperature 2018 in Medan City
Authors: Andi Syahputra, R. H. Jatmiko, D. R. Hizbaron
Abstract:
Land surface temperature (LST) in an area is influenced by vegetation density, building density, and the number of inhabitants who live in the area. Medan City is one of the largest cities in Indonesia, with a high rate of change from vegetation to developed land. This study aims to identify the relationship between the percentage of building density and land surface temperature in Medan City. A pixel-based image analysis method is carried out to obtain the building density values from Landsat 8 images with the help of WorldView-2 satellite imagery. The results showed that the highest land surface temperature in 2018, 35.4°C, was found in Medan Perjuangan District, and the lowest, 22.5°C, in Medan Belawan District. Building density samples with a density level of 889.17 m were also found in Medan Perjuangan District, while the lowest building density sample was found in Medan Timur District. Linear regression analysis of the effect of building density on land surface temperature obtained a correlation (R) of 0.64 and a coefficient of determination (R²) of 0.411, while modeling of building density based on the LST gave a coefficient of determination (R²) of 0.72, with an RMSE of 0.853. Keywords: land surface temperature, Landsat, imagery, building density, vegetation, density
Procedia PDF Downloads 152
334 Obstacle Avoidance Using Image-Based Visual Servoing Based on Deep Reinforcement Learning
Authors: Tong He, Long Chen, Irag Mantegh, Wen-Fang Xie
Abstract:
This paper proposes an image-based obstacle avoidance and tracking target identification strategy for an Unmanned Aerial Vehicle (UAV) in a GPS-degraded or GPS-denied environment. The traditional force algorithm for obstacle avoidance can produce local minima areas, in which the UAV cannot move away from an obstacle effectively. In order to eliminate this, an artificial potential approach based on harmonic potentials is proposed to guide the UAV to avoid the obstacle by using the vision system. An image-based visual servoing (IBVS) scheme has been adopted to implement the proposed obstacle avoidance approach. In IBVS, pixel accuracy is a key factor in realizing obstacle avoidance. In this paper, a deep reinforcement learning framework has been applied to reduce pixel errors through constant interaction between the environment and the agent. In addition, the combination of OpenTLD and TensorFlow based on a neural network is used to identify the type of tracking target. Numerical simulations in MATLAB and ROS Gazebo show satisfactory results in target identification and obstacle avoidance. Keywords: image-based visual servoing, obstacle avoidance, tracking target identification, deep reinforcement learning, artificial potential approach, neural network
Procedia PDF Downloads 143
333 Field-Programmable Gate Arrays Based High-Efficiency Oriented Fast and Rotated Binary Robust Independent Elementary Feature Extraction Method Using Feature Zone Strategy
Authors: Huang Bai-Cheng
Abstract:
When deploying the Oriented Fast and Rotated Binary Robust Independent Elementary Feature (BRIEF) (ORB) extraction algorithm on field-programmable gate arrays (FPGA), the access of global storage for 31×31 pixel patches of the features has become the bottleneck of the system efficiency. Therefore, a feature zone strategy has been proposed. Zones are searched as features are detected. Pixels around the feature zones are extracted from global memory and distributed into patches corresponding to feature coordinates. The proposed FPGA structure is targeted on a Xilinx FPGA development board of Zynq UltraScale+ series, and multiple datasets are tested. Compared with the streaming pixel patch extraction method, the proposed architecture obtains at least two times acceleration consuming extra 3.82% Flip-Flops (FFs) and 7.78% Look-Up Tables (LUTs). Compared with the non-streaming one, the proposed architecture saves 22.3% LUT and 1.82% FF, causing a latency of only 0.2ms and a drop in frame rate for 1. Compared with the related works, the proposed strategy and hardware architecture have the superiority of keeping a balance between FPGA resources and performance.Keywords: feature extraction, real-time, ORB, FPGA implementation
Procedia PDF Downloads 122
332 Accurate Cortical Reconstruction in Narrow Sulci with Zero-Non-Zero Distance (ZNZD) Vector Field
Authors: Somojit Saha, Rohit K. Chatterjee, Sarit K. Das, Avijit Kar
Abstract:
A new force field is designed for propagation of the parametric contour into deep narrow cortical fold in the application of knowledge based reconstruction of cerebral cortex from MR image of brain. Designing of this force field is highly inspired by the Generalized Gradient Vector Flow (GGVF) model and markedly differs in manipulation of image information in order to determine the direction of propagation of the contour. While GGVF uses edge map as its main driving force, the newly designed force field uses the map of distance between zero valued pixels and their nearest non-zero valued pixel as its main driving force. Hence, it is called Zero-Non-Zero Distance (ZNZD) force field. The objective of this force field is forceful propagation of the contour beyond spurious convergence due to partial volume effect (PVE) in to narrow sulcal fold. Being function of the corresponding non-zero pixel value, the force field has got an inherent property to determine spuriousness of the edge automatically. It is effectively applied along with some morphological processing in the application of cortical reconstruction to breach the hindrance of PVE in narrow sulci where conventional GGVF fails.Keywords: deformable model, external force field, partial volume effect, cortical reconstruction, MR image of brain
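The core quantity behind the ZNZD field (for every zero-valued pixel, the distance to, and identity of, its nearest non-zero pixel) can be obtained with SciPy's Euclidean distance transform, as in this sketch; turning these maps into the actual force field, and the morphological processing, are not shown.

```python
import numpy as np
from scipy import ndimage

def znzd_maps(img):
    """Distance from each zero pixel to the nearest non-zero pixel, plus that pixel's value."""
    zero_mask = (img == 0)
    dist, inds = ndimage.distance_transform_edt(zero_mask, return_indices=True)
    iy, ix = inds
    nearest_value = img[iy, ix]   # value of the nearest non-zero pixel (the pixel itself where img != 0)
    return dist, nearest_value
```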
Procedia PDF Downloads 398
331 Content-Aware Image Augmentation for Medical Imaging Applications
Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang
Abstract:
Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high quality and labeled training image datasets. The training images may come from different sources and be acquired from different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiography, based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can preserve the important information in lung fields, local structures, and the global visual effect adequately. The proposed method can be used to augment training and testing image data sets so that the trained machine learning model can be used to process CXRs from various sources, and it can potentially be used broadly in any medical imaging application. Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving
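The pixel-intensity normalization step described above (matching digital CXR histograms to a digitized-film reference) maps directly onto scikit-image's histogram matching, as in this small sketch; the seam-carving resize step is omitted here, and the function name is a placeholder.

```python
from skimage import exposure

def normalize_intensities(digital_cxr, digitized_reference):
    """Match the intensity distribution of a digital CXR to a digitized-film reference image."""
    return exposure.match_histograms(digital_cxr, digitized_reference)
```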
Procedia PDF Downloads 224
330 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of small-area plaque extraction with a high degree of landscape fragmentation is greatly limited. To this end, the study combines Landsat data with higher spatial resolution and MODIS data with higher temporal resolution for decision-level fusion. Considering the importance of the land heterogeneity factor in the fusion process, it is superimposed with the weighting factor, which is to linearly weight the Landsat classification result and the MOIDS classification result. Three levels were used to complete the process of data fusion, that is the pixel of MODIS data, the pixel of Landsat data, and objects level that connect between these two levels. The multilevel decision fusion scheme was tested in two sites of the lower Mekong basin. We put forth a comparison test, and it was proved that the classification accuracy was improved compared with the single data source classification results in terms of the overall accuracy. The method was also compared with the two-level combination results and a weighted sum decision rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.Keywords: image classification, decision fusion, multi-temporal, remote sensing
Procedia PDF Downloads 124
329 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate, because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement used in the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, compared to the standard method. Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric
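A sketch of a patch-based boundary weight of the kind described above: the weight between two neighbouring pixels is computed from the distance between the patches centred on them. The Gaussian of the mean squared patch difference, the patch radius, and sigma are assumed forms for illustration, not the paper's exact definition.

```python
import numpy as np

def patch_weight(img, p, q, radius=2, sigma=10.0):
    """Boundary weight between neighbouring pixels p and q from their surrounding patches.
    Assumes p and q lie at least `radius` pixels away from the image border."""
    def patch(c):
        y, x = c
        return img[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    d2 = np.mean((patch(p) - patch(q)) ** 2)      # patch dissimilarity (mean squared difference)
    return np.exp(-d2 / (2.0 * sigma ** 2))       # high weight for similar patches
```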
Procedia PDF Downloads 169
328 Madness in Susanna Kaysen’s Girl, Interrupted: A Foucauldian Reading
Authors: Somaye Sabetnia
Abstract:
This paper probes Susanna Kaysen’s memoir Girl, Interrupted in the light of Michel Foucault’s theory of madness, comprehensively set forth in his History of Madness (1961). It is an endeavor to analyze this work based on Foucault’s idea of madness. In his archeological study of madness, Foucault introduces a way to perceive madness and its association with dominant discourses. He argues that the concept of madness is constructed within the social context, and different institutions affect its definition. Furthermore, he takes into consideration how each era treats madness, and affirms that in modern times, people considered mad are exiled out of cities and confined in madhouses, and later in clinics where they are treated with drugs. Set after World War II, the memoir under observation highlights women’s condition, in which they either became housewives or followed their own desires; in fact, choosing the latter resulted in being labeled mad. The protagonist is labeled 'mad' and is hence impelled to go to asylums, where so-called patients are under the vigilant surveillance of the authorities and go through the process of 'normalization.' To discern how she is considered 'mad,' this article probes the dominant discourse of the time when the story takes place, to provide a better understanding of madness under the impact of social, cultural, and political conditions. It examines how a so-called mad person is considered 'Other' and treated after being confined by the disciplinary system of the asylum in a panoptic world. In addition, it describes how the aim of treatment is to punish and control a patient, not to cure. This article aims to indicate that Susanna Kaysen tries to show that what is defined as women’s madness is the result of the patriarchal society of post-war America, and that mental illness has nothing to do with blood; it is rather the result of the social inequality of the age. Keywords: clinical treatment, disciplining and punishment, dominant discourse, normalization, other, panoptic world, reason vs. unreason
Procedia PDF Downloads 323
327 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and the optimal allocation of resources in the cloud are based on the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage requirements. In order to increase their efficiency, it is necessary to plan the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in scheduling tasks and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of work planning and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served method. Resource prioritization is done using the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm in this system, the Linear Max-Min and Linear Max normalization methods are used, as they are the best choices for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, return time and execution time of input tasks in the proposed method compared to similar (basic) methods. Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
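The two normalization schemes named above are simple column transforms on the decision matrix; a sketch for benefit-type criteria is given below, assuming the usual definitions (divide by the column maximum, or rescale by the column range).

```python
import numpy as np

def linear_max(col):
    """Linear Max normalization: divide by the column maximum."""
    col = np.asarray(col, dtype=float)
    return col / col.max()

def linear_max_min(col):
    """Linear Max-Min normalization: rescale to [0, 1] using the column range."""
    col = np.asarray(col, dtype=float)
    return (col - col.min()) / (col.max() - col.min())
```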
Procedia PDF Downloads 146
326 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique
Authors: Ahmet Karagoz, Irfan Karagoz
Abstract:
Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day and night. In this study, SAR images of military vehicles with different azimuth and descent angles are pre-processed at the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filters are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is parsed with the brightest 20% pixel value of 255 and the other pixel values of 0. In addition, by using appropriate parameters of statistical region merging algorithm, segmentation comparison is performed. In the step of feature extraction, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A number of Gabor filters are created by changing the orientation, frequency and angle parameters of the Gabor filters to extract important features of the images that form the distinctive parts. Finally, images are classified by sparse representation method. In the study, l₁ norm analysis of sparse representation is used. A joint database of the feature vectors generated by the target images of military vehicle types is obtained side by side and this database is transformed into the matrix form. In order to classify the vehicles in a similar way, the test images of each vehicle is converted to the vector form and l₁ norm analysis of the sparse representation method is applied through the existing database matrix form. As a result, correct recognition has been performed by matching the target images of military vehicles with the test images by means of the sparse representation method. 97% classification success of SAR images of different military vehicle types is obtained.Keywords: automatic target recognition, sparse representation, image classification, SAR images
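The segmentation rule described above (keep the brightest 20% of pixels as the target at 255 and set the rest to 0) reduces to a percentile threshold, as in this sketch; the speckle filtering, Gabor feature extraction, and sparse-representation stages are not shown.

```python
import numpy as np

def segment_brightest(img, keep_fraction=0.20):
    """Set the brightest fraction of pixels to 255 and all remaining pixels to 0."""
    threshold = np.percentile(img, 100 * (1 - keep_fraction))
    return np.where(img >= threshold, 255, 0).astype(np.uint8)
```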
Procedia PDF Downloads 367
325 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers’ needs in this strong telecom market, especially for those customers who are looking to switch service providers. So, churn prediction is now a mandatory requirement for retaining those customers. Machine learning can be utilized to accomplish this. Churn prediction has become a very important topic in terms of machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers’ churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Evaluation of the performance was conducted using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the other algorithms in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well. Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
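A condensed sketch of the kind of workflow compared above, using scikit-learn: normalization, feature selection, and a gradient boosting classifier evaluated with the F1 score and ROC-AUC. The train/test split, number of selected features, and hyperparameters are illustrative, not those of the study.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

def churn_model(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    pipe = Pipeline([
        ("scale", StandardScaler()),                 # normalization
        ("select", SelectKBest(f_classif, k=20)),    # feature selection
        ("clf", GradientBoostingClassifier()),       # gradient boosting classifier
    ])
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    proba = pipe.predict_proba(X_te)[:, 1]
    return f1_score(y_te, pred), roc_auc_score(y_te, proba)
```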
Procedia PDF Downloads 134
324 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, a batch normalization was added to the original UNet model to produce lighter and better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
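The modification described above (adding batch normalization to the UNet) is commonly written as convolution, batch-norm, ReLU pairs; a PyTorch sketch of such a double-convolution building block follows. It is a generic illustration, not the authors' exact architecture or channel configuration.

```python
import torch.nn as nn

class DoubleConvBN(nn.Module):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU (a UNet building block)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```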
Procedia PDF Downloads 132
323 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes
Authors: Adandé Belarmain Fandohan
Abstract:
Climate change has become a major concern globally, particularly in rural communities that have to find rapid coping solutions. Several vulnerability assessment approaches have been developed in the last decades. This comes along with a higher risk for different methods to result in different conclusions, thereby making comparisons difficult and decision-making non-consistent across areas. The effect of methods and computational approaches on estimates of people’s vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting vulnerability components: (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews. One hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared over three computational approaches: the maximum value transformation normalization, the z-score transformation normalization, and simple averaging. Regardless of the approaches used, communities that have high exposure to climate change and extreme events were the most vulnerable. Furthermore, the vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing output across approaches, overall, people in the study area were found to be highly vulnerable using the simple average and maximum value transformation, whereas they were only moderately vulnerable using the z-score transformation approach. It is suggested that assessment approach-induced discrepancies be accounted for in international debates to harmonize/standardize assessment approaches to the end of making outputs comparable across regions. This will also likely increase the relevance of decision-making for adaptation policies.Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation
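To make the comparison concrete, here is a sketch of the three computation routes discussed above applied to an indicator matrix (rows are households, columns are indicators): maximum value transformation, z-score transformation, and simple averaging. Equal indicator weights and the exact transformation formulas are assumptions for illustration.

```python
import numpy as np

def vulnerability_scores(X):
    """Composite vulnerability scores per household under three normalization/aggregation choices."""
    X = np.asarray(X, dtype=float)
    max_val = (X / X.max(axis=0)).mean(axis=1)                      # maximum value transformation
    zscore = ((X - X.mean(axis=0)) / X.std(axis=0)).mean(axis=1)    # z-score transformation
    simple = X.mean(axis=1)                                          # simple averaging of raw values
    return max_val, zscore, simple
```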
Procedia PDF Downloads 105
322 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
Authors: Deng Zengming, Wang Mingjiang
Abstract:
As 3D video is explored as a hot research topic in the last few decades, free-viewpoint TV (FTV) is no doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is obviously a crucial technology for FTV; it enables to render images in unlimited numbers of virtual viewpoints with the information from limited numbers of reference view. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process is driven in consideration of the temporal correlation of image sequences. The temporal correlations will be exploited to produce fine synthesis results even near the foreground boundaries. As for the blending priority, this scheme proposed that one of the two reference views is selected to be the main reference view based on the distance between the reference views and virtual view, another view is chosen as the auxiliary viewpoint, just assist to fill the hole pixel with the help of background information. Significant improvement of the proposed approach over the state-of –the-art pixel-based virtual view synthesis method is presented, the results of the experiments show that subjective gains can be observed, and objective PSNR average gains range from 0.5 to 1.3 dB, while SSIM average gains range from 0.01 to 0.05.Keywords: fusion method, Gaussian mixture model, hybrid framework, view synthesis
Procedia PDF Downloads 251
321 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we tried to convey the fusion and state of the art pertaining to SIL character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrated on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. The edge-hinge distribution is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are measured; their directions are measured and stored as pairs. A joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy. Keywords: word segmentation and recognition, character recognition, optical character recognition, hand written character recognition, South Indian languages
Procedia PDF Downloads 497