Search results for: probabilistic matching model for binary images

18957 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with abnormal shapes in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, most of which show abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to hearts with unusual shapes caused by cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 335
18956 A Comprehensive Study of Camouflaged Object Detection Using Deep Learning

Authors: Khalak Bin Khair, Saqib Jahir, Mohammed Ibrahim, Fahad Bin, Debajyoti Karmaker

Abstract:

Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. On top of object detection, we detect camouflaged objects within an image using deep learning techniques. Deep learning is a subset of machine learning that is essentially a neural network with three or more layers. Over 6,500 images that possess camouflage properties were gathered from various internet sources and divided into 4 categories to compare the results. Those images were labeled and then used to train and test a VGG16 architecture in a Jupyter notebook on the TensorFlow platform. The architecture is further customized using transfer learning. Transfer learning provides methods for transferring information from one or more source tasks to improve learning in a related target task. The purpose of these transfer learning methodologies is to aid in the evolution of machine learning to the point where it is as efficient as human learning.
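
A minimal sketch of the transfer-learning setup described above, assuming TensorFlow/Keras; the frozen VGG16 base, the custom head, the directory layout, and the four output classes are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load VGG16 pretrained on ImageNet and freeze its convolutional base (transfer learning).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Custom classification head for the four camouflage categories (assumed class count).
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labeled camouflage images, one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "camouflage_images/train", image_size=(224, 224), label_mode="categorical", batch_size=32)
model.fit(train_ds, epochs=10)
```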

Keywords: deep learning, transfer learning, TensorFlow, camouflage, object detection, architecture, accuracy, model, VGG16

Procedia PDF Downloads 139
18955 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA studies that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, whereas magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, located in an area of moderate to high seismicity (southeast of France) and where the fault is assumed to have a low strain rate.
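
The following sketch illustrates the generic slip-rate-to-activity-rate conversion mentioned above by balancing the fault's annual seismic moment rate over a truncated Gutenberg-Richter distribution; the rigidity, fault area, b-value, and magnitude bounds are illustrative assumptions, not values from the paper or from SHERIFS.

```python
import numpy as np

def slip_rate_to_gr_rates(slip_rate_mm_yr, fault_area_km2, m_min=6.0, m_max=7.5,
                          b_value=1.0, rigidity_pa=3.0e10):
    """Distribute a fault's annual moment rate over a truncated Gutenberg-Richter
    distribution and return annual event rates per magnitude bin (illustrative)."""
    # Annual seismic moment rate: mu * A * slip rate (converted to SI units).
    moment_rate = rigidity_pa * fault_area_km2 * 1e6 * slip_rate_mm_yr * 1e-3  # N.m per year

    mags = np.arange(m_min, m_max + 0.1, 0.1)
    # Relative truncated GR frequencies, normalised to a probability mass function.
    rel = 10.0 ** (-b_value * mags)
    pmf = rel / rel.sum()

    # Seismic moment of each magnitude (Hanks & Kanamori): M0 = 10^(1.5*Mw + 9.1) N.m.
    m0 = 10.0 ** (1.5 * mags + 9.1)

    # Scale the pmf so the total released moment matches the fault's moment rate.
    scale = moment_rate / np.sum(pmf * m0)
    return mags, pmf * scale  # annual rate of events per 0.1-magnitude bin

mags, rates = slip_rate_to_gr_rates(slip_rate_mm_yr=0.5, fault_area_km2=300.0)
print(dict(zip(np.round(mags, 1), np.round(rates, 6))))
```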

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 58
18954 Red Green Blue Image Encryption Based on Paillier Cryptographic System

Authors: Mamadou I. Wade, Henry C. Ogworonjo, Madiha Gul, Mandoye Ndoye, Mohamed Chouikha, Wayne Patterson

Abstract:

In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (Red Green Blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each channel's pixel intensity values. Next, the encrypted image is combined and compressed, if necessary, before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses on the recovered images in order to verify their robustness to security attacks. The results show that the proposed image encryption scheme produces highly secure encrypted images.
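
A small sketch of the per-channel Paillier encryption/decryption idea, assuming the python-paillier (phe) package and a tiny synthetic image; the paper does not name a library, and the key length and image size here are illustrative only.

```python
import numpy as np
from phe import paillier  # python-paillier package (an assumption; not named in the paper)

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# A tiny synthetic "RGB image" stands in for a real one.
rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Encrypt every pixel intensity of every channel with the Paillier public key.
encrypted_channels = [
    [public_key.encrypt(int(v)) for v in rgb[:, :, c].ravel()] for c in range(3)
]

# Decryption recovers the original intensities channel by channel.
decrypted = np.stack(
    [np.array([private_key.decrypt(e) for e in ch]).reshape(4, 4) for ch in encrypted_channels],
    axis=-1,
).astype(np.uint8)
assert np.array_equal(decrypted, rgb)
```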

Keywords: image encryption, Paillier cryptographic system, RGB image encryption, Paillier

Procedia PDF Downloads 233
18953 Human Behavior Modeling in Video Surveillance of Conference Halls

Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini

Abstract:

In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model the normal behaviors in conference halls. We exploit the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data and learning the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation preserves the spatial information, the object trajectories, and the motion. A key advantage of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.

Keywords: activity modeling, clustering, PLSA, video representation

Procedia PDF Downloads 391
18952 A Novel Probabilistic Spatial Locality of Reference Technique for Automatic Cleansing of Digital Maps

Authors: A. Abdullah, S. Abushalmat, A. Bakshwain, A. Basuhail, A. Aslam

Abstract:

GIS (Geographic Information System) applications require geo-referenced data; this data may be available as databases or in the form of digital or hard-copy agro-meteorological maps. These parameter maps are color-coded, with different regions corresponding to different parameter values, and converting such maps into a database is not very difficult. However, text and various planimetric elements overlaid on these maps make an accurate image-to-database conversion a challenging problem. The reason is that it is almost impossible to exactly recover what was underneath the text or icons, which points to the need for inpainting. In this paper, we propose a probabilistic inpainting approach that uses the probability of spatial locality of colors in the map to replace overlaid elements with the underlying color. We tested the limits of our proposed technique using non-textual simulated data and compared text-removal results with a popular image editing tool on public domain data, with promising results.
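
A minimal sketch of the spatial-locality idea: each masked (overlay) pixel is replaced by a color sampled in proportion to the color frequencies of its unmasked neighbours. The window size, sampling rule, and toy map are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def probabilistic_inpaint(image, mask, window=5, seed=0):
    """Replace masked pixels with colors drawn from the empirical color
    distribution of their local neighbourhood (spatial locality of colors)."""
    rng = np.random.default_rng(seed)
    out, todo = image.copy(), np.argwhere(mask)
    half = window // 2
    for y, x in todo:
        y0, y1 = max(0, y - half), min(image.shape[0], y + half + 1)
        x0, x1 = max(0, x - half), min(image.shape[1], x + half + 1)
        patch = out[y0:y1, x0:x1][~mask[y0:y1, x0:x1]]   # unmasked neighbours only
        if patch.size == 0:
            continue  # isolated pixel; a larger window or a second pass would handle it
        colors, counts = np.unique(patch.reshape(-1, image.shape[2]), axis=0, return_counts=True)
        out[y, x] = colors[rng.choice(len(colors), p=counts / counts.sum())]
    return out

# Toy example: a two-color "map" with a small overlaid text block to remove.
img = np.zeros((20, 20, 3), dtype=np.uint8); img[:, 10:] = (0, 128, 0); img[:, :10] = (200, 180, 60)
overlay = np.zeros((20, 20), dtype=bool); overlay[8:12, 8:12] = True
cleaned = probabilistic_inpaint(img, overlay)
```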

Keywords: noise, image, GIS, digital map, inpainting

Procedia PDF Downloads 348
18951 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and software with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
18950 Osteoarthritis (OA): A Total Knee Replacement Surgery

Authors: Loveneet Kaur

Abstract:

Introduction: Osteoarthritis (OA) is one of the leading causes of disability, and the knee is the most commonly affected joint in the body. The last resort for treatment of knee OA is Total Knee Replacement (TKR) surgery. Despite numerous advances in prosthetic design, patients do not reach normal function after surgery. Current surgical decisions are made on 2D radiographs and patient interviews. Aims: The aim of this study was to compare knee kinematics pre and post-TKR surgery using computer-animated images of patient-specific models under everyday conditions. Methods: 7 subjects were recruited for the study. Subjects underwent 3D gait analysis during 4 everyday activities and medical imaging of the knee joint pre-surgery and one month post-surgery. A 3D model was created from each of the scans, and the kinematic gait analysis data were used to animate the images. Results: Improvements were seen in range of motion in all 4 activities one year post-surgery. The preoperative 3D images provide detailed information on the anatomy of the osteoarthritic knee. The postoperative images demonstrate potential future problems associated with the implant. Although not accurate enough to be of clinical use, the animated data can provide valuable insight into what conditions cause damage to both the osteoarthritic and prosthetic knee joints. As the animated data do not require specialist training to view, the images can be utilized across the fields of health professionals and manufacturing in the assessment and treatment of patients pre and post knee replacement surgery. Future improvements in the collection and processing of data may yield clinically useful data. Conclusion: Although not yet of clinical use, the potential application of 3D animations of the knee joint pre and post-surgery is widespread.

Keywords: osteoporosis, osteoarthritis, knee replacement, TKR

Procedia PDF Downloads 43
18949 Active Contours for Image Segmentation Based on Complex Domain Approach

Authors: Sajid Hussain

Abstract:

A complex domain approach for image segmentation based on active contours has been designed, in which the contour deforms step by step to partition an image into numerous expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour, and the active contour stops accurately on the exact edges of the object. The proposed model makes the level set function binary and uses a Gaussian smoothing kernel to regularize it and avoid the re-initialization procedure. The working principle of the proposed model is as follows: the real image data is transformed into complex data by adding iota (i) times the image data, and the average of iota (i) times the horizontal and vertical components of the gradient of the image data is inserted into the proposed model to capture the complex gradient of the image data. A simple finite difference mathematical technique has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models.

Keywords: image segmentation, active contour, level set, Mumford and Shah model

Procedia PDF Downloads 104
18948 Unspoken Playground Rules Prompt Adolescents to Avoid Physical Activity: A Focus Group Study of Constructs in the Prototype Willingness Model

Authors: Catherine Wheatley, Emma L. Davies, Helen Dawes

Abstract:

The health benefits of exercise are widely recognised, but numerous interventions have failed to halt a sharp decline in physical activity during early adolescence. Many such projects are underpinned by the Theory of Planned Behaviour, yet this model of rational decision-making leaves variance in behaviour unexplained. This study investigated whether the Prototype Willingness Model, which proposes a second, reactive decision-making path to account for spontaneous responses to the social environment, has potential to improve understanding of adolescent exercise behaviour in school by exploring constructs in the model with young people. PE teachers in 4 Oxfordshire schools each nominated 6 pupils who were active in school, and 6 who were inactive, to participate in the study. Of these, 45 (22 male) aged 12-13 took part in 8 focus group discussions. These were transcribed and subjected to deductive thematic analysis to search for themes relating to the Prototype Willingness Model. Participants appeared to make rational decisions about commuting to school or attending sports clubs, but spontaneous choices to be inactive during both break and PE. These reactive decisions seemed influenced by a social context described as more ‘judgmental’ than primary school, characterised by anxiety about physical competence, negative peer evaluation and inactive playground norms. Participants described their images of typical active and inactive adolescents: active images included negative social characteristics such as ‘show-off’. There was little concern about the long-term risks of inactivity, although participants seemed to recognise that physical activity is healthy. The Prototype Willingness Model might more fully explain young adolescents’ physical activity in school than rational behavioural models, indicating potential for physical activity interventions that target social anxieties in response to the changing playground environment. Images of active types could be more complex than earlier research has suggested, and their negative characteristics might influence willingness to be active.

Keywords: adolescence, physical activity, prototype willingness model, school

Procedia PDF Downloads 344
18947 Video Stabilization Using Feature Point Matching

Authors: Shamsundar Kulkarni

Abstract:

Video capturing by non-professionals leads to unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the effect of jitter, which is caused by the shaking of a handheld camera during video recording. Firstly, salient points in each frame of the input video are identified and processed, followed by optimization and stabilization of the video. The optimization accounts for the quality of the video stabilization. This method has shown good results in terms of stabilization, and it removed distortion from output videos recorded in different circumstances.
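
A hedged OpenCV sketch of a salient-point stabilization pipeline of the kind described above: corner detection, optical-flow tracking between frames, and smoothing of the cumulative camera trajectory. The input file name, smoothing radius, and transform model are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def estimate_frame_motion(prev_gray, gray):
    """Track salient corners between consecutive frames and fit a rigid transform."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=20)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_prev, good_next = pts[status.flatten() == 1], nxt[status.flatten() == 1]
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)  # translation + rotation + scale
    dx, dy, da = m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])
    return dx, dy, da

cap = cv2.VideoCapture("shaky_input.mp4")  # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
motions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motions.append(estimate_frame_motion(prev_gray, gray))
    prev_gray = gray
cap.release()

# Smooth the cumulative camera trajectory with a moving average to remove jitter;
# the smoothed-minus-raw difference gives the per-frame correction used to warp each frame.
trajectory = np.cumsum(np.array(motions), axis=0)
kernel = np.ones(31) / 31.0
smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same") for i in range(3)])
corrections = smoothed - trajectory
```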

Keywords: video stabilization, point feature matching, salient points, image quality measurement

Procedia PDF Downloads 304
18946 Post-Earthquake Damage Detection Using System Identification with a Pair of Seismic Recordings

Authors: Lotfi O. Gargab, Ruichong R. Zhang

Abstract:

A wave-based framework is presented for modeling seismic motion in multistory buildings and using the measured response for system identification, which can be utilized to extract important information regarding structural integrity. With one pair of building responses at two locations, a generalized model response is formulated based on wave propagation features and expressed as frequency and time response functions, denoted respectively as GFRF and GIRF. In particular, the GIRF is fundamental in tracking arrival times of impulsive wave motion initiated at the response level, which depend on local model properties. Matching model and measured-structure responses can help in identifying model parameters and inferring building properties. To show the effectiveness of this approach, the Millikan Library in Pasadena, California is identified with recordings of the Yorba Linda earthquake of September 3, 2002.
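
The following is only a rough numerical sketch of the general idea of estimating a frequency response function between two recorded motions and its impulse response via the inverse FFT, whose peak arrival time reflects wave travel between the two sensors. The synthetic signals and the water-level regularisation are assumptions; this is not the authors' wave model.

```python
import numpy as np

def frf_and_irf(x_bottom, x_top, dt, eps=1e-3):
    """Estimate the frequency response function from a lower- to an upper-floor
    recording, and its impulse response function, via regularised spectral division."""
    n = len(x_bottom)
    X, Y = np.fft.rfft(x_bottom), np.fft.rfft(x_top)
    # Water-level deconvolution to avoid division by very small spectral values.
    frf = Y * np.conj(X) / (np.abs(X) ** 2 + eps * np.max(np.abs(X)) ** 2)
    irf = np.fft.irfft(frf, n=n)          # impulse response: peak location ~ wave travel time
    freqs = np.fft.rfftfreq(n, d=dt)
    times = np.arange(n) * dt
    return freqs, frf, times, irf

# Synthetic check: the "top" record is the "bottom" record delayed by 0.1 s.
dt, n, delay = 0.01, 2048, 10
rng = np.random.default_rng(1)
bottom = rng.standard_normal(n)
top = np.roll(bottom, delay)
freqs, frf, times, irf = frf_and_irf(bottom, top, dt)
print("estimated travel time:", times[np.argmax(irf)], "s")  # expect about 0.1 s
```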

Keywords: system identification, continuous-discrete mass modeling, damage detection, post-earthquake

Procedia PDF Downloads 367
18945 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high quality, labeled training image data. The training images may come from different sources, be acquired from different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in any medical imaging application.
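
A short sketch of the intensity-normalization half of the pipeline using scikit-image's match_histograms, assuming the seam-carving resize has already been applied and that the file names are hypothetical; mask handling and the seam-carving step itself are not shown.

```python
import numpy as np
from skimage import io, exposure

# Hypothetical inputs: a digital CXR and a digitized (film-scanned) reference CXR,
# both already resized to a common shape (e.g. by seam carving, not shown here).
digital_cxr = io.imread("digital_cxr_resized.png", as_gray=True)
digitized_ref = io.imread("digitized_cxr_reference.png", as_gray=True)

# Histogram matching: push the digital radiograph's intensity distribution
# towards that of the digitized reference, as described in the abstract.
normalized = exposure.match_histograms(digital_cxr, digitized_ref)

# The same geometric resize must also be applied to the label mask so that lung-field
# annotations stay aligned with the augmented image.
print(digital_cxr.shape, normalized.shape, float(np.mean(normalized)))
```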

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 216
18944 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity of work and saves time. Data and images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm among those available that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information on the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through Deep Neural Networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed.
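
A minimal Keras CNN sketch for SAT-4-style patches; the 28x28x4 patch size and four land-cover classes follow the public DeepSat SAT-4 description, while the layer sizes, placeholder data, and training settings are illustrative assumptions rather than the authors' architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# SAT-4 (DeepSat) patches are 28x28 with 4 bands (R, G, B, NIR) and 4 land-cover classes.
x_train = np.random.rand(256, 28, 28, 4).astype("float32")   # placeholder for real data
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 4, 256), num_classes=4)

model = models.Sequential([
    layers.Input(shape=(28, 28, 4)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(x_train, y_train, epochs=3, validation_split=0.2, verbose=0)
print("accuracy:", history.history["accuracy"][-1], "loss:", history.history["loss"][-1])
```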

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 153
18943 Nature Writing in Margaret Atwood’s 'The Testaments'

Authors: Natalia Fontes De Oliveira

Abstract:

Nature and women have a long-standing association that has persisted throughout history, cultures, literature, and the arts. Women’s physiological functions of reproduction and childbearing are viewed as closer to nature, in binary opposition to men, who have metaphorically and historically been associated with culture. To liberate women from the strictures of phallogocentric rhetoric, a radical critique of the categories of nature and culture must be undertaken. This paper proposes that nature writing in Margaret Atwood’s The Testaments is used subversively as a form of rebellion to disrupt the metaphorical relationship between women and nature. In tune with ecofeminist concerns, the imagery rewrites patriarchal paradigms of binary oppositions as the protagonists narrate a complex and plural relationship between nature and women.

Keywords: ecofeminism, Margaret Atwood, nature writing, women's writing

Procedia PDF Downloads 162
18942 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to determine reservoir properties for better production management. Due to limited information, there are geological uncertainties in very heterogeneous or channelized reservoirs. One of the solutions is to generate multiple equi-probable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models assemble depending on their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes much time due to its use of many particles and iterations. In addition, if we use multiple reservoir models, the simulation time for PSO will soar. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations.
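
A hedged sketch of the model-selection workflow described above: pairwise Hausdorff distances between binary facies maps, a 2-D MDS embedding, K-means grouping, and one representative per cluster. The toy random realizations, grid size, and cluster count are illustrative assumptions, not SNESIM output.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy stand-ins for 100 equi-probable channel-facies realizations (1 = sand, 0 = shale).
models_ = [(rng.random((50, 50)) > 0.7).astype(int) for _ in range(100)]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the sand-cell coordinates of two models."""
    pa, pb = np.argwhere(a == 1), np.argwhere(b == 1)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

n = len(models_)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = hausdorff(models_[i], models_[j])

# Embed the dissimilarity matrix in 2-D and group realizations with K-means.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
labels = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(coords)

# One representative per cluster: the realization closest to its cluster centroid.
representatives = [
    int(np.where(labels == k)[0][np.argmin(np.linalg.norm(
        coords[labels == k] - coords[labels == k].mean(axis=0), axis=1))])
    for k in range(5)
]
print(representatives)
```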

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 142
18941 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and optimizing mapping and positioning. Through experimental verification, we find that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 57
18940 Impact of Output Market Participation on Cassava-Based Farming Households' Welfare in Nigeria

Authors: Seyi Olalekan Olawuyi, Abbyssiania Mushunje

Abstract:

The potential benefits of agricultural production for improving the welfare of smallholder farmers in developing countries are no longer news, because they have been widely documented. Yet the majority of these farming households suffer from a shortfall in production output to meet both their consumption needs and market demand, which adversely affects output market participation and, by extension, their welfare condition. Therefore, this study investigated the impact of output market participation on the household welfare of cassava-based farmers in Oyo State, Nigeria. A multistage sampling technique was used to select the sample of 324 respondents used for this study. The findings from the data, obtained and analyzed through composite scores and crosstab analysis, revealed a varying degree of output market participation among the farmers, which also translates into the observed welfare profile differentials in the study area. The probit model analysis with respect to the selection equation identified the gender of the household head, household size, access to remittances, off-farm income, and ownership of farmland as significant drivers of output market participation in the study area. Furthermore, the treatment effect model of the welfare equation and the propensity score matching (PSM) technique were used as robustness checks, and the findings attest to the fact that, in complementarity with other significant variables highlighted in this study, output market participation indeed has a significant impact on farming households' welfare. As policy implications, the study recommends the active inclusion and empowerment of women in farming activities, birth control strategies, secondary income-smoothing activities, and discouragement of land fragmentation habits, to boost productivity and output market participation, which by extension can significantly improve farming households' welfare.
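
A hedged sketch of the propensity score matching robustness check mentioned above: a logistic propensity model for market participation, nearest-neighbour matching on the score, and the average treatment effect on the treated (ATT). The variable names and data are entirely synthetic placeholders, not the survey data.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
n = 324  # same sample size as the study, but entirely synthetic records
df = pd.DataFrame({
    "female_head": rng.integers(0, 2, n),
    "household_size": rng.integers(1, 12, n),
    "remittance": rng.integers(0, 2, n),
    "off_farm_income": rng.integers(0, 2, n),
    "owns_land": rng.integers(0, 2, n),
})
df["participates"] = rng.integers(0, 2, n)                              # output market participation
df["welfare"] = 2.0 + 0.5 * df["participates"] + rng.normal(0, 1, n)    # synthetic welfare index

covariates = ["female_head", "household_size", "remittance", "off_farm_income", "owns_land"]
# Step 1: propensity scores from a logistic selection model.
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["participates"])
df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

# Step 2: nearest-neighbour matching of each participant to a non-participant on the score.
treated, control = df[df["participates"] == 1], df[df["participates"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])

# Step 3: average treatment effect on the treated (ATT) as the mean welfare difference.
att = (treated["welfare"].values - control["welfare"].values[idx.ravel()]).mean()
print("ATT (welfare gain from market participation):", round(att, 3))
```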

Keywords: Cassava market participation, households' welfare, propensity score matching, treatment effect model

Procedia PDF Downloads 157
18939 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Human action recognition modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there are considerable gaps and challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection techniques for human and outer shape detection. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and a sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification, we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH multi-view football dataset. Our HMR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% on the AAMAZ and KTH multi-view football datasets, respectively.
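
A small sketch of one part of such a pipeline: local binary pattern histograms as per-frame texture features feeding a random forest classifier. The fuzzy LBP variant, the sequence feature, and the GRASP optimization step described above are not reproduced, and the frames and labels are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram of a grayscale frame, used as a texture cue."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Placeholder frames and action labels standing in for the AAMAZ / KTH footage.
rng = np.random.default_rng(0)
frames = rng.random((200, 64, 64))
labels = rng.integers(0, 4, 200)          # e.g. four action classes (illustrative)

features = np.array([lbp_histogram(f) for f in frames])
x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_tr, y_tr)
print("held-out accuracy:", clf.score(x_te, y_te))
```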

Keywords: computer vision, human motion analysis, random forest, machine learning

Procedia PDF Downloads 26
18938 Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images

Authors: Khitem Amiri, Mohamed Farah

Abstract:

Image classification is a challenging task and is gaining a lot of interest since it helps us to understand the content of images. Recently, Deep Learning (DL) based methods have given very interesting results on several benchmarks. For hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches have given fair classification accuracy. In this paper, we address the problem of the curse of dimensionality by reducing the number of bands and replacing the HSI channels with channels representing radiometric indices. Therefore, instead of using all the HSI bands, we compute radiometric indices such as the NDVI (Normalized Difference Vegetation Index) and NDWI (Normalized Difference Water Index), and we use the combination of these indices as input for the Deep Belief Network (DBN) based classification model. Thus, we keep almost all the pertinent spectral information while considerably reducing the size of the image. In order to test our image representation, we applied our method on several HSI datasets, including the Indian Pines dataset and the Jasper Ridge data, and it gave results comparable to state-of-the-art methods while considerably reducing the training and testing time.
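
A small sketch of the band-to-index replacement: computing NDVI and NDWI from a hyperspectral cube and stacking them as a compact input representation. The band positions for red, NIR, and green are placeholders that depend on the sensor, and the random cube stands in for a real scene.

```python
import numpy as np

def radiometric_indices(cube, red_band, nir_band, green_band, eps=1e-8):
    """Replace the full spectral cube by a few radiometric index channels."""
    red, nir, green = (cube[:, :, b].astype(float) for b in (red_band, nir_band, green_band))
    ndvi = (nir - red) / (nir + red + eps)      # vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # water index (McFeeters formulation)
    return np.dstack([ndvi, ndwi])              # (rows, cols, n_indices) input for the classifier

# Toy cube standing in for an HSI scene (rows x cols x bands); band numbers are illustrative.
cube = np.random.rand(64, 64, 200)
compact = radiometric_indices(cube, red_band=29, nir_band=51, green_band=19)
print(cube.shape, "->", compact.shape)
```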

Keywords: hyperspectral images, deep belief network, radiometric indices, image classification

Procedia PDF Downloads 277
18937 A Calibration Method for Temperature Distribution Measurement of Thermochromic Liquid Crystal Based on Mathematical Morphology of Hue Image

Authors: Risti Suryantari, Flaviana

Abstract:

The aim of this research is to design a calibration method for thermochromic liquid crystal (TLC) temperature distribution measurement based on the mathematical morphology of hue images. A glass of water is placed on the surface of a TLC R25C5W sample at a certain temperature. We use a scanner for image acquisition. The true images in RGB format are converted to HSV (hue, saturation, value), keeping the hue without saturation and value. The hue images are then processed based on mathematical morphology using Matlab 2013a software to obtain better images. There are differences in the final images after processing at each temperature variation, based on visual observation and the statistical values. The maximum and mean values increase with rising temperature. This could serve as a parameter to identify the temperature of human body surfaces such as the hand or foot.
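
A hedged Python sketch of the hue-image processing step: RGB-to-HSV conversion, keeping only the hue channel, morphological cleaning, and the maximum and mean statistics used for calibration. The paper used MATLAB, so this scikit-image version is only analogous; the file name and structuring-element size are assumptions.

```python
import numpy as np
from skimage import io, color, morphology

# Hypothetical scanned image of the TLC sheet at one temperature setting.
rgb = io.imread("tlc_scan_30C.png")[:, :, :3] / 255.0

# Keep only the hue channel of the HSV representation.
hue = color.rgb2hsv(rgb)[:, :, 0]

# Grayscale morphological opening then closing to suppress small bright/dark artefacts.
selem = morphology.disk(3)                       # structuring-element size is illustrative
cleaned = morphology.closing(morphology.opening(hue, selem), selem)

# Statistics used for the temperature calibration curve.
print("max hue:", float(cleaned.max()), "mean hue:", float(cleaned.mean()))
```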

Keywords: thermochromic liquid crystal, TLC, mathematical morphology, hue image

Procedia PDF Downloads 470
18936 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods to estimate relative risks to represent conditional and marginal estimation approaches. We consider the log-binomial, generalised linear models (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM IWLS and robust SEs; log-binomial generalised estimation equations (GEE) and robust SEs; marginal standardisation and delta method SEs; and marginal standardisation and permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (ranging from 0, -0.5, 1; on the log-scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than GLM IWLS log-binomial.

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 108
18935 Use of Segmentation and Color Adjustment for Skin Tone Classification in Dermatological Images

Authors: Fernando Duarte

Abstract:

This work aims to evaluate the use of classical image processing methodologies for skin tone classification in dermatological images. Skin tone is an important attribute when considering several factors for skin cancer diagnosis. Currently, there is a lack of clear methodologies to classify skin tone based only on the dermatological image. In this work, a recently released dataset with skin tone labels was used as a reference for the evaluation of classical methodologies for segmentation and color space adjustment for the classification of skin tone in dermatological images. It was noticed that even though the classical methodologies work fine for segmentation and color adjustment, classifying the skin tone without proper control of the acquisition of the sample images ended up being very unreliable.

Keywords: segmentation, classification, color space, skin tone, Fitzpatrick

Procedia PDF Downloads 27
18932 Application of Flory-Patterson's Theory on the Volumetric Properties of Liquid Mixtures: 1,2-Dichloroethane with Aliphatic and Cyclic Ethers

Authors: Linda Boussaid, Farid Brahim Belaribi

Abstract:

The physico-chemical properties of liquid materials in the industrial field in general, and in that of the chemical industries in particular, constitute a prerequisite for the design of equipment, for the resolution of specific problems (related to purification and separation techniques, to risks in the transport of certain materials, etc.) and, therefore, for the production stage. Chloroalkanes and ethers are chemical families of industrial, theoretical and environmental interest. For example, these compounds are used in various applications in the chemical and pharmaceutical industries. In addition, they contribute to the particular thermodynamic behavior (deviation from ideality, association, etc.) of certain mixtures, which constitutes a severe test for predictive theoretical models. Finally, due to the degradation of the environment worldwide, a renewed interest is observed in ethers, because some of their physicochemical properties could contribute to lower pollution (ethers could be used as additives in aqueous fuels). This work is a thermodynamic, experimental and theoretical study of the volumetric properties of liquid binary systems formed from compounds belonging to the chemical families of chloroalkanes and ethers. The densities and excess volumes of the systems studied were determined experimentally at different temperatures in the interval [278.15-333.15] K and at atmospheric pressure, using an Anton Paar DMA 5000 vibrating-tube densitometer. This contribution of experimental data on the volumetric properties of the binary liquid mixtures of 1,2-dichloroethane with an ether, supplemented by an application of the theoretical Prigogine-Flory-Patterson (PFP) model, will probably contribute to the enrichment of the thermodynamic database and the further development of Flory's theory in its Prigogine-Flory-Patterson (PFP) version, for a better understanding of the thermodynamic behavior of these liquid binary mixtures.

Keywords: Prigogine-Flory-Patterson (PFP), volumetric properties, excess volume, ethers

Procedia PDF Downloads 86
18933 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi

Abstract:

One of the most challenging factors in medical images is noise. Image denoising refers to the improvement of a digital medical image that has been contaminated by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients, in such a way that an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. Accordingly, noise reduction techniques that enhance image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transformations, i.e., curvelet and contourlet, and discrete wavelet transform (DWT) thresholding methods (BayesShrink and AdaptShrink), which were compared with each other. We also propose a new threshold in the wavelet domain for not only noise reduction but also edge retention; consequently, the proposed method retains the significant modified coefficients, which results in good visual quality. Data evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
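
A hedged PyWavelets sketch of DWT denoising with a BayesShrink-style, per-subband soft threshold, one of the baseline methods named above; the noise estimate from the finest diagonal detail and the soft-thresholding rule are the standard BayesShrink ingredients, while the wavelet, level count, and synthetic test image are assumptions, and the paper's proposed new edge-preserving threshold is not reproduced.

```python
import numpy as np
import pywt

def bayesshrink_denoise(image, wavelet="db8", levels=3):
    """Soft-threshold each detail subband with a BayesShrink-style threshold."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Noise standard deviation from the finest diagonal detail (robust median estimator).
    sigma_noise = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for details in coeffs[1:]:
        thresholded = []
        for band in details:
            # BayesShrink: T = sigma_noise^2 / sigma_signal, computed per subband.
            sigma_band = np.sqrt(max(np.var(band) - sigma_noise**2, 1e-12))
            t = sigma_noise**2 / sigma_band
            thresholded.append(pywt.threshold(band, t, mode="soft"))
        new_coeffs.append(tuple(thresholded))
    return pywt.waverec2(new_coeffs, wavelet)

# Synthetic noisy "CT slice" for a quick check.
clean = np.tile(np.linspace(0, 1, 256), (256, 1))
noisy = clean + 0.1 * np.random.randn(256, 256)
denoised = bayesshrink_denoise(noisy)
print("noisy MSE:", float(np.mean((noisy - clean) ** 2)),
      "denoised MSE:", float(np.mean((denoised[:256, :256] - clean) ** 2)))
```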

Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)

Procedia PDF Downloads 434
18932 Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images

Authors: Belaynesh Chekol, Numan Çelebi

Abstract:

The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as input to K-nearest neighbor (KNN), a classification learner algorithm. For this task, 1,068 and 78 images of English alphabet characters taken from the Chars74k dataset are used to train and test the classifier, respectively. For each character image, we generate descriptive features using the SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained, and the resulting classification accuracies are 56.9% and 56.5%, respectively. The training time taken was the same for both fine and weighted KNN.
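
A hedged OpenCV/scikit-learn sketch of this kind of pipeline: SIFT descriptors per character image aggregated into a fixed-length feature (a mean descriptor here, an assumption, since the paper does not state the aggregation) and a k-nearest-neighbour classifier; the placeholder images and label count stand in for the Chars74k crops.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_feature(gray):
    """Fixed-length feature from a character image: the mean of its SIFT descriptors
    (the aggregation method is an assumption; the paper only says SIFT features were used)."""
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:                       # flat patches may yield no keypoints
        return np.zeros(128, dtype=np.float32)
    return descriptors.mean(axis=0)

# Placeholder character crops and labels standing in for the Chars74k samples.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(300)]
labels = rng.integers(0, 26, 300)                 # 26 letter classes, illustrative

features = np.array([sift_feature(im) for im in images])
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")  # a "weighted KNN" variant
knn.fit(features[:250], labels[:250])
print("toy accuracy:", knn.score(features[250:], labels[250:]))
```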

Keywords: character recognition, KNN, natural scene image, SIFT

Procedia PDF Downloads 277
18931 Use of Multistage Transition Regression Models for Credit Card Income Prediction

Authors: Denys Osipenko, Jonathan Crook

Abstract:

Because of the variety of card holders' behaviour types and income sources, each consumer account can transition to a variety of states. Each consumer account can be inactive, transactor, revolver, delinquent, or defaulted, and requires an individual model for income prediction. The estimation of transition probabilities between statuses at the account level helps to avoid the memorylessness of the Markov chains approach. This paper investigates transition probability estimation approaches for credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. Prediction accuracy for conditional logistic regression depends on the order of stages of the conditional binary logistic regression. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without priorities. Thus, further investigations can be concentrated on alternative modeling approaches such as discrete choice models.

Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability

Procedia PDF Downloads 481
18930 Graphical Theoretical Construction of Discrete time Share Price Paths from Matroid

Authors: Min Wang, Sergey Utev

Abstract:

The lessons from the 2007-09 global financial crisis have driven scientific research that considers the design of new methodologies and financial models for the global market. The quantum mechanics approach was introduced in modeling the unpredictable stock market. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. The path calculation can be studied in two ways: one way is labeling, and the other is computational. Labeling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, a study is done on matroid theory and the relationship between lattice path matroids and Tutte polynomials, and ways to connect points in a lattice path matroid with Tutte polynomials are suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid rather than a general lattice path matroid. Lastly, it is suggested that there is a way to represent share price paths via a general binary tree, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and Tutte polynomials of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 132
18929 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method in the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 267
18928 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Some spectral and texture features are considered for the SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam City earthquake, and the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.
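
A hedged scikit-learn sketch of the classification step: per-segment spectral and texture features fed to an SVM, evaluated with the overall accuracy and kappa coefficient reported above. The feature vector, kernel settings, and synthetic data are illustrative assumptions, not the study's actual feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in: each row describes a road segment by spectral means of the
# pre/post-event image bands plus simple texture statistics; label 1 = damaged (debris).
rng = np.random.default_rng(7)
n_segments = 500
features = rng.normal(size=(n_segments, 10))
labels = rng.integers(0, 2, n_segments)

x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=7)

svm = SVC(kernel="rbf", C=10.0, gamma="scale")     # kernel and C are illustrative choices
svm.fit(x_tr, y_tr)
pred = svm.predict(x_te)

print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
print("kappa coefficient:", round(cohen_kappa_score(y_te, pred), 3))
```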

Keywords: SVM classifier, disaster management, road damage detection, QuickBird images

Procedia PDF Downloads 619