Search results for: Body images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1971

1371 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes which curves spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between any two points is equal to zero and time stops; outside of the Universe, the properties of extension and duration do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties alone. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of their quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and contains impassable microscopic 'walls', so simple mechanical motion is impossible at small-scale distances; it is impossible to 'trace' a straight line in discontinuous spacetime because it contains impassable holes. Spacetime 'boils' continually due to the appearance of vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality and special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship will appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy its volume.

Keywords: Border of the universe, causality violation, perfect isolation, quantum jumps.

1370 Photo Mosaic Smartphone Application in Client-Server Based Large-Scale Image Databases

Authors: Sang-Hun Lee, Bum-Soo Kim, Yang-Sae Moon, Jinho Kim

Abstract:

In this paper, we present a photo mosaic smartphone application for client-server based large-scale image databases. Photo mosaic is not a new concept, but there are very few smartphone applications, especially ones that handle a huge number of images in a client-server environment. To support large-scale image databases, we first propose an overall framework that works as a client-server model. We then present the concept of image-PAA features to efficiently handle a huge number of images and discuss its lower bounding property. We also present a best-match algorithm that exploits the lower bounding property of image-PAA. We finally implement an efficient Android-based application and demonstrate its feasibility.
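
As an illustration of the lower bounding idea mentioned above, the following is a minimal sketch, assuming that an image-PAA feature is simply a grid of block means (the paper's exact feature definition is not reproduced here); the names, grid size and pruning loop are illustrative, not the authors' implementation.

    import numpy as np

    def image_paa(block, grid=4):
        """Average a grayscale block down to a grid x grid feature of block means."""
        h, w = block.shape
        trimmed = block[:h - h % grid, :w - w % grid]
        return trimmed.reshape(grid, trimmed.shape[0] // grid,
                               grid, trimmed.shape[1] // grid).mean(axis=(1, 3))

    def best_match(query, tiles, grid=4):
        """Return the index of the tile closest to `query`, pruning with the PAA bound."""
        h, w = query.shape
        cell_pixels = (h // grid) * (w // grid)       # pixels averaged into each PAA cell
        q_feat = image_paa(query, grid)
        best_idx, best_dist = None, np.inf
        for idx, tile in enumerate(tiles):
            # sqrt(cell_pixels) * ||PAA(q) - PAA(t)|| <= ||q - t||, so skip tiles early
            lower_bound = np.sqrt(cell_pixels) * np.linalg.norm(q_feat - image_paa(tile, grid))
            if lower_bound >= best_dist:
                continue                              # pruned without the full distance
            dist = np.linalg.norm(query - tile)
            if dist < best_dist:
                best_idx, best_dist = idx, dist
        return best_idx, best_dist

    rng = np.random.default_rng(0)
    tiles = [rng.integers(0, 256, (32, 32)).astype(float) for _ in range(500)]
    query = tiles[123] + rng.normal(0, 2.0, (32, 32))
    print(best_match(query, tiles))                   # expected best index: 123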

Keywords: smartphone applications; photo mosaic; similarity search; data mining; large-scale image databases.

1369 A Prediction-Based Reversible Watermarking for MRI Images

Authors: Nuha Omran Abokhdair, Azizah Bt Abdul Manaf

Abstract:

Reversible watermarking is a special branch of image watermarking that is able to recover the original image after extracting the watermark from it. In this paper, an adaptive prediction-based reversible watermarking scheme is presented in order to increase the payload capacity of MRI medical images. The scheme divides the image into two parts, the Region of Interest (ROI) and the Region of Non-Interest (RONI). Two bits are embedded in each embeddable pixel of the RONI and one bit is embedded in each embeddable pixel of the ROI. The experimental results demonstrate that the proposed scheme is able to achieve high embedding capacity. This is mainly due to two reasons. First, pixels that would otherwise be excluded from data embedding because of overflow/underflow are used for embedding. Second, the large location map that would otherwise have to be added to the watermark data as overhead is eliminated, so the associated loss of embedding capacity is avoided. Moreover, the scheme provides good visual quality in the watermarked image.
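
For readers unfamiliar with the underlying mechanism, here is a minimal sketch of prediction-error expansion on a single pixel. It illustrates the keyword 'Prediction-Error Expansion' only; the simple two-neighbour predictor, the omission of the ROI/RONI split and the lack of overflow handling are simplifying assumptions, not the authors' scheme.

    import numpy as np

    def predict(img, r, c):
        """Predict pixel (r, c) as the rounded mean of its left and top neighbours."""
        return int(round((int(img[r, c - 1]) + int(img[r - 1, c])) / 2))

    def embed_bit(img, r, c, bit):
        """Embed one bit by expanding the prediction error: e' = 2e + bit."""
        e = int(img[r, c]) - predict(img, r, c)
        out = img.copy()
        out[r, c] = predict(img, r, c) + 2 * e + bit
        return out

    def extract_bit(marked, r, c):
        """Recover the embedded bit and restore the original pixel value."""
        e2 = int(marked[r, c]) - predict(marked, r, c)
        bit = e2 % 2
        restored = marked.copy()
        restored[r, c] = predict(marked, r, c) + (e2 - bit) // 2
        return bit, restored

    img = np.array([[100, 102, 101],
                    [ 99, 103, 104],
                    [101, 100, 102]], dtype=np.int32)
    marked = embed_bit(img, 1, 1, 1)
    bit, restored = extract_bit(marked, 1, 1)
    print(bit, np.array_equal(restored, img))   # 1 True: fully reversible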

Keywords: Medical image watermarking, reversible watermarking, Difference Expansion, Prediction-Error Expansion.

1368 Automatically Driven Vector for Guidewire Segmentation in 2D and Biplane Fluoroscopy

Authors: Simon Lessard, Pascal Bigras, Caroline Lau, Daniel Roy, Gilles Soulez, Jacques A. de Guise

Abstract:

The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques. This has been demonstrated in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge enhancement filtering, line detection, and segmentation. A new method is presented that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, so only a minimal image area is filtered overall. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.

Keywords: Edge detection, Line Enhancement, Segmentation, Fluoroscopy.

1367 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program

Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata

Abstract:

The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and result display. The quadtree decomposition technique was used as the image segmentation algorithm to create optimum geometries from Computed Tomography (CT) images for proton beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations, and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method was done in two ways: first, by comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) of Tohoku University, and second, by comparison with data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and it plots the proton beam dose distribution superimposed onto the CT images.
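
The quadtree step described above can be illustrated with a short sketch. This is a generic split-if-inhomogeneous decomposition on a synthetic CT-like array; the intensity-range criterion, the threshold and the minimum cell size are assumptions for illustration, not the parameters of the MATLAB program described in the abstract.

    import numpy as np

    def quadtree(img, r, c, size, thresh, min_size, cells):
        """Split the square block at (r, c) while its intensity range exceeds `thresh`."""
        block = img[r:r + size, c:c + size]
        if size <= min_size or block.max() - block.min() <= thresh:
            cells.append((r, c, size))            # homogeneous enough: keep as one cell
            return
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                quadtree(img, r + dr, c + dc, half, thresh, min_size, cells)

    rng = np.random.default_rng(1)
    ct = rng.integers(0, 40, (128, 128)).astype(float)
    ct[32:96, 32:96] += 800.0                     # a dense, bone-like heterogeneous insert
    cells = []
    quadtree(ct, 0, 0, 128, thresh=50.0, min_size=4, cells=cells)
    print(len(cells), "cells: fine near the boundary, coarse in homogeneous regions")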

Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam

1366 An Additive Watermarking Technique in Gray Scale Images Using Discrete Wavelet Transformation and Its Analysis on Watermark Strength

Authors: Kamaldeep Joshi, Rajkumar Yadav, Ashok Kumar Yadav

Abstract:

Digital watermarking is a procedure for preventing unauthorized access to and modification of personal data. It assures that the communication between two parties remains secure and undetected. This paper investigates the effect of watermark strength in grayscale images using an additive Discrete Wavelet Transform (DWT) technique. In this method, the grayscale host image is divided into four sub-bands, LL (low-low), HL (high-low), LH (low-high) and HH (high-high), and the watermark is inserted into the LL sub-band using the DWT technique. Since the image is divided into four sub-bands, a watermark equal in size to the LL sub-band is inserted, and the results are discussed. LL represents the average component of the host image, which contains most of the image information. Two kinds of experiments are performed. In the first, the same watermark is embedded in different images; in the second, the watermark strength is varied by a scaling factor s (s = 10, 20, 30, 40, 50) and the watermark is inserted into the same image.
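
A minimal sketch of this additive LL-sub-band embedding is given below, assuming a one-level Haar transform via PyWavelets and non-blind extraction using the original host; the variable names and wavelet choice are illustrative, not taken from the paper.

    import numpy as np
    import pywt

    def embed(host, watermark, s=10.0):
        """Insert `watermark` additively into the LL sub-band of `host`."""
        LL, details = pywt.dwt2(host.astype(float), 'haar')
        LLw = LL + s * watermark                  # watermark must match the LL size
        return pywt.idwt2((LLw, details), 'haar')

    def extract(marked, host, s=10.0):
        """Recover the watermark (non-blind extraction, requires the original host)."""
        LLw, _ = pywt.dwt2(marked.astype(float), 'haar')
        LL, _ = pywt.dwt2(host.astype(float), 'haar')
        return (LLw - LL) / s

    rng = np.random.default_rng(2)
    host = rng.integers(0, 256, (256, 256)).astype(float)
    wm = rng.integers(0, 2, (128, 128)).astype(float)        # same size as LL
    marked = embed(host, wm, s=20.0)
    print(np.abs(extract(marked, host, s=20.0) - wm).max())  # ~0 up to floating-point error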

Keywords: Watermarking, discrete wavelet transform, scaling factor, steganography.

1365 A Unified Robust Algorithm for Detection of Human and Non-human Object in Intelligent Safety Application

Authors: M A Hannan, A. Hussain, S. A. Samad, K. A. Ishak, A. Mohamed

Abstract:

This paper presents a general trainable framework for fast and robust upright human face and non-human object detection and verification in static images. To enhance the performance of the detection process, the technique we develop is based on a combination of a fast neural network (FNN) and a classical neural network (CNN). In the FNN, a useful correlation between the input image and the weights of the hidden neurons is exploited to sustain a high level of detection accuracy. This enables the use of the Fourier transform, which significantly speeds up detection. The CNN is responsible for verifying the face region. A bootstrap algorithm is used to collect non-human objects, adding false detections to the training process for human and non-human objects. Experimental results on test images with both simple and complex backgrounds demonstrate that the proposed method achieves a high detection rate and a low false-positive rate in detecting both human faces and non-human objects.
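
The Fourier speed-up mentioned above is the standard trick of computing cross-correlation in the frequency domain instead of sliding a template spatially. The sketch below shows that generic trick only, not the authors' FNN; the "template" here is simply a patch cut from a synthetic image.

    import numpy as np

    def correlate_fft(image, template):
        """Valid cross-correlation of `image` with `template` via the FFT."""
        ih, iw = image.shape
        th, tw = template.shape
        F = np.fft.rfft2(image, s=(ih, iw))
        # conjugation in the frequency domain corresponds to correlation spatially
        G = np.conj(np.fft.rfft2(template, s=(ih, iw)))
        full = np.fft.irfft2(F * G, s=(ih, iw))
        return full[:ih - th + 1, :iw - tw + 1]   # keep only positions without wrap-around

    rng = np.random.default_rng(3)
    img = rng.normal(size=(200, 200))
    tpl = img[50:70, 80:100].copy()               # a 20x20 patch standing in for a face template
    scores = correlate_fft(img, tpl)
    print(np.unravel_index(scores.argmax(), scores.shape))   # (50, 80): where the patch came from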

Keywords: Algorithm, detection of human and non-human object, FNN, CNN, Image training.

1364 The SAFRS System: A Case-Based Reasoning Training Tool for Capturing and Re-Using Knowledge

Authors: Souad Demigha

Abstract:

The paper aims to specify and build a system, a learning support in radiology-senology (breast radiology), dedicated to assisting junior radiologists-senologists in their radiology-senology-related activity, based on the experience of expert radiologists-senologists. This system is named SAFRS (system supporting the training of radiologists-senologists). It is based on the exploitation of radiologic-senologic images (primarily mammograms, but also echographic images or MRI) and their related clinical files. The aim of such a system is to support education in breast cancer screening. In order to acquire this expert radiologist-senologist knowledge, we have used the CBR (case-based reasoning) approach. The SAFRS system will promote the evolution of teaching in radiology-senology by offering "junior radiologist" trainees an advanced pedagogical product. It will permit a strengthening of knowledge together with a very elaborate presentation of results. Finally, the know-how will derive from all these factors.

Keywords: Learning support, radiology-senology, training, education, CBR, accumulated experience.

1363 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof for digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and proving the ownership of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is hence adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case this is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments showed an embedding speed of more than double that of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.

Keywords: Steganography, watermarking, private keys, time complexity measurements.

1362 Numerical Simulation of Plasma Actuator Using OpenFOAM

Authors: H. Yazdani, K. Ghorbanian

Abstract:

This paper deals with the modeling and simulation of a plasma actuator with OpenFOAM. The plasma actuator is one of the newest devices in flow control; it can delay separation by inducing external momentum into the boundary layer of the flow. The effects of the plasma actuator on the external flow are incorporated into the Navier-Stokes computations as a body force vector, which is obtained as the product of the net charge density and the electric field. In order to compute this body force vector, the model solves two equations: one for the electric field due to the applied AC voltage at the electrodes, and the other for the charge density representing the ionized air. The simulation results are compared with experimental and typical values, which confirms the validity of the model.

Keywords: Active flow control, flow field, OpenFOAM, plasma actuator.

1361 Effect of Dietary Supplementation of Different Levels of Black Seed (Nigella Sativa L.) on Growth Performance, Immunological, Hematological and Carcass Parameters of Broiler Chicks

Authors: R. S. Shewita, A. E. Taha

Abstract:

This experiment was conducted to investigate the effect of dietary supplementation with different levels of black seed (Nigella sativa L.) on the performance and immune response of broiler chicks. A total of 240 day-old broiler chicks were used and randomly allotted equally into six experimental groups, designated 1, 2, 3, 4, 5 and 6, receiving black seed at rates of 0, 2, 4, 6, 8 and 10 g/kg diet, respectively. The study lasted 42 days. Average body weight, weight gain, relative growth rate, feed conversion, antibody titer against Newcastle disease, phagocytic activity and phagocytic index, some blood parameters (GOT, GPT, glucose, cholesterol, triglycerides, total protein, albumin, WBCs, RBCs, Hb and PCV), dressing percentage, weights of different body organs and abdominal fat weight were determined. It was found that N. sativa significantly improved the final body weight, total body gain and feed conversion ratio of groups 2 and 3 when compared with the control group. Higher levels of N. sativa did not improve the growth performance of the chicks. No significant differences were observed for antibody titer against Newcastle virus, WBC count, serum GOT, glucose level, dressing %, or relative liver, spleen, heart and head percentages. Lymphoid organs (bursa and thymus) improved significantly with increasing N. sativa level in all supplemented groups. Serum cholesterol, triglycerides and visible fat % decreased significantly with N. sativa supplementation, while the serum GPT level increased significantly with N. sativa supplementation.

Keywords: Nigella Sativa, broiler, growth, carcass traits, serum, blood

1360 Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery

Authors: Evans Belly, Imdad Rizvi, M. M. Kadam

Abstract:

Satellite imagery is one of the emerging technologies that are extensively utilized in various applications, such as the detection/extraction of man-made structures, the monitoring of sensitive areas, and the creation of graphic maps. The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building and non-building regions (roads, vegetation, etc.) are investigated, with the main focus on building extraction. Once all the landscape regions are collected, a trimming process is performed to eliminate regions arising from non-building objects. Finally, the label method is used to extract the building regions; the label method may be altered for more efficient building extraction. The images used for the analysis are those acquired by sensors with a resolution of less than 1 meter (VHR). This method provides an efficient way to produce good results. The additional overhead of intermediate processing is eliminated, easing the processing steps required and the time consumed, without compromising the quality of the output.

Keywords: Building detection, shadow detection, landscape generation, label, partitioning, very high resolution satellite imagery.

1359 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar

Abstract:

Machine vision has been widely used in recent years in agriculture, as a tool to promote the automation of processes and increase the levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot in-between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows on an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different characteristics of quality and the results showed that the proposed method can successfully detect a path in different types of environments.
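
A hedged sketch of the core detection step follows: threshold, edge-detect and run the Hough transform to find the row lines, whose midline would then define the path for the robot to follow. The synthetic frame, OpenCV calls and Hough parameters below are illustrative choices, not the authors' MATLAB implementation.

    import numpy as np
    import cv2

    # Synthetic frame: two bright slanted "tree rows" on dark ground.
    frame = np.zeros((400, 400), dtype=np.uint8)
    cv2.line(frame, (80, 399), (170, 0), 255, 5)       # left row
    cv2.line(frame, (320, 399), (230, 0), 255, 5)      # right row

    _, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 150)  # rho, theta, vote threshold

    # Each detected (rho, theta) pair describes one row edge (duplicates per row are
    # possible); the midline between the left and right rows gives the path heading.
    if lines is not None:
        for rho, theta in lines[:, 0]:
            print("row line: rho=%.1f  theta=%.3f rad" % (rho, theta))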

Keywords: Agricultural mobile robot, image processing, path recognition, Hough transform.

1358 Analysis of Take-off Phase of Somersaults with Twisting along the Longitudinal Body Axis

Authors: P. Hedbávný, M. Kalichová

Abstract:

The contribution deals with the take-off phase of the back somersault with various numbers of twists along the longitudinal body axis. The aim was to evaluate the changes in angles during the transition phase from back handspring to back somersault, using 3D kinematic analysis of the somersaults. We used the Simi Motion system for the 3D kinematic analysis of the observed gymnastic element, performed by a Czech Republic national representative and 2008 Summer Olympic Games participant. The results showed that the higher the number of twists, the smaller the touchdown angle at which the gymnast lands on the pad at the beginning of the take-off phase. In the back somersault with one twist (180°), the average angle is 54°; in the 1080° back somersault, the average angle is 45.9°. These results may help to improve the technical training of gymnasts.

Keywords: back somersault with twisting, biomechanical analysis, take-off

1357 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

Japanese people are affected by the most common causes of eyesight loss, such as glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with fundus camera, fundus-related perimetry (microperimetry) and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the German company Optenso's optical engineering software (OpTaliX-LT 10.8). We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. The artificial eye was made of a plano-convex lens, a black spacer, and a hemispherical cup. The hemispherical cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created with C++Builder 10.2 to display the examination target on the monitor and to save examination data. The retinal fundus was displayed on the monitor at a length and width of 1 mm with resolutions of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with a white central area, and it was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the captured videos, showing the artificial eye facing the center, right, and left. The three images were processed using Scilab 6.1.0 and its Image Processing and Computer Vision Toolbox 4.1.2, with steps including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, a gravity (center-of-mass) method was added to the program to calculate the centroid of the connected components. From the three images, the image processing could calculate the center position.
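
The centroid ("gravity") step can be sketched as follows. This is a generic connected-component centre-of-mass computation on a synthetic binary image, standing in for the Scilab pipeline; the disc radius and position are arbitrary.

    import numpy as np
    from scipy import ndimage

    # Synthetic frame: a bright disc plays the role of the fundus marker.
    img = np.zeros((200, 200))
    yy, xx = np.mgrid[0:200, 0:200]
    img[(yy - 120) ** 2 + (xx - 80) ** 2 < 40 ** 2] = 1.0

    binary = img > 0.5                                   # binarisation
    labels, n = ndimage.label(binary)                    # connected components
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1                  # keep the largest component
    cy, cx = ndimage.center_of_mass(binary, labels, largest)
    print("fundus centre ~ (%.1f, %.1f)" % (cy, cx))     # ~ (120.0, 80.0)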

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.

1356 Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm

Authors: B. Thiagarajan, R. Bremananth

Abstract:

A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types and have different characteristics and treatments. A brain tumor is inherently serious and life-threatening because it develops in the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of the treatment of brain tumors. This segmentation task requires the classification of each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field use Markov Random Fields (MRF) for the segmentation of MR images. Even though the segmentation quality is good, computing the probabilities and estimating the parameters is difficult. In order to overcome these issues, a Conditional Random Field (CRF) is used in this paper for segmentation, along with a modified artificial bee colony optimization and a modified fuzzy possibility c-means (MFPCM) algorithm. This work mainly focuses on reducing the computational complexity found in existing methods and on achieving higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.

Keywords: Conditional random field, Magnetic resonance, Markov random field, Modified artificial bee colony.

1355 An Optical Flow Based Segmentation Method for Objects Extraction

Authors: C. Lodato, S. Lopes

Abstract:

This paper describes a segmentation algorithm based on the cooperation of an optical flow estimation method with edge detection and region growing procedures. The proposed method has been developed as a pre-processing stage for methodologies and tools for video/image indexing and retrieval by content. The problem addressed consists of extracting whole objects from the background in order to produce images of single, complete objects from videos or photos. The extracted images are used for calculating the object's visual features, which are necessary for both the indexing and retrieval processes. The first task of the algorithm exploits cues from motion analysis for moving-area detection. Objects and background are then refined using edge detection and region growing procedures, respectively. These tasks are performed iteratively until objects and background are completely resolved. The developed method has been applied to a variety of indoor and outdoor scenes where objects of different types and shapes appear against variously textured backgrounds.
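
A minimal sketch of the first stage only (moving-area detection from dense optical flow) is given below; the edge-detection and region-growing refinements are omitted, Farneback flow stands in for the paper's estimator, and the frames are synthetic.

    import numpy as np
    import cv2

    rng = np.random.default_rng(4)
    texture = cv2.GaussianBlur(rng.integers(0, 256, (40, 50)).astype(np.uint8), (7, 7), 0)
    prev = np.full((240, 320), 64, dtype=np.uint8)
    curr = prev.copy()
    prev[100:140, 150:200] = texture                   # textured object in frame t
    curr[100:140, 155:205] = texture                   # same object moved 5 px right at t+1

    # Dense Farneback flow (positional args: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags), then a crude magnitude threshold as the motion mask.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    moving = np.linalg.norm(flow, axis=2) > 1.0
    if moving.any():
        ys, xs = np.nonzero(moving)
        print("moving pixels:", int(moving.sum()),
              "bounding box:", (ys.min(), ys.max(), xs.min(), xs.max()))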

Keywords: Motion Detection, Object Extraction, Optical Flow, Segmentation.

1354 Land Use Change Detection Using Remote Sensing and GIS

Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi

Abstract:

In recent decades, rapid and ill-considered changes in land use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was therefore evaluated first. Various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into nine classes using the supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are an alarming sign for the future status of the Haraz basin, since 27% of the area has changed; this is related to the conversion of rangeland to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential areas.

Keywords: HARAZ Basin, Change Detection, Land-use, Satellite Data.

1353 An Indispensable Parameter in Lipid Ratios to Discriminate between Morbid Obesity and Metabolic Syndrome in Children: High Density Lipoprotein Cholesterol

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a low-grade inflammatory disease and may lead to health problems such as hypertension, dyslipidemia and diabetes. It is also associated with important risk factors for cardiovascular diseases. This requires the detailed evaluation of obesity, particularly in children. The aim of this study is to clarify the potential associations between lipid ratios and obesity indices and to identify those with discriminating features among children with obesity and metabolic syndrome (MetS). A total of 408 children (aged between six and eighteen years) participated in the study. Informed consent forms were obtained from the participants and their parents. Ethical Committee approval was obtained. Anthropometric measurements such as weight and height, as well as waist, hip, head and neck circumferences and body fat mass, were taken. Systolic and diastolic blood pressure values were recorded. Body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index), waist-to-hip and head-to-neck ratios were calculated. Total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) analyses were performed on blood samples drawn from 110 children with normal body weight, 164 morbidly obese (MO) children and 134 children with MetS. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used to classify the groups: normal body weight, MO and MetS. The 15th-to-85th percentiles were used to define normal body weight children. Children whose values were above the 99th percentile were described as MO. MetS criteria were defined. Data were evaluated statistically with SPSS Version 20. The degree of statistical significance was accepted as p≤0.05. Mean±standard deviation values of BMI for normal body weight children, MO children and those with MetS were 15.7±1.1, 27.1±3.8 and 29.1±5.3 kg/m2, respectively. The corresponding values for the D2 index were 3.4±0.9, 14.3±4.9 and 16.4±6.7. Both BMI and the D2 index were capable of discriminating the groups from one another (p≤0.01). As far as the other obesity indices were concerned, the waist-to-hip and head-to-neck ratios did not exhibit any statistically significant difference between the MO and MetS groups (p≥0.05). The diagnostic obesity notation model assessment index-II was correlated with the triglycerides-to-HDL-C ratio in the normal body weight and MO groups (r=0.413, p≤0.01 and r=0.261, p≤0.05, respectively). Total cholesterol-to-HDL-C and LDL-C-to-HDL-C showed statistically significant differences between normal body weight and MO, as well as between MO and MetS (p≤0.05). The only group in which these two ratios were significantly correlated with the waist-to-hip ratio was the MetS group (r=0.332 and r=0.334, p≤0.01, respectively). The lack of correlation between the D2 index and the triglycerides-to-HDL-C ratio was another important finding in the MetS group. In this study, parameters and ratios whose associations with increased cardiovascular risk or cardiac death had been defined previously were evaluated along with obesity indices in children with morbid obesity and MetS, and their profiles during childhood were investigated. Aside from the nature of the correlation between the D2 index and the triglycerides-to-HDL-C ratio, the total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios, along with their correlations with the waist-to-hip ratio, showed that a combination of obesity-related parameters predicts better than a single parameter and appears to be helpful for discriminating MO children from the MetS group.

Keywords: Children, lipid ratios, metabolic syndrome, obesity indices.

1352 On the EM Algorithm and Bootstrap Approach Combination for Improving Satellite Image Fusion

Authors: Tijani Delleji, Mourad Zribi, Ahmed Ben Hamida

Abstract:

This paper discusses the combination of the EM algorithm with the bootstrap approach, applied to improve the satellite image fusion process. This novel satellite image fusion method, based on the estimation-theoretic EM algorithm and reinforced by the bootstrap approach, was successfully implemented and tested. The sensor images are first split by a Bayesian segmentation method to determine a joint region map for the fused image. Then, the EM algorithm is used in conjunction with the bootstrap approach to develop the bootstrap EM fusion algorithm, producing the fused target image. In this research, we propose to estimate the statistical parameters in the iterative equations of the EM algorithm from representative bootstrap samples of the images. The sizes of those samples are determined by a new criterion called the 'hybrid criterion'. The results of our work show that using bootstrap EM (BEM) in image fusion improves the quality of the estimated parameters, which in turn improves the fused image quality and reduces the computing time of the fusion process.
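
To illustrate the general idea of bootstrapping an EM estimator (not the authors' fusion equations), the sketch below runs a small EM for a two-component one-dimensional Gaussian mixture on bootstrap resamples of synthetic pixel intensities and averages the estimates; the sample sizes and number of resamples are arbitrary choices, not the paper's 'hybrid criterion'.

    import numpy as np

    def em_gmm_1d(x, iters=50):
        """Plain EM for a two-component 1-D Gaussian mixture."""
        mu = np.array([x.min(), x.max()], dtype=float)
        sigma = np.array([x.std(), x.std()]) + 1e-6
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibilities of each component for each sample
            dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: update weights, means and standard deviations
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        return pi, mu, sigma

    rng = np.random.default_rng(5)
    pixels = np.concatenate([rng.normal(60, 8, 3000), rng.normal(170, 12, 2000)])
    estimates = []
    for _ in range(20):                                   # 20 bootstrap resamples
        sample = rng.choice(pixels, size=800, replace=True)
        estimates.append(em_gmm_1d(sample)[1])            # keep the two class means
    print("bootstrap-averaged means:", np.mean(estimates, axis=0))  # ~ [60, 170]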

Keywords: Satellite image fusion, Bayesian segmentation, Bootstrap approach, EM algorithm.

1351 Optimized Facial Features-based Age Classification

Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park

Abstract:

The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is the optimization of facial feature points by establishing a mathematical relationship among facial features, and the use of the optimized feature points for age classification. Since the selected facial feature points are located in the mouth, nose, eye and eyebrow areas of the facial images, all desired facial feature points are extracted accurately. In the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, vertically as well as horizontally. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is discovered that the facial feature distances follow a constant ratio during age progression. The distances between the specified feature points increase as a person grows from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with the Support Vector Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
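
The classification stage can be sketched as follows: Euclidean distances between landmark pairs are used as features for an SVM. The landmarks, the age-dependent scaling and the four distance pairs below are synthetic stand-ins, and scikit-learn's SVC (libsvm, an SMO-type solver) stands in for the paper's SVM-SMO.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def feature_distances(landmarks, pairs):
        """Euclidean distances between selected landmark pairs of one face."""
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

    rng = np.random.default_rng(6)
    pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]              # four illustrative distances
    base = np.array([[0, 0], [3, 0], [0, 1], [3, 1], [1, 2], [2, 2], [1, 4], [2, 4]], float)
    X, y = [], []
    for age_group in range(4):                            # four synthetic age classes
        scale = 1.0 + 0.15 * age_group                    # distances grow with age
        for _ in range(100):
            pts = scale * (base + 0.05 * rng.normal(0, 1, (8, 2)))
            X.append(feature_distances(pts, pairs))
            y.append(age_group)

    Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                          test_size=0.25, random_state=0)
    clf = SVC(kernel='rbf', C=10.0).fit(Xtr, ytr)
    print("held-out accuracy: %.2f" % clf.score(Xte, yte))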

Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO

1350 A Hybrid Distributed Vision System for Robot Localization

Authors: Hsiang-Wen Hsieh, Chin-Chia Wu, Hung-Hsiu Yu, Shu-Fan Liu

Abstract:

Localization is one of the critical issues in the field of robot navigation. With an accurate estimate of the robot pose, robots are capable of navigating in the environment autonomously and efficiently. In this paper, a hybrid Distributed Vision System (DVS) for robot localization is presented. The presented approach integrates odometry data from the robot and images captured by overhead cameras installed in the environment to help reduce the possibility of localization failure due to illumination effects, accumulated encoder errors, and low-quality range data. An odometry-based motion model is applied to predict the robot poses, and robot images captured by the overhead cameras are then used to update the pose estimates with an HSV histogram-based measurement model. Experimental results show that the presented approach can localize robots in a global world coordinate system with localization errors within 100 mm.
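
A minimal sketch of an HSV histogram-based measurement model is shown below: a pose hypothesis is weighted by how well the image patch it predicts matches a reference hue-saturation histogram of the robot. The Bhattacharyya weighting, the bin counts and the synthetic patches are illustrative assumptions, not the paper's exact model.

    import numpy as np
    import cv2

    def hs_hist(patch_bgr):
        """Hue-saturation histogram of a BGR patch, normalised to sum to one."""
        hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return (hist / hist.sum()).astype(np.float32)

    def likelihood(ref_hist, patch_bgr, sigma=0.2):
        """Weight of a pose hypothesis from the Bhattacharyya distance between histograms."""
        d = cv2.compareHist(ref_hist, hs_hist(patch_bgr), cv2.HISTCMP_BHATTACHARYYA)
        return float(np.exp(-d ** 2 / (2 * sigma ** 2)))

    robot = np.zeros((40, 40, 3), dtype=np.uint8)
    robot[:, :] = (0, 0, 200)                          # reddish robot marker (BGR)
    floor = np.zeros((40, 40, 3), dtype=np.uint8)
    floor[:, :] = (90, 90, 90)                         # grey floor
    ref = hs_hist(robot)
    print("weight at correct pose:", likelihood(ref, robot))    # ~1.0
    print("weight at wrong pose:  ", likelihood(ref, floor))    # close to 0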

Keywords: Distributed Vision System, Localization, Measurement model, Motion model

1349 Use of Fuzzy Edge Image in Block Truncation Coding for Image Compression

Authors: Amarunnishad T.M., Govindan V.K., Abraham T. Mathew

Abstract:

An image compression method has been developed that uses a fuzzy edge image together with the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image was validated against classical edge detectors, using the results of the well-known Canny edge detector as a reference, before being applied in the proposed method. The bit plane generated by the conventional BTC method is replaced with a fuzzy bit plane generated by a logical OR operation between the fuzzy edge image and the corresponding conventional BTC bit plane. The input image is encoded with the block mean, the standard deviation and the fuzzy bit plane. The proposed method has been tested on 8 bits/pixel test images of size 512×512 and found to be superior, with a better Peak Signal-to-Noise Ratio (PSNR), compared to the conventional BTC and adaptive bit plane selection BTC (ABTC) methods. The raggedness, jagged appearance and ringing artifacts at sharp edges are greatly reduced in the images reconstructed by the proposed method with the fuzzy bit plane.
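
The sketch below illustrates the scheme described above on synthetic data: each 4x4 block is encoded by its mean and standard deviation, and its bit plane is OR-ed with an edge bit plane before reconstruction. A simple Sobel-magnitude threshold stands in for the paper's fuzzy edge image, and the block size and threshold are arbitrary.

    import numpy as np
    from scipy import ndimage

    def btc_block(block, edge_block):
        """Encode and decode one block from its mean, std and OR-ed bit plane."""
        m, s = block.mean(), block.std()
        plane = (block >= m) | edge_block          # OR with the edge bit plane
        q, k = plane.sum(), plane.size
        if q in (0, k):                            # flat block: reproduce the mean
            return np.full_like(block, m, dtype=float)
        lo = m - s * np.sqrt(q / (k - q))          # reconstruction value for 0-bits
        hi = m + s * np.sqrt((k - q) / q)          # reconstruction value for 1-bits
        return np.where(plane, hi, lo)

    rng = np.random.default_rng(7)
    img = rng.integers(0, 256, (64, 64)).astype(float)
    grad = ndimage.sobel(img, axis=0) ** 2 + ndimage.sobel(img, axis=1) ** 2
    edge_plane = grad > np.percentile(grad, 90)    # stand-in for the fuzzy edge image

    recon = np.zeros_like(img)
    for r in range(0, 64, 4):
        for c in range(0, 64, 4):
            recon[r:r + 4, c:c + 4] = btc_block(img[r:r + 4, c:c + 4],
                                                edge_plane[r:r + 4, c:c + 4])
    psnr = 10 * np.log10(255.0 ** 2 / np.mean((img - recon) ** 2))
    print("PSNR of reconstruction: %.1f dB" % psnr)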

Keywords: Image compression, Edge detection, Ground truth image, Peak signal to noise ratio

1348 Methods of Geodesic Distance in Two-Dimensional Face Recognition

Authors: Rachid Ahdid, Said Safi, Bouzid Manaut

Abstract:

In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing geodesic distances between points of the facial surface and between facial curves. In this study, we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the pixel intensity values. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under conditions where the pose and sample size are varied, and the YaleB database was used to examine the performance of the systems when the facial expressions and lighting are varied.

Keywords: 2D face recognition, Geodesic distance, Iso-Geodesic Curves, Geodesic-Intensity Histogram, facial surface, Neural Networks, K-Nearest Neighbor, Support Vector Machines.

1347 Pre-Analysis of Printed Circuit Boards Based On Multispectral Imaging for Vision Based Recognition of Electronics Waste

Authors: Florian Kleber, Martin Kampel

Abstract:

The increasing demand for gallium, indium and rare-earth elements for the production of electronics, e.g. solid-state lighting, photovoltaics, integrated circuits, and liquid crystal displays, will exceed the worldwide supply according to current forecasts. Recycling systems to reclaim these materials are not yet in place, which challenges the sustainability of these technologies. This paper proposes a multispectral imaging system as the basis for a vision-based recognition system for valuable components of electronics waste. The multispectral images are intended to enhance the contrast of images of printed circuit boards (single components as well as labels) for further analysis, such as optical character recognition and the recognition of entire printed circuit boards. The results show that a higher contrast is achieved in the near infrared compared to ultraviolet and visible light.

Keywords: Electronic Waste, Recycling, Multispectral Imaging, Printed Circuit Boards, Rare-Earth Elements.

1346 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, in order to obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.

Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.

1345 Associations between Metabolic Syndrome and Bone Mineral Density and Trabecular Bone Score in Postmenopausal Women with Non-Vertebral Fractures

Authors: Vladyslav Povoroznyuk, Larysa Martynyuk, Iryna Syzonenko, Liliya Martynyuk

Abstract:

The medical, social, and economic relevance of osteoporosis stems from the reduced quality of life and the increased disability and mortality of patients resulting from fractures due to low-energy trauma. This study aims to examine the associations between metabolic syndrome components, bone mineral density (BMD) and the trabecular bone score (TBS) in menopausal women with non-vertebral fractures. A total of 1161 menopausal women aged 50-79 years were examined and divided into three groups: group A included 419 women with increased body weight (BMI 25.0-29.9 kg/m2), group B 442 women with obesity (BMI >29.9 kg/m2), and group C 300 women with metabolic syndrome (diagnosed according to the IDF criteria, 2005). The BMD of the lumbar spine (L1-L4), femoral neck, total body and forearm was investigated using dual-energy X-ray absorptiometry. The bone quality indexes were measured with the Med-Imaps installation. All analyses were performed using Statistical Package 6.0. The BMD of the lumbar spine (L1-L4), femoral neck, total body and ultradistal radius was significantly higher in women with obesity and metabolic syndrome compared with the pre-obese group (p<0.001). The TBS was significantly higher in women with increased body weight compared with the obese and metabolic syndrome patients. The analysis showed a significant positive correlation between waist circumference and triglyceride level on the one hand and the BMD of the lumbar spine and femur on the other. A significant negative association between the serum HDL level and the BMD of the investigated sites was established. The TBS (L1-L4) indexes correlated positively with the HDL (high-density lipoprotein) level. Despite the fact that the BMD indexes were better in women with metabolic syndrome, the frequency of non-vertebral fractures was significantly higher in this group of patients.

Keywords: Bone mineral density, trabecular bone score, metabolic syndrome, fracture.

1344 RoboWeedSupport-Sub Millimeter Weed Image Acquisition in Cereal Crops with Speeds up till 50 Km/H

Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Mads Dyrmann, Robert Poulsen

Abstract:

For the past three years, the Danish project RoboWeedSupport has sought to bridge the gap between the potential herbicide savings offered by a decision support system and the required weed inspections. In order to automate the weed inspections, it is desirable to generate a map of the weed species present within the field; to generate the map, images must be captured with samples covering the field. This paper investigates the economic cost of performing this data collection with a camera system mounted on an all-terrain vehicle (ATV) able to drive and collect data at up to 50 km/h while still maintaining an image quality sufficient for identifying newly emerged grass weeds. The economic estimates are based on approximately 100 hectares recorded at three different locations in Denmark. With an average image density of 99 images per hectare, the ATV had a capacity of 28 ha per hour, which is estimated to cost 6.6 EUR/ha. Alternatively, relying on a boom solution for an existing tractor, a cost of 2.4 EUR/ha is estimated to be obtainable under equal conditions.

Keywords: Weed mapping, integrated weed management, weed recognition.

1343 Risk in the South African Sectional Title Industry: An Assurance Perspective

Authors: Leandi Steenkamp

Abstract:

The sectional title industry has been a part of the property landscape in South Africa for almost half a century, and plays a significant role in addressing the housing problem in the country. Stakeholders such as owners and investors in sectional title property are in most cases not directly involved in the management thereof, and place reliance on the audited annual financial statements of bodies corporate for decision-making purposes. Although the industry seems to be highly regulated, the legislation regarding accounting and auditing of sectional title is vague and ambiguous. Furthermore, there are no industry-specific auditing and accounting standards to guide accounting and auditing practitioners in performing their work and industry financial benchmarks are not readily available. In addition, financial pressure on sectional title schemes is often very high due to the fact that some owners exercise unrealistic pressure to keep monthly levies as low as possible. All these factors have an impact on the business risk as well as audit risk of bodies corporate. Very little academic research has been undertaken on the sectional title industry in South Africa from an accounting and auditing perspective. The aim of this paper is threefold: Firstly, to discuss the findings of a literature review on uncertainties, ambiguity and confusing aspects in current legislation regarding the audit of a sectional title property that may cause or increase audit and business risk. Secondly, empirical findings of risk-related aspects from the results of interviews with three groups of body corporate role-players will be discussed. The role-players were body corporate trustee chairpersons, body corporate managing agents and accounting and auditing practitioners of bodies corporate. Specific reference will be made to business risk and audit risk. Thirdly, practical recommendations will be made on possibilities of closing the audit expectation gap, and further research opportunities in this regard will be discussed.

Keywords: Assurance, audit, audit risk, body corporate, corporate governance, sectional title.

1342 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most interesting techniques in use today in our digital world. CBIR, commonly expanded as Content-Based Image Retrieval, is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from the digital images. In this paper, two research works are presented using CBIR. The first work provides an automated and interactive approach to the analysis of CBIR techniques. The CBIR here works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as the Contourlet, Ridgelet and Shearlet are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and the multi-class Support Vector Machine. The testing phase then predicts the class of a new input image using the trained classifier, labelling it as one of four classes: 1 - normal brain, 2 - benign tumour, 3 - malignant tumour and 4 - severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. This paper presents these two approaches as a contribution to the medical field, offering better retrieval performance and tumour stage identification.

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.
