Search results for: retinal vessel segmentation

52 Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards

Authors: Jonghyun Park, Toan Nguyen Dinh, Gueesang Lee

Abstract:

In this paper, we present a novel approach to accurately detect text regions, including shop names, in signboard images with complex backgrounds for mobile system applications. The proposed method combines text detection using edge profiles with region segmentation using the fuzzy c-means method. In the first step, we apply the Canny edge operator to extract all possible object edges. Edge profile analysis in the vertical and horizontal directions is then performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and geometrical characteristics of each object contour are carefully examined to construct candidate text regions and to separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that the proposed method is robust to differences in character size and color and provides reliable text binarization.
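As an illustration of the final binarization step, the following minimal Python sketch clusters the pixel intensities of a detected text region into two fuzzy c-means clusters and takes the darker cluster as text. It is a generic sketch under the assumption of dark text on a lighter background, not the authors' implementation; the initialisation and parameter values are illustrative.

```python
import numpy as np

def fuzzy_cmeans_binarize(gray, c=2, m=2.0, n_iter=50, eps=1e-5):
    """Binarize a grayscale text region with fuzzy c-means on pixel intensities.

    gray : 2-D uint8 array (the detected text region).
    Returns a boolean mask where True marks the darker (assumed text) cluster.
    """
    x = gray.astype(float).ravel()                        # treat pixels as 1-D samples
    centers = np.percentile(x, [25, 75])                  # rough initial cluster centers
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12  # distances to each center
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0, keepdims=True)                 # fuzzy memberships, columns sum to 1
        um = u ** m
        new_centers = (um @ x) / um.sum(axis=1)           # weighted center update
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    labels = np.argmax(u, axis=0).reshape(gray.shape)     # hard assignment of each pixel
    return labels == int(np.argmin(centers))              # darker cluster taken as text
```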

Keywords: Text detection, edge profile, signboard image, fuzzy clustering.

51 Stock Market Integration Measurement: Investigation of Malaysia and Singapore Stock Markets

Authors: B. K. Yeoh, Z. Arsad, C. W. Hooy

Abstract:

This paper tests the level of market integration between the Malaysia and Singapore stock markets and the world market. The Kalman filter (KF) methodology is applied to the International Capital Asset Pricing Model (ICAPM), and the pricing errors estimated within the ICAPM framework are used as a measure of market integration or segmentation. The advantage of the KF technique is that it allows for time-varying coefficients in estimating the ICAPM and is hence able to capture the varying degree of market integration. Empirical results show clear evidence of a varying degree of market integration for both Malaysia and Singapore. Furthermore, the changes in the level of market integration are found to coincide with certain economic events that have taken place. These findings provide evidence of the practicability of the KF technique for estimating stock market integration. Comparing the two markets, the trends of the market integration indices for Malaysia and Singapore look similar through time, but their magnitudes differ notably, with the Malaysian stock market showing a greater degree of integration. Finally, the significant evidence of a varying degree of market integration shows that OLS is inappropriate for estimating the level of market integration.
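To make the filtering idea concrete, here is a minimal Python sketch of a scalar Kalman filter that tracks a time-varying beta in a one-factor ICAPM-style regression and returns the pricing errors (innovations). The random-walk state equation and the noise variances are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def kalman_time_varying_beta(r_local, r_world, q=1e-4, r=1e-3):
    """Estimate a time-varying beta in r_local[t] = beta[t] * r_world[t] + noise.

    State equation:       beta[t] = beta[t-1] + w,   w ~ N(0, q)  (random walk)
    Observation equation: r_local[t] = r_world[t] * beta[t] + v,  v ~ N(0, r)
    Returns the filtered beta path and the one-step-ahead pricing errors.
    """
    beta, p = 1.0, 1.0                      # initial state and state variance
    betas, errors = [], []
    for y, x in zip(r_local, r_world):
        p = p + q                           # predict: variance grows by the state noise
        e = y - x * beta                    # pricing error (innovation)
        s = x * p * x + r                   # innovation variance
        k = p * x / s                       # Kalman gain
        beta = beta + k * e                 # update state with the innovation
        p = (1.0 - k * x) * p               # update state variance
        betas.append(beta)
        errors.append(e)
    return np.array(betas), np.array(errors)
```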

Keywords: ICAPM, Kalman filter, stock market integration.

50 Antiangiogenic Potential of Phellodendron amurense Bark Extract Observed on Chorioallantoic Membrane

Authors: Ľudmila Ballová, Slavomír Kurhajec, Eva Petrovová, Jarmila Eftimová

Abstract:

Angiogenesis, the formation of new blood vessels from pre-existing vasculature, plays an important role in pathologic processes such as the growth and metastasis of tumours. Tumours cannot grow beyond a few millimetres without blood supply from newly formed blood vessels from the host tissue, a process called tumour-induced angiogenesis. Successful research on the antiangiogenic treatment of cancer has focused on nutraceuticals with angiogenesis-modulating properties. Berberine, the major active component of the bark of Phellodendron amurense Rupr., has shown antitumour activity by intervening in different steps of carcinogenesis. The influence of an ethanolic extract of Phellodendron amurense bark on angiogenesis was tested in vivo on the chick chorioallantoic membrane (CAM). The irritancy of the CAM after the application of the crude bark extract dissolved in normal saline (10 mg/mL) was investigated on embryonic day 7. No significant signs of irritancy, such as vasoconstriction, hyperaemia, haemorrhage or coagulation, were observed, which indicates the harmless character of the extract. A significant reduction in vessel sprouting and a higher percentage of avascular zone were observed for CAM treated with the extract in comparison with non-treated CAM (control), which is evidence of the antiangiogenic potential of the extract. These results could contribute to the development of novel drugs for the treatment of cancer or other diseases in which angiogenesis plays a significant role.

Keywords: Angiogenesis, berberine, chorioallantoic membrane, Phellodendron amurense.

49 Fuzzy Mathematical Morphology Approach in Image Processing

Authors: Yee Yee Htun, Dr. Khaing Khaing Aye

Abstract:

Morphological operators transform the original image into another image through interaction with another image of a certain shape and size, known as the structuring element. Mathematical morphology provides a systematic approach to analyzing the geometric characteristics of signals or images and has been applied widely to many applications such as edge detection, object segmentation and noise suppression. Fuzzy mathematical morphology aims to extend the binary morphological operators to grey-level images. In order to define the basic morphological operations such as fuzzy erosion, dilation, opening and closing, a general method based upon fuzzy implication and inclusion grade operators is introduced. The fuzzy morphological operations extend the ordinary morphological operations by using fuzzy sets, where the union operation is replaced by a maximum operation and the intersection operation is replaced by a minimum operation. This work consists of two parts. In the first, fuzzy set theory, fuzzy mathematical morphology (which is based on fuzzy logic and fuzzy set theory), fuzzy morphological operations and their properties are studied in detail. In the second part, the application of fuzziness in mathematical morphology to practical work such as image processing is discussed with illustrative problems.
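As a small illustration of the min/max formulation, the sketch below implements flat-structuring-element fuzzy erosion and its dual dilation on membership images in [0, 1] with NumPy. It is one common definition among several fuzzy-morphology constructions and is not tied to the implication-and-inclusion-grade operators studied in the paper.

```python
import numpy as np

def fuzzy_erode(img, se):
    """Grey-level (fuzzy) erosion with a flat structuring element.

    img : 2-D array with membership values in [0, 1].
    se  : 2-D boolean array marking the structuring element support.
    Erosion takes the minimum over the neighbourhood (fuzzy intersection).
    """
    h, w = se.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.ones(img.shape, dtype=float)
    for di in range(h):
        for dj in range(w):
            if se[di, dj]:
                window = padded[di:di + img.shape[0], dj:dj + img.shape[1]]
                out = np.minimum(out, window)     # min realises the fuzzy intersection
    return out

def fuzzy_dilate(img, se):
    """Grey-level (fuzzy) dilation: maximum over the reflected structuring element."""
    return 1.0 - fuzzy_erode(1.0 - img, se[::-1, ::-1])   # duality with erosion
```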

Keywords: Binary Morphological, Fuzzy sets, Grayscale morphology, Image processing, Mathematical morphology.

48 The Effect of Bottom Shape and Baffle Length on the Flow Field in Stirred Tanks in Turbulent and Transitional Flow

Authors: Jie Dong, Binjie Hu, Andrzej W Pacek, Xiaogang Yang, Nicholas J. Miles

Abstract:

The effect of the shape of the vessel bottom and the length of baffles on the velocity distributions in turbulent and transitional flow has been simulated. The turbulent flow was simulated using the standard k-ε model and verified using LES, whereas the transitional flow was simulated using LES only. It has been found that both the shape of the tank bottom and the length of the baffles have a significant effect on the flow pattern and velocity distribution below the impeller. In the dished-bottom tank with baffles reaching the edge of the dish, a large rotating volume of liquid was formed below the impeller. Liquid in this rotating region was not fully mixed, and a dead zone was formed there. The size and the intensity of circulation within this zone calculated by the k-ε model and by LES were practically identical, which reinforces the accuracy of the numerical simulations. Both types of simulation also show that employing full-length baffles can reduce the size of the dead zone formed below the impeller. LES was also used to simulate the velocity distribution below the impeller in transitional flow, and it has been found that secondary circulation loops were formed near the tank bottom in all investigated geometries. However, in this case the length of the baffles has a smaller effect on the volume of rotating liquid than in the turbulent flow.

Keywords: Baffles length, dished bottom, dead zone, flow field.

47 Flow Regime Characterization in a Diseased Artery Model

Authors: Anis S. Shuib, Peter R. Hoskins, William J. Easson

Abstract:

Cardiovascular disease, mostly in the form of atherosclerosis, is responsible for 30% of all world deaths, amounting to 17 million people per year. Atherosclerosis is due to the formation of plaque. The fatty plaque may be at risk of rupture, leading typically to stroke and heart attack. The plaque is usually associated with a high degree of lumen reduction, called a stenosis. The initiation and progression of the disease are strongly linked to the hemodynamic environment near the vessel wall. The aim of this study is to validate the flow of a blood mimic through an arterial stenosis model against a computational fluid dynamics (CFD) package. In the experiment, an axisymmetric model was constructed consisting of contraction and expansion regions that follow a mathematical cosine function. A 30% diameter reduction was used in this study. Particle image velocimetry (PIV) was used to characterize the flow. The fluid consists of rigid spherical particles suspended in a water-glycerol-NaCl mixture. Particles with 20 μm diameter were selected to follow the flow of the fluid. The flow at Re = 155, 270 and 390 was investigated. The experimental results are compared with a FLUENT simulation using a viscous laminar flow model. The results suggest that the laminar flow model was sufficient to predict the flow velocity at the inlet, but the velocity at the stenosis throat at Re = 390 was overestimated. Hence, a transition to a turbulent regime might have developed in the throat region as the flow rate increased.

Keywords: Atherosclerosis, Particle-laden flow, Particle image velocimetry, Stenosis artery.

46 Using Mean-Shift Tracking Algorithms for Real-Time Tracking of Moving Images on an Autonomous Vehicle Testbed Platform

Authors: Benjamin Gorry, Zezhi Chen, Kevin Hammond, Andy Wallace, Greg Michaelson

Abstract:

This paper describes new computer vision algorithms that have been developed to track moving objects as part of a long-term study into the design of (semi-)autonomous vehicles. We present the results of a study exploiting variable kernels for tracking in video sequences. The basis of our work is the mean shift object-tracking algorithm: for a moving target, it is usual to define a rectangular target window in an initial frame and then process the data within that window to separate the tracked object from the background by the mean shift segmentation algorithm. Rather than use the standard Epanechnikov kernel, we have used a kernel weighted by the Chamfer distance transform to improve the accuracy of target representation and localization, minimising the distance between the two distributions in RGB color space using the Bhattacharyya coefficient. Experimental results show the improved tracking capability and versatility of the algorithm in comparison with results using the standard kernel. These algorithms are incorporated as part of a robot test-bed architecture which has been used to demonstrate their effectiveness.
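For readers unfamiliar with the similarity measure, the following minimal Python sketch builds normalised joint RGB histograms for the target and candidate windows and computes their Bhattacharyya coefficient. The Chamfer-weighted kernel and the mean-shift iterations themselves are omitted; the bin count is an illustrative choice.

```python
import numpy as np

def rgb_histogram(patch, bins=8):
    """Normalised joint RGB histogram of an image patch (H x W x 3, uint8)."""
    pixels = patch.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist / hist.sum()

def bhattacharyya_coefficient(p, q):
    """Similarity between two normalised histograms; 1 means identical."""
    return float(np.sum(np.sqrt(p * q)))

# Mean-shift tracking compares the target model p with candidate histograms q(y)
# at shifted windows and moves towards the location that maximises the
# coefficient (equivalently, minimises the distance sqrt(1 - coefficient)).
```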

Keywords: Hume, functional programming, autonomous vehicle, pioneer robot, vision.

45 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: Active Contour, Bayesian, Echocardiographic image, Feature vector.

44 Opinion Mining Framework in the Education Domain

Authors: A. M. H. Elyasir, K. S. M. Anbananthen

Abstract:

The internet is growing larger and has become the most popular platform for people to share their opinions on different interests. We choose the education domain, specifically comparing some Malaysian universities against each other. This comparison produces a benchmark based on different criteria shared by online users in various online resources, including Twitter, Facebook and web pages. The comparison is accomplished using an opinion mining framework to extract and process the unstructured text and classify the result as positive, negative or neutral (polarity). Hence, we divide our framework into three main stages: opinion collection (extraction), unstructured text processing and polarity classification. The extraction stage includes web crawling, HTML parsing, sentence segmentation for punctuation classification and Part of Speech (POS) tagging; the second stage processes the unstructured text with stemming and stop-word removal and finally prepares the raw text for classification using Named Entity Recognition (NER). The last phase is to classify the polarity and present the overall result of the comparison among the Malaysian universities. The final result is useful for those who are interested in studying in Malaysia, as our output declares clear winners based on public opinions across the web.
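A minimal sketch of the second-stage text processing (tokenisation, POS tagging, stop-word removal and stemming) using NLTK is shown below. The crawling, NER and polarity-classification stages are omitted, and the sample sentence is invented for illustration; the required NLTK corpora are assumed to be installed.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# Assumes the 'punkt', 'stopwords' and 'averaged_perceptron_tagger'
# NLTK resources have already been downloaded.
def preprocess_opinion(text):
    """Tokenise, POS-tag, drop stop words and stem a raw opinion sentence."""
    tokens = nltk.word_tokenize(text)
    tagged = nltk.pos_tag(tokens)                       # POS tags kept for later NER
    stop = set(stopwords.words("english"))
    stemmer = PorterStemmer()
    kept = [(stemmer.stem(w.lower()), tag)
            for w, tag in tagged
            if w.isalpha() and w.lower() not in stop]   # stop-word removal + stemming
    return kept

print(preprocess_opinion("The engineering faculty at this university is excellent"))
```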

Keywords: Entity Recognition, Education Domain, Opinion Mining, Unstructured Text.

43 Automatic Music Score Recognition System Using Digital Image Processing

Authors: Yuan-Hsiang Chang, Zhong-Xian Peng, Li-Der Jeng

Abstract:

Music has always been an integral part of humans' daily lives. However, for most people, reading a musical score and turning it into a melody is not easy. This study aims to develop an automatic music score recognition system using digital image processing, which can be used to read and analyze musical score images automatically. The technical approach includes: (1) staff region segmentation; (2) image preprocessing; (3) note recognition; and (4) accidental and rest recognition. Digital image processing techniques (e.g., horizontal/vertical projections, connected component labeling, morphological processing, template matching, etc.) were applied according to the musical notes, accidentals, and rests in staff notation. Preliminary results showed that our system could achieve detection and recognition rates of 96.3% and 91.7%, respectively. In conclusion, we present an effective automated musical score recognition system that could be integrated with a media player to play music/songs given input images of a musical score. Ultimately, this system could also be incorporated in applications for mobile devices as a learning tool, such that a music learner could learn to play music/songs.
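As an example of the projection techniques mentioned, the sketch below locates candidate staff-line rows in a binarised score image from its horizontal projection. The fill-fraction threshold is an illustrative assumption, not a value taken from the paper.

```python
import numpy as np

def staff_line_rows(binary, min_fill=0.6):
    """Locate candidate staff-line rows in a binarised score image.

    binary  : 2-D array where 1 marks black (ink) pixels.
    min_fill: fraction of the page width a row must cover to count as a staff line.
    Returns the row indices whose horizontal projection exceeds the threshold.
    """
    projection = binary.sum(axis=1)                 # black-pixel count per row
    threshold = min_fill * binary.shape[1]
    return np.flatnonzero(projection > threshold)

# Staff regions can then be segmented by grouping consecutive rows into runs
# of five equally spaced lines before note and accidental recognition.
```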

Keywords: Connected component labeling, image processing, morphological processing, optical musical recognition.

42 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, for lossy and lossless transmission through a wireless sensor network, is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information might be damaged or lost due to noise, interference and congestion. In order to prevent valuable information from being dropped, various retransmission schemes have been proposed; in the proposed scheme, QTD is used. QTD is an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal to noise ratio, mean square error and bits per pixel in the compressed image, as well as analysis of the difficulties during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality; the algorithm is implemented in MATLAB 7.1 and the NS2 simulator.
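The following minimal Python sketch shows a recursive quadtree decomposition that splits an image into homogeneous blocks, which is the segmentation idea QTD is based on. The homogeneity criterion (intensity range) and the threshold are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def quadtree_blocks(img, threshold=20, min_size=4, origin=(0, 0)):
    """Recursively split an image into homogeneous blocks.

    A block is kept whole when its intensity range (max - min) is below
    `threshold`; otherwise it is split into four quadrants. Returns a list
    of (row, col, height, width) tuples describing the leaf blocks.
    """
    h, w = img.shape
    r0, c0 = origin
    if (img.max() - img.min() <= threshold) or h <= min_size or w <= min_size:
        return [(r0, c0, h, w)]
    hh, hw = h // 2, w // 2
    blocks = []
    for dr, dc, bh, bw in [(0, 0, hh, hw), (0, hw, hh, w - hw),
                           (hh, 0, h - hh, hw), (hh, hw, h - hh, w - hw)]:
        blocks += quadtree_blocks(img[dr:dr + bh, dc:dc + bw],
                                  threshold, min_size, (r0 + dr, c0 + dc))
    return blocks
```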

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.

41 A Robust Diverged Localization and Recognition of License Registration Characters

Authors: M. Sankari, R. Bremananth, C. Meena

Abstract:

Localization and recognition of license registration characters from a moving vehicle is a computationally complex task in the field of machine vision and is of substantial interest because of its diverse applications, such as cross-border security, law enforcement and various other intelligent transportation applications. Previous research used plate-specific details such as aspect ratio, character style, color or dimensions of the plate in the complex task of plate localization. In this paper, license registration characters are localized by an Enhanced Weight Based Density Map (EWBDM) method, which is independent of such constraints. In connection with our previous method, this paper proposes a method that relaxes constraints on lighting conditions, the different character fonts occurring on the plate, and plates with hand-drawn characters in various aspect ratios. The robustness of this method makes it well suited for applications where the appearance of plates varies widely. Experimental results show that this approach is suited to recognizing license plates in different external environments.

Keywords: Character segmentation, Connectivity checking, Edge detection, Image analysis, license plate localization, license number recognition, Quality frame selection

40 Distinction between Manifestations of Diabetic Retinopathy and Dust Artifacts Using Three-Dimensional HSV Color Space

Authors: Naoto Suzuki

Abstract:

Many ophthalmologists find it difficult to distinguish between small retinal hemorrhages and dust artifacts when using fundus photography for the diagnosis of diabetic retinopathy. Six patients with diabetic retinopathy underwent fundus photography, which revealed dust artifacts in the photographs of some patients. We constructed an experimental device similar to the optical system of the fundus camera and colored the fundi of artificial eyes with khaki, sunset, rose and sunflower colors. Using the experimental device, we photographed dust artifacts with each artificial eye. We used the Scilab 5.4.0 and SIVP 0.5.3 software to convert the red, green, and blue (RGB) color space to the hue, saturation, and value (HSV) color space. We calculated the differences between the areas of manifestations and peri-manifestations and between the areas of dust artifacts and peri-artifacts using average HSVs. The V values in HSV for the manifestations were as follows: hemorrhages, 0.06 ± 0.03; hard exudates, −0.12 ± 0.06; and photocoagulation marks, 0.07 ± 0.02. For dust artifacts, visualized in the human and artificial eyes, the V values were as follows: human eye, 0.19 ± 0.03; khaki, 0.41 ± 0.02; sunset, 0.43 ± 0.04; rose, 0.47 ± 0.11; and sunflower, 0.59 ± 0.07. For the human and artificial eyes, we calculated two sensitivity values of dust artifacts compared with manifestation areas. The V values of the HSV color space enabled the differentiation of small hemorrhages, hard exudates, and photocoagulation marks from dust artifacts.
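A minimal sketch of the V-channel comparison is given below: V is taken as the maximum of the RGB channels, and the contrast between a candidate region and its surroundings is the difference of their average V values. The region masks are assumed to be given; this is not the authors' Scilab/SIVP code.

```python
import numpy as np

def value_channel(rgb):
    """V channel of the HSV colour space: for 8-bit RGB, V = max(R, G, B) / 255."""
    return rgb.astype(float).max(axis=-1) / 255.0

def v_contrast(rgb, region_mask, surround_mask):
    """Average V inside a lesion/artifact region minus the average V around it.

    Positive values mean the region is brighter than its surroundings, which is
    how the measured dust artifacts separate from small haemorrhages here.
    """
    v = value_channel(rgb)
    return float(v[region_mask].mean() - v[surround_mask].mean())
```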

Keywords: Diabetic retinopathy, HSV color space, small hemorrhages, hard exudates, photocoagulation marks.

39 Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary

Authors: Erika Pigliapoco, Valerio Freschi, Alessandro Bogliolo

Abstract:

This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it makes use of graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
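To illustrate grapheme-based segmentation, the sketch below performs a greedy longest-match split of a word against a grapheme inventory. This stand-in is only illustrative; it is not necessarily the linear algorithm proposed in the paper, and the inventory shown is hypothetical.

```python
def segment_graphemes(word, inventory):
    """Greedy longest-match segmentation of a word into graphemes.

    inventory : set of multi-letter graphemes (e.g. "sh", "th", "ough").
    Falls back to single letters when no longer grapheme matches, so with a
    bounded grapheme length the cost stays linear in the word length.
    """
    longest = max(len(g) for g in inventory)
    out, i = [], 0
    while i < len(word):
        for size in range(min(longest, len(word) - i), 0, -1):
            piece = word[i:i + size]
            if size == 1 or piece in inventory:
                out.append(piece)
                i += size
                break
    return out

print(segment_graphemes("thought", {"th", "ough", "gh", "ou"}))  # ['th', 'ough', 't']
```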

Keywords: Automatic phonetic transcription, pronunciation rules, hierarchical tree inference.

38 Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition

Authors: Hariton N. Costin, Iulian Ciocoiu, Tudor Barbu, Cristian Rotariu

Abstract:

In this paper, three different approaches for person verification and identification, i.e. by means of fingerprints, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach. The assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Component Analysis (ICA); c) NMF with sparse constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. As to voice/speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods in order to construct a supervised, speaker-dependent voice recognition system. The final decision (e.g. "accept/reject" for a verification task) is taken by using a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the utilization of our system for a future complex biometric card.
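The decision-level fusion can be illustrated with a few lines of Python: each modality contributes an accept/reject vote and the majority label wins. The labels and dictionary layout are illustrative.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse per-modality verification decisions by simple majority.

    decisions : dict such as {"face": "accept", "fingerprint": "reject",
                              "voice": "accept"}.
    Returns the label supported by most matchers ("accept" in this example).
    """
    counts = Counter(decisions.values())
    label, _ = counts.most_common(1)[0]
    return label

print(majority_vote({"face": "accept", "fingerprint": "reject", "voice": "accept"}))
```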

Keywords: Biometry, image processing, pattern recognition, speech analysis.

37 Hybrid Heat Pump for Micro Heat Network

Authors: J. M. Counsell, Y. Khalid, M. J. Stewart

Abstract:

Achieving nearly zero-carbon heating continues to be identified by UK government analysis as an important feature of any lowest-cost pathway to reducing greenhouse gas emissions. Heat currently accounts for 48% of UK energy consumption and approximately one third of the UK's greenhouse gas emissions. Heat networks are being promoted by UK investment policies as one means of supporting hybrid heat pump based solutions. To this effect, the RISE (Renewable Integrated and Sustainable Electric) heating system project is investigating how an all-electric, hybrid heat-source configuration could play a key role in the long-term decarbonisation of heat. For the purposes of this study, hybrid systems are defined as systems combining an electrically driven air source heat pump, electrically powered thermal storage, a thermal vessel and a micro heat network as an integrated system. This hybrid strategy allows the system to store energy during periods of low electricity demand from the national grid, turning it into a dynamic supply of low-cost heat which is utilized only when required. Currently, a prototype of such a system is being tested in a modern house integrated with advanced controls and sensors. This paper presents the virtual performance analysis of the system and its design for a micro heat network with multiple dwelling units. The results show that the RISE system is controllable and can reduce carbon emissions whilst being competitive in running costs with a conventional gas boiler heating system.

Keywords: Gas boilers, heat pumps, hybrid heating and thermal storage, renewable integrated & sustainable electric.

36 A Robust Salient Region Extraction Based on Color and Texture Features

Authors: Mingxin Zhang, Zhaogan Lu, Junyi Shen

Abstract:

In current research reports, salient regions are usually defined as those regions that present the main meaningful or semantic content. However, there are no uniform saliency metrics that describe the saliency of implicit image regions. Most common metrics treat as salient those regions which have many abrupt changes or unpredictable characteristics, but such metrics fail to detect salient, useful regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish different regions. Thus, we present a novel saliency metric coupled with color and texture features, and its corresponding salient region extraction method. In order to evaluate the saliency values of implicit regions in an image, three main colors and multi-resolution Gabor features are used for the color and texture features, respectively. For each region, the saliency value is evaluated as the total sum of its Euclidean distances to the other regions in the color and texture spaces. A synthesized image and several practical images with distinct salient regions are used to evaluate the performance of the proposed saliency metric against several common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance index based metrics. Experimental results verify that the proposed saliency metric achieves more robust performance than these common saliency metrics.
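A minimal sketch of the saliency computation is shown below: given one feature vector per region (concatenated colour and texture descriptors), each region's saliency is the sum of its Euclidean distances to all other regions. Feature extraction itself (dominant colours, multi-resolution Gabor responses) is assumed to be done elsewhere.

```python
import numpy as np

def region_saliency(features):
    """Saliency of each region as its total feature-space distance to the others.

    features : (n_regions, n_dims) array of concatenated colour and texture
               descriptors, one row per segmented region.
    Regions far from everything else in feature space score as salient.
    """
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))     # pairwise Euclidean distances
    return dist.sum(axis=1)                      # row sums = saliency values
```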

Keywords: salient regions, color and texture features, image segmentation, saliency metric

35 Customer Segmentation Model in E-commerce Using Clustering Techniques and LRFM Model: The Case of Online Stores in Morocco

Authors: Rachid Ait daoud, Abdellah Amine, Belaid Bouikhalene, Rachid Lbibb

Abstract:

Given the increase in the number of e-commerce sites, the number of competitors has become very large. This means that companies have to take appropriate decisions in order to meet the expectations of their customers and satisfy their needs. In this paper, we present a case study applying the LRFM (length, recency, frequency and monetary) model and clustering techniques to the electronic commerce sector, with a view to evaluating the customer value of Moroccan e-commerce websites and then developing effective marketing strategies. To achieve these objectives, we adopt the LRFM model with a two-stage clustering method. In the first stage, the self-organizing maps method is used to determine the best number of clusters and the initial centroids. In the second stage, the k-means method is applied to segment 730 customers into nine clusters according to their L, R, F and M values. The results show that cluster 6 is the most important cluster because its average values of L, R, F and M are higher than the overall average values. In addition, this study considers another variable that describes the mode of payment used by customers, to improve and strengthen the cluster analysis. The cluster analysis demonstrates that the payment method is one of the key indicators of a new index which allows the level of customers' confidence in the company's website to be assessed.
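As an illustration of the second clustering stage, the following sketch standardises the four LRFM variables and segments customers with scikit-learn's k-means using nine clusters, as in the study. The SOM stage that selects the number of clusters and the initial centroids is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def segment_customers(lrfm, n_clusters=9, seed=0):
    """Cluster customers on (length, recency, frequency, monetary) values.

    lrfm : (n_customers, 4) array of raw L, R, F, M values.
    Returns the cluster label of each customer and the cluster centres
    expressed back in the original LRFM units.
    """
    scaler = StandardScaler()
    x = scaler.fit_transform(lrfm)                       # put the four scales on equal footing
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(x)
    centres = scaler.inverse_transform(km.cluster_centers_)
    return km.labels_, centres

# Clusters whose centre exceeds the overall LRFM mean on every dimension
# (such as cluster 6 in the study) are the high-value, loyal segments.
```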

Keywords: Customer value, LRFM model, Cluster analysis, Self-Organizing Maps method (SOM), K-means algorithm, loyalty.

34 Object Identification with Color, Texture, and Object-Correlation in CBIR System

Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali

Abstract:

The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. We see a lot of work in the area of textual information, but much less progress in multimedia information. For text-based information, technologies such as data mining and data marts, which grew out of the basic database concepts of the 1960s, are now in routine use. In image search, and especially in image identification, computerized systems are at a very early stage; even here we cannot see as much progress as in text-based search techniques. One main reason for this is the widespread roots of image search, where many areas like artificial intelligence, statistics, image processing and pattern recognition play their role. Human psychology, perception and cultural diversity also have their share in the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper, where objects in the image are identified on the basis of their geometrical shapes and other features like color and texture, and object correlation augments this search process. To remain focused on object identification, simple images are selected for this work to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.

Keywords: Object correlation, Geometrical shape, Color, texture, features, contents.

33 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation

Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi

Abstract:

Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Separate from the habitual risk factors, there are some inherent factors in each individual that can increase the risk potential of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. To approach this topic, by simplifying the details of the bifurcation, three models with angles of 30°, 45°, and 60° were created; then, using CFD analysis, the response of these models to steady flow and pulsatile flow was studied. In order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed, and the other parameters remained constant during the research. Simulations were conducted under dynamic and steady conditions. In the steady flow simulation, a constant plug velocity of 0.17 m/s at the inlet was maintained, and in the dynamic simulations a typical LAD flow waveform was implemented. The results show that the bifurcation angle influences the maximum speed of the flow. In the steady flow condition, increasing the angle leads to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle leads to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual and calculated results.

Keywords: Coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress.

32 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim

Abstract:

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene and separate any objects present in the scene from the background. The preprocessing step results in an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence reduces the search region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses a peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.

Keywords: Adaptive progressive thresholding, fringe adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function

31 Improving the Exploitation of Fluid in Elastomeric Polymeric Isolator

Authors: Haithem Elderrat, Huw Davies, Emmanuel Brousseau

Abstract:

Elastomeric polymer foam has been used widely in the automotive industry, especially for isolating unwanted vibrations. Such material is able to absorb unwanted vibration due to its combination of elastic and viscous properties. However, the ‘creep effect’, poor stress distribution and susceptibility to high temperatures are the main disadvantages of such a system. In this study, improvements in the performance of elastomeric foam as a vibration isolator were investigated using the concept of Foam Filled Fluid (FFFluid). In FFFluid devices, the foam takes the form of capsule shapes and is mixed with viscous fluid, while the mixture is contained in a closed vessel. When the FFFluid isolator is subjected to vibration, energy is absorbed due to the elastic strain of the foam. As the foam is compressed, there is also movement of the fluid, which contributes further energy absorption as the fluid shears. Depending on the design adopted, the packaging could also attenuate vibration through energy absorption via friction and/or elastic strain. The present study focuses on the advantages of the FFFluid concept over dry polymeric foam in the role of vibration isolation. This comparative study between the performance of dry foam and the FFFluid was carried out experimentally. The paper concludes by evaluating the performance of the FFFluid isolator in the suspension system of a light vehicle. One outcome of this research is that the FFFluid may be preferable to elastomer isolators in certain applications, as it reduces the effects of high temperatures and of ‘creep effects’, thereby increasing reliability and improving load distribution. The stiffness coefficient of the system increased by about 60% when using an FFFluid sample. The technology represented by the FFFluid is therefore considered by this research to be suitable for application in the suspension system of a light vehicle.

Keywords: Anti-vibration devices, dry foam, FFFluid.

30 Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings

Authors: Tobias Teich, Danny Szendrei, Markus Schrader, Franziska Jahn, Susan Franke

Abstract:

The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption. This trend extends to the residential building sector, where large amounts of energy consumption are caused by house and facility heating. Many of the operating hot-water heating systems lack hydraulically balanced working conditions for heat distribution and transmission and therefore heat inefficiently. Through hydraulic balancing of heating systems, significant savings of primary and secondary energy can be achieved. This paper addresses the use of KNX technology (smart buildings) in residential buildings to ensure a dynamic adaptation of the hydraulic system's performance, in order to increase the heating system's efficiency. The procedure of segmenting a heating system into hydraulically independent units (meshes) is presented. Within these meshes, the heating valves are addressed and controlled by a central facility server. Feasibility criteria for such valve drivers are named. The dynamic hydraulic balance is achieved by positioning these valves according to heating loads, which are generated from the temperature settings in the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.

Keywords: building automation, dynamic hydraulic balance, energy savings, VPN-networks.

29 TBC for Protection of Al Alloy Aerospace Component

Authors: P. Niranatlumpong, H. Koiprasert, C. Sukhonket, K. Ninon, N. Coompreedee

Abstract:

The use of a conventional air plasma-sprayed thermal barrier coating (TBC) and a porous, functionally graded TBC as thermal insulators for Al7075 alloy was explored. A quench test at 1200°C employing fast heating and cooling rates was set up to represent the dynamic thermal conditions of an aerospace component. During the test, coated samples were subjected to the ambient temperature of 1200°C for a very short time, followed by a rapid drop in temperature, resulting in cracking of the coatings. For the conventional TBC, it was found that the temperature of the Al7075 substrate decreases with increasing ZrO2 topcoat thickness. However, at a topcoat thickness of 1100 µm, large horizontal cracks can be observed in the topcoat, and at a topcoat thickness of 1600 µm the topcoat delaminates during cooling after the quench test. The porous, functionally graded TBC with a 600 µm thick topcoat, on the other hand, was found to be as effective at reducing the substrate temperature as the conventional TBC with an 1100 µm thick topcoat. The maximum substrate temperature is about 213°C for the former and 208°C for the latter when a heating rate of 38°C/s was used. When the quench tests were conducted with a faster heating rate of 128°C/s, the Al7075 substrate heated up faster, with a reduction in the maximum substrate temperatures. The substrate temperatures dropped from 297 to 212°C for the conventional TBC and from 213 to 155°C for the porous TBC, both with 600 µm thick topcoats. Segmentation cracks were observed in both coatings after the quench test.

Keywords: Thermal barrier coating, Al7075, porous TBC, Quenching.

28 Data Mining for Cancer Management in Egypt Case Study: Childhood Acute Lymphoblastic Leukemia

Authors: Nevine M. Labib, Michael N. Malek

Abstract:

Data mining aims at discovering knowledge from data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), which is the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides, it has a social impact through decreasing the rate of infection in children in Egypt. It also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected knowledge discovery is used since, in the case of this research project, there is no target field, as the data provided is mainly subjective; this is done in order to quantify the subjective variables. Therefore, the computer is asked to identify significant patterns in the provided medical data about ALL. This may be achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the Decision Trees technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details such as patient medical history and diagnosis are analyzed, classified, and clustered in order to improve the disease management.
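A minimal sketch of the decision-tree step is shown below using scikit-learn instead of the Clementine tool named in the abstract. The file name, feature columns and target field are hypothetical placeholders, since the actual study worked partly without a predefined target.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative feature columns; the real study used patient history and
# diagnosis fields extracted from cancer-institute records.
df = pd.read_csv("all_cases.csv")                      # hypothetical file name
features = ["age", "white_cell_count", "region", "family_history"]
X = pd.get_dummies(df[features])                       # one-hot encode categorical fields
y = df["risk_group"]                                   # hypothetical target field

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))   # human-readable rules
```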

Keywords: Data Mining, Decision Trees, Knowledge Discovery, Leukemia.

27 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study

Authors: Faris Tarlochan, Siva Mahesh Tangutooru

Abstract:

The lateral geniculate nucleus (LGN) is the relay center in the visual pathway, as it receives most of its input from the retinal ganglion cells (RGC) and sends output to the visual cortex. Low-threshold calcium currents (IT) at the membrane are the unique indicator characterizing the firing functionality that LGN neurons gain from the RGC input. According to the LGN functional requirements, such as the functional mapping of RGC to LGN, the morphologies of the LGN neurons were developed. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore its functionality. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters and orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, with different morphological parameters (soma size, dendrite size and structure). The orientation (0–180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation of the LGN neuron is performed. A reduced dendrite structure, obtained with the Bush–Sejnowski algorithm, was used in the model to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 μm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.

Keywords: Lateral geniculate nucleus, visual cortex, finite element, glaucoma, neuroprostheses.

26 Indian License Plate Detection and Recognition Using Morphological Operation and Template Matching

Authors: W. Devapriya, C. Nelson Kennedy Babu, T. Srihari

Abstract:

Automatic license plate recognition (ALPR) is a technology which recognizes the registration plate, number plate or license plate of a vehicle. In this paper, an Indian vehicle number plate is extracted and the characters are predicted in an efficient manner. ALPR involves four major techniques: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, helps to remove noise and enhances the quality of the image using morphological operations and image subtraction. The second, and most challenging, phase ascertains the location of the license plate using Canny edge detection, dilation and erosion. In the third phase, each character is characterized by a Connected Component Approach (CCA), and in the final phase each segmented character is recognized using cross-correlation template matching, a scheme specifically appropriate for fixed formats. Major applications of ALPR are toll collection, border control, parking, stolen-car detection, enforcement, access control and traffic control. A database consisting of 500 car images taken under dissimilar lighting conditions is used. The efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle conforms to the Road Transport and Highways standard).
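The final recognition step can be sketched as normalised cross-correlation template matching with OpenCV, as below. Segmented character images and a labelled template set are assumed to be available; this is an illustrative sketch, not the authors' implementation.

```python
import cv2
import numpy as np

def recognise_character(char_img, templates):
    """Match a segmented character against a dictionary of templates.

    char_img  : 2-D uint8 image of one segmented character.
    templates : dict mapping a label ('A'..'Z', '0'..'9') to a uint8 template image.
    Returns the label whose normalised cross-correlation score is highest.
    """
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        resized = cv2.resize(char_img, (tmpl.shape[1], tmpl.shape[0]))
        score = cv2.matchTemplate(resized, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_label, best_score = label, float(score)
    return best_label, best_score
```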

Keywords: Automatic License plate recognition, Character recognition, Number plate Recognition, Template matching, morphological operation, canny edge detection.

25 Economic Effects of Maritime Environmental Legislation in the North and Baltic Sea Area: An Exploratory Sequential Mixed Methods Approach

Authors: Thea Freese

Abstract:

Environmental legislation to protect the North and Baltic Sea areas from harmful vessel-source emissions has received increased political attention in recent years. Legislative measures are expected to have positive effects on the health of the marine environment and society. At the same time, compliance might increase costs to industry and affect freight rates and volumes shipped, with potential negative repercussions on the environment. Building on an exploratory sequential mixed methods approach, this research project studies the economic effects of maritime environmental legislation in two phases. In Phase I, exploratory in-depth interviews were conducted with 12 experts from various stakeholder groups, aiming at identifying variables that influence the relationship between environmental legislation, freight rates and volumes shipped. Influencing factors such as compliance, enforcement and modal shift were identified and studied. Phase II will comprise a quantitative study conducted with the aim of verifying the theory built in Phase I and quantifying the economic effects of rules on shipping pollution. Research in this field might inform policy-makers about the determinants of ship operators' behaviour in the face of the law and might further the development of a comprehensive legal system for marine environmental protection. At the present stage of research, first tentative results from the qualitative phase may be examined, and open research questions to be addressed in the quantitative phase, as well as possible research designs for Phase II, may be discussed. Input from other researchers will be highly valuable at this point.

Keywords: Clean shipping operations, compliance, maritime environmental legislation, maritime law and economics, mixed methods research, North and Baltic Sea area.

24 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer has become a pressing concern in medical science, and the growing incidence of such lesions is affecting the health and well-being of people worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains irregularities around the lesion. The proposed approach locates the foreground of the extracted skin region, and an image-partitioning model is presented to remove the disturbance in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the training and test data so that large-scale evaluation of an image can help doctors make the right prediction. To improve on the existing system, we set our objectives with an analysis; the efficiency of the natural selection process and of histogram enrichment is essential in that respect. The GA is applied to reduce the false-positive rate while maintaining accuracy. Conclusions: The objective of this work is to improve effectiveness, and the GA succeeds in bringing down the false-positive rate. The combination of deep learning and medical image processing provides superior accuracy, and the proportional handling of the stages makes the system reusable without errors.

Keywords: Computer-aided system, detection, image segmentation, morphology.

23 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields, one of which is the biomedical field, for the prognosis as well as diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst the other techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have been proved to be better and more consistent than others. Computer-aided diagnosis will provide a more accurate and consistent rate of detection that will help to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images, and using segmentation the region of interest is found; histogram smoothing is then applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features like mean, median, energy and correlation, along with Bone Mineral Density (BMD), are computed, and all features are stored in the database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked for their status; rough sets are implemented for further reduction.
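To make the texture-feature step concrete, the sketch below computes a normalised grey-level co-occurrence matrix with plain NumPy and derives two of the statistics mentioned (energy and correlation). The offset and quantisation are illustrative; BMD comes from densitometry rather than the GLCM.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset.

    img is quantised to `levels` grey levels; entry (i, j) is the relative
    frequency of level i occurring next to level j at offset (dy, dx).
    """
    q = (img.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]        # reference pixels
    b = q[dy:, dx:]                                  # neighbours at the chosen offset
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)          # accumulate co-occurrence counts
    return m / m.sum()

def glcm_energy_correlation(p):
    """Energy and correlation statistics of a normalised GLCM."""
    i, j = np.indices(p.shape)
    energy = float((p ** 2).sum())
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = float(((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12))
    return energy, corr
```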

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
