Search results for: video image processing
779 Automatic Detection and Classification of Microcalcification, Mass, Architectural Distortion and Bilateral Asymmetry in Digital Mammogram
Authors: S. Shanthi, V. Muralibhaskaran
Abstract:
Mammography has been one of the most reliable methods for early detection of breast cancer. Several types of lesions are characteristic of breast cancer, such as microcalcifications, masses, architectural distortions and bilateral asymmetry. One of the major challenges in analysing digital mammograms is how to extract efficient features for accurate cancer classification. In this paper we propose a hybrid feature extraction method to detect and classify all four signs of breast cancer. The proposed method is based on the multiscale surrounding region dependence method, Gabor filters, multifractal analysis, and directional and morphological analysis. The extracted features are input to a self-adaptive resource allocation network (SRAN) classifier for classification. The validity of our approach is extensively demonstrated using two benchmark data sets, the Mammographic Image Analysis Society (MIAS) database and the Digital Database for Screening Mammography (DDSM), and the results are promising.
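As one concrete illustration of the texture descriptors named in the abstract above, the sketch below filters a region of interest with a small Gabor bank and summarizes the responses into a feature vector. This is a generic Gabor feature extractor, not the authors' hybrid method; the kernel parameters and the random stand-in image are assumptions for illustration only.

```python
import cv2
import numpy as np

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image with a small Gabor bank and return per-orientation
    mean/std responses as a simple texture feature vector."""
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([float(response.mean()), float(response.std())])
    return np.array(feats)

roi = np.random.randint(0, 256, (64, 64)).astype(np.uint8)  # stand-in for a mammogram ROI
print(gabor_features(roi))
```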
Keywords: Feature extraction, fractal analysis, Gabor filters, multiscale surrounding region dependence method, SRAN.
778 Cursor Position Estimation Model for Virtual Touch Screen Using Camera
Authors: Somkiat Wangsiripitak
Abstract:
A camera-based virtual touch screen is an ordinary screen that uses a camera to imitate a touch screen: it takes a picture of an indicator, e.g., a finger, laid on the screen, converts the indicator tip position in the picture to a position on the screen, and moves the cursor to that position. In practice, the indicator does not touch the screen directly but is separated from it by a cover of some thickness. Despite this gap, if the eye-indicator-camera angle is small, the mapping from indicator tip positions in the image to the corresponding cursor positions on the screen is straightforward and can be done with little error. However, the larger the angle, the bigger the mapping error becomes. This paper proposes a cursor position estimation model for a camera-based virtual touch screen that eliminates this kind of error. The proposed model (i) moves an on-screen pilot cursor to the screen position that lies directly behind the indicator tip as seen from the camera, and then (ii) converts that pilot cursor position to the desired cursor position (the position on the screen as seen from the user's eye through the indicator tip) by using a bilinear transformation. Simulation results show the correctness of the cursor position estimated by the proposed model.
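The second step of the model relies on a bilinear transformation; the short sketch below shows how such a mapping can be fitted from four corresponding point pairs and then applied. It is only an illustration of the general technique, not the authors' implementation, and the corner coordinates are hypothetical calibration data.

```python
import numpy as np

def fit_bilinear(src, dst):
    """Fit x' = a0 + a1*x + a2*y + a3*x*y (and likewise for y')
    from four corresponding point pairs."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.column_stack([np.ones(4), src[:, 0], src[:, 1], src[:, 0] * src[:, 1]])
    ax = np.linalg.solve(A, dst[:, 0])   # coefficients for x'
    ay = np.linalg.solve(A, dst[:, 1])   # coefficients for y'
    return ax, ay

def map_point(p, ax, ay):
    x, y = p
    basis = np.array([1.0, x, y, x * y])
    return float(basis @ ax), float(basis @ ay)

# Hypothetical calibration: pilot-cursor corners vs. desired cursor corners.
pilot_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
cursor_corners = [(12, 8), (1905, 6), (1912, 1070), (5, 1075)]
ax, ay = fit_bilinear(pilot_corners, cursor_corners)
print(map_point((960, 540), ax, ay))
```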
Keywords: Bilinear transformation, cursor position, pilot cursor, virtual touch screen.
777 Cities Simulation and Representation in Locative Games from the Perspective of Cultural Studies
Authors: B. A. A. Paixão, J. V. B. Gomide
Abstract:
This work aims to analyze the locative structure used by the locative games of the company Niantic. To fulfill this objective, a literature review on the representation and simulation of cities was developed, along with interviews with Ingress players and sessions of playing Ingress itself. Relating these data, it was possible to deepen the relationship between the virtual and the real in the simulation of cities and their cultural objects in locative games. The representation of cities combines geolocation provided by the Global Positioning System (GPS) with augmented reality and digital images, and provides a new paradigm for interaction between the city, its parts, and real and virtual world elements that is homeomorphic to the real world. A bibliographic review of papers related to representation and simulation and their application in locative games was carried out and is presented in this paper. The concepts of city representation and simulation in locative games, and how this setting enables flow and immersion in urban space, are analyzed. Some examples of games are discussed for the development of this new setting, which mixes the real and the virtual world. Finally, a locative structure for electronic games is proposed, using the concepts of heterotrophic and isotropic representations conjoined with immediacy and hypermediacy.
Keywords: Cities representation, city simulation, games simulation, locative games.
776 A Pipelined FSBM Hardware Architecture for HTDV-H.26x
Authors: H. Loukil, A. Ben Atitallah, F. Ghozzi, M. A. Ben Ayed, N. Masmoudi
Abstract:
In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Given that the motion estimation stage is very complex in terms of computational effort, a hardware implementation on a reconfigurable circuit is crucial for the requirements of different real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the Full Search Block Matching (FSBM) algorithm. This architecture offers minimum latency, maximum throughput and full utilization of hardware resources such as embedded memory blocks, combining both pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation and implemented in a Stratix II EP2S130F1020C4 FPGA. The experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves 160M pixels/sec.
Keywords: SAD, FSBM, Hardware Implementation, FPGA.
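For readers unfamiliar with FSBM, the following is a minimal software reference of full search block matching with the SAD criterion; it illustrates the computation the pipelined VHDL architecture accelerates, not the hardware design itself. Block size, search range and the synthetic frames are illustrative assumptions.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

def full_search(cur, ref, bx, by, bsize=16, search=8):
    """Exhaustively scan a +/-search window in the reference frame for the
    candidate block that minimizes the SAD against the current block."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if best_sad is None or cost < best_sad:
                best_sad, best = cost, (dx, dy)
    return best, best_sad

cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, -3), axis=(0, 1))  # reference shifted by a known motion
print(full_search(cur, ref, 16, 16))
```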
775 Feature Extraction from Aerial Photos
Authors: Mesut Gündüz, Ferruh Yildiz, Ayşe Onat
Abstract:
In a Geographic Information System, one of the sources of the required geographic data is the digitizing of analog maps and the evaluation of aerial and satellite photos. In this study, a method is discussed that can be used to extract vectorial features from aerial photos and to create vectorized drawing files; software was also developed for this purpose. Converting from raster to vector is known as vectorization, and it is the most important step when creating vectorized drawing files. In the developed algorithm, preprocessing is first applied to the aerial photo: converting to grayscale if necessary, reducing noise, applying some filters, detecting the edges of objects, etc. After these steps, every pixel of the photo is followed from the upper left to the bottom right by examining its neighbourhood relationships, and one-pixel-wide lines or polylines are obtained. The detected pixels have to be erased to prevent confusion as vectorization continues, because if not erased they can be perceived as a new line, while erasing them can cause discontinuities in the vector drawing; therefore, the image is converted from 2-bit to 8-bit and the detected pixels are marked with a different value. In conclusion, the aerial photo can be converted to a vector form consisting of lines and polylines that can be opened in any CAD application.
Keywords: Vectorization, Aerial Photos, Vectorized Drawing File.
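A minimal sketch of the preprocessing and edge-following stages is given below using OpenCV, which the paper does not name; the pixel-following and pixel-erasing logic of the developed software is not reproduced, and the file name is a hypothetical placeholder.

```python
import cv2

def vectorize_outline(path, blur_ksize=5, canny_lo=50, canny_hi=150):
    """Rough raster-to-vector sketch: grayscale -> denoise -> edges -> polylines."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)              # convert to grayscale
    img = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)  # reduce noise
    edges = cv2.Canny(img, canny_lo, canny_hi)                # one-pixel-wide edges
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    polylines = [cv2.approxPolyDP(c, 2.0, False) for c in contours]  # simplify chains
    return [p.reshape(-1, 2) for p in polylines]              # lists of (x, y) vertices

# polylines = vectorize_outline("aerial_photo.png")           # hypothetical file name
```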
774 Photogrammetry and GIS Integration for Archaeological Documentation of Ahl-Alkahf, Jordan
Authors: Rami Al-Ruzouq, Abdallah Al-Zoubi, Abdel-Rahman Abueladas, Petya Dimitrova
Abstract:
Protection and proper management of archaeological heritage are essential to studying and interpreting it for present and future generations. Protecting archaeological heritage is based upon multidisciplinary professional collaboration. This study gathers data from different sources, photogrammetry and a Geographic Information System (GIS), integrated for the purpose of documenting one of the most significant archaeological sites in Jordan, Ahl-Alkahf. 3D modeling deals with the actual image of the features, shapes and texture in order to represent reality as realistically as possible. The 3D coordinates that result from the photogrammetric adjustment procedures are used to create 3D models of the study area, and adding textures to the model surfaces gives the displayed models a 'real world' appearance. The GIS combines all data, including boundary maps indicating the location of archaeological sites, a transportation layer, a digital elevation model and orthoimages. For a realistic representation of the study area, a 3D GIS model was prepared in which efficient generation, management and visualization of such spatial data can be achieved.
Keywords: Archaeology, close range photogrammetry, ortho-photo, 3D-GIS
773 A New Floating Point Implementation of Base 2 Logarithm
Authors: Ahmed M. Mansour, Ali M. El-Sawy, Ahmed T Sayed
Abstract:
Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication and information theory. They are primarily used in hardware calculations, handling multiplications, divisions, powers and roots effectively. There are three commonly used bases for logarithms: the base-10 logarithm, called the common logarithm; the natural logarithm with base e; and the binary logarithm with base 2. This paper demonstrates different methods of calculating log2, showing the complexity of each, identifies the most accurate and efficient among them, and gives insights into their hardware design. We present a new method called Floor Shift for fast calculation of log2, and then combine this algorithm with a Taylor series to improve the accuracy of the output, illustrating this with two examples. We finally compare the algorithms and conclude with our remarks.
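The exact Floor Shift formulation is not given in the abstract; the sketch below shows one plausible reading of the idea: the integer part of log2 is obtained by shifting the input into the range [1, 2), and the fractional part is then refined with a truncated Taylor series of ln(1 + f). The number of series terms is an illustrative assumption.

```python
import math

LN2 = math.log(2.0)

def log2_floor_shift(x, terms=6):
    """Integer part of log2 via repeated shifts; fractional part via a
    truncated Taylor series of ln(1 + f) around f = 0."""
    if x <= 0:
        raise ValueError("log2 undefined for non-positive values")
    k, m = 0, x
    while m >= 2.0:      # shift right until the mantissa is in [1, 2)
        m /= 2.0
        k += 1
    while m < 1.0:       # shift left for inputs below 1
        m *= 2.0
        k -= 1
    f = m - 1.0          # m = 1 + f with 0 <= f < 1
    ln = sum((-1) ** (n + 1) * f ** n / n for n in range(1, terms + 1))
    return k + ln / LN2

print(log2_floor_shift(100.0), math.log2(100.0))
```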
Keywords: Logarithms, log2, floor, iterative, CORDIC, Taylor series.
772 Consistent Modeling of Functional Dependencies along with World Knowledge
Authors: Sven Rebhan, Nils Einecke, Julian Eggert
Abstract:
In this paper we propose a method for vision systems to consistently represent functional dependencies between different visual routines along with relational short- and long-term knowledge about the world. Here the visual routines are bound to visual properties of objects stored in the memory of the system. Furthermore, the functional dependencies between the visual routines are seen as a graph that also belongs to the object's structure. This graph is parsed in the course of acquiring a visual property of an object to automatically resolve the dependencies of the bound visual routines. Using this representation, the system is able to dynamically rearrange the processing order while keeping its functionality. Additionally, the system is able to estimate the overall computational cost of a certain action. We also show that the system can efficiently use this structure to incorporate already acquired knowledge and thus reduce the computational demand.
Keywords: Adaptive systems, Knowledge representation, Machine vision, Systems engineering.
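The paper's graph parsing is not spelled out in the abstract; a depth-first dependency resolution, as sketched below, is one common way to obtain a valid processing order over such a graph. The routine names are hypothetical examples, not taken from the paper.

```python
def resolve_order(dependencies, target):
    """Return a processing order for `target` such that every visual routine
    runs only after the routines it depends on (depth-first post-order)."""
    order, seen, in_progress = [], set(), set()

    def visit(node):
        if node in seen:
            return
        if node in in_progress:
            raise ValueError(f"cyclic dependency at {node}")
        in_progress.add(node)
        for dep in dependencies.get(node, []):
            visit(dep)
        in_progress.discard(node)
        seen.add(node)
        order.append(node)

    visit(target)
    return order

# Hypothetical routine graph: "color" needs a segmented region, which needs saliency.
deps = {"color": ["segment"], "segment": ["saliency"], "saliency": []}
print(resolve_order(deps, "color"))   # ['saliency', 'segment', 'color']
```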
771 OCR for Script Identification of Hindi (Devnagari) Numerals using Error Diffusion Halftoning Algorithm with Neural Classifier
Authors: Banashree N. P., Andhe Dharani, R. Vasanta, P. S. Satyanarayana
Abstract:
Applications involving numerals are so widespread that there is much scope for study. The style of writing numerals is diverse and comes in a variety of forms, sizes and fonts, and identification of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), machine-printed or handwritten characters/numerals are recognized. There are plentiful approaches that deal with the problem of detecting numerals/characters, depending on the sort of feature extracted and the different ways of extracting it. This paper proposes a recognition scheme for handwritten Hindi (Devnagari) numerals, one of the most widely used scripts in the Indian subcontinent. Our work focuses on a local feature extraction technique using a 16-segment display concept, where features are extracted from halftoned and binary images of isolated numerals. These feature vectors are fed to a neural classifier model that has been trained to recognize Hindi numerals. The prototype of the system has been tested on a variety of numeral images. Experimental results show that the recognition rate for halftoned images is 98%, compared to 95% for binary images.
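The abstract does not name the error-diffusion kernel; the classic Floyd-Steinberg variant is used below as a representative sketch of the halftoning step, not the authors' exact preprocessing.

```python
import numpy as np

def error_diffusion_halftone(gray):
    """Floyd-Steinberg error diffusion: threshold each pixel and push the
    quantization error onto the unprocessed neighbours with the classic
    7/16, 3/16, 5/16, 1/16 weights."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

halftoned = error_diffusion_halftone(np.random.randint(0, 256, (32, 32)))
```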
Keywords: OCR, Halftoning, Neural classifier, 16-segment display concept.
770 Study on Practice of Improving Water Quality in Urban Rivers by Diverting Clean Water
Authors: Manjie Li, Xiangju Cheng, Yongcan Chen
Abstract:
With the rapid development of industrialization and urbanization, water environmental deterioration is widespread in the majority of urban rivers, which seriously affects the city image and the life satisfaction of residents. As an emergency measure to improve water quality, clean water diversion is introduced for water environmental management. Lubao River and Southwest River, two urban rivers in a typical plain tidal river network, are identified as technically and economically feasible for the application of clean water diversion. A one-dimensional hydrodynamic-water quality model is developed to simulate temporal and spatial variations of water level and water quality, with satisfactory accuracy. The calibrated mathematical model is applied to investigate hydrodynamic and water quality variations in the rivers as well as to determine the optimum operation scheme of water diversion. An assessment system is developed for evaluating the positive and negative effects of water diversion, demonstrating the effectiveness of clean water diversion and the necessity of pollution reduction.
Keywords: Assessment system, clean water diversion, hydrodynamic-water quality model, tidal river network, urban rivers, water environment improvement.
769 On a Pitch Duration Technique for Prosody Control
Authors: JongKuk Kim, HernSoo Hahn, Uei-Joong Yoo, MyungJin Bae
Abstract:
In this paper, we propose a method of altering duration in the frequency domain that controls prosody in real time after pitch alteration. A method of freely altering duration, one of the prosodic parameters, could be used in several fields such as pronunciation correction for people with speech impediments or language study. The pitch alteration method used to control prosody is the PSOLA synthesis method, which operates in the time domain; the duration of the pitch-altered speech, however, is changed in the frequency domain. In this paper, we altered the duration using a duration alteration method based on the Fast Fourier Transform in the frequency domain. Consequently, the intelligibility of speech whose pitch and duration are both controlled shows a slight decrease compared with the case where only the pitch is changed, but the proposed algorithm obtained a higher MOS score for naturalness.
Keywords: PSOLA, Pitch Alteration, Duration Control.
768 Review of Surface Electromyogram Signals: Its Analysis and Applications
Authors: Anjana Goen, D. C. Tiwari
Abstract:
Electromyography (EMG) is the study of muscle function through analysis of the electrical activity produced by muscles. This electrical activity, which is displayed in the form of a signal, is the result of neuromuscular activation associated with muscle contraction. The most common techniques of EMG signal recording use surface and needle/wire electrodes, where the latter is usually used when deep muscles are of interest. This paper focuses on the surface electromyogram (SEMG) signal. During SEMG recording, several problems have to be countered, such as noise, motion artifacts and signal instability. Thus, various signal processing techniques have been implemented to produce a reliable signal for analysis. The SEMG signal finds broad application, particularly in the biomedical field, and has been analyzed and studied for various purposes such as neuromuscular disease, enhancement of muscular function and human-computer interfaces.
Keywords: Evolvable hardware (EHW), Functional Electrical Stimulation (FES), Hidden Markov Model (HMM), Hjorth Time Domain (HTD).
767 Sustainable Renovation and Restoration of the Rural Based on the View Point of Psychology
Abstract:
The countryside has long been recognized as a characteristic symbol that persists in human memory. As times have changed, the vast rural areas have begun to decline because they fail to meet the growing needs of modern material and mental life. However, their historical image, accumulated through ancient tradition, provides people with origins of existence on the spiritual level, such as "identity" and "belonging"; it brings people closer to one another through a common spiritual and psychological experience of the past, and thus weakens the sense of cultural loss caused by the disappearance of memory symbols. So, in the modernization process, how to restore the vitality of the countryside and transform and plan it in a sustainable way has become a hot topic in architecture and urban planning. This paper aims to break disciplinary constraints and, from an interdisciplinary perspective, uses the research methods of systems science to analyze and discuss theories and methods concerning rural form factors, based on the viewpoint of memory in psychology, in order to find an appropriate way to transform the rural so that the countryside can fully play its role both in actual use and in shaping historical spirit.
Keywords: The rural, sustainable renovation, restoration, psychology, memory.
766 Exploring Inter-Relationships between Events to Identify Strategic Technological Competencies: A Combined Approach
Authors: Cláudio Santos, Madalena Araújo, Nuno Correia
Abstract:
The inherent complexity of today's business environments is forcing organizations to be attentive to dynamics on several fronts. Therefore, the management of technological innovation is continually faced with uncertainty about the future. These issues lead to a need for a systemic perspective, able to analyze the consequences of interactions between different factors. The field of technology foresight has proposed methods and tools to deal with this broader perspective. In an attempt to provide a method to analyze the complex interactions between events in several areas, starting from the identification of the most strategic competencies, this paper presents a methodology based on the Delphi method and Quality Function Deployment. The methodology is applied in a sheet metal processing equipment manufacturer as a case study.
Keywords: Competencies, Delphi Method, Quality Function Deployment, Technology Foresight.
765 Understanding Evolutionary Algorithms through Interactive Graphical Applications
Authors: Javier Barrachina, Piedad Garrido, Manuel Fogue, Julio A. Sanguesa, Francisco J. Martinez
Abstract:
It is very common to observe, especially in Computer Science studies, that students have difficulties in correctly understanding how some mechanisms based on Artificial Intelligence work. In addition, the scope and limitations of most of these mechanisms are usually presented by professors only in a theoretical way, which does not help students to understand them adequately. In this work, we focus on the problems found when teaching Evolutionary Algorithms (EAs), which imitate the principles of natural evolution, as a method to solve parameter optimization problems. Although this kind of algorithm can be very powerful for solving relatively complex problems, students often have difficulties in understanding how they work and how to apply them to solve problems in real cases. In this paper, we present two interactive graphical applications which have been specially designed with the aim of making Evolutionary Algorithms easy for students to understand. Specifically, we present: (i) TSPS, an application able to solve the "Traveling Salesman Problem", and (ii) FotEvol, an application able to reconstruct a given image by using Evolution Strategies. The main objective is that students learn how these techniques can be implemented, and the great possibilities they offer.
Keywords: Education, evolutionary algorithms, evolution strategies, interactive learning applications.
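To make the underlying idea concrete, the sketch below implements a tiny (1+1) Evolution Strategy on a toy objective. It is not the TSPS or FotEvol application described above, just the basic mutate-and-select loop those tools visualize; the objective function and parameters are illustrative assumptions.

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.5, iters=2000):
    """(1+1) Evolution Strategy: mutate the single parent with Gaussian noise
    and keep the child only if it is at least as fit (here: lower is better)."""
    parent = list(x0)
    best = fitness(parent)
    for _ in range(iters):
        child = [g + random.gauss(0.0, sigma) for g in parent]
        f = fitness(child)
        if f <= best:
            parent, best = child, f
    return parent, best

# Toy objective: minimize the sphere function sum(x_i^2).
sphere = lambda x: sum(g * g for g in x)
solution, value = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(solution, value)
```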
764 Road Vehicle Recognition Using Magnetic Sensing Feature Extraction and Classification
Authors: Xiao Chen, Xiaoying Kong, Min Xu
Abstract:
This paper presents a road vehicle detection approach for intelligent transportation systems. The approach mainly uses a low-cost magnetic sensor and an associated data collection system to collect magnetic signals. This system can measure changes in the magnetic field, and it can also detect and count vehicles. We extend Mel Frequency Cepstral Coefficients to analyze vehicle magnetic signals. Vehicle type features are extracted using representations of the cepstrum, frame energy, and gap cepstrum of the magnetic signals. We design a two-dimensional map algorithm using Vector Quantization to classify vehicle magnetic features into four typical types of vehicles in Australian suburbs: sedan, van, truck, and bus. Experimental results show that our approach achieves a high level of accuracy for vehicle detection and classification.
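As a rough illustration of the feature pipeline named above, the sketch below computes frame energy and low-order cepstral coefficients per frame and builds a vector-quantization codebook with K-means. It is not the authors' two-dimensional map algorithm, and the synthetic signals stand in for real magnetic signatures.

```python
import numpy as np
from sklearn.cluster import KMeans

def frame_features(signal, frame_len=256, n_ceps=12):
    """Frame energy plus low-order real cepstrum coefficients per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = float(np.sum(frame ** 2))
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
        cepstrum = np.fft.irfft(np.log(spectrum))[:n_ceps]
        feats.append(np.concatenate([[energy], cepstrum]))
    return np.array(feats)

# Hypothetical magnetic signatures for four vehicle classes.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(4096) * (i + 1) for i in range(4)]
X = np.vstack([frame_features(s) for s in signals])
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)  # VQ codebook
print(codebook.cluster_centers_.shape)
```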
Keywords: Vehicle classification, signal processing, road traffic model, magnetic sensing.
763 Food Safety Aspects of Pesticide Residues in Spice Paprika
Authors: Sz. Klátyik, B. Darvas, M. Mörtl, M. Ottucsák, E. Takács, H. Bánáti, L. Simon, G. Gyurcsó, A. Székács
Abstract:
The environmental and health safety of condiments used for spicing food products, whether in food processing or by culinary means, receives relatively little attention, even though possible contamination of spices may affect food quality and safety. Contamination surveys mostly focus on microbial contaminants or their secondary metabolites, mycotoxins. Chemical contaminants, particularly pesticide residues, are nevertheless clearly substantial factors for certain condiments in the Capsicum family, including spice paprika and chilli. To assess food safety and support the quality of the Hungaricum product spice paprika, the pesticide residue status of spice paprika and chilli is assessed on the basis of reported pesticide contamination cases and non-compliances in the Rapid Alert System for Food and Feed of the European Union since 1998.
Keywords: Spice paprika, Capsicum, pesticide residues, RASFF.
762 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images
Authors: Kuo-Cheng Liu
Abstract:
Effective estimation of the just noticeable distortion (JND) for images is helpful in increasing the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model measuring the base detection threshold for each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment based on the coefficient variation in the block is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.
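The sketch below only illustrates the setting the abstract describes: an 8x8 block is transformed with the DCT and a perturbation of the coefficients is compared against a per-coefficient visibility threshold. The flat placeholder threshold and the random block are assumptions; the paper derives its thresholds from luminance, contrast and cross masking instead.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """Orthonormal 2-D DCT of an 8x8 block (applied along both axes)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# Toy JND-style check: a coefficient change is treated as invisible when it
# stays below a per-frequency threshold (flat placeholder of 4.0 here).
block = np.random.randint(0, 256, (8, 8)).astype(float)
coeffs = dct2(block)
noise = np.random.uniform(-3.0, 3.0, (8, 8))      # stays under the placeholder JND
visible = np.abs(noise) > 4.0
reconstructed = idct2(coeffs + noise)
print(visible.any(), np.max(np.abs(reconstructed - block)))
```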
Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.
761 Hardware Prototyping of an Efficient Encryption Engine
Authors: Muhammad I. Ibrahimy, Mamun B.I. Reaz, Khandaker Asaduzzaman, Sazzad Hussain
Abstract:
An approach to developing an FPGA implementation of a flexible-key RSA encryption engine that can be used as a standard device in secured communication systems is presented. The VHDL model of this RSA encryption engine has the unique characteristic of supporting multiple key sizes, and thus can easily be fitted into systems that require different levels of security. Simple nested-loop addition and subtraction have been used to implement the RSA operation, which makes the processing time faster while using a comparatively small amount of space in the FPGA. The hardware design is targeted at the Altera STRATIX II family, and the flexible-key RSA encryption engine was determined to be best suited to the EP2S30F484C3 device. The RSA encryption implementation uses 13,779 logic elements and achieves a clock frequency of 17.77 MHz. It has been verified that this RSA encryption engine can perform 32-bit, 256-bit and 1024-bit encryption operations in less than 41.585 us, 531.515 us and 790.61 us, respectively.
Keywords: RSA, FPGA, Communication, Security, VHDL.
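For readers unfamiliar with the operation the engine accelerates, here is a minimal textbook RSA reference in software (no padding); it is not the nested-loop VHDL design, and the tiny key below is purely illustrative, far below a secure key size.

```python
# Minimal RSA reference (textbook RSA, no padding).
p, q = 61, 53
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse)

message = 42
ciphertext = pow(message, e, n)    # encryption: m^e mod n
recovered = pow(ciphertext, d, n)  # decryption: c^d mod n
assert recovered == message
print(ciphertext, recovered)
```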
760 Adhesion Problematic for Novel Non-Crimp Fabric and Surface Modification of Carbon-Fibres Using Oxy-Fluorination
Authors: Iris Käppler, Paul Matthäi, Chokri Cherif
Abstract:
Within the scope of technical textile applications, Non-Crimp Fabrics (NCF) are increasingly used. In general, NCF exhibit excellent load-bearing properties, but the manufacturing process leaves some disadvantages which have to be reduced. With regard to this, a novel technique for processing NCF was developed that substitutes the binding thread with an adhesive. This stitch-free method requires a new manufacturing concept as well as new basic methods to assess the adhesion of the adhesive to fibres and textiles. To improve the adhesion properties and the wettability of carbon fibres by the adhesive, oxy-fluorination was used. The modification of carbon fibres by oxy-fluorination was investigated via scanning electron microscopy, X-ray photoelectron spectroscopy and single-fibre tensiometry. Special tensile tests were developed to determine the maximum force required for detachment.
Keywords: Non-Crimp Fabric, adhesive, stitch-free, high-performance fibre.
759 Semi-Automatic Artifact Rejection Procedure Based on Kurtosis, Renyi's Entropy and Independent Component Scalp Maps
Authors: Antonino Greco, Nadia Mammone, Francesco Carlo Morabito, Mario Versaci
Abstract:
Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to automatically remove the artifacts, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on Independent Component Analysis (ICA) for the artifact extraction and on some higher-order statistics such as kurtosis and Shannon's entropy, was proposed some years ago in the literature. In this paper we try to enhance this technique by proposing a new method based on Renyi's entropy. The performance of our method was tested and compared to that of the method in the literature, and the former proved to outperform the latter.
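A minimal sketch of the general pipeline, ICA decomposition followed by statistical screening of the components, is given below. The kurtosis and Renyi entropy thresholds, their direction, and the random stand-in EEG are illustrative assumptions, not the paper's semi-automatic procedure or its scalp-map analysis.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy of order alpha estimated from a normalized histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def flag_artifact_components(eeg, n_components=8, kurt_thr=5.0, ent_thr=2.0):
    """Decompose multichannel EEG with ICA and flag components whose kurtosis
    is high or whose Renyi entropy is low (typical of blinks or spikes).
    The thresholds are placeholders, not the paper's values."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg.T).T          # components x samples
    flags = [kurtosis(s) > kurt_thr or renyi_entropy(s) < ent_thr for s in sources]
    return np.array(flags), sources, ica

eeg = np.random.randn(16, 5000)                   # 16 channels of fake EEG
flags, sources, ica = flag_artifact_components(eeg)
print(flags)
```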
Keywords: Artifact, EEG, Renyi's entropy, kurtosis, independent component analysis.
758 Endometrial Cancer Recognition via EEG Dependent upon 14-3-3 Protein Leading to an Ontological Diagnosis
Authors: Marios Poulos, Eirini Maliagani, Minas Paschopoulos, George Bokos
Abstract:
The purpose of this research proposal is to demonstrate that there is a relationship between EEG and endometrial cancer. The above relationship is based on an Aristotelian syllogism: since it is known that the 14-3-3 protein is related to the electrical activity of the brain via control of the flow of Na+ and K+ ions, and since it is also known that many types of cancer are associated with the 14-3-3 protein, it is possible that there is a relationship between EEG and cancer. This research will be carried out using well-defined diagnostic indicators, obtained via the EEG, together with signal processing procedures and pattern recognition tools such as neural networks in order to recognize the endometrial cancer type. The current research shall compare the findings from EEG and hysteroscopy performed on women of a wide age range. Moreover, this practice could be expanded to other types of cancer. The implementation of this methodology will be completed with the creation of an ontology. This ontology shall define the concepts existing in this research's domain and the relationships between them, and will represent the types of relationships between hysteroscopy and EEG findings.
Keywords: Bioinformatics, Protein 14-3-3, EEG, Endometrial cancer, Ontology.
757 An Enhanced Slicing Algorithm Using Nearest Distance Analysis for Layer Manufacturing
Authors: M. Vatani, A. R. Rahimi, F. Brazandeh, A. Sanati nezhad
Abstract:
Although the STL (stereolithography) file format is widely used as a de facto industry standard in the rapid prototyping industry, due to its simplicity and its ability to tessellate almost all surfaces, there are always some defects and shortcomings in its usage, many of which are difficult to correct manually. When processing complex models, the size of the file and the number of its defects grow dramatically, and correcting STL files therefore becomes difficult. In this paper, by optimizing the existing algorithms, the size of the files and the memory required to process them are reduced. Regardless of the type and extent of the errors in the STL files, a tail-to-head searching method and analysis of the nearest distance between tails and heads were used. As a result, STL models are sliced rapidly, and fully closed contours are produced effectively and without errors.
Keywords: Layer manufacturing, STL files, slicing algorithm, nearest distance analysis.
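The following is a minimal sketch of the tail-to-head, nearest-distance idea: unordered slice segments are chained greedily by picking the remaining segment whose head lies closest to the current tail. It is a simplification of the paper's algorithm (for instance, it does not try reversing segments), and the square-slice input is a hypothetical example.

```python
import math

def chain_segments(segments, tol=1e-3):
    """Greedy tail-to-head chaining: starting from one segment, repeatedly
    append the remaining segment whose head is nearest to the current tail,
    reporting whether the resulting contour closes within tolerance."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    remaining = list(segments)
    contour = list(remaining.pop(0))          # [head, tail] of the first segment
    while remaining:
        tail = contour[-1]
        idx, seg = min(enumerate(remaining), key=lambda kv: dist(tail, kv[1][0]))
        remaining.pop(idx)
        contour.append(seg[1])                # accept the nearest head, keep its tail
    closed = dist(contour[0], contour[-1]) <= tol
    return contour, closed

# Hypothetical slice of a unit square, given as unordered, slightly gappy segments.
segs = [((0, 0), (1, 0)), ((1, 0.0005), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0.0008))]
print(chain_segments(segs))
```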
756 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies
Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi
Abstract:
Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and outside the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image, acquired by RADARSAT-2 in C-band. We have used a segment-based decision-making strategy and compared the result with that of a traditional Support Vector Machine (SVM) classifier. The 90.95% overall classification accuracy achieved by the proposed algorithm shows that it is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method decreases the undesirable speckle effect in SAR images.
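As a generic illustration of the BOVW pipeline described above, the sketch below learns a visual-word codebook with K-means, encodes each segment as a word histogram, and trains an SVM on the histograms. The synthetic "PolSAR" features and the codebook size are assumptions; the paper's low-level features and segment-based decision strategy are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histograms(features_per_segment, codebook):
    """Encode each segment as a normalized histogram of visual-word counts."""
    k = codebook.n_clusters
    hists = []
    for feats in features_per_segment:
        words = codebook.predict(feats)
        hist = np.bincount(words, minlength=k).astype(float)
        hists.append(hist / max(hist.sum(), 1.0))
    return np.vstack(hists)

# Hypothetical pixel-level features (e.g. channel intensities) grouped by segment.
rng = np.random.default_rng(1)
segments = [rng.normal(loc=c, size=(200, 3)) for c in (0.0, 1.0, 2.0, 3.0) for _ in range(10)]
labels = np.repeat(np.arange(4), 10)          # one land-cover class per group

codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(np.vstack(segments))
X = bovw_histograms(segments, codebook)
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```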
Keywords: Bag of Visual Words, classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar.
755 Evaluating Spectral Relationships between Signals by Removing the Contribution of a Common, Periodic Source: A Partial Coherence-based Approach
Authors: Antonio Mauricio F. L. Miranda de Sá
Abstract:
Partial coherence between two signals, removing the contribution of a periodic, deterministic signal, is proposed for evaluating the interrelationship in multivariate systems. The estimator expression was derived and shown to be independent of such a periodic signal. Simulations were used to obtain its critical values, which were found to be the same as those for Gaussian signals, as well as to evaluate the technique. An illustration with electroencephalographic (EEG) signals during photic stimulation is also provided. The application of the proposed technique to both simulated and real EEG data indicates that it is very specific in removing the contribution of periodic sources. The estimator's independence of the periodic signal may widen the application of partial coherence in signal analysis, since it could be used together with ordinary coherence to test for contamination of signals by a common, periodic noise source.
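The paper's estimator is not reproduced in the abstract; for orientation, the conventional magnitude-squared partial coherence between signals x and y after removing a third signal z can be written as below, where S_ab(f) denotes the (cross-)spectrum at frequency f. The paper's expression additionally exploits the deterministic, periodic nature of the removed component.

```latex
\gamma_{xy\cdot z}^{2}(f) \;=\;
\frac{\left| S_{xy}(f) - \dfrac{S_{xz}(f)\,S_{zy}(f)}{S_{zz}(f)} \right|^{2}}
     {\left( S_{xx}(f) - \dfrac{|S_{xz}(f)|^{2}}{S_{zz}(f)} \right)
      \left( S_{yy}(f) - \dfrac{|S_{yz}(f)|^{2}}{S_{zz}(f)} \right)}
```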
Keywords: Partial coherence, periodic input, spectral analysis, statistical signal processing.
754 Variance Based Component Analysis for Texture Segmentation
Authors: Zeinab Ghasemi, S. Amirhassan Monadjemi, Abbas Vafaei
Abstract:
This paper presents a comparative analysis of a new unsupervised PCA-based technique for steel plate texture segmentation towards defect detection. The proposed scheme, called Variance Based Component Analysis (VBCA), employs PCA for feature extraction, applies a feature reduction algorithm based on the variance of eigenpictures, and classifies the pixels as defective or normal. While classic PCA uses a clusterer like K-means for pixel clustering, VBCA employs thresholding and some post-processing operations to label pixels as defective or normal. The experimental results show that the proposed VBCA algorithm is 12.46% more accurate and 78.85% faster than classic PCA.
753 Generic Filtering of Infinite Sets of Stochastic Signals
Authors: Anatoli Torokhti, Phil Howlett
Abstract:
A theory for the optimal filtering of infinite sets of random signals is presented. There are several new distinctive features of the proposed approach. First, a single optimal filter for processing any signal from a given infinite signal set is provided. Second, the filter is presented in the special form of a sum with p terms, where each term is represented as a combination of three operations. Each operation is a special stage of the filtering aimed at facilitating the associated numerical work. Third, an iterative scheme is incorporated into the filter structure to provide an improvement in the filter performance at each step of the scheme. The final step of the scheme concerns signal compression and decompression. This step is based on the solution of a new rank-constrained matrix approximation problem, whose solution is described in this paper. A rigorous error analysis is given for the new filter.
Keywords: Optimal filtering, data compression, stochastic signals.
752 Performance Analysis of MT Evaluation Measures and Test Suites
Authors: Yao Jian-Min, Lv Qiang, Zhang Jing
Abstract:
Many measures have been proposed for machine translation evaluation (MTE), while little research has been done on the performance of MTE methods themselves. This paper is an effort toward MTE performance analysis. A general framework is proposed for describing the MTE measure and the test suite, covering whether the automatic measure is consistent with human evaluation, whether results from various measures or test suites are consistent with each other, whether the content of the test suite is suitable for performance evaluation, the degree of difficulty of the test suite and its influence on the MTE, the relationship between the significance of MTE results and the size of the test suite, etc. For a better clarification of the framework, several experimental results relating human evaluation, BLEU evaluation, and typological MTE are analyzed. A visualization method is introduced for better presentation of the results. The study aims to aid test suite construction and method selection in MTE practice.
Keywords: Machine translation, natural language processing, visualization.
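To make the BLEU measure mentioned above concrete, here is a minimal sentence-level BLEU computation with NLTK; it only illustrates the metric, not the paper's evaluation setup, and the reference/candidate sentences are invented examples.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

# Sentence-level BLEU with uniform 1-4 gram weights; smoothing keeps short
# sentences with missing higher-order n-grams from scoring exactly zero.
score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```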
751 Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards
Authors: Jonghyun Park, Toan Nguyen Dinh, Gueesang Lee
Abstract:
In this paper, we present a novel approach to accurately detect text regions, including shop names, in signboard images with complex backgrounds for mobile system applications. The proposed method is based on the combination of text detection using edge profiles and region segmentation using the fuzzy c-means method. In the first step, we apply the Canny edge operator to extract all possible object edges. Then, edge profile analysis in the vertical and horizontal directions is performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and geometrical characteristics of each object contour are carefully examined to construct candidate text regions and to separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that our proposed method is robust in text detection with respect to different character sizes and colors and can provide reliable text binarization results.
Keywords: Text detection, edge profile, signboard image, fuzzy clustering.
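The sketch below shows only the edge-profile idea from the first stage: Canny edges are projected onto image rows and bands with a high edge density are taken as text candidates. The Canny thresholds, density threshold and file name are assumptions, and the fuzzy c-means binarization stage is not reproduced.

```python
import cv2
import numpy as np

def candidate_text_rows(image_path, low=100, high=200, min_density=0.05):
    """Locate horizontal bands rich in edges (a rough edge-profile analysis)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, low, high)
    row_profile = (edges > 0).mean(axis=1)        # edge density per image row
    rows = np.where(row_profile > min_density)[0]
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max())       # top/bottom of candidate band

# band = candidate_text_rows("signboard.jpg")     # hypothetical image file
```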
750 Rule-Based Expert System for Headache Diagnosis and Medication Recommendation
Authors: Noura Al-Ajmi, Mohammed A. Almulla
Abstract:
With the increased utilization of technology devices around the world, healthcare and medical diagnosis are critical issues that people worry about these days. Doctors do their best to avoid medical errors when diagnosing diseases and to avoid prescribing the wrong medication. Consequently, artificial intelligence applications that can be installed on mobile devices, such as rule-based expert systems, can assist doctors in several ways. Due to their many advantages, the usage of expert systems has increased recently in the health sciences. This work presents a backward-chaining rule-based expert system that can be used for headache diagnosis and medication recommendation. The structure of the system consists of three main modules, namely the input unit, the processing unit, and the output unit.
Keywords: Headache diagnosis system, treatment recommender system, rule-based expert system.
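To illustrate what backward chaining means in this context, the sketch below proves a goal by recursively proving the premises of a rule that concludes it. The rules and symptom names are hypothetical illustrations, not the paper's knowledge base, and no medical validity is implied.

```python
# Minimal backward-chaining sketch; rules and facts are invented examples.
RULES = {
    "migraine": ["one_sided_pain", "nausea"],
    "tension_headache": ["band_like_pain", "stress"],
    "recommend_rest_and_hydration": ["tension_headache"],
}

def backward_chain(goal, facts, rules=RULES, trace=None):
    """Prove `goal` by recursively proving the premises of a rule that
    concludes it, falling back to the known facts."""
    trace = trace if trace is not None else []
    if goal in facts:
        return True
    premises = rules.get(goal)
    if premises and all(backward_chain(p, facts, rules, trace) for p in premises):
        trace.append(goal)          # record each intermediate conclusion
        return True
    return False

facts = {"band_like_pain", "stress"}
print(backward_chain("recommend_rest_and_hydration", facts))   # True
```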