Search results for: Image Resolution.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1846

346 DWT Based Robust Watermarking Embed Using CRC-32 Techniques

Authors: Sadi Vural, Hiromi Tomii, Hironori Yamauchi

Abstract:

As far as the latest technological improvements are concerned, digital systems have become more popular than in the past. Despite this growing demand for digital systems, content copying and attacks against digital cinema content have become a serious problem. To solve this security problem, we propose traceable watermarking using hash functions for a digital cinema system. Digital cinema is a natural application for traceable watermarking, since it uses watermarking technology during content playback as well as content transmission. The watermark is embedded into randomly selected movie frames using the CRC-32 technique, a hash function: the embedding positions are distributed by the hash function so that no party can remove or alter the watermark. Finally, our experimental results show that the proposed DWT watermarking method using CRC-32 outperforms conventional watermarking techniques in terms of robustness and image quality, with a simple but unbreakable algorithm.
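
As a rough, hedged illustration of how a CRC-32 hash can scatter embedding positions so that they cannot be predicted without the key, the following Python sketch derives coefficient indices from the CRC-32 of a secret key and the frame number; the function name select_positions, the key and the sign-based embedding are illustrative assumptions, not details taken from the paper.

import zlib
import numpy as np

def select_positions(secret_key: bytes, frame_index: int, n_coeffs: int, n_bits: int):
    """Derive n_bits distinct embedding positions in a coefficient array of length
    n_coeffs from the CRC-32 of (secret key, frame index, counter)."""
    positions, counter = [], 0
    while len(positions) < n_bits:
        digest = zlib.crc32(secret_key
                            + frame_index.to_bytes(4, "big")
                            + counter.to_bytes(4, "big"))
        pos = digest % n_coeffs
        if pos not in positions:              # never embed twice in the same coefficient
            positions.append(pos)
        counter += 1
    return positions

# toy example: embed a 16-bit watermark into the signs of selected DWT coefficients
rng = np.random.default_rng(0)
coeffs = rng.normal(size=1024)                # stand-in for one DWT subband of a frame
watermark = rng.integers(0, 2, size=16)
for bit, pos in zip(watermark,
                    select_positions(b"studio-key", 42, coeffs.size, watermark.size)):
    coeffs[pos] = abs(coeffs[pos]) if bit else -abs(coeffs[pos])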

Keywords: Decoder, Digital content, JPEG2000 Frame, System-On-Chip, traceable watermark, Hash Function, CRC-32.

345 Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand

Authors: P. Sophon, M. Kruachanta, S. Chaisri, G. Leaungvongpaisan, P. Wongpornchai

Abstract:

Thick-bed hydrocarbon reservoirs are of primary interest because of their more prolific production. As the amount of petroleum in thick beds decreases, thin-bed reservoirs become the alternative targets for maintaining reserves. Conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and the instantaneous frequency attribute were used to reveal the thin-bed reservoir, while Continuous Wavelet Transform (CWT) spectral decomposition and the envelope (instantaneous amplitude) attribute were used to indicate the hydrocarbon-bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin-bed target is the uppermost part of the Mangahewa Formation, the most productive unit in gas-condensate production in the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, which is confirmed by the instantaneous frequency attribute. The envelope attribute shows a high anomaly that indicates the hydrocarbon accumulation area at the thin-bed target. Moreover, the CWT spectral decomposition shows a low-frequency shadow zone, and the abnormal seismic attenuation in the higher isofrequencies below the thin bed confirms that the thin bed can be a prospective hydrocarbon zone.
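
As a minimal sketch of the short-window DFT step, the Python snippet below computes the amplitude of a single trace at one target frequency with a sliding Hann window, which is essentially how an isofrequency section such as the 72 Hz section is built trace by trace; the window length, sampling interval and synthetic trace are assumptions for illustration only.

import numpy as np

def swdft_isofrequency(trace, dt, target_hz, win_len):
    """Sliding-window DFT amplitude of a single trace at one frequency.
    trace: 1-D seismic trace, dt: sample interval in seconds."""
    half = win_len // 2
    padded = np.pad(trace, half, mode="edge")
    window = np.hanning(win_len)
    freqs = np.fft.rfftfreq(win_len, d=dt)
    k = np.argmin(np.abs(freqs - target_hz))        # bin closest to the target frequency
    amp = np.empty(trace.size)
    for i in range(trace.size):
        segment = padded[i:i + win_len] * window
        amp[i] = np.abs(np.fft.rfft(segment)[k])
    return amp

# toy trace: 2 ms sampling, a thin-bed-like event carrying energy around 72 Hz
dt = 0.002
t = np.arange(0, 2.0, dt)
trace = np.sin(2 * np.pi * 72 * t) * np.exp(-((t - 1.0) / 0.05) ** 2)
section_72hz = swdft_isofrequency(trace, dt, target_hz=72.0, win_len=64)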

Keywords: Hydrocarbon indication, instantaneous seismic attribute, spectral decomposition, thin bed delineation.

344 Coupled Dynamics in Host-Guest Complex Systems Duplicates Emergent Behavior in the Brain

Authors: Sergio Pissanetzky

Abstract:

The ability of the brain to organize information and generate the functional structures we use to act, think and communicate is a common and easily observable natural phenomenon. In object-oriented analysis, these structures are represented by objects. Objects have been extensively studied and documented, but the process that creates them is not understood. In this work, a new class of discrete, deterministic, dissipative, host-guest dynamical systems is introduced. The new systems have extraordinary self-organizing properties. They can host information representing other physical systems and generate the same functional structures as the brain does. A simple mathematical model is proposed. The new systems are easy to simulate by computer, and the measurements needed to confirm the assumptions are abundant and readily available. Experimental results presented here confirm the findings. Applications are many, but among the most immediate are object-oriented engineering, image and voice recognition, search engines, and neuroscience.

Keywords: AI, artificial intelligence, complex system, object oriented, OO, refactoring.

343 A Propagator-Method-Like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies

Authors: Sambit Prasad Kar, P. Palanisamy

Abstract:

In this paper, a novel method for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise is postulated. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak-search approach. We present a variant of the Propagator Method (PM) in which a collaborative approach of SUMWE and the propagator method is applied to estimate the frequencies of multiple real-valued sine waves. A new data model is proposed in which the dimension of the signal subspace equals the number of frequencies present in the observation, whereas in the conventional MUSIC method for real-valued sinusoidal signals the signal subspace dimension is twice the number of frequencies. The statistical analysis of the proposed method is studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. Simulations comparing the method with conventional MUSIC, ESPRIT and the propagator method verify that it achieves high estimation accuracy and frequency resolution at lower SNR.
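
The sketch below illustrates the general propagator idea, i.e. obtaining a noise-subspace projector from a column partition of the sample covariance matrix without any eigendecomposition and then locating frequencies by a peak search over a pseudospectrum. It is a simplified, generic propagator estimator written for illustration only; it is not the authors' SUMWE-based formulation or their new data model.

import numpy as np

def propagator_pseudospectrum(x, n_sines, m=20, n_grid=2048):
    """Peak-search pseudospectrum built from the propagator (no eigendecomposition).
    x: real 1-D signal, n_sines: number of real sinusoids (p = 2*n_sines exponentials)."""
    p = 2 * n_sines
    n_snap = x.size - m + 1
    X = np.stack([x[i:i + m] for i in range(n_snap)], axis=1).astype(complex)
    R = X @ X.conj().T / n_snap                      # m x m sample covariance
    R1, R2 = R[:, :p], R[:, p:]                      # column partition of R
    P = np.linalg.solve(R1.conj().T @ R1, R1.conj().T @ R2)   # least-squares propagator
    Q = np.vstack([P, -np.eye(m - p)])               # columns approximately span the noise subspace
    grid = np.linspace(0.0, 0.5, n_grid)             # normalised frequency axis
    spec = np.empty(n_grid)
    for i, f in enumerate(grid):
        a = np.exp(2j * np.pi * f * np.arange(m))    # steering vector
        spec[i] = 1.0 / np.linalg.norm(Q.conj().T @ a) ** 2
    return grid, spec

# toy example: two real sinusoids in white Gaussian noise
rng = np.random.default_rng(1)
n = np.arange(400)
x = np.sin(2 * np.pi * 0.12 * n) + 0.7 * np.sin(2 * np.pi * 0.23 * n) + 0.3 * rng.standard_normal(n.size)
grid, spec = propagator_pseudospectrum(x, n_sines=2)
print(grid[np.argsort(spec)[-5:]])                   # grid points clustered near the two peaks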

Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.

342 Performance Evaluation of Wavelet Based Coders on Brain MRI Volumetric Medical Datasets for Storage and Wireless Transmission

Authors: D. Dhouib, A. Naït-Ali, C. Olivier, M. S. Naceur

Abstract:

In this paper, we evaluate the performance of wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step, we perform an objective comparison between the three coders. For this purpose, eight MRI head-scan test sets of 256 x 256 x 124 voxels were used. Results show the superior performance of the 3D SPIHT algorithm, while 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. The compressed datasets are transmitted over an AWGN or a Rayleigh wireless channel. Results show the superiority of JPEG2K under both channel models; in fact, JPEG2K proves more robust to coding errors. We therefore conclude that error-correcting codes are necessary to protect the transmitted medical information.

Keywords: Image coding, medical imaging, wavelet based coder, wireless transmission.

341 Semi-Supervised Outlier Detection Using a Generative and Adversary Framework

Authors: Jindong Gu, Matthias Schubert, Volker Tresp

Abstract:

In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks.

Keywords: Outlier detection, generative adversarial networks, semi-supervised learning.

340 Homogeneity of Microstructure and Mechanical Properties in Horizontal Continuous Cast Billet

Authors: V. Arbabi, I. Ebrahimzadeh, H. Ghanbari, M. M. Kaykha

Abstract:

Horizontal continuous casting is widely used to produce semi-finished non-ferrous products. Homogeneity of the metallurgical characteristics and mechanical properties of these products is vital for industrial application. In the present work, the microstructure and mechanical properties of a horizontally continuous-cast two-phase brass billet have been studied. Impact strength and hardness variations were examined, and the phase composition and porosity were studied with image analysis software. Distinct differences in mechanical properties were observed between the upper, middle and lower parts of the billet, which are explained in terms of the morphology and size of the phases in the microstructure. Hardness variation along the length of the billet is higher in the upper area, whereas impact strength is higher in the lower areas.

Keywords: Horizontal Continuous Casting, Two-phase brasses, CuZn40Al1 alloy, Microstructure, Impact Strength.

339 Remote Sensing, GIS, and AHP for Assessing Physical Vulnerability to Tsunami Hazard

Authors: Abu Bakar Sambah, Fusanori Miura

Abstract:

Remote sensing image processing, spatial data analysis using a GIS approach, and the analytic hierarchy process were combined in this study to assess the vulnerability and inundation areas due to tsunami hazard in Rikuzentakata, Iwate Prefecture, Japan. The input parameters were derived from GSI DEM data, ALOS AVNIR-2 imagery, and field data; the parameters used were elevation, slope, distance from the shoreline, and vegetation density. Five vulnerability classes were defined and weighted via a pairwise comparison matrix. The assessment results show that 14.35 km² of the study area lies within the tsunami vulnerability zone, with the inundation areas corresponding to the high and slightly high vulnerability classes. The farthest area reached by the tsunami was about 7.50 km from the shoreline, showing that rivers act as flooding strips that transport tsunami waves into the hinterland. This study can be used to determine priorities for land-use planning within the scope of tsunami hazard risk management.
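
A small sketch of the analytic hierarchy process step is given below: a pairwise comparison matrix for the four parameters (elevation, slope, shoreline distance, vegetation density) is turned into weights via its principal eigenvector and checked for consistency. The comparison values are invented for illustration and are not those used in the study.

import numpy as np

# hypothetical pairwise comparison matrix for
# [elevation, slope, shoreline distance, vegetation density]
A = np.array([[1,   3,   2,   5],
              [1/3, 1,   1/2, 3],
              [1/2, 2,   1,   4],
              [1/5, 1/3, 1/4, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # criterion weights, sum to 1

lam_max = eigvals[k].real
n = A.shape[0]
ci = (lam_max - n) / (n - 1)                  # consistency index
ri = 0.90                                     # Saaty's random index for n = 4
cr = ci / ri                                  # consistency ratio, should be below 0.1
print(weights, cr)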

Keywords: AHP, GIS, remote sensing, tsunami vulnerability.

338 Applications of Drones in Infrastructures: Challenges and Opportunities

Authors: Jin Fan, M. Ala Saadeghvaziri

Abstract:

Unmanned aerial vehicles (UAVs), also referred to as drones, equipped with various kinds of advanced detecting or surveying systems, are effective and low-cost in data acquisition, delivery and sharing, which can benefit the building of infrastructure. This paper gives an overview of applications of drones in the planning, design, construction and maintenance of infrastructure. The drone platform, detecting and surveying systems, and post-processing systems are introduced, followed by cases with details of the applications. Challenges from different aspects are then addressed. Opportunities for drones in infrastructure include, but are not limited to, the following. First, UAVs equipped with high-definition cameras or other detecting equipment are capable of inspecting hard-to-reach infrastructure assets. Second, UAVs can be used as effective tools to survey and map the landscape to collect necessary information before infrastructure construction. Furthermore, a single UAV or multiple UAVs are useful in construction management. UAVs can also be used to collect road and building information by taking high-resolution photos for future infrastructure planning, and to provide reliable and dynamic traffic information, which is potentially helpful in building smart cities. The main challenges are limited flight time, signal robustness, post-flight data analysis, multi-drone collaboration, weather conditions, and distractions to traffic caused by drones. This paper aims to help owners, designers, engineers and architects improve the building process of infrastructure for higher efficiency and better performance.

Keywords: Bridge, construction, drones, infrastructure, information.

337 Computer Vision Applied to Flower, Fruit and Vegetable Processing

Authors: Luis Gracia, Carlos Perez-Vidal, Carlos Gracia

Abstract:

This paper presents the theoretical background and the real implementation of an automated computer system that introduces machine vision into flower, fruit and vegetable processing for recollection, cutting, packaging, classification, or fumigation tasks. The considerations and implementation issues presented in this work can be applied to a wide range of varieties of flowers, fruits and vegetables, although some of them are especially relevant due to the great number of units that are manipulated and processed each year around the world. The computer vision algorithms developed in this work are shown in detail and can be easily extended to other applications. Special attention is given to electromagnetic compatibility in order to avoid noisy images. Furthermore, real experimentation has been carried out in order to validate the developed application. In particular, the tests show that the method has good robustness and a high success rate in object characterization.

Keywords: Image processing, Vision system, Automation

336 Edge Detection Using Multi-Agent System: Evaluation on Synthetic and Medical MR Images

Authors: A. Nachour, L. Ouzizi, Y. Aoura

Abstract:

Recent developments in multi-agent systems have opened a new research field in image processing. Several algorithms are used simultaneously and improved in different applications, while new methods are investigated. This paper presents a new automatic method for edge detection using several agents and many different actions. The proposed multi-agent system is based on parallel agents that locally perceive their environment, that is to say, pixels and additional environmental information. This environment is built using Vector Field Convolution, which attracts free agents to the edges. Problems of partial or hidden edges and of edge linking are solved through cooperation between agents. The presented method was implemented and evaluated using several examples on different synthetic and medical images. The experimental results confirm the efficiency and accuracy of the detected edges.

Keywords: Edge detection, medical MR images, multi-agent systems, vector field convolution.

335 Characterization and Development of Anthropomorphic Liver Phantoms for Use in Nuclear Medicine

Authors: Ferreira F. C. L., Souza D. N., Rodrigues T. M. A., Cunha C. J., Dullius M. A., Andrade J. E., Sousa A. H., Vieira J. P. C., Carvalho Júnior A. B., Santos L. P. B., Passos R. O.

Abstract:

The objective of this study was to characterize and develop anthropomorphic liver phantoms for hepatic tomography procedures, for quality control and for the continuing education of professionals in nuclear medicine. The anthropomorphic phantoms were made of plaster and acrylic. Three phantoms representing livers with cirrhosis were constructed and filled with 99mTc diluted with water to obtain the scintigraphic images. Anterior and posterior tomography images of the phantom representing the most severely cirrhotic liver were analyzed. It was noted that the phantoms allow the acquisition of images similar to those of a real liver with cirrhosis. Simulations of hemangiomas may contribute to continuing professional education in nuclear medicine regarding image acquisition, allowing the study of parameters such as the matrix, energy window and count statistics.

Keywords: Nuclear medicine, liver phantom, quality control

334 Simulation of Ammonia-Water Two Phase Flow in Bubble Pump

Authors: Jemai Rabeb, Benhmidene Ali, Hidouri Khaoula, Chaouachi Bechir

Abstract:

The diffusion-absorption refrigeration cycle consists of a generator bubble pump, an absorber, an evaporator and a condenser, and usually operates with ammonia/water/hydrogen or helium as the working fluid. The aim of this paper is to study the stability problem of a bubble pump, since instability can cause a reduction in bubble pump efficiency. To achieve this goal, we simulated the behaviour of two-phase flow in a bubble pump using a drift-flux model. The equations of the drift-flux model are formulated for the transitional regime, non-adiabatic conditions and thermodynamic equilibrium between the liquid and vapour phases. Solving these equations allows the void fraction, the liquid and vapour velocities, the pressure and the mixing enthalpy to be determined. An ammonia-water mixture is used as the working fluid, with an ammonia mass fraction of 0.6 at the inlet. The present simulation is conducted for heat fluxes of 2 kW/m² to 5 kW/m², a bubble pump tube length of 1 m and an inner diameter of 2.5 mm. Simulation results reveal oscillations of the vapour and liquid velocities over time. The oscillations decrease with time and with heat flux, and after sufficient time a steady state is established, characterised by constant liquid velocity and void fraction; the vapour velocity, however, does not show the same behaviour and continues to increase as the steady state is approached. Pressure-drop oscillations are also studied.
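
To make the drift-flux closure concrete, the short sketch below evaluates the standard drift-flux void-fraction relation alpha = j_g / (C0 (j_g + j_l) + V_gj); the distribution parameter and drift velocity are generic placeholder values, not the correlations used in the simulation.

def drift_flux_void_fraction(j_g, j_l, c0=1.2, v_gj=0.25):
    """Void fraction from the drift-flux model:
    alpha = j_g / (C0 * (j_g + j_l) + V_gj)
    j_g, j_l: superficial gas and liquid velocities [m/s]
    c0: distribution parameter, v_gj: drift velocity [m/s] (placeholder values)."""
    j = j_g + j_l
    return j_g / (c0 * j + v_gj)

# example: hypothetical vapour and liquid superficial velocities in the lift tube
print(drift_flux_void_fraction(j_g=0.15, j_l=0.30))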

Keywords: Bubble pump, drift flow model, instability, simulation.

333 The Nature of the Complicated Fabric Textures: How to Represent in Primary Visual Cortex

Authors: J. L. Liu, L. Wang, B. Zhu, J. Zhou, W. D. Gao

Abstract:

Fabric textures are very common in our daily life. However, the representation of fabric textures has never been explored from a neuroscience point of view. Theoretical studies suggest that the primary visual cortex (V1) uses a sparse code to efficiently represent natural images, but how the simple cells in V1 encode artificial textures is still a mystery. Here we therefore take fabric texture as the stimulus to study the responses of an independent component analysis model established to describe the receptive fields of simple cells in V1. We chose 140 types of fabric to obtain classical fabric textures as materials. Experimental results indicate that the receptive fields of simple cells show clear selectivity in orientation, frequency and phase when drifting gratings are used to determine their tuning properties. Additionally, the distribution of optimal orientation and frequency shows that the patch size selected from each original fabric image has a significant effect on the frequency selectivity.

Keywords: Fabric Texture, Receptive Field, Simple Cell, Sparse Coding.

332 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints that the design parameters of the two impose on each other. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints, while a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. The MATLAB fmincon and iirlpnorm tools are used as the optimal DCM and least-pth-norm solvers, respectively. Furthermore, a virtual simulation of the overall prototype ODCM-based ADC system is implemented and tested in Simulink with a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results show that the ODCM-based ADC achieves, over the 3 kHz modulating bandwidth, 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and a minimum resolution of 10 bits. These performance levels are a considerable achievement within the class of oversampling ADC topologies using a 2nd-order IIR (infinite impulse response) decimation filter.
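
As a hedged illustration of how such figures of merit can be checked on a simulated ADC output, the snippet below computes SINAD and THD from the FFT of a sine-wave test record. It is a generic measurement routine, not the authors' Simulink test bench, and the record length and tone amplitudes are assumptions.

import numpy as np

def sinad_thd(x, fs, n_harm=5):
    """SINAD and THD (in dB) of a sine-wave test record x sampled at fs."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size))) ** 2
    spec[0] = 0.0                                   # ignore the DC bin
    k0 = int(np.argmax(spec))                       # fundamental bin
    def band(k, w=3):                               # power in a small band around bin k
        return spec[max(k - w, 0):k + w + 1].sum()
    p_sig = band(k0)
    p_harm = sum(band(h * k0) for h in range(2, n_harm + 1) if h * k0 < spec.size)
    p_nd = spec.sum() - p_sig                       # noise plus distortion power
    return 10 * np.log10(p_sig / p_nd), 10 * np.log10(p_harm / p_sig)

fs = 25e6                                           # 25 MHz sampling rate, as in the paper
t = np.arange(2 ** 20) / fs
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 3e3 * t)                    # 3 kHz test tone
     + 1e-3 * np.sin(2 * np.pi * 6e3 * t)           # assumed second-harmonic distortion
     + 1e-4 * rng.standard_normal(t.size))          # assumed noise floor
print(sinad_thd(x, fs))                             # (SINAD dB, THD dB)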

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based DAC, virtual simulation, weighted least pth norm.

331 Automatic Detection of Mass Type Breast Cancer using Texture Analysis in Korean Digital Mammography

Authors: E. B. Jo, J. H. Lee, J. Y. Park, S. M. Kim

Abstract:

In this study, we present an advanced detection technique for mass-type breast cancer based on the texture information of organs. The proposed method detects the cancer areas in three stages. In the first stage, the midpoints of the mass areas are determined based on AHE (Adaptive Histogram Equalization). In the second stage, we set the threshold coefficient of homogeneity by using MLE (Maximum Likelihood Estimation) to compute the uniformity of texture. Finally, mass-type cancer tissues are extracted from the original image. As a result, it was observed that the proposed method shows improved detection performance on the dense breast tissues of Korean women compared with existing methods. It is expected that the proposed method may provide additional diagnostic information for the detection of mass-type breast cancer.

Keywords: Mass Type Breast Cancer, Mammography, Maximum Likelihood Estimation (MLE), Ranklets, SVM

330 Approach Based on Fuzzy C-Means for Band Selection in Hyperspectral Images

Authors: Diego Saqui, José H. Saito, José R. Campos, Lúcio A. de C. Jorge

Abstract:

Hyperspectral images and remote sensing are important for many applications. A problem in the use of these images is the high volume of data to be processed, stored and transferred, and dimensionality reduction techniques can be used to reduce this volume. In this paper, an approach to band selection based on clustering algorithms is presented. The proposed structure is based on the Fuzzy C-Means (or K-Means) and NWHFC algorithms. Attributes that are new in relation to other studies in the literature, such as kurtosis and low correlation, are also considered. The results of the approach using Fuzzy C-Means and K-Means with different attributes are compared. Both algorithms show similarly good results, particularly when the variance and kurtosis attributes are used in the clustering process, and the approach is applicable to hyperspectral images.
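
A compact sketch of the clustering step follows: band signatures are grouped and one representative band per cluster is kept, scored here by variance, one of the attributes mentioned above. Scikit-learn's K-Means stands in for the fuzzy C-means variant, and the synthetic cube, cluster count and scoring rule are illustrative assumptions rather than the paper's exact procedure.

import numpy as np
from sklearn.cluster import KMeans

def select_bands(cube, n_selected):
    """cube: hyperspectral image, shape (rows, cols, bands).
    Returns the index of one representative band per cluster (highest variance)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).T                   # one feature vector per band
    labels = KMeans(n_clusters=n_selected, n_init=10, random_state=0).fit_predict(X)
    variances = X.var(axis=1)
    selected = []
    for c in range(n_selected):
        members = np.flatnonzero(labels == c)
        selected.append(members[np.argmax(variances[members])])   # keep the most informative band
    return sorted(selected)

# synthetic 64 x 64 cube with 100 strongly correlated bands
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64, 5))
cube = np.concatenate([base + 0.05 * rng.normal(size=base.shape) for _ in range(20)], axis=2)
print(select_bands(cube, n_selected=8))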

Keywords: Band selection, fuzzy C-means, K-means, hyperspectral image.

329 ZnS and Graphene Quantum Dots Nanocomposite as Potential Electron Acceptor for Photovoltaics

Authors: S. M. Giripunje, Shikha Jindal

Abstract:

Zinc sulphide (ZnS) quantum dots (QDs) were synthesized successfully via a simple sonochemical method. X-ray diffraction (XRD), scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HRTEM) analyses revealed an average QD size of the order of 3.7 nm. The band gap of the QDs was tuned to 5.2 eV by optimizing the synthesis parameters. UV-Vis absorption spectra of the ZnS QDs confirm the quantum confinement effect, and Fourier transform infrared (FTIR) analysis confirmed the formation of single-phase ZnS QDs. To fabricate the diode, a blend of ZnS QDs and P3HT was prepared, and a heterojunction of PEDOT:PSS and the blend was formed by spin coating on an indium tin oxide (ITO) coated glass substrate. The diode behaviour of the heterojunction was analysed; the ideality factor was found to be 2.53 with a turn-on voltage of 0.75 V, and the barrier height was found to be 1.429 eV. The ZnS-graphene QD nanocomposite was characterised for its surface morphology: the synthesized ZnS QDs appear as quasi-spherical particles on the graphene sheets, and the average particle size of the ZnS-graphene nanocomposite QDs was found to be 8.4 nm. From the voltage-current characteristics of the ZnS-graphene nanocomposite, it is observed that the conductivity of the composite is about 10⁴ times that of the ZnS QDs; the addition of graphene QDs to ZnS QDs thus enhances the mobility of the charge carriers in the composite material. The graphene QDs, with a high specific area for a large interface, high mobility and a tunable band gap, therefore show great potential as electron acceptors in photovoltaic devices.

Keywords: Graphene, mobility, nanocomposites, photovoltaics, quantum dots, zinc sulphide.

328 Unsupervised Feature Selection Using Feature Density Functions

Authors: Mina Alibeigi, Sattar Hashemi, Ali Hamzeh

Abstract:

Since dealing with high-dimensional data is computationally complex and sometimes even intractable, several feature reduction methods have recently been developed to reduce the dimensionality of the data and simplify analysis in applications such as text categorization, signal processing, image retrieval and gene expression analysis. Among feature reduction techniques, feature selection is one of the most popular methods due to the preservation of the original features. In this paper, we propose a new unsupervised feature selection method which removes redundant features from the original feature space by using the probability density functions of the various features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several datasets from the UCI repository illustrate the effectiveness of our proposed method in comparison with the other methods in terms of both classification accuracy and the number of selected features.
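
The sketch below shows one plausible way to act on the idea of comparing per-feature probability density functions: each feature's density is estimated with a Gaussian KDE on a common grid, and a feature is dropped when its density overlaps too strongly with one already kept. The overlap measure and threshold are illustrative assumptions, not the authors' exact criterion.

import numpy as np
from scipy.stats import gaussian_kde

def density_based_selection(X, overlap_threshold=0.9):
    """X: (n_samples, n_features). Keep features whose estimated densities differ
    enough from those of the features already selected (unsupervised filter)."""
    n_features = X.shape[1]
    grid = np.linspace(X.min(), X.max(), 200)
    dx = grid[1] - grid[0]
    densities = []
    for j in range(n_features):
        pdf = gaussian_kde(X[:, j])(grid)
        densities.append(pdf / (pdf.sum() * dx))      # normalise so the PDF integrates to 1
    selected = []
    for j in range(n_features):
        redundant = any(
            np.minimum(densities[j], densities[k]).sum() * dx > overlap_threshold
            for k in selected)                         # overlap area of the two PDFs
        if not redundant:
            selected.append(j)
    return selected

rng = np.random.default_rng(0)
a = rng.normal(0, 1, 500)
X = np.column_stack([a, a + 0.01 * rng.normal(size=500), rng.uniform(-3, 3, 500)])
print(density_based_selection(X))                      # the near-duplicate feature is dropped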

Keywords: Feature, Feature Selection, Filter, Probability Density Function

327 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams are widely used in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to use computational mechanics to simulate the dynamics numerically. With numerical computational techniques, it is not necessary to over-simplify a model in order to solve the equations of motion, and computational dynamics methods produce databases of controlled resolution in time and space that contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of basic thin-walled structures, which are ideal for forming the basis of a systematic study of coupled dynamics in structures of complex geometry.
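
A minimal sketch of the proper orthogonal decomposition step applied to a database of snapshots is given below: the mean-subtracted snapshot matrix is factorised with an SVD, and the singular values give the energy captured by each mode. The synthetic space-time data stand in for finite element output and are not from the paper.

import numpy as np

def pod(snapshots):
    """snapshots: (n_dof, n_time) matrix of simulation states (one column per time step).
    Returns POD modes, modal coordinates and the fraction of energy per mode."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    return U, s[:, None] * Vt, energy

# synthetic "thin-walled beam" database: two coupled space-time patterns plus noise
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 10, 400)[None, :]
data = np.sin(np.pi * x) * np.cos(2 * t) + 0.3 * np.sin(2 * np.pi * x) * np.sin(5 * t)
data += 0.01 * np.random.default_rng(0).standard_normal(data.shape)
modes, coords, energy = pod(data)
print(energy[:4])        # the first two modes should capture nearly all of the energy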

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

326 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The proposed algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF is robust to changes in contrast, size and rotation and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used to complement the problems that occur when the two algorithms are fused: it estimates the next location of the object and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm in three situations: driving straight, driving through a curve, and recognizing cars behind obstacles. Situations similar to real driving can be reproduced with the model vehicle. The proposed fusion algorithm shows better performance and accuracy than the existing object recognition and tracking algorithms. Future work will improve the performance of the algorithm so that it can be tested on images of real road environments.
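
As a hedged sketch of the Kalman-filter stage that estimates the next location and absorbs accumulated tracking error, the following constant-velocity filter smooths a noisy sequence of object-centre detections in a 640x480 frame; the motion model, noise covariances and simulated detections are illustrative assumptions, not the parameters used in the paper.

import numpy as np

def track_constant_velocity(measurements, dt=1.0, q=1e-2, r=4.0):
    """measurements: (n, 2) noisy (x, y) object centres from the recognition stage.
    Returns Kalman-filtered positions using a constant-velocity model."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q = q * np.eye(4)                       # process noise (model uncertainty)
    R = r * np.eye(2)                       # measurement noise of the detector
    x = np.array([measurements[0, 0], measurements[0, 1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    out = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                        # predict the next location
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
        x = x + K @ (z - H @ x)                              # correct with the measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.asarray(out)

# simulated detections of a model car drifting across a 640x480 frame
rng = np.random.default_rng(0)
truth = np.column_stack([np.linspace(50, 600, 60), np.linspace(400, 100, 60)])
smoothed = track_constant_velocity(truth + rng.normal(0, 3, truth.shape))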

Keywords: SURF, Optical Flow Lucas-Kanade, Kalman Filter, object recognition, object tracking.

325 Object Speed Estimation by using Fuzzy Set

Authors: Hossein Pazhoumand-Dar, Amir Mohsen Toliyat Abolhassani, Ehsan Saeedi

Abstract:

Speed estimation is one of the important and practical tasks in machine vision, robotics and mechatronics. The availability of high-quality, inexpensive video cameras and the increasing need for automated video analysis have generated a great deal of interest in machine vision algorithms, and numerous approaches for speed estimation have been proposed, so a classification and survey of these methods can be very useful. The goal of this paper is first to review and verify these methods, and then to propose a novel algorithm that estimates the speed of a moving object using fuzzy concepts. There is a direct relation between the motion blur parameters and the object speed: in our approach we use the Radon transform to find the blur direction in the image and fuzzy sets to estimate the motion blur length. The main benefit of this algorithm is its robustness and precision in noisy images. Our method was tested on many images over a wide range of SNR, with satisfactory results.

Keywords: Blur Analysis, Fuzzy sets, Speed estimation.

324 Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images

Authors: A. Biran, P. Sobhe Bidari, A. Almazroe, V. Lakshminarayanan, K. Raahemifar

Abstract:

Diabetic retinopathy (DR) is a severe retinal disease caused by diabetes mellitus that leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal, the latter including non-proliferative and proliferative DR. The proposed method has been tested in MATLAB on images selected from the Structured Analysis of the Retina (STARE) database and is able to detect DR, with a sensitivity, specificity and accuracy of 90%, 87.5% and 91.4%, respectively.

Keywords: Diabetic retinopathy, fundus images, STARE, Gabor filter, SVM.

323 A Method of Planar-Template-Based Camera Self-Calibration for a Single View

Authors: Yue Zhao, Chao Li

Abstract:

Camera calibration is an important step in 3D reconstruction. Camera calibration methods may be classified into two major types, traditional calibration and self-calibration, with calibration using a checkerboard lying between the two. In this paper, a self-calibration method based on a square is proposed: with only a square in the planar template, camera self-calibration can be completed from a single view. In the proposed algorithm, a virtual circle and straight lines are established from the square on the planar template, and the circular points, the vanishing points of the straight lines and the relations between them are used to obtain the image of the absolute conic (IAC) and establish the camera intrinsic parameters. The calibration template is therefore simpler than that of Zhang Zhengyou's method. Experiments show that the algorithm is feasible and practical, with a certain precision and robustness.

Keywords: Absolute conic, camera calibration, circle point, vanishing point.

322 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Among downscaling techniques, dynamic downscaling is usually based on Regional Climate Models (RCMs), which generate finer-resolution output over a region from atmospheric physics, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling instead derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is the need for statistical downscaling techniques in the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.

Keywords: Climate Change, Downscaling, GCM, RCM.

321 Texture Characterization Based on a Chandrasekhar Fast Adaptive Filter

Authors: Mounir Sayadi, Farhat Fnaiech

Abstract:

In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An auto-regressive (AR) linear model of the texture is obtained by scanning the image row by row and modelling the data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with that of the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with co-occurrence features. The comparison criterion is a characterization degree computed as the ratio of the "between-class" variance to the "within-class" variance of the estimated coefficients. Extensive experiments show that the coefficients estimated with the Chandrasekhar adaptive filter give better results in texture discrimination than those estimated by the other algorithms, even in a noisy context.

Keywords: Texture analysis, statistical features, adaptive filters, Chandrasekhar algorithm.

320 Multiple Object Tracking using Particle Swarm Optimization

Authors: Chen-Chien Hsu, Guo-Tang Dai

Abstract:

This paper presents a particle swarm optimization (PSO) based approach to multiple object tracking using histogram matching. To start with, gray-level histograms are calculated to establish a feature model for each of the target objects. The difference between the gray-level histogram corresponding to each particle in the search space and that of the target object is used as the fitness value. Multiple swarms are created, one for each target object under tracking. Because of the efficiency and simplicity of the PSO algorithm for global optimization, the target objects can be tracked as the iterations continue. Experimental results confirm that the proposed PSO algorithm converges rapidly, allowing real-time tracking of each target object. When the objects being tracked move outside the tracking range, the global search capability of the PSO is used to re-acquire the target objects.
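
A minimal sketch of the core idea follows: particles search for the image window whose grey-level histogram best matches the target model. A single swarm tracking one object in a synthetic frame is shown, and the PSO coefficients, window size and synthetic scene are assumptions made for illustration.

import numpy as np

def gray_histogram(patch, bins=32):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def pso_track(frame, target_hist, win=32, n_particles=40, iters=25):
    """Find the window in `frame` whose histogram is closest to `target_hist`."""
    rng = np.random.default_rng(0)
    hi = np.array(frame.shape) - win
    pos = rng.uniform(0, hi, size=(n_particles, 2))          # particle positions (row, col)
    vel = np.zeros_like(pos)
    def fitness(p):
        r, c = int(p[0]), int(p[1])
        return np.abs(gray_histogram(frame[r:r + win, c:c + win]) - target_hist).sum()
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, hi)
        fit = np.array([fitness(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest

# synthetic frame: a bright 64x64 object on a dark background
frame = np.zeros((240, 320))
frame[100:164, 200:264] = 200
target_hist = gray_histogram(frame[100:132, 200:232])
print(pso_track(frame, target_hist))           # should converge onto the bright object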

Keywords: multiple object tracking, particle swarm optimization, gray-level histogram, image

319 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides

Authors: Shivalinge Gowda

Abstract:

The total mass attenuation coefficients μ/ρ of some halides, namely NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl2, CdI2 and HgI2, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2 and 1332.5 keV in a well-collimated narrow-beam good-geometry set-up using a high-resolution, hyper-pure germanium detector. The mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. From these mass attenuation coefficients, the effective atomic cross sections σa of the compounds were determined. The σa data so obtained were then used to compute the effective atomic numbers Zeff. For this, the interpolation of the total attenuation cross sections of photons of energy E in elements of atomic number Z was performed using a logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges of 250-350 keV, 350-500 keV, 500-700 keV, 700-1000 keV and 1000-1500 keV, obtained by a piecewise interpolation method, were then used to find the Zeff of the compounds from the effective atomic cross section σa. Using these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with the values calculated from XCOM data and with other published values.

Keywords: Mass attenuation coefficient, atomic cross-section, effective atomic number, electron density.

318 Face Detection using Variance based Haar-Like feature and SVM

Authors: Cuong Nguyen Khac, Ju H. Park, Ho-Youl Jung

Abstract:

This paper proposes a new approach to real-time face detection. The proposed method combines the primitive Haar-like feature with a variance value to construct a new feature, the so-called variance-based Haar-like feature. A face in an image can be represented with a small number of such features. We use an SVM instead of AdaBoost for training and classification. For learning purposes, we built a database containing 5,000 face samples and 10,000 non-face samples extracted from real images; the face samples include many images taken under widely differing lighting conditions. Experiments showed that a face detection system using the variance-based Haar-like feature and SVM can be much more efficient than one using the primitive Haar-like feature and AdaBoost. We tested our method on two face databases and one non-face database and obtained a correct detection rate of 96.17% on the YaleB face database, which is 4.21% higher than that obtained with the primitive Haar-like feature and AdaBoost.
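
To make the feature construction concrete, the sketch below computes a primitive two-rectangle Haar-like response together with the local variance of the window using integral images; such feature vectors could then be fed to an SVM (e.g. sklearn.svm.SVC). Combining the Haar response and the variance by simple concatenation is an assumption for illustration, since the paper's exact fusion rule is not reproduced here.

import numpy as np

def integral_image(img):
    return np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)

def rect_sum(ii, r, c, h, w):
    """Sum of pixel values in the rectangle with top-left (r, c) and size h x w."""
    ii = np.pad(ii, ((1, 0), (1, 0)))               # exclusive integral image
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def variance_haar_feature(window):
    """A two-rectangle (left minus right) Haar-like response plus the window variance."""
    h, w = window.shape
    ii = integral_image(window)
    ii2 = integral_image(window ** 2)
    left = rect_sum(ii, 0, 0, h, w // 2)
    right = rect_sum(ii, 0, w // 2, h, w - w // 2)
    haar = left - right
    n = h * w
    mean = rect_sum(ii, 0, 0, h, w) / n
    var = rect_sum(ii2, 0, 0, h, w) / n - mean ** 2  # local variance of the window
    return np.array([haar, var])

# hypothetical 24x24 detection window
window = np.random.default_rng(0).integers(0, 256, size=(24, 24))
print(variance_haar_feature(window))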

Keywords: AdaBoost, Haar-Like feature, SVM, variance, Variance based Haar-Like feature.

317 Factors Having Impact on Marketing and Improvement Measures in the Real Estate Sector of Turkey

Authors: Ali Ihtiyar, Serdar Durdyev, Syuhaida Ismail

Abstract:

Marketing is essential to the survival of any real estate company in Turkey, yet several factors constrain the achievement of marketing and sales strategies in the Turkish real estate industry. This study aims to identify and prioritise the most significant constraints on marketing in the real estate sector and to propose new strategies based on those constraints. The study is based on a survey in which respondents such as credit counsellors, real estate investors, consultants, academics and marketing representatives in Turkey were asked to rank forty-seven sub-factors according to their level of impact. The results of the multi-attribute analytical technique indicate that the main sub-components having an impact on marketing in the real estate sector are interest rates, real estate credit availability, accessibility, company image and consumer real income, respectively. The identified constraints are expected to guide the marketing team in a sales-effective way.

Keywords: Marketing, marketing constraints, real estate marketing, Turkey real estate sector.
