Search results for: Discrete Fourier Transform
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1247


107 Semi-Automated Tracking of Vibrissal Movements in Free-Moving Rodents Captured by High-Speed Videos

Authors: Hyun June Kim, Tailong Shi, Seden Akdagli, Sam Most, Yuling Yan

Abstract:

Quantitative analyses of whisker movements provide a means to study functional recovery and regeneration of mouse facial nerve after an injury. However, accurate tracking of the mouse whisker movement is challenging. Most methods for whisker tracking require manual intervention, e.g. fixing the head of the mouse during a study. Here we describe a semi-automated image processing method, which is applied to high-speed video recordings of free-moving mice to track the whisker movements. We first track the head movement of a mouse by delineating the lower head contour frame-by-frame that allows for detection of the location and orientation of the head. Then, a region of interest is identified for each frame; the subsequent application of a mask and the Hough transform detects the selected whiskers on each side of the head. Our approach is used to examine the functional recovery of damaged facial nerves in mice over a course of 21 days.
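
The whisker-detection step described above (region of interest, mask, Hough transform) can be illustrated with standard image-processing primitives. The sketch below is not the authors' implementation: it assumes a grayscale video frame and a precomputed binary ROI mask beside the head, and uses OpenCV's probabilistic Hough transform to pick out line-like whisker segments and their angles.

```python
import cv2
import numpy as np

def detect_whisker_segments(frame, roi_mask, min_len=40):
    """Detect line-like whisker candidates inside a region of interest.

    frame    : grayscale uint8 image (one high-speed video frame)
    roi_mask : uint8 mask, 255 inside the whisker ROI and 0 elsewhere
    """
    # Restrict processing to the region of interest beside the head.
    roi = cv2.bitwise_and(frame, frame, mask=roi_mask)

    # Edge map feeds the line detector.
    edges = cv2.Canny(roi, 50, 150)

    # Probabilistic Hough transform returns end points of detected segments.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=30, minLineLength=min_len,
                               maxLineGap=5)
    angles = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            # Whisker angle in the image plane; tracking this angle over
            # frames yields the protraction/retraction signal.
            angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    return segments, angles
```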

Keywords: Mystacial macrovibrissae, whisker tracking, head tracking, facial nerve recovery.

106 Image Haze Removal Using Scene Depth Based Spatially Varying Atmospheric Light in Haar Lifting Wavelet Domain

Authors: Prabh Preet Singh, Harpreet Kaur

Abstract:

This paper presents a method for single image dehazing based on the dark channel prior (DCP). The property that the intensity of the dark channel gives an approximate thickness of the haze is used to estimate the transmission and atmospheric light. Instead of a constant atmospheric light, the proposed method employs scene depth to estimate a spatially varying atmospheric light, as truly occurs in nature. The haze imaging model together with the soft matting method has been used in this work to produce a high-quality haze-free image. Experimental results demonstrate that the proposed approach produces better results than the classic DCP approach, as the color fidelity and contrast of the haze-free image are improved and no over-saturation in the sky region is observed. Further, the lifting Haar wavelet transform is employed to reduce the overall execution time by a factor of two to three compared to the conventional approach.
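
The core dark channel prior computation can be sketched as follows. This is a minimal illustration of the classic DCP pipeline only, with a constant atmospheric light vector; the paper's scene-depth-based spatially varying atmospheric light and the Haar lifting wavelet acceleration are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image scaled to [0, 1]: per-pixel minimum over
    the colour channels followed by a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """Classic DCP transmission estimate: t(x) = 1 - omega * dark(I / A)."""
    normalized = img / np.maximum(atmosphere, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)

def recover_scene(img, transmission, atmosphere, t0=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t) for the scene J."""
    t = np.clip(transmission, t0, 1.0)[..., None]
    return np.clip((img - atmosphere) / t + atmosphere, 0.0, 1.0)

hazy = np.random.rand(240, 320, 3)      # stand-in for a hazy image in [0, 1]
A = np.array([0.9, 0.9, 0.9])           # constant atmospheric light (sketch only)
J = recover_scene(hazy, estimate_transmission(hazy, A), A)
```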

Keywords: Depth based atmospheric light, dark channel prior, lifting wavelet.

105 Early Registration: Criterion to Improve Inter-Agent Communication in Mobile-IP Protocol

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, to reduce the roaming duration, is the central issue considered in this paper. State-transition Petri-Nets modeling the scenario-based CIA (communication inter-agents) procedure, as an extension of the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco 1760 router running IOS 12.3(15)T with TFTP server software. Finally, stand-alone performance simulations in Simulink/Matlab, within each subnet and also between subnets, are presented, reporting improved end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower end-to-end packet delays for Mobile-IP with the CIA procedure-based early registration. Furthermore, packet flow between subnets improved, reducing losses.

Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri-Nets, registration process, Simulink

104 A Novel Prostate Segmentation Algorithm in TRUS Images

Authors: Ali Rafiee, Ahad Salimi, Ali Reza Roosta

Abstract:

Prostate cancer is one of the most frequent cancers in men and is a major cause of mortality in most countries. In many diagnostic and treatment procedures for prostate disease, accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images is required. This is a challenging and difficult task due to weak prostate boundaries, speckle noise and the short range of gray levels. In this paper, a novel method for automatic prostate segmentation in TRUS images is presented. This method involves preprocessing (edge-preserving noise reduction and smoothing) and prostate segmentation. Speckle reduction is achieved using a stick filter, and a top-hat transform is implemented for smoothing. A feed-forward neural network and local binary patterns together are used to find a point inside the prostate object. Finally, the boundary of the prostate is extracted from the inside point with an active contour algorithm. A number of experiments were conducted to validate this method, and the results showed that the new algorithm extracts the prostate boundary with an MSE of less than 4.6% relative to the boundary provided manually by physicians.
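
As a rough illustration of the preprocessing stage, the snippet below applies simple despeckling followed by a morphological top-hat transform with OpenCV. The median filter is only a stand-in for the stick filter, and the neural network, local binary pattern and active contour stages are not reproduced here.

```python
import cv2

def preprocess_trus(image, kernel_size=15):
    """Illustrative preprocessing of a grayscale uint8 TRUS slice:
    mild speckle suppression followed by a white top-hat transform,
    which keeps bright structures smaller than the structuring element."""
    # Median filtering as a simple stand-in for the stick filter.
    despeckled = cv2.medianBlur(image, 5)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(despeckled, cv2.MORPH_TOPHAT, kernel)
    return despeckled, tophat
```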

Keywords: Prostate segmentation, stick filter, neural network, active contour.

103 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach on multimedia data for event detection on Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text using the bag-of-words method, weighted by term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
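
The fusion step rests on Dempster's rule of combination. The snippet below is a generic sketch of that rule with a toy two-hypothesis frame; the masses assigned by the text and image classifiers in the paper are not reproduced.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined.")
    # Normalise by the non-conflicting mass.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Toy example: text and image evidence that a tweet reports an event (E),
# does not (N), or is ambiguous (E or N). Masses are illustrative only.
E, N = frozenset({"event"}), frozenset({"no_event"})
text_bpa = {E: 0.7, N: 0.2, E | N: 0.1}
image_bpa = {E: 0.6, N: 0.3, E | N: 0.1}
print(dempster_combine(text_bpa, image_bpa))
```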

Keywords: Data fusion, Dempster-Shafer theory, data mining, event detection.

102 An Extension of the Krätzel Function and Associated Inverse Gaussian Probability Distribution Occurring in Reliability Theory

Authors: R. K. Saxena, Ravi Saxena

Abstract:

In view of their importance and usefulness in reliability theory and probability distributions, several generalizations of the inverse Gaussian distribution and the Krätzel function have been investigated in recent years. This has motivated the authors to introduce and study a new generalization of the inverse Gaussian distribution and the Krätzel function associated with the product of a Bessel function of the third kind Kν(z) and a Fox-Wright generalized hypergeometric function introduced in this paper. The introduced function turns out to be a unified gamma-type function. Its incomplete forms are also discussed. Several properties of this gamma-type function are obtained. By means of this generalized function, we introduce a generalization of the inverse Gaussian distribution, which is useful in reliability analysis, diffusion processes, radio techniques, etc. The inverse Gaussian distribution thus introduced also provides a generalization of the Krätzel function. Some basic statistical functions associated with this probability density function, such as the moments, the Mellin transform, the moment generating function, the hazard rate function, and the mean residual life function, are also obtained.
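
For reference, the Krätzel function that the paper extends is commonly defined by the integral below; for ρ = 1 it reduces to the modified Bessel function of the third kind. This is the standard definition with the conventional parameters ν and ρ, not the generalized form introduced by the authors.

```latex
% Standard Kr\"atzel function
Z_{\rho}^{\nu}(z) = \int_{0}^{\infty} t^{\nu-1}
    \exp\!\left(-t^{\rho} - \frac{z}{t}\right)\,\mathrm{d}t ,
    \qquad \rho > 0,\; z > 0 .

% Reduction to the modified Bessel function of the third kind for \rho = 1
Z_{1}^{\nu}(z) = 2\, z^{\nu/2}\, K_{\nu}\!\left(2\sqrt{z}\right).
```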

Keywords: Fox-Wright function, Inverse Gaussian distribution, Krätzel function, Bessel function of the third kind.

101 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is a convenient tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to low inertia and the gravitational effect on the solid particles, which leads to high erosion on the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

100 Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying the variance of the measurement noise is developed. It is then applied to the estimation of the dynamical state and the measurement data in a discrete linear dynamical system. At each time step, the algorithm estimates the measurement noise variance and the state of the system with the Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and the noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Different simulations are carried out to show the influence of the measurement noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated in a Kalman filter without the Bayesian formulation. Then, the simulation is applied to the adaptive Kalman filter with the ability to track the noise variance in the measurement data. In these simulations, the influence of the noise distribution of the measurement data at each step is estimated, the true variance of the data is recovered by the algorithm, and the results are compared across different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of the algorithm in these simulations are discussed.
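
A minimal sketch of an adaptive Kalman filter that tracks the measurement noise variance is given below. It uses a simple innovation-based update with exponential forgetting on a scalar random-walk model; it is illustrative only and does not reproduce the paper's Bayesian fixed-point iteration.

```python
import numpy as np

def adaptive_kalman_1d(z, q=1e-4, r0=1.0, alpha=0.05):
    """Scalar random-walk Kalman filter that also tracks the measurement
    noise variance R from the innovation sequence.

    z     : 1-D array of measurements
    q     : process noise variance
    r0    : initial guess of the measurement noise variance
    alpha : forgetting factor of the innovation-based R update
    """
    x, p, r = z[0], 1.0, r0
    states, r_track = np.empty(len(z)), np.empty(len(z))
    for k, zk in enumerate(z):
        # Predict (random-walk state model x_k = x_{k-1} + w_k).
        p_pred = p + q
        # Innovation; on average E[nu^2] = P_pred + R, hence the R update.
        nu = zk - x
        r = max((1 - alpha) * r + alpha * (nu ** 2 - p_pred), 1e-8)
        # Standard Kalman update with the refreshed R.
        s = p_pred + r
        kgain = p_pred / s
        x = x + kgain * nu
        p = (1 - kgain) * p_pred
        states[k], r_track[k] = x, r
    return states, r_track

# Noisy constant signal with unknown measurement variance (true R = 0.25).
rng = np.random.default_rng(0)
xs, rs = adaptive_kalman_1d(2.0 + rng.normal(scale=0.5, size=500))
print("final R estimate:", round(rs[-1], 3))
```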

Keywords: Adaptive filtering, Bayesian approach, Kalman filtering, variance tracking

99 A Recognition Method of Ancient Yi Script Based on Deep Learning

Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma

Abstract:

The Yi are an ethnic group living mainly in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers fulfilled the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with features of the highest probability were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwritten recognition of ancient Yi.
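
The network family described above can be sketched in a few lines. The snippet below is a generic four-convolutional-layer CNN with two fully connected layers and a softmax output; the channel counts, input size, class count and the Alpha-Beta divergence penalty are assumptions for illustration, not the authors' settings.

```python
import torch
import torch.nn as nn

class YiCharCNN(nn.Module):
    """Generic four-layer CNN with two fully connected layers and a softmax
    output. Input is assumed to be a 64x64 grayscale character image and
    num_classes is an arbitrary placeholder."""

    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),  # first FC layer (compression)
            nn.Linear(256, num_classes),             # second FC layer
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        # Probability distribution over characters; argmax gives the
        # recognised character.
        return torch.softmax(logits, dim=1)

probs = YiCharCNN()(torch.randn(2, 1, 64, 64))
print(probs.shape)  # torch.Size([2, 1000])
```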

Keywords: Recognition, CNN, convolutional neural network, Yi character, divergence.

98 Vibration Characteristics of Functionally Graded Material Skew Plate in Thermal Environment

Authors: Gulshan Taj M. N. A., Anupam Chakrabarti, Vipul Prakash

Abstract:

In the present investigation, the free vibration of functionally graded material (FGM) skew plates in a thermal environment is studied. The kinematic equations are based on Reddy's higher order shear deformation theory, and a nine-noded isoparametric Lagrangian element is adopted to mesh the plate geometry. The issue of the C1 continuity requirement related to the assumed displacement field has been circumvented effectively to develop a C0 finite element formulation. The effective mechanical properties of the constituents of the plate are considered to be position- and temperature-dependent and are assumed to vary in the thickness direction according to a simple power-law distribution. The displacement components of a rectangular plate are mapped into the skew plate geometry by means of a suitable transformation rule. A one-dimensional Fourier heat conduction equation is used to ascertain the temperature profile of the plate along the thickness direction. The influence of different parameters such as volume fraction index, boundary condition, aspect ratio, thickness ratio and temperature field on the frequency parameter of the FGM skew plate is demonstrated through various examples, and the related findings are discussed briefly. New results are generated, for the first time, for the vibration of the FGM skew plate in a thermal environment, which may be used in future research involving similar problems.
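
The "simple power-law distribution" of the constituent properties through the thickness is commonly written as P(z) = (P_c − P_m)(z/h + 1/2)^n + P_m. The sketch below evaluates this rule for an illustrative metal-ceramic pair; the temperature dependence used in the paper is omitted.

```python
import numpy as np

def fgm_property(z, h, p_ceramic, p_metal, n):
    """Effective property through the thickness via the power-law rule of
    mixtures (temperature dependence omitted):
        V_c(z) = (z/h + 1/2)**n,   P(z) = (P_c - P_m) * V_c(z) + P_m
    with z measured from the mid-plane, -h/2 <= z <= h/2, and n the
    volume fraction index."""
    vc = (z / h + 0.5) ** n
    return (p_ceramic - p_metal) * vc + p_metal

# Young's modulus variation for an illustrative alumina/aluminium plate
# (values in GPa), 20 mm thick, volume fraction index n = 2.
h = 0.02
z = np.linspace(-h / 2, h / 2, 5)
print(fgm_property(z, h, 380.0, 70.0, 2.0).round(1))
```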

Keywords: Functionally graded material, finite element method, higher order shear deformation theory, skew plate, thermal vibration.

97 End-to-End Pyramid Based Method for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan owing to its long acquisition time, which is mainly bounded from below by the traditional sampling theorem. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have had tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
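
To make the sampling discussion concrete, the snippet below retrospectively undersamples the k-space of a 2-D image and shows the naive zero-filled inverse-FFT reconstruction, i.e. the baseline that CS/PI and learned methods such as the one described improve upon. The sampling pattern and acceleration factor are arbitrary choices, and the paper's SME and pyramid networks are not reproduced.

```python
import numpy as np

def zero_filled_recon(image, accel=4, centre=16):
    """Retrospectively undersample the k-space of a 2-D image and return the
    naive zero-filled inverse-FFT reconstruction."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros(kspace.shape, dtype=bool)
    mask[:, ::accel] = True                              # keep every accel-th phase-encode line
    ny = kspace.shape[1]
    mask[:, ny // 2 - centre:ny // 2 + centre] = True    # fully sampled centre
    undersampled = np.where(mask, kspace, 0)
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return recon, mask

phantom = np.random.rand(256, 256)                       # stand-in for an MR image
recon, mask = zero_filled_recon(phantom)
print(recon.shape, "effective acceleration ≈", round(1 / mask.mean(), 2))
```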

Keywords: Accelerate MRI scans, image reconstruction, pyramid network, deep learning.

96 Multi-Layer Perceptron Neural Network Classifier with Binary Particle Swarm Optimization Based Feature Selection for Brain-Computer Interfaces

Authors: K. Akilandeswari, G. M. Nasira

Abstract:

Brain-Computer Interfaces (BCIs) measure brain signal activity, intentionally and unintentionally induced by users, and provide a communication channel that does not depend on the brain's normal output pathway of peripheral nerves and muscles. Feature Selection (FS) is a global optimization machine learning problem that reduces the number of features and removes irrelevant and noisy data while maintaining acceptable recognition accuracy. It is a vital step affecting pattern recognition system performance. This study presents a new Binary Particle Swarm Optimization (BPSO) based feature selection algorithm. A Multi-Layer Perceptron Neural Network (MLPNN) classifier trained with the backpropagation and Levenberg-Marquardt algorithms classifies the selected features.
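
A minimal binary PSO feature selection loop is sketched below: each particle is a 0/1 mask over the features, velocities pass through a sigmoid transfer function, and the fitness is the cross-validated accuracy of a small MLP on the selected columns. The dataset, particle counts and MLP settings are placeholders rather than those of the BCI study, and the run may be slow and emit convergence warnings.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def bpso_feature_selection(X, y, n_particles=8, n_iter=10, w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO: each particle is a 0/1 mask over the features and
    its fitness is the 3-fold cross-validated accuracy of a small MLP
    trained on the selected columns."""
    rng = np.random.default_rng(0)
    d = X.shape[1]

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=200, random_state=0)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    pos = rng.integers(0, 2, size=(n_particles, d))
    vel = rng.normal(0.0, 1.0, size=(n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer function
        pos = (rng.random((n_particles, d)) < prob).astype(int)
        for i, p in enumerate(pos):
            f = fitness(p)
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = p.copy(), f
        gbest = pbest[pbest_fit.argmax()].copy()

    return gbest.astype(bool), pbest_fit.max()

# Small stand-in dataset (not BCI data).
X, y = load_digits(return_X_y=True)
mask, acc = bpso_feature_selection(X[:400], y[:400], n_particles=5, n_iter=3)
print(mask.sum(), "features selected, CV accuracy ≈", round(acc, 3))
```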

Keywords: Brain-Computer Interfaces (BCI), Feature Selection (FS), Walsh–Hadamard Transform (WHT), Binary Particle Swarm Optimization (BPSO), Multi-Layer Perceptron (MLP), Levenberg–Marquardt algorithm.

95 Design for Classroom Units: A Collaborative Multicultural Studio Development with Chinese Students

Authors: C. S. Caires, A. Barbosa, W. Hanyou

Abstract:

In this paper, we present the main results achieved during a five-week international workshop on Interactive Furniture for the Classroom, with 22 Chinese design students, in Jiangmen city (Guangdong province, China), and five teachers from Portugal, France, Iran, Macao SAR, and China. The main goal was to engage design students from China with new skills and practice methodologies towards interactive design research for furniture and product design for the classroom. The final results demonstrate students' concerns on improving Chinese furniture design for the classrooms, including solutions related to collaborative learning and human-interaction design for interactive furniture products. The findings of the research led students to the fabrication of five original prototypes: two for kindergartens ('Candy' and 'Tilt-tilt'), two for primary schools ('Closer' and 'Eks(x)'), and one for art/creative schools ('Wave'). From the findings, it was also clear that collaboration, personalization, and project-based teaching are still neglected when designing furniture products for the classroom in China. Students focused on these issues and came up with creative solutions that could transform this educational field in China.

Keywords: Product design, interface design, interactive design, collaborative education and design research, design prototyping.

94 Prioritization Method in the Fuzzy Analytic Network Process by Fuzzy Preferences Programming Method

Authors: Tarifa S. Almulhim, Ludmil Mikhailov, Dong-Ling Xu

Abstract:

In this paper, a method for deriving a group priority vector in the Fuzzy Analytic Network Process (FANP) is proposed. By introducing importance weights of multiple decision makers (DMs) based on their experience, the Fuzzy Preferences Programming Method (FPP) is extended to a fuzzy group prioritization problem in the FANP. Additionally, fuzzy pairwise comparison judgments are used rather than exact numerical assessments in order to model the uncertainty and imprecision in the DMs' judgments; the fuzzy group prioritization problem is then transformed into a fuzzy non-linear programming optimization problem that maximizes group satisfaction. Unlike known fuzzy prioritization techniques, the new method proposed in this paper can easily derive crisp weights from an incomplete and inconsistent fuzzy set of comparison judgments and does not require additional aggregation procedures. Detailed numerical examples are used to illustrate the implementation of our approach and compare it with the latest fuzzy prioritization methods.

Keywords: Fuzzy Analytic Network Process (FANP), Fuzzy Non-linear Programming, Fuzzy Preferences Programming Method (FPP), Multiple Criteria Decision-Making (MCDM), Triangular Fuzzy Number.

93 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. The DTS package is developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle into a Data Mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).

92 FEA for Transient Responses of an S-Shaped Force Transducer with a Viscoelastic Absorber Using a Nonlinear Complex Spring

Authors: T. Yamaguchi, Y. Fujii, A. Takita, T. Kanai

Abstract:

To compute the dynamic characteristics of nonlinear viscoelastic springs attached to elastic structures having a huge number of degrees of freedom, Yamaguchi proposed a fast numerical method using the finite element method [1], [2]. In this method, the restoring forces of the springs are expressed using power series of their elongation, and nonlinear hysteresis damping is introduced through nonlinear complex spring constants. A finite element for the nonlinear spring with complex coefficients is formulated and connected to the elastic structures modeled by linear solid finite elements. Further, to save computational time, the discrete equations in physical coordinates are transformed into nonlinear coupled ordinary equations using normal coordinates corresponding to the linear natural modes. In this report, the proposed method is applied to the simulation of the impact response of a viscoelastic shock absorber with an elastic structure (an S-shaped structure) colliding with a concentrated mass. The concentrated mass has an initial velocity and collides with the shock absorber. Accelerations of the elastic structure and the concentrated mass are measured using the Levitation Mass Method proposed by Fujii [3]. The accelerations calculated by the proposed FEM correspond to the experimental ones. Moreover, using this method, we also investigate dynamic errors of the S-shaped force transducer due to elastic modes in the S-shaped structure.

Keywords: Transient response, Finite Element analysis, Numerical analysis, Viscoelastic shock absorber, Force transducer.

91 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown by the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in its application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. According to the data that is available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

90 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve the recognition accuracy on faces that are affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is especially applicable in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes it possible to achieve lighting-invariant face recognition. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

89 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in the medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely the mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
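
The level-4 Haar decomposition and the coefficient statistics described above can be reproduced with PyWavelets along the following lines; the input here is a random stand-in for a segmented PC region, and the mode statistic is omitted for brevity.

```python
import numpy as np
import pywt

def haar_level4_statistics(roi):
    """Level-4 Haar decomposition of a radiographic ROI and summary
    statistics of the approximation and horizontal detail coefficients."""
    coeffs = pywt.wavedec2(roi, wavelet='haar', level=4)
    approx = coeffs[0]          # cA4
    horizontal = coeffs[1][0]   # cH4, the level-4 horizontal detail band

    def stats(c):
        c = c.ravel()
        return {
            'mean': np.mean(c),
            'median': np.median(c),
            'std': np.std(c),
            'mean_abs_dev': np.mean(np.abs(c - np.mean(c))),
            'median_abs_dev': np.median(np.abs(c - np.median(c))),
        }

    return {'approximation': stats(approx), 'horizontal': stats(horizontal)}

roi = np.random.rand(128, 128)   # stand-in for a segmented PC region
print(haar_level4_statistics(roi)['approximation'])
```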

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.

88 Fuzzy Mathematical Morphology Approach in Image Processing

Authors: Yee Yee Htun, Dr. Khaing Khaing Aye

Abstract:

Morphological operators transform the original image into another image through its interaction with another image of a certain shape and size, known as the structuring element. Mathematical morphology provides a systematic approach to analyzing the geometric characteristics of signals or images, and has been applied widely to many applications such as edge detection, object segmentation, noise suppression and so on. Fuzzy mathematical morphology aims to extend the binary morphological operators to grey-level images. In order to define the basic morphological operations of fuzzy erosion, dilation, opening and closing, a general method based upon fuzzy implication and inclusion grade operators is introduced. The fuzzy morphological operations extend the ordinary morphological operations by using fuzzy sets, where the union operation is replaced by a maximum operation and the intersection operation is replaced by a minimum operation. This work consists of two parts. In the first, fuzzy set theory, fuzzy mathematical morphology based on fuzzy logic and fuzzy set theory, and fuzzy morphological operations and their properties are studied in detail. In the second, the application of fuzziness in mathematical morphology to practical work such as image processing is discussed with illustrative problems.
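
With the max/min operators mentioned above and a flat structuring element, fuzzy dilation and erosion of a membership image reduce to local maximum and minimum filters, as in the sketch below; the general implication/inclusion-grade operators discussed in the paper are not covered by this special case.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def fuzzy_dilation(mu, size=3):
    """Fuzzy dilation with a flat structuring element: the max operator
    plays the role of the fuzzy union."""
    return maximum_filter(mu, size=size)

def fuzzy_erosion(mu, size=3):
    """Fuzzy erosion with a flat structuring element: the min operator
    plays the role of the fuzzy intersection."""
    return minimum_filter(mu, size=size)

def fuzzy_opening(mu, size=3):
    return fuzzy_dilation(fuzzy_erosion(mu, size), size)

def fuzzy_closing(mu, size=3):
    return fuzzy_erosion(fuzzy_dilation(mu, size), size)

# Membership image with degrees in [0, 1] (e.g. a normalised grey-level image).
mu = np.random.rand(64, 64)
print(fuzzy_opening(mu).mean() <= mu.mean() <= fuzzy_closing(mu).mean())  # True
```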

Keywords: Binary morphology, fuzzy sets, grayscale morphology, image processing, mathematical morphology.

87 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment

Authors: Fares Laouacheria, Said Kechida, Moncef Chabi

Abstract:

The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transform data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models were used for conducting this study, namely the Muskingum and Muskingum-Cunge routing models. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor "X" and the travel time "K" on the output results are significant, and the Muskingum routing model was more sensitive to its input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
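
For reference, the Muskingum routing scheme compared in the study propagates an inflow hydrograph with the recurrence O[t+1] = C0·I[t+1] + C1·I[t] + C2·O[t], where the coefficients follow from K, X and the time step. The sketch below uses a synthetic flood wave and arbitrary parameter values, not those calibrated for the Moudjar catchment.

```python
import numpy as np

def muskingum_route(inflow, K, X, dt):
    """Route an inflow hydrograph through a reach with the Muskingum method.

    K  : travel time of the reach (same units as dt)
    X  : weighting factor (0 <= X <= 0.5)
    dt : routing time step
    """
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom          # note c0 + c1 + c2 = 1
    outflow = np.empty_like(inflow, dtype=float)
    outflow[0] = inflow[0]                       # assume initial steady state
    for t in range(len(inflow) - 1):
        outflow[t + 1] = c0 * inflow[t + 1] + c1 * inflow[t] + c2 * outflow[t]
    return outflow

# Synthetic triangular flood wave (m^3/s), hourly time step.
inflow = np.concatenate([np.linspace(10, 100, 10), np.linspace(100, 10, 19)])
print(muskingum_route(inflow, K=3.0, X=0.2, dt=1.0).round(1))
```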

Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model.

86 Infrared Lightbox and iPhone App for Improving Detection Limit of Phosphate Detecting Dip Strips

Authors: H. Heidari-Bafroui, B. Ribeiro, A. Charbaji, C. Anagnostopoulos, M. Faghri

Abstract:

In this paper, we report the development of a portable and inexpensive infrared lightbox for improving the detection limits of paper-based phosphate devices. Commercial paper-based devices utilize the molybdenum blue protocol to detect phosphate in the environment. Although these devices are easy to use and have a long shelf life, their main deficiency is their low sensitivity based on the qualitative results obtained via a color chart. To improve the results, we constructed a compact infrared lightbox that communicates wirelessly with a smartphone. The system measures the absorbance of radiation for the molybdenum blue reaction in the infrared region of the spectrum. It consists of a lightbox illuminated by four infrared light-emitting diodes, an infrared digital camera, a Raspberry Pi microcontroller, a mini-router, and an iPhone to control the microcontroller. An iPhone application was also developed to analyze images captured by the infrared camera in order to quantify phosphate concentrations. Additionally, the app connects to an online data center to present a highly scalable worldwide system for tracking and analyzing field measurements. In this study, the detection limits for two popular commercial devices were improved by a factor of 4 for the Quantofix devices (from 1.3 ppm using visible light to 300 ppb using infrared illumination) and a factor of 6 for the Indigo units (from 9.2 ppm to 1.4 ppm) with repeatability of less than or equal to 1.2% relative standard deviation (RSD). The system also provides more granular concentration information compared to the discrete color chart used by commercial devices and it can be easily adapted for use in other applications.

Keywords: Infrared lightbox, paper-based device, phosphate detection, smartphone colorimetric analyzer.

85 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency

Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami

Abstract:

Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms are commonly used: the k-means for clustering numeric datasets and the k-modes for categorical datasets. The main problem encountered in data mining applications is clustering the categorical data that is so prevalent in real datasets. One main issue in achieving the clustering process on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, an approach based on this issue is proposed and evaluated experimentally: the categorical values are transformed into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
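
The proposed encoding is straightforward to sketch: each categorical value is replaced by the relative frequency of its modality within the attribute, after which k-means applies directly. The toy data below is illustrative only.

```python
import pandas as pd
from sklearn.cluster import KMeans

def relative_frequency_encode(df):
    """Replace each categorical value by the relative frequency of that
    modality within its attribute, so k-means can be applied directly."""
    encoded = pd.DataFrame(index=df.index)
    for col in df.columns:
        freq = df[col].value_counts(normalize=True)
        encoded[col] = df[col].map(freq)
    return encoded

# Toy categorical dataset.
data = pd.DataFrame({
    'colour': ['red', 'red', 'blue', 'green', 'blue', 'red'],
    'shape':  ['square', 'circle', 'circle', 'square', 'circle', 'square'],
})
X = relative_frequency_encode(data)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(X.assign(cluster=labels))
```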

Keywords: Clustering, k-means, categorical datasets, pattern recognition, unsupervised learning, knowledge discovery.

84 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e. it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the granular sample and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since the coherent wavefront is system-configuration independent, ensemble averaging has been used for improving the signal quality of the coherent pulse and removing the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
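
A minimal version of such a simulation is sketched below: a one-dimensional chain of unit-mass grains with Hertzian contacts, F = κδ^(3/2), integrated with velocity Verlet after imparting an initial velocity to the first grain. Units, stiffness and the polydispersity level are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

def hertz_chain(n=100, std=0.0, v0=0.1, dt=1e-3, steps=20000, kappa=1000.0):
    """Integrate a 1-D chain of unit-mass grains with Hertzian contacts,
    F = kappa * overlap**1.5, using velocity Verlet. Polydispersity enters
    through Gaussian-distributed radii with standard deviation `std`."""
    rng = np.random.default_rng(1)
    radii = np.clip(rng.normal(0.5, std, n), 0.1, None)
    # Grains initially just touching their neighbours.
    x = np.concatenate(([0.0], np.cumsum(radii[:-1] + radii[1:])))
    v = np.zeros(n)
    v[0] = v0                                   # impart the pulse at one end

    def forces(x):
        overlap = np.maximum(radii[:-1] + radii[1:] - np.diff(x), 0.0)
        fc = kappa * overlap ** 1.5             # contact force between i and i+1
        f = np.zeros(n)
        f[:-1] -= fc                            # reaction on the left grain
        f[1:] += fc                             # push on the right grain
        return f

    record = []
    f = forces(x)
    for step in range(steps):
        v += 0.5 * dt * f
        x += dt * v
        f_new = forces(x)
        v += 0.5 * dt * f_new
        f = f_new
        if step % 2000 == 0:
            record.append(v.copy())             # velocity signal along the chain
    return np.array(record)

print(hertz_chain(n=50, std=0.05, steps=4000).shape)
```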

Keywords: Discrete elements, Hertzian Contact, polydispersity, weakly nonlinear, wave propagation.

83 Using Mean-Shift Tracking Algorithms for Real-Time Tracking of Moving Images on an Autonomous Vehicle Testbed Platform

Authors: Benjamin Gorry, Zezhi Chen, Kevin Hammond, Andy Wallace, Greg Michaelson

Abstract:

This paper describes new computer vision algorithms that have been developed to track moving objects as part of a long-term study into the design of (semi-)autonomous vehicles. We present the results of a study to exploit variable kernels for tracking in video sequences. The basis of our work is the mean shift object-tracking algorithm; for a moving target, it is usual to define a rectangular target window in an initial frame, and then process the data within that window to separate the tracked object from the background by the mean shift segmentation algorithm. Rather than use the standard Epanechnikov kernel, we have used a kernel weighted by the Chamfer distance transform to improve the accuracy of target representation and localization, minimising the distance between the two distributions in RGB color space using the Bhattacharyya coefficient. Experimental results show the improved tracking capability and versatility of the algorithm in comparison with results using the standard kernel. These algorithms are incorporated as part of a robot test-bed architecture which has been used to demonstrate their effectiveness.
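
The similarity measure at the heart of the tracker is the Bhattacharyya coefficient between the target and candidate colour histograms, as sketched below; the Chamfer-distance-weighted kernel itself is not reproduced.

```python
import numpy as np

def rgb_histogram(pixels, bins=8):
    """Normalised joint RGB histogram (the colour model of a target or
    candidate region)."""
    hist, _ = np.histogramdd(pixels.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / max(hist.sum(), 1)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalised histograms; the
    tracker maximises this similarity (equivalently minimises the
    Bhattacharyya distance) between target and candidate."""
    return np.sum(np.sqrt(p * q))

target = rgb_histogram(np.random.randint(0, 256, (40, 60, 3)))
candidate = rgb_histogram(np.random.randint(0, 256, (40, 60, 3)))
rho = bhattacharyya(target, candidate)
print("similarity:", round(rho, 3), "distance:", round(np.sqrt(1 - rho), 3))
```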

Keywords: Hume, functional programming, autonomous vehicle, pioneer robot, vision.

82 Fighter Aircraft Evaluation and Selection Process Based on Triangular Fuzzy Numbers in Multiple Criteria Decision Making Analysis Using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)

Authors: C. Ardil

Abstract:

This article presents a multiple criteria evaluation approach to uncertainty, vagueness, and imprecision analysis for ranking alternatives with fuzzy data for decision making using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The fighter aircraft evaluation and selection decision making problem is modeled in a fuzzy environment with triangular fuzzy numbers. The fuzzy decision information related to the fighter aircraft selection problem is taken into account in ordering the alternatives and selecting the best candidate. The basic fuzzy TOPSIS procedure steps transform fuzzy decision matrices into matrices of alternatives evaluated according to all decision criteria. A practical numerical example illustrates the proposed approach to the fighter aircraft selection problem.
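
A compact sketch of fuzzy TOPSIS with triangular fuzzy numbers is given below, following a common formulation with vertex distances to the fuzzy positive and negative ideal solutions. The ratings, weights and the all-benefit-criteria simplification are illustrative assumptions, not data from the paper.

```python
import numpy as np

def fuzzy_topsis(decision, weights):
    """Minimal fuzzy TOPSIS with triangular fuzzy numbers (TFNs).

    decision : array (n_alternatives, n_criteria, 3) of TFNs (l, m, u),
               all criteria treated as benefit criteria for brevity
    weights  : array (n_criteria, 3) of TFN weights
    Returns the closeness coefficient of each alternative.
    """
    # Linear-scale normalisation by the largest upper bound of each criterion.
    u_max = decision[:, :, 2].max(axis=0)
    norm = decision / u_max[None, :, None]
    # Weighted normalised fuzzy decision matrix.
    weighted = norm * weights[None, :, :]
    # Fuzzy positive / negative ideal solutions.
    fpis, fnis = np.ones(3), np.zeros(3)

    def vertex_distance(tfn, ideal):
        return np.sqrt(np.mean((tfn - ideal) ** 2, axis=-1))

    d_plus = vertex_distance(weighted, fpis).sum(axis=1)    # distance to FPIS
    d_minus = vertex_distance(weighted, fnis).sum(axis=1)   # distance to FNIS
    return d_minus / (d_plus + d_minus)

# Three hypothetical aircraft rated on two benefit criteria with TFNs.
ratings = np.array([
    [[5, 7, 9], [3, 5, 7]],
    [[7, 9, 9], [5, 7, 9]],
    [[1, 3, 5], [7, 9, 9]],
], dtype=float)
w = np.array([[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]])
cc = fuzzy_topsis(ratings, w)
print("closeness coefficients:", cc.round(3), "best alternative:", int(cc.argmax()))
```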

Keywords: triangular fuzzy number (TFN), multiple criteria decision making analysis, decision making, aircraft selection, MCDMA, fuzzy TOPSIS

81 Students, Knowledge and Employability

Authors: James Moir

Abstract:

Citizens are increasingly provided with choice and customization in public services, and this has now also become a key feature of higher education in terms of policy roll-outs on personal development planning (PDP) and more generally as part of the employability agenda. The goal here is to transform people, in this case graduates, into active, responsible citizen-workers. A key part of this rhetoric and logic is the inculcation of graduate attributes within students. However, there has also been concern about students' lack of engagement and perseverance with their studies. This paper sets out to explore some of these conceptions that link graduate attributes with citizenship, as well as the notion of how identity is forged through the higher education process. Examples are drawn from a quality enhancement project that is being operated within the context of the Scottish higher education system. This is further framed within the wider context of competing and conflicting demands on higher education, exacerbated by the current worldwide economic climate. There are now pressures on students to develop their employability skills as well as their capacity to engage with global issues such as behavioural change in the light of environmental concerns. It is argued that these pressures, in effect, lead to a form of personalization that is concerned with how graduates develop their sense of identity as something that is engineered and re-engineered to meet these demands.

Keywords: students, higher education, employability, knowledge, personal development

80 A Business Model Design Process for Social Enterprises: The Critical Role of the Environment

Authors: Hadia Abdel Aziz, Raghda El Ebrashi

Abstract:

Business models are shaped by their design space or the environment they are designed to be implemented in. The rapidly changing economic, technological, political, regulatory and market external environment severely affects business logic. This is particularly true for social enterprises, whose core mission is to transform their environments, and thus whose whole business logic revolves around the interchange between the enterprise and the environment. The context in which a social business operates imposes different business design constraints while at the same time opening up new design opportunities. It is also affected to a great extent by the impact that successful enterprises generate; a continuous loop of interaction that needs to be managed through a dynamic capability in order to generate a lasting, powerful impact. This conceptual research synthesizes and analyzes literature on social enterprise, social enterprise business models, business model innovation, business model design, and the open system view theory to propose a new business model design process for social enterprises that takes into account the critical role of environmental factors. This process would help the social enterprise develop a dynamic capability that ensures the alignment of its business model to its environmental context, thus maximizing its probability of success.

Keywords: Social enterprise, business model, business model design.

79 A Mapping Approach to Code Generation for ARINC 653-Based Avionics Software

Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao

Abstract:

Avionics software architecture has transitioned from a federated avionics architecture to integrated modular avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods to transform abstract avionics application logic into an executable model have been proposed, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform or of the inter-task synchronous dynamic interaction order. In this paper, we propose an AADL-based model-driven design methodology for automatically generating a Cµ executable model on the ARINC 653 platform from the ARINC 653 architecture, defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. This paper presents the mapping rules between AADL653 elements and the elements of the Cµ language, defines the code-generation rules, and designs an automatic Cµ code generator. A case study is used to illustrate our approach. Finally, we give the related work and future research directions.

Keywords: IMA, ARINC653, AADL653, code generation.

78 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy for combining the two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks the object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in high-security surveillance scenes such as the premises of bank vaults, prisons or other detention facilities, as well as in low-cost applications in supermarkets and car parking lots.
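
The two ingredients combined by the estimator, depth from disparity for a rectified stereo pair and a constant-velocity Kalman filter for the image-plane track, can be sketched as follows; the camera parameters and noise settings are illustrative.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from the camera plane for a rectified stereo pair:
    Z = f * B / disparity (pinhole model, disparity in pixels)."""
    disparity = float(x_left - x_right)
    if disparity <= 0:
        raise ValueError("Non-positive disparity: point at infinity or mismatch.")
    return focal_px * baseline_m / disparity

class ConstantVelocityKF:
    """Planar constant-velocity Kalman filter for the (x, y) image-plane
    track; the stereo depth is attached to the state outside the filter."""

    def __init__(self, dt=1 / 30, q=1.0, r=4.0):
        self.x = np.zeros(4)                    # [px, py, vx, vy]
        self.P = np.eye(4) * 500.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured image position z = (px, py).
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

kf = ConstantVelocityKF()
state = kf.step((320.0, 240.0))
depth = depth_from_disparity(x_left=320.0, x_right=300.0, focal_px=700.0, baseline_m=0.12)
print(state[:2], round(depth, 2), "m")
```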

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.
