Search results for: Computed radiography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 417

387 Dimension Free Rigid Point Set Registration in Linear Time

Authors: Jianqin Qu

Abstract:

This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. The computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension free without any optimization process. It either computes the desired transform for noiseless data in linear time, or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
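
The correspondence pairs described above can be turned into a rigid transform in closed form. The following minimal sketch (an illustration, not the paper's exact construction) recovers the rotation and translation from noiseless correspondences in any dimension with an SVD-based Procrustes/Kabsch solution; the weighted mean center helper and all test data are placeholder assumptions.

```python
import numpy as np

def weighted_center(points, weights):
    """Weighted mean center of a point set (one 'covariant point')."""
    w = weights / weights.sum()
    return w @ points  # shape (d,)

def rigid_from_correspondences(P, Q):
    """Recover R, t with R @ P[i] + t ~= Q[i] via the Kabsch/Procrustes solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Example in 3D: a random rigid motion is recovered exactly from noiseless data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
t_true = np.array([1.0, -2.0, 0.5])
Y = X @ R_true.T + t_true
R_est, t_est = rigid_from_correspondences(X, Y)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```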

Keywords: Covariant point, point matching, dimension free, rigid registration.

386 X-Ray Intensity Measurement Using Frequency Output Sensor for Computed Tomography

Authors: R. M. Siddiqui, D. Z. Moghaddam, T. R. Turlapati, S. H. Khan, I. Ul Ahad

Abstract:

The quality of the 2D and 3D cross-sectional images produced by computed tomography depends primarily on how precisely the primary and secondary X-ray intensities are detected. The traditional method of primary intensity detection is prone to errors. Recently, our group developed an X-ray intensity measurement system with smart X-ray sensors that detects the primary X-ray intensity reliably. In this study a new smart X-ray sensor is developed using the Light-to-Frequency converter TSL230 from Texas Instruments, which has numerous advantages in terms of noiseless data acquisition and transmission. The TSL230 is built around a silicon photodiode that converts the incoming X-ray radiation into a proportional current signal. A current-to-frequency converter attached to this photodiode on a single monolithic CMOS integrated circuit produces a pulse train whose frequency is proportional to the incoming current signal. The frequency count is delivered to a PICDEM FS USB board with a PIC18F4550 microcontroller mounted on it. With highly compact electronic hardware, this demo board efficiently reads the smart sensor output data. The frequency-output approach overcomes the nonlinear behavior of sensors with analog output, so unattenuated X-ray intensities can be measured precisely and better normalization can be achieved in order to attain high resolution.
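
As a hedged illustration of the frequency-output principle (host-side arithmetic only, not firmware for the PIC18F4550 board), the sketch below converts a pulse count gated over a fixed interval back into a relative intensity; the gate time and sensitivity constant are placeholder assumptions.

```python
def intensity_from_pulse_count(pulse_count, gate_time_s, counts_per_unit_irradiance_hz=1.0):
    """Convert a gated pulse count from a light-to-frequency sensor into relative intensity.

    The sensor emits a pulse train whose frequency is proportional to the incident
    irradiance, so irradiance ~= frequency / sensitivity (placeholder scale factor).
    """
    frequency_hz = pulse_count / gate_time_s            # output frequency over the gate
    return frequency_hz / counts_per_unit_irradiance_hz  # relative irradiance

# Example: 52,400 pulses counted over a 100 ms gate.
print(intensity_from_pulse_count(52_400, 0.1))  # 524000.0 relative units
```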

Keywords: Computed tomography, detector technology, X-Ray intensity measurement

385 Using Discrete Event Simulation Approach to Reduce Waiting Times in Computed Tomography Radiology Department

Authors: Mwafak Shakoor

Abstract:

The purpose of this study was to reduce patient waiting times, improve system throughput, and improve resource utilization in a radiology department. A discrete event simulation model was developed using Arena simulation software to investigate alternatives for improving overall service delivery, focusing on resource-addition scenarios because patient waiting times are linked to resource availability. The study revealed that no additional investment is needed to procure another scanner; instead, hospital management can deploy managerial tactics to enhance machine utilization and reduce the long waiting times in the department.
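
The study's model was built in Arena; as a language-agnostic illustration of the same discrete-event idea, the hedged sketch below uses SimPy with made-up arrival and scan-time distributions to compare mean waiting time for one versus two scanners.

```python
import random
import simpy

RANDOM_SEED = 42
MEAN_INTERARRIVAL = 12.0   # minutes between patient arrivals (assumed)
MEAN_SCAN_TIME = 10.0      # minutes per CT scan (assumed)
SIM_TIME = 8 * 60          # one 8-hour shift

def patient(env, scanner, waits):
    arrival = env.now
    with scanner.request() as req:        # queue for the CT scanner
        yield req
        waits.append(env.now - arrival)   # record waiting time
        yield env.timeout(random.expovariate(1.0 / MEAN_SCAN_TIME))

def arrivals(env, scanner, waits):
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))
        env.process(patient(env, scanner, waits))

def run(num_scanners):
    random.seed(RANDOM_SEED)
    env = simpy.Environment()
    scanner = simpy.Resource(env, capacity=num_scanners)
    waits = []
    env.process(arrivals(env, scanner, waits))
    env.run(until=SIM_TIME)
    return sum(waits) / len(waits) if waits else 0.0

for n in (1, 2):
    print(f"{n} scanner(s): mean wait = {run(n):.1f} min")
```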

Keywords: Arena, Computed Tomography (CT), Discrete event simulation, Healthcare modeling, Radiology department, Waiting time.

384 Alternative Convergence Analysis for a Kind of Singularly Perturbed Boundary Value Problems

Authors: Jiming Yang

Abstract:

A class of singularly perturbed boundary value problems is considered. In order to obtain its approximation, a simple upwind difference discretization is applied. We use a moving mesh iterative algorithm based on equidistribution of the arc-length function of the currently computed piecewise linear solution. First, a maximum norm a posteriori error estimate on an arbitrary mesh is derived using a method different from the one carried out by Chen [Advances in Computational Mathematics, 24(1-4) (2006), 197-212]. Then, based on the properties of the discrete Green's function and the presented a posteriori error estimate, we theoretically prove that the discrete solutions computed by the algorithm are first-order uniformly convergent with respect to the perturbation parameter ε.
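
As a hedged sketch of the equidistribution step only (a generic de Boor-type sweep, not necessarily the exact algorithm analyzed in the paper), the code below redistributes mesh points so that each cell carries an equal share of the arc length of the current piecewise linear solution; the model boundary-layer function and iteration count are placeholder assumptions.

```python
import numpy as np

def equidistribute_arclength(x, u):
    """One moving-mesh sweep: place mesh points so that the arc length of the
    piecewise linear solution u(x) is equal on every cell."""
    seg = np.sqrt(np.diff(x) ** 2 + np.diff(u) ** 2)    # arc length of each segment
    s = np.concatenate(([0.0], np.cumsum(seg)))          # cumulative arc length
    targets = np.linspace(0.0, s[-1], len(x))            # equal shares of total length
    return np.interp(targets, s, x)                      # new mesh, same endpoints

# Example: cluster points near the boundary layer of u(x) = exp(-x/eps).
eps = 1e-2
x = np.linspace(0.0, 1.0, 41)
for _ in range(5):                                        # a few sweeps of the algorithm
    u = np.exp(-x / eps)
    x = equidistribute_arclength(x, u)
print(np.round(x[:6], 4))                                 # points concentrate near x = 0
```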

Keywords: Convergence analysis, Green's function, singularly perturbed, equi-distribution, moving mesh.

383 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) enhancement is applied to the liver CT images with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a combination of improved features, which increases the accuracy of classification. Each image is divided into NxN non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the tested one. Finally, each block is labeled as tumor or non-tumor. Our approach is validated on a group of patients’ CT liver tumor datasets. The experimental results demonstrate the detection efficiency of the proposed technique.
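
A hedged sketch of the block-wise classification stage is shown below; the fractional-differential enhancement and fusion steps are replaced by a placeholder mean/standard-deviation feature extractor, and scikit-learn's SVC stands in for the SVM, so this illustrates the workflow rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def block_features(image, n=16):
    """Split an image into n x n non-overlapping blocks and extract simple texture
    statistics (mean, std) per block. Placeholder for the FD-enhanced feature set."""
    feats, coords = [], []
    h, w = image.shape
    for r in range(0, h - n + 1, n):
        for c in range(0, w - n + 1, n):
            b = image[r:r + n, c:c + n].astype(float)
            feats.append([b.mean(), b.std()])
            coords.append((r, c))
    return np.array(feats), coords

# Toy training data: dark blocks labeled 0 ("non-tumor"), bright blocks labeled 1 ("tumor").
rng = np.random.default_rng(1)
train_img = np.vstack([rng.normal(100, 5, (64, 64)), rng.normal(180, 5, (64, 64))])
X_train, _ = block_features(train_img)
y_train = np.array([0] * (X_train.shape[0] // 2) + [1] * (X_train.shape[0] // 2))

clf = SVC(kernel="rbf").fit(X_train, y_train)

test_img = rng.normal(175, 5, (64, 64))          # unseen image
X_test, coords = block_features(test_img)
labels = clf.predict(X_test)                      # tumor / non-tumor label per block
print(list(zip(coords, labels))[:4])
```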

Keywords: Fractional differential (FD), Computed Tomography (CT), fusion.

382 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The Traveling Salesman Problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that TSP algorithms running on sparse graphs have shorter computation times than those running on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C Nmax n^2), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the size of the TSP instance. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs differ little. Thus, the computation time of methods that solve the TSP on these sparse graphs is greatly reduced.

Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.

381 Direct Numerical Simulation of Subcooled Nucleate Pool Boiling

Authors: Sreeyuth Lal, Yohei Sato, Bojan Niceno

Abstract:

With the long-term objective of Critical Heat Flux (CHF) prediction, a Direct Numerical Simulation (DNS) framework for the simulation of subcooled and saturated nucleate pool boiling is developed. One case of saturated boiling and three cases of subcooled boiling at different subcooling levels are simulated. A grid refinement study is also reported. Both boiling and condensation phenomena can be computed simultaneously in the proposed numerical framework. The computed bubble detachment diameters of the saturated nucleate pool boiling cases agree well with the experiment. The flow structures around the growing bubble are presented and the accompanying physics is described. The relation between the heat flux evolution from the heated wall and the bubble growth is studied, along with investigations of the temperature distribution and flow field evolutions.

Keywords: CFD, interface tracking method, phase change model, subcooled nucleate pool boiling.

380 Segmental and Subsegmental Lung Vessel Segmentation in CTA Images

Authors: H. Özkan

Abstract:

In this paper, a novel and fast algorithm for segmental and subsegmental lung vessel segmentation from Computed Tomography Angiography images is introduced. This process is particularly important for the detection of pulmonary embolism, lung nodules, and interstitial lung disease. The applied method is realized in five steps. In the first step, lung segmentation is achieved. In the second, the images are thresholded and the differences between the images are detected. In the third, the left and right lungs are combined with the differences obtained in the second step, and the Exact Lung Image (ELI) is obtained. In the fourth, the image thresholded for vessels is combined with the ELI. Lastly, identification and segmentation of the segmental and subsegmental lung vessels are carried out using the image obtained in the fourth step. The performance of the applied method was judged to be good by radiologists, and its results are medically adequate to support surgery.
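
The core of steps two to five is threshold-and-intersect masking; the heavily simplified sketch below illustrates that idea on a synthetic slice, with placeholder Hounsfield-unit thresholds rather than the paper's actual parameters.

```python
import numpy as np

def vessel_mask(ct_slice, lung_mask, vessel_threshold=-300):
    """Keep voxels brighter than the vessel threshold *and* inside the segmented
    lung region (the 'Exact Lung Image' idea, heavily simplified). The threshold
    is a placeholder Hounsfield-unit value."""
    vessel_candidates = ct_slice > vessel_threshold      # threshold for vessels
    return vessel_candidates & lung_mask                  # intersect with the lung mask

# Toy example: a synthetic slice with an air-filled lung (-800 HU) containing a bright vessel.
ct = np.full((128, 128), -800.0)
lungs = np.zeros_like(ct, dtype=bool)
lungs[20:108, 20:108] = True
ct[60:68, 20:108] = 50.0                                   # a horizontal "vessel"
mask = vessel_mask(ct, lungs)
print(mask.sum(), "voxels labelled as vessel")
```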

Keywords: Computed tomography angiography (CTA), Computer aided detection (CAD), Lung segmentation, Lung vessel segmentation

379 Thermodynamic, Structural and Transport Properties of Molten Copper-Thallium Alloys

Authors: D. Adhikari, R. P. Koirala, B.P. Singh

Abstract:

A self-association model has been used to understand the concentration dependence of the free energy of mixing (GM), heat of mixing (HM), entropy of mixing (SM), activity (a) and microscopic structure, namely the concentration fluctuation in the long-wavelength limit (Scc(0)) and the Warren-Cowley short-range order parameter (α1), for Cu-Tl molten alloys at 1573 K. A comparative study of the surface tension of the alloys in the liquid state at that temperature has also been carried out theoretically as a function of composition in the light of Butler's model, Prasad's model and the quasi-chemical approach. Most of the computed thermodynamic properties are found to be in agreement with the experimental values. The analysis reveals that the Cu-Tl molten alloys at 1573 K represent a segregating system at all concentrations with moderate interaction. Surface tensions computed from the different approaches are found to be comparable to each other, increasing with the copper content.

Keywords: Concentration fluctuations, surface tension, thermodynamic properties, Quasi-chemical approximation.

378 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks. These dominant blocks are located on the contours and their nearby textures. When a video frame changes noticeably, its dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced, which allows the key frames of the video to be extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
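
A hedged sketch of only the rank-tracing step is given below: per-frame feature vectors are stacked into a sliding-window matrix whose numerical rank (via SVD) is traced, with a jump signaling a shot change. The FSDWT dominant-block feature extraction is replaced by synthetic feature vectors, and the window size and tolerance are placeholder assumptions.

```python
import numpy as np

def sliding_window_ranks(features, window=5, tol=1e-2):
    """features: (num_frames, feature_dim) matrix, one row per frame.
    Returns the numerical rank of each sliding window of consecutive frames,
    using a relative singular-value tolerance (assumed)."""
    ranks = []
    for i in range(len(features) - window + 1):
        s = np.linalg.svd(features[i:i + window], compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    return np.array(ranks)

# Toy example: 20 near-identical frames with an abrupt content change at frame 10.
rng = np.random.default_rng(2)
base_a, base_b = rng.normal(size=64), rng.normal(size=64)
frames = np.array([base_a + 0.01 * rng.normal(size=64) for _ in range(10)] +
                  [base_b + 0.01 * rng.normal(size=64) for _ in range(10)])
print(sliding_window_ranks(frames))   # rank rises in windows that straddle the change
```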

Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.

377 Kerma Profile Measurements in CT Chest Scans: A Comparison of Methodologies

Authors: Bruno B. Oliveira, Arnaldo P. Mourão, Teógenes A. da Silva

Abstract:

Brazilian legislation has established diagnostic reference levels (DRLs) only in terms of the Multiple Scan Average Dose (MSAD) as a quality control parameter for computed tomography (CT) scanners. Compliance with DRLs can be verified by measuring the Computed Tomography Kerma Index (Ca,100) with a pencil ionization chamber or by obtaining the kerma distribution in CT scans with radiochromic films or rod-shaped lithium fluoride thermoluminescent dosimeters (TLD-100). TL dosimeters were used to record kerma profiles and to determine MSAD values of a Bright Speed model GE CT scanner. Measurements were done with radiochromic films and TL dosimeters distributed in cylinders positioned in the center and in the four peripheral bores of a standard polymethylmethacrylate (PMMA) body CT dosimetry phantom. Irradiations were done using an adult chest protocol. The maximum values were found at the midpoint of the longitudinal axis. The MSAD values obtained with the three dosimetric techniques were compared.

Keywords: Kerma profile, CT, MSAD, patient dosimetry

376 Neural Network based Texture Analysis of Liver Tumor from Computed Tomography Images

Authors: K. Mala, V. Sadasivam, S. Alagappan

Abstract:

Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet transform and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatoadenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train a Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.

Keywords: Fuzzy c means clustering, texture analysis, probabilistic neural network, LVQ neural network.

375 IMM based Kalman Filter for Channel Estimation in MB OFDM Systems

Authors: C. Ramesh, V. Vaidehi

Abstract:

Ultra-wideband (UWB) communication is one of the most promising technologies for high data rate wireless networks in short range applications. This paper proposes a blind channel estimation method, namely an Interactive Multiple Model (IMM) based Kalman algorithm, for UWB OFDM systems. The IMM-based Kalman filter is proposed to estimate the frequency-selective, time-varying channel. In the proposed method, two Kalman filters concurrently estimate the channel parameters. The first Kalman filter, the Static Model Filter (SMF), gives accurate results when the user is static, while the second, the Dynamic Model Filter (DMF), gives accurate results when the receiver is moving. The state transition matrix in the SMF is assumed to be an identity matrix, whereas in the DMF it is computed using the Yule-Walker equations. The resultant filter estimate is computed as a weighted sum of the individual filter estimates. The proposed method is compared with other existing channel estimation methods.
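
A hedged sketch of only the estimate-combination step of an interacting-multiple-model filter is given below; the two underlying Kalman filters, the UWB OFDM channel model, and the Yule-Walker computation are omitted, and all numbers are placeholders.

```python
import numpy as np

def imm_combine(estimates, covariances, likelihoods, prior_probs):
    """Combine per-model Kalman estimates into one IMM output.

    estimates   : list of state estimates x_i, shape (n,)
    covariances : list of covariances P_i, shape (n, n)
    likelihoods : measurement likelihood of each model for the latest observation
    prior_probs : model probabilities before the update
    Returns the combined estimate, covariance, and updated model probabilities.
    """
    probs = np.asarray(prior_probs) * np.asarray(likelihoods)
    probs = probs / probs.sum()                         # updated model probabilities
    x = sum(p * xi for p, xi in zip(probs, estimates))  # weighted sum of estimates
    P = sum(p * (Pi + np.outer(xi - x, xi - x))         # moment-matched covariance
            for p, xi, Pi in zip(probs, estimates, covariances))
    return x, P, probs

# Toy example: an almost-static SMF estimate versus a moving DMF estimate.
x_smf, P_smf = np.array([1.00, 0.0]), np.eye(2) * 0.01
x_dmf, P_dmf = np.array([1.10, 0.5]), np.eye(2) * 0.05
x, P, mu = imm_combine([x_smf, x_dmf], [P_smf, P_dmf],
                       likelihoods=[0.2, 0.8], prior_probs=[0.5, 0.5])
print(x, mu)
```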

Keywords: Channel estimation, Kalman filter, UWB, Channel model, AR model

374 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining them for future use so as to improve the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling. Corner detection is required only when the feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly improving the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, which also reduces the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated by the simple video stabilizer based on our proposed algorithm.
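
The model estimation and outlier elimination loop can be illustrated as follows; this hedged sketch fits a least-squares affine model and iteratively drops correspondences with large standardized residuals (a simplification of true studentized residuals, since leverages are ignored), and it runs on synthetic points rather than tracked features.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine model mapping src (N,2) points to dst (N,2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # (3,2) affine parameters
    return M

def prune_outliers(src, dst, threshold=2.5):
    """Iteratively drop correspondences whose standardized residual exceeds the threshold."""
    keep = np.ones(len(src), dtype=bool)
    while True:
        M = fit_affine(src[keep], dst[keep])
        pred = np.hstack([src[keep], np.ones((keep.sum(), 1))]) @ M
        resid = np.linalg.norm(dst[keep] - pred, axis=1)
        standardized = (resid - resid.mean()) / (resid.std() + 1e-12)
        outliers = standardized > threshold
        if not outliers.any():
            return M, keep
        idx = np.flatnonzero(keep)
        keep[idx[outliers]] = False

# Toy example: background points follow a pure translation (camera shake),
# while a few points belong to a moving object.
rng = np.random.default_rng(3)
src = rng.uniform(0, 100, size=(30, 2))
dst = src + np.array([2.0, 1.0])
dst[:3] += np.array([15.0, -10.0])                    # outliers on a moving object
M, keep = prune_outliers(src, dst)
print(keep.sum(), "points kept")
print(M.T.round(2))
```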

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.

373 Step Height Calibration Using Hamming Window Band-Pass Filter

Authors: Dahi Ghareab Abdelsalam Ibrahim

Abstract:

Axial and lateral measurements of a step depth standard are presented. The axial measurement is performed based on the ISO 5436 profile analysis. The lateral measurement is performed based on the Hamming window band-pass filter method. The method is applied to calibrate a groove structure of a step depth standard of 60 nm. For the axial measurement, the computed results show that the depth of the groove structure is 59.7 ± 0.6 nm. For the lateral measurement, the computed results show that the difference between the two line edges of the groove structure is 151.7 ± 2.5 nm. The method can be applied to any step height/depth regardless of the sharpness of the line edges.
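
As a hedged illustration of the lateral-measurement idea (not the authors' measurement pipeline), the sketch below designs a Hamming-window FIR band-pass filter with SciPy and applies it to a synthetic groove profile; the sampling rate, cutoff frequencies, and profile are placeholder assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 1000.0   # spatial "sampling rate": samples per micrometre along the scan (assumed)

# FIR band-pass filter designed with a Hamming window (placeholder cutoffs, cycles/um).
taps = firwin(numtaps=101, cutoff=[20.0, 200.0], pass_zero=False, window="hamming", fs=fs)

# Synthetic lateral profile (nm): a 60 nm deep groove between x = 1.00 and 1.15 um.
x = np.arange(0.0, 2.0, 1.0 / fs)
profile = np.where((x > 1.00) & (x < 1.15), -60.0, 0.0)

# Zero-phase band-pass filtering: flat regions are suppressed, while the two line
# edges of the groove show up as localized oscillations whose separation gives the
# lateral distance between the edges.
filtered = filtfilt(taps, [1.0], profile)

def rms(seg):
    return float(np.sqrt(np.mean(seg ** 2)))

print("RMS near the left edge :", round(rms(filtered[960:1040]), 2))
print("RMS in the flat region :", round(rms(filtered[400:600]), 2))
```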

Keywords: Hamming window, band-pass filter, metrology, interferometry.

372 Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study

Authors: Negar Riazifar, Mehran Yazdi

Abstract:

The Discrete Wavelet Transform (DWT) has been shown to be far superior to the earlier Discrete Cosine Transform (DCT) and to standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in the DWT does not propagate globally as it does in the DCT. Moreover, the DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, an extension of the wavelet transform to two dimensions using nonseparable and directional filter banks, compared to the DWT. This is mostly due to the optimality of the contourlet transform in representing edges that are smooth curves. In this work, we investigate this for medical images, especially CT images, which has not been reported yet. To do so, we propose a compression scheme in the transform domain and compare the performance of both the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
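
A hedged sketch of the wavelet branch of such a transform-domain scheme is shown below: decompose with PyWavelets, keep only the largest coefficients for a target coefficient ratio, reconstruct, and compute PSNR. The wavelet, level, keep ratio, and synthetic test image are placeholder assumptions, and the contourlet branch is not shown.

```python
import numpy as np
import pywt

def dwt_compress(image, keep_ratio=0.05, wavelet="db4", level=3):
    """Keep only the largest `keep_ratio` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)       # magnitude threshold
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)            # hard thresholding
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example with a smooth synthetic image; replace with a real CT slice for meaningful numbers.
u = np.linspace(-1, 1, 256)
img = 127.5 * (1 + np.outer(np.cos(3 * u), np.sin(2 * u)))
rec = dwt_compress(img, keep_ratio=0.05)[:256, :256]
print(f"PSNR keeping 5% of the coefficients: {psnr(img, rec):.1f} dB")
```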

Keywords: Computed Tomography (CT), DWT, Discrete Contourlet Transform, Image Compression.

371 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations

Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova

Abstract:

Reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), arising when acquiring a large number of projections is unavailable or infeasible. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates better results than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits to using the nonconvex regularizer in sparse-view CT reconstruction.

Keywords: Computed tomography, sparse-view reconstruction, L1 −L2 minimization, non-convex, difference of convex functions.

370 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These models include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for the classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.

Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.

369 A Semi-Fragile Watermarking Scheme for Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of their 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; these bits hold the authentication and recovery information. For verification, authentication and parity bits denoted by 'a' and 'p' are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded in six to eight bits depending upon the channel selection. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D torus automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable both subjectively and objectively. Our scheme is oblivious, correctly localizes tampering, and is able to recover the original work with probability near one.
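
A hedged sketch of only the bit-embedding step is shown below: the two LSBs of a 2×2 sub-block are cleared and an authentication bit and a parity bit are written into them. The YST transform, recovery coding, and torus-automorphism block mapping are omitted, and the bit placement and parity rule here are placeholder assumptions.

```python
import numpy as np

def embed_bits(sub_block, auth_bit, parity_bit):
    """Clear the two LSBs of every pixel in a 2x2 sub-block, then store one
    authentication bit and one parity bit in the LSBs of the first two pixels."""
    block = (sub_block & 0b11111100).astype(np.uint8)   # create the embedding space
    flat = block.reshape(-1)
    flat[0] |= auth_bit                                  # authentication bit 'a'
    flat[1] |= parity_bit                                # parity bit 'p'
    return flat.reshape(sub_block.shape)

# Toy example on one 2x2 sub-block of a channel.
sub = np.array([[200, 201], [202, 203]], dtype=np.uint8)
a = 1                                                    # e.g. derived from block features
p = int(sub.sum()) & 1                                   # simple parity of the sub-block
print(embed_bits(sub, a, p))
```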

Keywords: Image Authentication, YST Color Space, Intensity Mean, LSBs, PSNR.

368 Comparative Analysis of DTC Based Switched Reluctance Motor Drive Using Torque Equation and FEA Models

Authors: P. Srinivas, P. V. N. Prasad

Abstract:

Since torque ripple is the main cause of noise and vibration, the performance of the Switched Reluctance Motor (SRM) can be improved by minimizing its torque ripple using a novel control technique called Direct Torque Control (DTC). In the DTC technique, torque is controlled directly through control of the magnitude of the flux and the change in speed of the stator flux vector. The flux and torque are maintained within set hysteresis bands.

The DTC of the SRM is analyzed by two methods. In one method, the actual torque is computed by conducting Finite Element Analysis (FEA) on the design specifications of the motor. In the other method, the torque is computed by the Simplified Torque Equation. The variation of peak current, average current, torque ripple, and speed settling time with the Simplified Torque Equation model is compared with the FEA-based model.

Keywords: Direct Toque Control, Simplified Torque Equation, Finite Element Analysis, Torque Ripple.

367 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves

Authors: Hanifeh Imanian, Morteza Kolahdoozan

Abstract:

The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. To this end, a multiphase numerical model, in which the waves and the oil phase are computed concurrently and whose hydraulic calculations have previously been validated, is applied. More than 200 scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and six parameters were finally identified as the main independent factors. Furthermore, statistical tests were conducted to identify relationships between the dependent variable (dispersed oil mass in the water column) and the independent variables (wave characteristics, namely height, length and period, and spilled oil characteristics, namely density, viscosity and spilled mass). Finally, a mathematical-statistical relationship is proposed to predict the dispersed oil mass in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical relationship is a useful tool for predicting oil dispersion in oil spill events in marine areas.

Keywords: Dispersion, marine environment, mathematical-statistical relationship, oil spill.

366 On-line and Off-line POD Assisted Projective Integral for Non-linear Problems: A Case Study with Burgers' Equation

Authors: Montri Maleewong, Sirod Sirisup

Abstract:

The POD-assisted projective integration method based on the equation-free framework is presented in this paper. The method is essentially based on the slow manifold governing the given system. We have applied two variants, the "on-line" and "off-line" methods, to solve the one-dimensional viscous Burgers' equation. For the on-line method, we compute the slow manifold by extracting the POD modes and use them on the fly during the projective integration process, without assuming prior knowledge of the underlying slow manifold. In contrast, for the off-line method the underlying slow manifold must be computed prior to the projective integration process. The projective step is performed by the forward Euler method. Numerical experiments show that, for a non-periodic system, the on-line method is more efficient than the off-line method. Moreover, the on-line approach is more realistic when applying the POD-assisted projective integration method to arbitrary systems. The critical value of the projective time step, which directly limits the efficiency of both methods, is also shown.
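
A hedged sketch of the POD-mode extraction step alone is given below: solution snapshots are assembled column-wise and the leading left singular vectors form the POD basis. The snapshot data here is a synthetic travelling wave standing in for Burgers' solutions, and the energy criterion is a placeholder assumption.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """snapshots: (n_space, n_time) matrix of solution snapshots.
    Returns the snapshot mean and the leading POD modes capturing the requested
    fraction of the fluctuation energy."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return mean, U[:, :r]

# Toy snapshots of a decaying travelling wave (stand-in for viscous Burgers solutions).
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 1, 40)
snaps = np.array([np.exp(-0.5 * ti) * np.sin(x - ti) for ti in t]).T   # (128, 40)
mean, modes = pod_modes(snaps)
print("number of POD modes retained:", modes.shape[1])
```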

Keywords: Projective integration, POD method, equation-free.

365 Finite Element Prediction and Experimental Verification of the Failure Pattern of Proximal Femur using Quantitative Computed Tomography Images

Authors: Majid Mirzaei, Saeid Samiezadeh, Abbas Khodadadi, Mohammad R. Ghazavi

Abstract:

This paper presents a novel method for predicting the mechanical behavior of the proximal femur within the general framework of quantitative computed tomography (QCT)-based Finite Element Analysis (FEA). A systematic imaging and modeling procedure was developed for reliable correspondence between the QCT-based FEA and the in-vitro mechanical testing. A specially-designed holding frame was used to define and maintain a unique geometrical reference system during the analysis and testing. The QCT images were directly converted into voxel-based 3D finite element models for linear and nonlinear analyses. The equivalent plastic strain and the strain energy density measures were used to identify the critical elements and predict the failure patterns. The samples were destructively tested using a specially-designed gripping fixture (with five degrees of freedom) mounted within a universal mechanical testing machine. Very good agreement was found between the experimental and the predicted failure patterns and the associated load levels.

Keywords: Bone, Osteoporosis, Noninvasive methods, Failure Analysis

364 Alignment of Emission Gamma Ray Sources with NaI(Tl) Scintillation Detectors by Two Laser Beams to Pre-Operation using Alternating Minimization Technique

Authors: Abbas Ali Mahmood Karwi

Abstract:

Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. Therefore, the signals output from the detectors must be centered with the two isotopes before operation and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual-source computed tomography system consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either a single or multiple photopeaks. This allows energy-discrimination electronic hardware associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. This also helps to avoid counting Compton-scatter events, especially if a single discrete gamma photopeak is emitted by the source, as in the case of Cs-137. In this study the polyenergetic version of the alternating minimization algorithm is applied to the dual-energy gamma computed tomography problem.

Keywords: Alignment, Spectrum, Laser, Detectors, Image

363 A Content Based Image Watermarking Scheme Resilient to Geometric Attacks

Authors: Latha Parameswaran, K. Anbumani

Abstract:

Multimedia security is a significant area of concern. This paper discusses a robust image watermarking scheme that can withstand geometric attacks. The source image is initially moment normalized in order to make it withstand geometric attacks. The moment-normalized image is then wavelet transformed. The first-level wavelet transformed image is segmented into blocks of size 8x8, and the product of the mean and standard deviation of each block is computed. The second-level wavelet transformed image is likewise divided into 8x8 blocks, and the product of the block mean and standard deviation is computed. The difference between the products in the two levels forms the watermark. The watermark is inserted by modulating the coefficients of the mid frequencies. The modulated image is inverse wavelet transformed and inverse moment normalized to generate the watermarked image, which is then ready for transmission. The proposed scheme can be used to validate identification cards and financial instruments. The performance of this scheme has been evaluated using a set of parameters. Experimental results show the effectiveness of this scheme.

Keywords: Image moments, wavelets, content-based watermarking, moment normalization, geometric attacks.

362 Multi-Agent Simulation of Wayfinding for Rescue Operation during Building Fire

Authors: G. Sokhansefat, M. Delavar, M. Banedj-Schafii

Abstract:

Recent research on human wayfinding has focused mainly on mental representations rather than on the processes of wayfinding. The objective of this paper is to demonstrate the rationale behind applying the multi-agent simulation paradigm to the modeling of rescue team wayfinding, in order to develop a computational theory of perceptual wayfinding in crisis situations using image schemata and affordances, which explains how people find a specific destination in an unfamiliar building such as a hospital. The hypothesis of this paper is that successful navigation is possible if the agents are able to make the correct decisions through well-defined cues in critical cases, so the design of the building signage is evaluated through multi-agent-based simulation. In addition, a special case of wayfinding in a building, finding one's way through three hospitals, is used to demonstrate the model. Thereby, the total rescue time for a rescue operation during a building fire is computed. This paper discusses the computed rescue times for various signage localizations and provides experimental results for the optimization of building signage design. The most appropriate signage design resulted in the shortest total rescue time in the various situations.

Keywords: Multi-Agent system (MAS), Spatial Cognition, Wayfinding, Indoor Environment, Geospatial Information System (GIS).

361 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators, such as Loess, is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion (AIC) is proposed, where the criterion is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
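
A hedged sketch of the backward elimination idea is given below, with the AIC computed from an ordinary least-squares fit standing in for the paper's additive or nonparametric estimators; the toy data and noise level are placeholder assumptions.

```python
import numpy as np

def aic_linear(X, y):
    """AIC of an ordinary least-squares fit (Gaussian likelihood, up to constants)."""
    n, k = X.shape
    design = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return n * np.log(np.mean(resid ** 2)) + 2 * (k + 1)

def backward_elimination(X, y):
    """Drop the covariate whose removal lowers the AIC the most, until no drop helps."""
    selected = list(range(X.shape[1]))
    while len(selected) > 1:
        current = aic_linear(X[:, selected], y)
        scores = [(aic_linear(X[:, [j for j in selected if j != i]], y), i)
                  for i in selected]
        best_aic, worst_var = min(scores)
        if best_aic >= current:
            return selected
        selected.remove(worst_var)
    return selected

# Toy data: y depends on covariates 0 and 2 only.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)
# Typically selects [0, 2]; AIC may occasionally retain a spurious covariate.
print("selected covariates:", backward_elimination(X, y))
```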

Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.

360 Effect of Transmission Codes on Hybrid SC/MRC Diversity Reception MQAM system over Rayleigh Fading Channels

Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal

Abstract:

In this paper, the effect of transmission codes on the performance of coherent square M-ary quadrature amplitude modulation (CSMQAM) under hybrid selection/maximal-ratio combining (H-S/MRC) diversity is analysed. The fading channels are modeled as frequency non-selective, slow, independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise (AWGN). The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM under H-S/MRC diversity by plotting error probabilities versus average signal-to-noise ratio (SNR) for various values of L and N, in order to examine the improvement in the performance of the digital communication system as the number of selected diversity branches is increased. The results for no diversity, conventional SC and Lth-order MRC schemes are also plotted for comparison. The closed-form analytical results derived in this paper are sufficiently simple and can therefore be evaluated numerically without any approximations. The analytical results presented in this paper are expected to provide useful information for the design and analysis of digital communication systems over wireless fading channels.

Keywords: Error probability, diversity reception, Rayleigh fading channels, wireless digital communications.

359 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtering earthquake strong ground motions with the wavelet transform is one approach to reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, although the approximation then induces larger errors. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered with various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic responses of the sample structures with the exact responses. The exact responses are computed by dynamic analysis of the structures using the unfiltered strong ground motion.
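
A hedged sketch of one filtration pass is shown below: the accelerogram is decomposed with a discrete wavelet and the finest-scale detail coefficients (the "non-effective" high frequencies) are discarded before reconstruction. The record is synthetic rather than the Northridge motion, and the wavelet and sampling rate are placeholder assumptions.

```python
import numpy as np
import pywt

def wavelet_filter(accel, wavelet="db4"):
    """One filtration pass: keep the approximation band, zero the finest detail band."""
    approx, detail = pywt.dwt(accel, wavelet)
    return pywt.idwt(approx, np.zeros_like(detail), wavelet)

# Synthetic strong ground motion: a low-frequency pulse plus high-frequency content.
dt = 0.02                                        # 50 Hz sampling (assumed)
t = np.arange(0, 20, dt)
accel = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t) + 0.3 * np.sin(2 * np.pi * 15.0 * t)

filtered = wavelet_filter(accel)
print(len(accel), len(filtered))   # same length; the 15 Hz content is largely suppressed
```

In practice the downsampled approximation coefficients themselves can also be analysed directly with a doubled time step, which is where the reduction in analysis time comes from.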

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

358 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance

Authors: Loai AbdAllah, Mahmoud Kaiyal

Abstract:

Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem; most of them replace the missing values with a fixed value computed from the observed values. In our work, we used a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps to improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya's recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, a distance function between incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between objects when some of them contain missing values, we integrated it within the framework of the k nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
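
A hedged, per-coordinate sketch of the idea is shown below; it is an illustrative simplification (diagonal covariance, independence assumptions) rather than the authors' exact Bhattacharyya-based formula, and the toy data, class labels, and k are placeholder assumptions.

```python
import numpy as np

def incomplete_distance(a, b, mu, var):
    """Per-coordinate distance between possibly-incomplete vectors a and b.

    Both values known   -> squared standardized (1-D Mahalanobis) difference.
    One value missing   -> expected squared standardized difference of the known value
                           to the coordinate's distribution (illustrative simplification).
    Both values missing -> expected squared standardized difference of two independent
                           draws from the coordinate's distribution, i.e. 2.
    """
    d = 0.0
    for ai, bi, m, v in zip(a, b, mu, var):
        if not np.isnan(ai) and not np.isnan(bi):
            d += (ai - bi) ** 2 / v
        elif np.isnan(ai) and np.isnan(bi):
            d += 2.0
        else:
            known = bi if np.isnan(ai) else ai
            d += ((known - m) ** 2 + v) / v
    return np.sqrt(d)

# kNN with the custom distance on a toy incomplete dataset.
X = np.array([[1.0, 2.0], [1.1, np.nan], [5.0, 6.0], [np.nan, 6.2]])
y = np.array([0, 0, 1, 1])
mu, var = np.nanmean(X, axis=0), np.nanvar(X, axis=0) + 1e-9
query = np.array([1.05, np.nan])
dists = [incomplete_distance(query, x, mu, var) for x in X]
k = 3
votes = y[np.argsort(dists)[:k]]
print("predicted class:", np.bincount(votes).argmax())
```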

Keywords: Missing values, distance metric, Bhattacharyya distance.
