Search results for: Base Input Reconstruction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2249


2249 Visual Hull with Imprecise Input

Authors: Peng He

Abstract:

Imprecision is a long-standing problem in CAD design and in high-accuracy image-based reconstruction applications. The visual hull, the closed shape equivalent to the silhouettes of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Under this framework, we present an efficient algorithm to generate the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual-hull-from-polyhedra problem can be efficiently solved in the context of imprecise input.

Keywords: Geometric Domain, Computer Vision, Computational Geometry, Visual Hull, Image-Based reconstruction, Imprecise Input, CAD object
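
As a toy illustration of the visual hull idea discussed above (ignoring the domain-theoretic handling of imprecision, which is the paper's actual contribution), the following sketch intersects the 2D viewing wedges induced by each camera's silhouette of a small polygonal object; shapely, the camera positions and the object are assumptions made for this example.

```python
# Minimal 2D visual hull sketch: intersect the viewing wedges (silhouette cones) of three
# cameras. The object, camera positions and the use of shapely are assumptions for this demo;
# the paper's imprecise (interval) input vertices are not modeled here.
import numpy as np
from shapely.geometry import Polygon

def viewing_wedge(cam, pts, reach=100.0):
    """Triangle approximating the viewing cone from `cam` through the silhouette extent of `pts`."""
    ang = np.arctan2(pts[:, 1] - cam[1], pts[:, 0] - cam[0])
    lo, hi = ang.min(), ang.max()          # silhouette extent; assumes it spans well under 180 degrees
    far = [(cam[0] + reach * np.cos(a), cam[1] + reach * np.sin(a)) for a in (lo, hi)]
    return Polygon([tuple(cam)] + far)

object_pts = np.array([[0.0, 0.0], [1.0, 0.2], [0.8, 1.0], [0.1, 0.9]])       # toy object
cameras = [np.array(c) for c in [(-5.0, 0.5), (0.5, -5.0), (0.5, 5.0)]]       # three viewpoints

hull = viewing_wedge(cameras[0], object_pts)
for cam in cameras[1:]:
    hull = hull.intersection(viewing_wedge(cam, object_pts))                  # silhouette-consistent region

print("visual hull area:", round(hull.area, 3), "| object area:", round(Polygon(object_pts).area, 3))
```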

2248 3D Object Model Reconstruction Based on Polywogs Wavelet Network Parametrization

Authors: Mohamed Othmani, Yassine Khlifi

Abstract:

This paper presents a technique for compact three-dimensional (3D) object model reconstruction using wavelet networks. It consists of transforming the input surface vertices into signals and using wavelet network parameters for signal approximation. To this end, we use a wavelet network architecture based on several mother wavelet families. POLYnomials WindOwed with Gaussians (POLYWOG) wavelet families are used to maximize the probability of selecting the best wavelets, which ensures good generalization of the network. To achieve a better reconstruction, the network is trained over several iterations to optimize the wavelet network parameters until the error criterion is small enough. Experimental results show that the proposed technique can effectively reconstruct irregular 3D object models when using the optimized wavelet network parameters. We also show that reconstruction accuracy depends on the choice of the mother wavelets.

Keywords: 3D object, optimization, parametrization, Polywog wavelets, reconstruction, wavelet networks.
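
The following sketch illustrates the wavelet-network idea on a 1D toy signal, using a polynomial-windowed-Gaussian mother wavelet as a stand-in for the POLYWOG family; for simplicity the translations and dilations are fixed on a grid and only the output weights are solved by least squares, whereas the paper optimizes the network parameters iteratively.

```python
# Toy wavelet-network fit: psi(x) = x * exp(-x^2/2) is used as a POLYWOG-style mother wavelet
# (an assumption for this sketch); translations/dilations are fixed and weights solved linearly.
import numpy as np

def polywog(x):
    return x * np.exp(-0.5 * x ** 2)

t = np.linspace(-1.0, 1.0, 400)
signal = np.sin(3 * np.pi * t) * np.exp(-t ** 2)                 # placeholder "surface scanline"

translations = np.linspace(-1.0, 1.0, 25)
dilation = 0.08
Phi = polywog((t[:, None] - translations[None, :]) / dilation)   # one column per wavelon
weights, *_ = np.linalg.lstsq(Phi, signal, rcond=None)

reconstruction = Phi @ weights
rmse = np.sqrt(np.mean((reconstruction - signal) ** 2))
print(f"RMS reconstruction error: {rmse:.4f}")
```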

2247 Preparation of Computer Model of the Aircraft for Numerical Aeroelasticity Tests – Flutter

Authors: M. Rychlik, R. Roszak, M. Morzynski, M. Nowak, H. Hausa, K. Kotecki

Abstract:

This article presents the geometry and structure reconstruction procedure of an aircraft model for flutter research (based on the I22-IRYDA aircraft). Reverse engineering techniques and advanced surface-modeling CAD tools are used for the reconstruction. The authors discuss all stages of the data acquisition process and the computation and analysis of the measured data. A three-dimensional structured-light scanner was used for acquisition. In the following sections, details of the reconstruction process are presented. The geometry reconstruction procedure transforms the measured input data (a point cloud) into a three-dimensional parametric computer model (a NURBS solid model) compatible with CAD systems. In parallel with the geometry of the aircraft, the internal structure (structural model) is extracted and modeled. In the last section, the evaluation of the obtained models is discussed.

Keywords: computer modeling, numerical simulation, Reverse Engineering, structural model
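
As a much-simplified stand-in for the NURBS surface reconstruction described above (and not the authors' CAD toolchain), the sketch below fits a smooth bicubic B-spline surface to a noisy point cloud with SciPy; the synthetic points and smoothing factor are assumptions.

```python
# Fit a smooth bicubic B-spline surface to scattered "scanned" points (synthetic here).
# This approximates the point-cloud -> parametric-surface step; true NURBS (rational) surfaces
# and trimming, as used in CAD reconstruction, are beyond this sketch.
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)
y = rng.uniform(0.0, 1.0, 2000)
z = 0.1 * np.sin(np.pi * x) * (1.0 - y) + 0.005 * rng.standard_normal(x.size)  # wing-like patch + noise

tck = bisplrep(x, y, z, kx=3, ky=3, s=x.size * 0.005 ** 2)   # smoothing chosen from the noise level
xg = np.linspace(0.0, 1.0, 50)
yg = np.linspace(0.0, 1.0, 50)
zg = bisplev(xg, yg, tck)                                     # evaluate the reconstructed surface
print("reconstructed surface grid:", zg.shape)
```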

2246 About Methods of Additional Mining Pressure Figuring while Reconstruction of Tunnels

Authors: M. Moistsrapishvili, I. Ugrekhelidze, T. Baramashvili, D. Malaghuradze

Abstract:

At the end of the 20th century, the development of transport corridors and the improvement of their technical parameters became a pressing issue. To this end, many countries, Georgia among them, are constructing new highways and railways and reconstructing and modernizing their existing transport infrastructure. The engineering structures (bridges and tunnels) on the existing routes need to be examined, as they are very old. This report covers the particularities of tunnel reconstruction, a topic we consider important for the modernization of the existing road infrastructure. It should be noted that the existing methods for determining mining pressure were developed for the construction of new tunnels, yet the additional mining pressure that arises during reconstruction must also be taken into account. This report presents methods for calculating the additional mining pressure during the reconstruction of tunnels; a computer program was developed, and it was determined that during tunnel reconstruction the additional mining pressure amounts to about one third of the main mining pressure.

Keywords: Mining pressure, Reconstruction of tunnels.

2245 Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images

Authors: Ali Zifan, Panos Liatsis, Panagiotis Kantartzis, Manolis Gavaises, Nicos Karcanias, Demosthenes Katritsis

Abstract:

We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. During the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method, based on the information of the Hessian matrix, is introduced for the enhancement of the vascular structure. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines combined with disparity map information allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.

Keywords: Vessel enhancement, centerline extraction, symbiotic reconstruction.
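
A compact sketch of the enhancement, thresholding and thinning stages is given below, with scikit-image's Frangi filter standing in for the paper's Hessian-based multiscale vesselness measure; the placeholder image, quantiles and scales are assumptions, and the pruning and stereo-correspondence stages are omitted.

```python
# Vessel enhancement -> quantile-based hysteresis thresholding -> skeletonization (thinning).
# data.camera() is only a placeholder; a real X-ray angiogram frame is assumed in practice.
import numpy as np
from skimage import data, filters, morphology

angiogram = data.camera() / 255.0
vesselness = filters.frangi(angiogram, sigmas=range(1, 6), black_ridges=True)  # Hessian-based measure

low, high = np.quantile(vesselness, [0.90, 0.98])        # hysteresis thresholds from image quantiles
vessels = filters.apply_hysteresis_threshold(vesselness, low, high)

centerlines = morphology.skeletonize(vessels)            # thinning; branch pruning not shown
print("centerline pixels:", int(centerlines.sum()))
```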

2244 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations

Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova

Abstract:

The reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), limited by the availability or feasibility of obtaining a large number of projections. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates a better result than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits in using the nonconvex regularizer in sparse-view CT reconstruction.

Keywords: Computed tomography, sparse-view reconstruction, L1 −L2 minimization, non-convex, difference of convex functions.
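
The sketch below illustrates the difference-of-convex treatment of the L1 − L2 regularizer on a generic linear inverse problem: the concave term −||x||2 is linearized at each outer step and the remaining convex problem is handled by proximal-gradient (soft-thresholding) iterations. A random matrix stands in for the CT projection operator, and proximal gradient replaces the paper's ADMM inner solver.

```python
# DC algorithm for  min 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2):
# the outer loop linearizes -||x||_2 via its subgradient, the inner loop runs ISTA on the convex part.
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
n, m, lam = 200, 80, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)            # stand-in for the projection operator
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2                  # 1 / Lipschitz constant of the data term
for outer in range(30):
    g = x / (np.linalg.norm(x) + 1e-12)                 # subgradient of ||x||_2 at the current iterate
    for inner in range(50):
        grad = A.T @ (A @ x - b) - lam * g
        x = soft(x - step * grad, step * lam)           # proximal step for lam*||x||_1
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```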

2243 CT Reconstruction from a Limited Number of X-Ray Projections

Authors: Tao Quang Bang, Insu Jeon

Abstract:

X-ray computed tomography (CT) is a well-established visualization technique in medicine and nondestructive testing. However, since CT scanning requires sampling of radiographic projections from different viewing angles, common CT systems with mechanically moving parts are too slow for dynamic imaging, for instance of multiphase flows or live animals. A large number of X-ray projections are needed to reconstruct CT images, so the collection and processing of the projection data consume considerable time and are harmful to the patient. To address this problem, in this study we propose a method for tomographic reconstruction of a sample from a limited number of X-ray projections using linear interpolation. In simulation, we present reconstruction from an experimental X-ray CT scan of an aluminum phantom in two steps: the X-ray projections are first interpolated using linear interpolation, and the interpolated data are then used for CT reconstruction based on the Ordered Subsets Expectation Maximization (OSEM) method.

Keywords: CT reconstruction, X-ray projections, Interpolation technique, OSEM
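
The angular-interpolation step is easy to sketch on a standard phantom: a sparsely sampled sinogram is densified by linear interpolation along the angle axis and then reconstructed. Filtered back-projection (iradon) is used below as a stand-in for the OSEM reconstruction, and the Shepp-Logan phantom replaces the aluminum phantom.

```python
# Interpolate missing projection angles in a sinogram, then reconstruct.
# FBP (iradon) stands in for OSEM; Shepp-Logan stands in for the aluminum phantom.
import numpy as np
from scipy.interpolate import interp1d
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)
sparse_angles = np.linspace(0.0, 180.0, 20, endpoint=False)     # limited number of projections
dense_angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sparse_sino = radon(phantom, theta=sparse_angles)
interp = interp1d(sparse_angles, sparse_sino, axis=1, kind="linear",
                  bounds_error=False, fill_value="extrapolate")
dense_sino = interp(dense_angles)                               # interpolated projections

recon = iradon(dense_sino, theta=dense_angles, filter_name="ramp")
print("reconstruction RMSE:", float(np.sqrt(np.mean((recon - phantom) ** 2))))
```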

2242 Data Mining Determination of Sunlight Average Input for Solar Power Plant

Authors: Fl. Loury, P. Sablonière, C. Lamoureux, G. Magnier, Th. Gutierrez

Abstract:

A method is proposed to extract faithful representative patterns from a data set of observations that suffer from non-negligible fluctuations. Supposing the time interval between measurements to be extremely small compared to the observation time, it consists in first defining a subset of intermediate time intervals characterizing coherent behavior. Projecting the data onto these intervals gives a set of curves, out of which an ideally "perfect" one is constructed by taking their supremum. Comparison with the average real curve in the corresponding interval then gives an efficiency parameter expressing the degradation caused by the fluctuations. The method is applied to sunlight data collected at a specific location, where the ideal sunlight is the one resulting from direct exposure at the location's latitude over the year, and the efficiency results from the action of meteorological parameters, mainly cloudiness, at different periods of the year. The extracted information already provides useful elements for decision making, before being used for the analysis of plant control.

Keywords: Base Input Reconstruction, Data Mining, Efficiency Factor, Information Pattern Operator.
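
A minimal numpy rendering of the pattern-extraction idea is shown below: daily curves on a common time grid, an "ideal" curve taken as their pointwise supremum, and an efficiency factor as the ratio of the observed average to that ideal. All data are synthetic placeholders for the measured sunlight records.

```python
# Sup-envelope "ideal" curve and efficiency factor from fluctuating daily measurements.
import numpy as np

rng = np.random.default_rng(2)
hours = np.linspace(0.0, 24.0, 97)
clear_sky = np.clip(np.sin(np.pi * (hours - 6.0) / 12.0), 0.0, None)     # idealized daily shape

# 30 days of measurements; cloudiness modeled as random attenuation plus noise (placeholder data)
days = clear_sky * rng.uniform(0.3, 1.0, size=(30, 1)) + 0.02 * rng.standard_normal((30, hours.size))
days = np.clip(days, 0.0, None)

ideal = days.max(axis=0)                      # sup limit over the observed curves
average = days.mean(axis=0)                   # average real curve
efficiency = average.sum() / ideal.sum()      # degradation due to fluctuations
print(f"efficiency factor over the period: {efficiency:.2f}")
```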

2241 Social Capital in Housing Reconstruction Post Disaster Case of Yogyakarta Post Earthquake

Authors: Ikaputra

Abstract:

This paper focuses on the concept of social capital, especially for housing reconstruction after a disaster. The context of the study is Indonesia, with Yogyakarta after the 2006 earthquake as a case, but it is expected that the concept can be adopted for post-disaster reconstruction in general. The discussion begins by addressing issues of house reconstruction after disasters in Indonesia and Yogyakarta; defining social capital as a concept for effective community-based management capacity; describing social capital after the Java earthquake, drawing on Gotong Royong, the community's mutual self-help tradition; and outlining an approach and strategy towards community-based reconstruction.

Keywords: Community empowerment, Gotong Royong, post disaster, reconstruction, social capital, Yogyakarta-Indonesia.

2240 Recognition and Reconstruction of Partially Occluded Objects

Authors: Michela Lecca, Stefano Messelodi

Abstract:

A new automatic system for the recognition and reconstruction of rescaled and/or rotated partially occluded objects is presented. The objects to be recognized are described by 2D views, and each view is occluded by several half-planes. The whole object views and their visible parts (linear cuts) are then stored in a database. To establish whether a region R of an input image represents a possibly occluded object, the system generates a set of linear cuts of R and compares them with the elements in the database. Each linear cut of R is associated with the most similar database linear cut. R is recognized as an instance of the object O if the majority of the linear cuts of R are associated with linear cuts of views of O. In the case of recognition, the system reconstructs the occluded part of R and determines the scale factor and the orientation in the image plane of the recognized object view. The system has been tested on two different datasets of objects, showing good performance both in terms of recognition and reconstruction accuracy.

Keywords: Occluded Object Recognition, Shape Reconstruction, Automatic Self-Adaptive Systems, Linear Cut.
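
The recognition-by-voting step described above can be sketched as a nearest-neighbour search followed by a majority vote; the cut descriptors below are plain random feature vectors, i.e., purely illustrative placeholders for the system's linear cuts.

```python
# Match each linear cut of the query region to its nearest database cut, then vote on the label.
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)
db_cuts = rng.random((300, 16))                # descriptors of stored linear cuts (placeholder)
db_labels = rng.integers(0, 5, 300)            # object/view label of each stored cut

query_cuts = db_cuts[db_labels == 2][:20] + 0.01 * rng.standard_normal((20, 16))   # occluded region

dists = np.linalg.norm(db_cuts[None, :, :] - query_cuts[:, None, :], axis=2)
nearest = dists.argmin(axis=1)                 # most similar database cut for each query cut
label, votes = Counter(db_labels[nearest]).most_common(1)[0]
print(f"recognized object {label} with {votes}/{len(query_cuts)} cut votes")
```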

2239 A Study on Applying 3D Reconstruction to 3D Last Morphing

Authors: Shih-Wen Hsiao, Rong-Qi Chen, Chien-Yu Lin

Abstract:

The last is regarded as the critical foundation of shoe design and development. A computer-aided methodology for designing various last forms is proposed in this study. Reverse engineering is applied to scan the last form. Then, minimizing an energy term to preserve surface continuity, the last surface is reconstructed from the feature curves of the scanned last. When the surface reconstruction of the last is completed, the weighted arithmetic mean method is applied to morph the control mesh of the last, so that 3D last forms of different sizes are generated from the original form while its functional features are retained. Finally, the result of this study is applied in a 3D last reconstruction system. The practicability of the proposed methodology is verified through case studies.

Keywords: Reverse engineering, Surface reconstruction, Surface continuity, Shape morphing.
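
For control meshes with the same topology and vertex correspondence, the weighted-arithmetic-mean morphing reduces to a convex combination of corresponding vertices; the sketch below uses random placeholder meshes rather than reconstructed lasts.

```python
# Shape morphing of a last control mesh by weighted arithmetic mean of corresponding vertices.
import numpy as np

rng = np.random.default_rng(3)
last_a = rng.random((500, 3))                                 # placeholder control mesh (size A)
last_b = last_a * np.array([1.08, 1.05, 1.10])                # the same last graded to a larger size

def morph(mesh_a, mesh_b, w):
    """Weighted arithmetic mean of corresponding control vertices, with 0 <= w <= 1."""
    return (1.0 - w) * mesh_a + w * mesh_b

intermediate = morph(last_a, last_b, 0.5)                     # an in-between last size
print("morphed control mesh:", intermediate.shape)
```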

2238 Current Trends in Eco-Friendly Reconstruction after the Great East Japan Earthquake

Authors: Ayaka Kamiyama, Akihiro Iijima

Abstract:

On March 11, 2011, the East coast of Japan was hit by one of the strongest earthquakes in history, followed by a devastating tsunami. Although most lifelines, infrastructure, and public facilities have been restored gradually, recovery efforts in terms of disposal of disaster waste and revival of primary industry are lagging. This study presents a summary of the damage inflicted by the earthquake and the current status of reconstruction in the disaster area. Moreover, we discuss the current trends and future perspectives on recently implemented eco-friendly reconstruction projects and focus on the pro-environmental behavior of disaster victims which is emerging as a result of the energy shortage after the earthquake. Finally, we offer ideas for initiatives for the next stage of the reconstruction policies.

Keywords: Agriculture, Disaster wastes, Pro-environmental behavior, Reconstruction policies.

2237 Tomographic Images Reconstruction Simulation for Defects Detection in Specimen

Authors: Kedit J.

Abstract:

This paper presents a simulation of tomographic image reconstruction for defect detection in a specimen. The specimen is a thin cylindrical steel part containing low-density materials. The defects in the material are simulated in three shapes. The specimen image function is transformed into projection data. The Radon transform and its inverse provide the mathematical basis for reconstructing tomographic images from projection data. The simulation results show that the reconstructed images are adequate for defect detection.

Keywords: Tomography, Tomography Reconstruction, Radon Transform

2236 New Efficient Iterative Optimization Algorithm to Design the Two Channel QMF Bank

Authors: Ram Kumar Soni, Alok Jain, Rajiv Saxena

Abstract:

This paper proposes an efficient method for the design of a two-channel quadrature mirror filter (QMF) bank. To achieve a minimum reconstruction error, close to perfect reconstruction, a linear optimization process is proposed. The prototype low-pass filter is designed using the Kaiser window function. A modified algorithm is developed to optimize the reconstruction error using a linear objective function through an iterative method. The results obtained show that the performance of the proposed algorithm is better than that of existing methods.

Keywords: Filter bank, near perfect reconstruction, Kaiser window, QMF.

2235 Analytical Analysis of Image Representation by Their Discrete Wavelet Transform

Authors: R. M. Farouk

Abstract:

In this paper, we present an analytical study of the representation of images by the magnitudes of their discrete wavelet transform. Such a representation serves as a model for complex cells in the early stage of visual processing and is of high technical usefulness for image understanding, because it makes the representation insensitive to small local shifts. We find that if the signals are band-limited and of zero mean, then reconstruction from the magnitudes is unique up to the sign for almost all signals. We also present an iterative reconstruction algorithm which yields very good reconstruction, up to the sign and minor numerical errors in the very low frequencies.

Keywords: Wavelets, image processing, signal processing, image reconstruction.
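
The alternating-projection flavour of such an iterative algorithm can be sketched as follows: impose the known DWT magnitudes (keeping the current signs), then project back onto zero-mean band-limited signals. This is an assumed illustration in the Gerchberg-Saxton spirit, not the authors' exact algorithm, and it may stagnate on some inputs.

```python
# Iterative reconstruction of a zero-mean, band-limited signal from DWT coefficient magnitudes:
# alternate between (i) imposing the known magnitudes and (ii) band-limiting / zero-mean projection.
import numpy as np
import pywt

rng = np.random.default_rng(4)
n, cutoff = 256, 16                                        # keep only the lowest 16 frequency bins
t = np.linspace(0.0, 1.0, n, endpoint=False)
x_true = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)

mags = [np.abs(c) for c in pywt.wavedec(x_true, "db4", level=4)]   # only magnitudes are stored

x = rng.standard_normal(n)
for _ in range(300):
    coeffs = pywt.wavedec(x, "db4", level=4)
    coeffs = [m * np.sign(c) for m, c in zip(mags, coeffs)]        # impose magnitudes, keep signs
    x = pywt.waverec(coeffs, "db4")[:n]
    X = np.fft.rfft(x)
    X[0] = 0.0                                                     # zero-mean constraint
    X[cutoff:] = 0.0                                               # band-limitation constraint
    x = np.fft.irfft(X, n)

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) / np.linalg.norm(x_true)
print(f"relative error up to a global sign: {err:.3f}")
```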

2234 Algebraic Approach for the Reconstruction of Linear and Convolutional Error Correcting Codes

Authors: Johann Barbier, Guillaume Sicot, Sebastien Houcke

Abstract:

In this paper we present a generic approach to the problem of blindly estimating the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary only has access to the noisy transmission he has intercepted. The interceptor has no knowledge of the parameters used by the legitimate users, so before gaining access to the information he must first blindly estimate the parameters of the error correcting code of the communication. The main advantage of the presented approach is that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal and also explain why some of them have theoretical complexities greater than those observed experimentally.

Keywords: Blind estimation parameters, error correcting codes, non-cooperative context, reconstruction algorithm
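
One classical reconstruction technique of the kind analyzed in the paper is the GF(2) rank test: the intercepted bit stream is folded into matrices of candidate widths, and a rank deficiency reveals the block length of a linear code. The sketch below assumes a noise-free, synchronized stream and a toy (7,4) code; both are assumptions made for the illustration.

```python
# Blind detection of a linear block code's length via GF(2) rank deficiency (noise-free sketch).
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (M.copy() % 2).astype(np.int64)
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        others = (M[:, c] == 1) & (np.arange(rows) != rank)
        M[others] ^= M[rank]                      # eliminate the pivot column from the other rows
        rank += 1
    return rank

rng = np.random.default_rng(8)
G = np.hstack([np.eye(4, dtype=np.int64), rng.integers(0, 2, (4, 3))])   # toy (7,4) systematic code
stream = ((rng.integers(0, 2, (400, 4)) @ G) % 2).ravel()                 # intercepted codeword stream

for n_cand in range(3, 12):                       # candidate block lengths
    rows = len(stream) // n_cand
    r = gf2_rank(stream[: rows * n_cand].reshape(rows, n_cand))
    flag = "  <-- rank deficient" if r < n_cand else ""
    print(f"n = {n_cand:2d}: rank {r} / {n_cand}{flag}")
```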

2233 Simulation for Input-Output Energy Structure in Agriculture: Bangladesh

Authors: M. S. Alam, M. R. Alam, Nusrat Jahan Imu

Abstract:

This paper presents a computer simulation model based on system dynamics methodology for analyzing the dynamic characteristics of the input energy structure in agriculture; Bangladesh is used here as a case study for model validation. The model provides an input energy structure linking the major energy flows with human energy and draft energy from cattle, as well as tractors and/or power tillers, irrigation, chemical fertilizer and pesticide. The evaluation is made in terms of different energy-dependent indicators. During the simulation period, the energy input to agriculture increased from 6.1 to 19.15 GJ/ha, i.e., a 2.14-fold increase, while the corresponding energy output in terms of food, fodder and fuel increased from 71.55 to 163.58 GJ/ha, i.e., a 1.28-fold increase, from the base year. This result indicates that the energy input in Bangladeshi agricultural production is increasing faster than the energy output. Problems such as global warming, nutrient loading and pesticide pollution can be associated with this increasing input. For an assessment, a comparative statement of input energy use in the agriculture of developed countries (DCs) and least developed countries (LDCs), including Bangladesh, has been made. The performance of the model is found satisfactory for analyzing the agricultural energy system of LDCs.

Keywords: Agriculture, energy indicator, system dynamics, energy flows.

2232 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even fewer on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The auto-encoders contain Long-Short Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. Performances of the model are assessed a posteriori through F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
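
A minimal sketch of the per-sensor reconstruction idea is given below, with assumed window length, layer sizes and synthetic data: an LSTM autoencoder is trained on normal windows of one sensor, and the reconstruction-error signal is then turned into features for a random-forest classifier. Here a single error feature and placeholder labels are used, whereas the paper combines errors from several sensors and labels anomalies from the data center's history.

```python
# Per-sensor LSTM autoencoder + reconstruction-error features + random-forest classification.
# Data, labels and hyper-parameters are placeholders for this sketch.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

window, n_features = 60, 1
normal_windows = np.random.default_rng(7).standard_normal((1000, window, n_features)).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),                                             # encoder
    tf.keras.layers.RepeatVector(window),
    tf.keras.layers.LSTM(32, return_sequences=True),                      # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal_windows, normal_windows, epochs=5, batch_size=64, verbose=0)

recon = autoencoder.predict(normal_windows, verbose=0)
errors = np.abs(recon - normal_windows).mean(axis=(1, 2)).reshape(-1, 1)  # one error feature per window

labels = (errors[:, 0] > np.quantile(errors, 0.95)).astype(int)           # placeholder labels
clf = RandomForestClassifier(n_estimators=100).fit(errors, labels)
print("training accuracy (placeholder task):", clf.score(errors, labels))
```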

2231 Sustainable Development in Disaster Affected Rural Areas: The Case of Dinar Villages

Authors: Nese Dikmen

Abstract:

Post-disaster reconstruction projects offer opportunities to facilitate physical, social and economic development and to reduce future hazard vulnerability long after the disaster. The sustainability of the post-disaster reconstruction project conducted in the villages of Dinar following the 1995 earthquake is investigated in this paper. Government officials who were involved in the project were interviewed. In addition, two field surveys were conducted in 12 villages of Dinar in the winter months of 2008. Beneficiaries were interviewed, and the physical, socio-cultural and economic impacts of the reconstruction were examined. The research revealed that the post-disaster reconstruction project has negative aspects from the point of view of sustainability: physical, socio-cultural and economic factors were not considered during the decision-making process of the project.

Keywords: Dinar, Post-disaster reconstruction, Sustainable development, Turkey.

2230 End-to-End Pyramid Based Method for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have brought tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.

Keywords: Accelerate MRI scans, image reconstruction, pyramid network, deep learning.

2229 A Fast and Robust Protocol for Reconstruction and Re-Enactment of Historical Sites

Authors: S. I. Abu Alasal, M. M. Esbeih, E. R. Fayyad, R. S. Gharaibeh, M. Z. Ali, A. A. Freewan, M. M. Jamhawi

Abstract:

This research proposes a novel reconstruction protocol for restoring missing surfaces and low-quality edges and shapes in photos of artifacts at historical sites. The protocol starts with the extraction of a cloud of points. This extraction process is based on four subordinate algorithms, which differ in robustness and in the amount of resulting data. Moreover, they offer different, but complementary, accuracy with respect to related features and to the way they build a quality mesh. The performance of our proposed protocol is compared with other state-of-the-art algorithms and toolkits. The statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the resulting object files used to reconstruct the desired model.

Keywords: Meshes, Point Clouds, Surface Reconstruction Protocols, 3D Reconstruction.
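
The point-cloud-to-mesh stage can be prototyped with an off-the-shelf pipeline such as Open3D's normal estimation plus Poisson surface reconstruction; the file names, parameters and the choice of Open3D are assumptions, not the protocol's four subordinate algorithms.

```python
# Point cloud -> mesh with Open3D (normal estimation + Poisson reconstruction); a generic
# prototype of the surface-reconstruction stage, not the paper's protocol.
import open3d as o3d

pcd = o3d.io.read_point_cloud("artifact_points.ply")            # hypothetical cloud from the photos
pcd = pcd.voxel_down_sample(voxel_size=0.005)                   # even out point density
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.filter_smooth_taubin(number_of_iterations=10)       # light smoothing of the raw mesh
o3d.io.write_triangle_mesh("artifact_mesh.ply", mesh)
print("vertices:", len(mesh.vertices), "triangles:", len(mesh.triangles))
```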

2228 Partial 3D Reconstruction using Evolutionary Algorithms

Authors: Mónica Pérez-Meza, Rodrigo Montúfar-Chaveznava

Abstract:

When reconstructing a scenario, it is necessary to know the structure of the elements present in the scene in order to interpret it. In this work we link 3D scene reconstruction to evolutionary algorithms through stereo vision theory. We consider stereo vision as a method that provides the reconstruction of a scene using only a pair of images of the scene and some computation. Through several images of a scene, captured from different positions, stereo vision can give us an idea of the three-dimensional characteristics of the world. Stereo vision usually requires two cameras, by analogy with the mammalian vision system. In this work we employ only one camera, which is translated along a path, capturing images at fixed intervals. As we cannot perform all the computations required for an exhaustive reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene in real time. The algorithm employed is the fly algorithm, which uses "flies" to reconstruct the principal characteristics of the world following certain evolutionary rules.

Keywords: 3D Reconstruction, Computer Vision, Evolutionary Algorithms, Stereo Vision.

2227 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide high fidelity signal reconstruction for speech signals; these strategies circumvent the problem of exponentially increasing number of samples as the bit-depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction; including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances the numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.

Keywords: Level crossing sampling, numerical stability, speech processing, trigonometric polynomial.
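
The basic ingredients (level-crossing sampling, removal of closely spaced samples, linear-interpolation reconstruction) can be illustrated with a short numpy sketch; the signal, the 4-bit level grid and the fixed minimum-interval rule are assumptions standing in for the paper's RCLC scheme.

```python
# Level-crossing sampling of a toy signal, crude redundancy removal, and reconstruction by
# linear interpolation between the retained samples.
import numpy as np

fs = 16000
t = np.arange(0.0, 0.05, 1.0 / fs)
x = 0.6 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 550 * t)   # stand-in for speech

levels = np.linspace(-1.0, 1.0, 2 ** 4 + 1)          # 4-bit uniform level grid
crossings = [i for i in range(1, len(x))
             if np.any((levels > min(x[i - 1], x[i])) & (levels < max(x[i - 1], x[i])))]

min_gap, kept, last = 8, [], -10 ** 9                 # drop samples that arrive too close together
for i in crossings:
    if i - last >= min_gap:
        kept.append(i)
        last = i

x_rec = np.interp(t, t[kept], x[kept])                # linear-interpolation reconstruction
snr = 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))
print(f"kept {len(kept)} of {len(x)} samples, reconstruction SNR ~ {snr:.1f} dB")
```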

2226 Laboratory Investigation of the Pavement Condition in Lebanon: Implementation of Reclaimed Asphalt Pavement in the Base Course and Asphalt Layer

Authors: Marinelle El-Khoury, Lina Bouhaya, Nivine Abbas, Hassan Sleiman

Abstract:

The road network in the north of Lebanon is a prime example of the lack of pavement design and execution in Lebanon.  These roads show major distresses and hence, should be tested and evaluated. The aim of this research is to investigate and determine the deficiencies in road surface design in Lebanon, and to propose an environmentally friendly asphalt mix design. This paper consists of several parts: (i) evaluating pavement performance and structural behavior, (ii) identifying the distresses using visual examination followed by laboratory tests, (iii) deciding the optimal solution where rehabilitation or reconstruction is required and finally, (iv) identifying a sustainable method, which uses recycled material in the proposed mix. The asphalt formula contains Reclaimed Asphalt Pavement (RAP) in the base course layer and in the asphalt layer. Visual inspection of the roads in Tripoli shows that these roads face a high level of distress severity. Consequently, the pavement should be reconstructed rather than simply rehabilitated. Coring was done to determine the pavement layer thickness. The results were compared to the American Association of State Highway and Transportation Officials (AASHTO) design methodology and showed that the existing asphalt thickness is lower than the required asphalt thickness. Prior to the pavement reconstruction, the road materials were tested according to the American Society for Testing and Materials (ASTM) specification to identify whether the materials are suitable. Accordingly, the ASTM tests that were performed on the base course are Sieve analysis, Atterberg limits, modified proctor, Los Angeles, and California Bearing Ratio (CBR) tests. Results show a CBR value higher than 70%. Hence, these aggregates could be used as a base course layer. The asphalt layer was also tested and the results of the Marshall flow and stability tests meet the ASTM specifications. In the last section, an environmentally friendly mix was proposed. An optimal RAP percentage of 30%, which produced a well graded base course and asphalt mix, was determined through a series of trials.

Keywords: Asphalt mix, reclaimed asphalt pavement, California bearing ratio, sustainability.

2225 Near Perfect Reconstruction Quadrature Mirror Filter

Authors: A. Kumar, G. K. Singh, R. S. Anand

Abstract:

In this paper, various algorithms for designing quadrature mirror filters are reviewed, and a new algorithm is presented for the design of a near perfect reconstruction quadrature mirror filter bank. In the proposed algorithm, the objective function is formulated using the perfect reconstruction condition, or the magnitude response condition of the prototype filter at the frequency ω = 0.5π, in the ideal case. The cutoff frequency is iteratively changed to adjust the filter coefficients using an optimization algorithm. The performance of the proposed algorithm is evaluated in terms of computation time, reconstruction error and number of iterations. The design examples illustrate that the proposed algorithm is superior in terms of peak reconstruction error, computation time, and number of iterations. The proposed algorithm is simple, easy to implement, and linear in nature.

Keywords: Aliasing cancellation filter bank, filter banks, quadrature mirror filter (QMF), subband coding.
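
A minimal version of the iterative design idea is sketched below: a Kaiser-window prototype is redesigned while its cutoff is adjusted by bisection until the magnitude response at ω = 0.5π reaches 1/√2, and the peak deviation of |H0|² + |H1|² from unity is reported as the reconstruction error. The filter length, β and the bisection rule are assumptions, not the paper's exact optimization.

```python
# Iterative (bisection) tuning of a Kaiser-window QMF prototype so that |H0(e^{j*pi/2})| = 1/sqrt(2).
import numpy as np
from scipy.signal import firwin, freqz

numtaps, beta = 64, 8.0

def design(cutoff):
    h0 = firwin(numtaps, cutoff, window=("kaiser", beta))    # prototype low-pass filter
    _, h = freqz(h0, worN=[0.5 * np.pi])
    return h0, abs(h[0])

lo, hi = 0.30, 0.70                        # cutoff search range (normalized, Nyquist = 1)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    h0, mag = design(mid)
    if mag > 1.0 / np.sqrt(2.0):           # response at 0.5*pi too high -> lower the cutoff
        hi = mid
    else:
        lo = mid

h1 = h0 * (-1.0) ** np.arange(numtaps)     # mirror filter H1(z) = H0(-z)
w, H0 = freqz(h0, worN=1024)
_, H1 = freqz(h1, worN=1024)
peak_err = np.max(np.abs(np.abs(H0) ** 2 + np.abs(H1) ** 2 - 1.0))
print(f"cutoff ~ {mid:.4f}, peak reconstruction error ~ {peak_err:.2e}")
```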

2224 On the Reduction of Side Effects in Tomography

Authors: V. Masilamani, C. Vanniarajan, Kamala Krithivasan

Abstract:

As computed tomography (CT) normally requires hundreds of projections to reconstruct the image, patients are exposed to more X-ray energy, which may cause side effects such as cancer. Even when the variability of the particles in the object is very low, computed tomography requires many projections for a good quality reconstruction. In this paper, the low variability of the particles in an object is exploited to obtain a good quality reconstruction. Although the reconstructed image and the original image have the same projections, in general they need not be the same. If, in addition to the projections, a priori information about the image is known, it is possible to obtain a good quality reconstructed image. This paper shows by experimental results why conventional algorithms fail to reconstruct from a few projections, and gives an efficient polynomial-time algorithm to reconstruct a bi-level image from its projections along rows and columns, a known sub-image of the unknown image, and smoothness constraints, by reducing the reconstruction problem to an integral max-flow problem. The paper also discusses necessary and sufficient conditions for uniqueness, and the extension of 2D bi-level image reconstruction to 3D.

Keywords: Discrete Tomography, Image Reconstruction, Projection, Computed Tomography, Integral Max Flow Problem, Smooth Binary Image.
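
The max-flow reduction for the basic problem (a binary image from its row and column sums, without the smoothness and known-subimage constraints) is easy to sketch with networkx: source-to-row edges carry the row sums, row-to-column edges have unit capacity, and column-to-sink edges carry the column sums.

```python
# Reconstruct a binary matrix from its row/column projections via an integral max-flow problem.
import numpy as np
import networkx as nx

original = (np.random.default_rng(6).random((8, 10)) > 0.6).astype(int)
row_sums, col_sums = original.sum(axis=1), original.sum(axis=0)

G = nx.DiGraph()
for i, r in enumerate(row_sums):
    G.add_edge("s", f"r{i}", capacity=int(r))
    for j in range(len(col_sums)):
        G.add_edge(f"r{i}", f"c{j}", capacity=1)        # pixel (i, j) is 1 iff this edge carries flow
for j, c in enumerate(col_sums):
    G.add_edge(f"c{j}", "t", capacity=int(c))

flow_value, flow = nx.maximum_flow(G, "s", "t")
recon = np.array([[flow[f"r{i}"].get(f"c{j}", 0) for j in range(len(col_sums))]
                  for i in range(len(row_sums))])

print("projections matched:", flow_value == int(row_sums.sum()),
      "| identical to the original:", bool(np.array_equal(recon, original)))
```

Any maximum flow of value equal to the total row sum yields an image with the prescribed projections, which need not coincide with the original; this non-uniqueness is exactly why the paper adds smoothness and known-subimage constraints.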

2223 3-D Reconstruction of Objects Using Digital Fringe Projection: Survey and Experimental Study

Authors: R. Talebi, A. Abdel-Dayem, J. Johnson

Abstract:

Three-dimensional reconstruction of small objects has been one of the most challenging problems over the last decade. Computer graphics researchers and photography professionals have been working on improving 3D reconstruction algorithms to fit the high demands of various real life applications. Medical sciences, animation industry, virtual reality, pattern recognition, tourism industry, and reverse engineering are common fields where 3D reconstruction of objects plays a vital role. Both lack of accuracy and high computational cost are the major challenges facing successful 3D reconstruction. Fringe projection has emerged as a promising 3D reconstruction direction that combines low computational cost to both high precision and high resolution. It employs digital projection, structured light systems and phase analysis on fringed pictures. Research studies have shown that the system has acceptable performance, and moreover it is insensitive to ambient light. This paper presents an overview of fringe projection approaches. It also presents an experimental study and implementation of a simple fringe projection system. We tested our system using two objects with different materials and levels of details. Experimental results have shown that, while our system is simple, it produces acceptable results.

Keywords: Digital fringe projection, 3D reconstruction, phase unwrapping, phase shifting.
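
A minimal four-step phase-shifting sketch is shown below: the wrapped phase is computed from four synthetic fringe images and then unwrapped with scikit-image. The synthetic object, fringe period and intensities are assumptions, and the calibration that maps phase to height is omitted.

```python
# Four-step phase shifting: wrapped phase = atan2(I4 - I2, I1 - I3), followed by 2D phase unwrapping.
import numpy as np
from skimage.restoration import unwrap_phase

h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
height = 3.0 * np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * 40.0 ** 2))   # toy object
true_phase = 2 * np.pi * xx / 24.0 + height               # carrier fringes + object-induced shift

shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [128.0 + 100.0 * np.cos(true_phase + s) for s in shifts]   # the four captured fringe images

wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
unwrapped = unwrap_phase(wrapped)
print("recovered phase range (rad):", float(unwrapped.max() - unwrapped.min()))
```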

2222 Optimization of a Triangular Fin with Variable Fin Base Thickness

Authors: Hyung Suk Kang

Abstract:

A triangular fin with variable fin base thickness is analyzed and optimized using a two-dimensional analytical method. The influence of fin base height and fin base thickness on the temperature in the fin is presented. For fixed fin volumes, the maximum heat loss, the corresponding optimum fin effectiveness, fin base height and fin tip length are presented as functions of the fin base thickness, convection characteristic number and dimensionless fin volume. One of the results shows that the optimum heat loss increases whereas the corresponding optimum fin effectiveness decreases with increasing fin volume.

Keywords: A triangular fin, Convection characteristic number, Heat loss, Fin base thickness.
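
As a simplified stand-in for the paper's two-dimensional analysis, the classical one-dimensional triangular-fin relations (Bessel-function fin efficiency) can be used to reproduce the fixed-volume optimization numerically; the material and convection values below are assumptions.

```python
# For a fixed fin volume, sweep the base thickness of a straight triangular fin and find the
# thickness that maximizes heat loss, using the classical 1D fin-efficiency formula
# eta = I1(2mL) / (mL * I0(2mL)) with m = sqrt(2h/(k*t)); a 1D stand-in for the 2D analysis.
import numpy as np
from scipy.special import iv

k, h_conv, width, theta_b = 200.0, 50.0, 1.0, 40.0   # W/m-K, W/m^2-K, m, K (assumed values)
volume = 2.0e-4                                       # fixed fin volume, m^3 (assumed)

t_base = np.linspace(0.002, 0.02, 200)                # candidate base thicknesses, m
L = 2.0 * volume / (width * t_base)                   # triangular profile: V = w * t * L / 2
mL = np.sqrt(2.0 * h_conv / (k * t_base)) * L
eta = iv(1, 2.0 * mL) / (mL * iv(0, 2.0 * mL))        # fin efficiency
A_fin = 2.0 * width * np.sqrt(L ** 2 + (t_base / 2.0) ** 2)
q = eta * h_conv * A_fin * theta_b                    # heat loss for each candidate thickness

best = int(np.argmax(q))
print(f"optimum base thickness ~ {t_base[best] * 1e3:.2f} mm, heat loss ~ {q[best]:.1f} W")
```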

2221 Effect of Heat Input on the Weld Metal Toughness of Chromium-Molybdenum Steel

Authors: M. S. Kaiser

Abstract:

An attempt has been made to determine the strength and impact properties of Cr-Mo steel weld and base materials by varying the current during manual metal arc welding. Toughness measurements of the base, heat affected zone (HAZ) and weld zones are made over a temperature range from -32 to 100°C at three current settings. It is observed that the notch toughness of every zone deteriorates as the temperature decreases. The notch toughness values for all zones at -32°C are almost the same for all current settings. The notch toughness values in the HAZ area are higher than those in the weld area, owing to the coarsening of the ferrite grains in the HAZ that occurs with higher heat input. From the microhardness and microstructure results, it can be concluded that the large inclusion content in the weld deposit is the cause of the lower notch toughness values.

Keywords: Chromium-Molybdenum steel, post-weld heat treatment, heat affected zone, microstructure.

2220 Design Method for Knowledge Base Systems in Education Using COKB-ONT

Authors: Nhon Do, Tuyen Trong Tran, Phan Hoai Truong

Abstract:

Nowadays, e-learning is increasingly popular, especially in Vietnam. In e-learning, study materials are very important. It is necessary to design knowledge base systems and expert systems that support searching, querying, and problem solving. The ontology, called the Computational Object Knowledge Base Ontology (COKB-ONT), is a useful tool for designing knowledge base systems in practice. In this paper, a design method for knowledge base systems in education using COKB-ONT is presented. We also present the design of a knowledge base system that supports studying knowledge and solving problems in higher mathematics.

Keywords: artificial intelligence, knowledge base systems, ontology, educational software.
