Search results for: joint sparse reconstruction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 619

379 Stress Analysis of the Ceramics Heads with Different Sizes under the Destruction Tests

Authors: V. Fuis, P. Janicek, T. Navrat

Abstract:

The overall problem addressed is the calculation of ceramic material parameters from a set of destruction tests of ceramic heads of total hip joint endoprostheses. The standard way of calculating the material parameters is to carry out a set of 3- or 4-point bending tests on specimens cut from parts of the ceramic material to be analysed. In the case of ceramic heads, it is not possible to cut out specimens of the required dimensions because the heads are too small (if the cut-out specimens were smaller than the normalised ones, the material parameters derived from them would exhibit higher strength values than the given ceramic material really has). A special destruction device for head destruction was therefore designed, and the local problem solved here is the modification of this device based on the analysis of the tensile stress in the head for two different depths of the conical hole in the head. The goal of the modification is to shift the location of the extreme value of σ1max from the region of the hole bottom of the head to its opening. This modification will increase the credibility of the obtained material properties of the bioceramics, which will be determined from a set of head destructions using the Weibull weakest-link theory.
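As a hedged illustration of the final step, the sketch below fits a two-parameter Weibull distribution to destruction-test fracture loads; the load values, the zero location parameter and the use of SciPy are illustrative assumptions, not the authors' procedure.

```python
# Sketch: estimate Weibull modulus and scale from destruction-test fracture loads.
# The load values below are hypothetical; the authors' actual data are not given.
import numpy as np
from scipy import stats

fracture_loads_kN = np.array([38.2, 41.5, 44.1, 46.3, 47.9, 50.2, 52.8, 55.6, 58.0, 61.3])

# Two-parameter Weibull fit (location fixed at zero, as in weakest-link analyses).
shape, loc, scale = stats.weibull_min.fit(fracture_loads_kN, floc=0)

print(f"Weibull modulus m   ~ {shape:.2f}")
print(f"Characteristic load ~ {scale:.1f} kN")

# Probability of failure at a given load according to the fitted distribution.
P_f = stats.weibull_min.cdf(45.0, shape, loc=0, scale=scale)
print(f"P(failure at 45 kN) ~ {P_f:.2f}")
```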

Keywords: Ceramic heads, depth of the conical hole, destruction test, material parameters, principal stress, total hip joint endoprosthesis.

378 Adaptive Fuzzy Control of Stewart Platform under Actuator Saturation

Authors: Dongsu Wu, Hongbin Gu, Peng Li

Abstract:

A novel adaptive fuzzy trajectory tracking algorithm for a Stewart-platform-based motion platform is proposed to compensate for path deviation and degradation of the controller's performance due to actuator torque limits. The algorithm can be divided into two parts: a real-time trajectory shaping part and a joint-space adaptive fuzzy controller part. For a reference trajectory in task space, whenever any of the actuators is saturated, the desired acceleration of the reference trajectory is modified on-line using the dynamic model of the motion platform. Meanwhile, an additional action based on the difference between the nominal and modified trajectories is applied in the non-saturated region of the actuators to reduce the path error. Using the modified trajectory as input, the joint-space controller incorporates a computed-torque controller, a leg velocity observer and a fuzzy disturbance observer with saturation compensation. It can ensure stability and tracking performance of the controller in the presence of external disturbances and position-only measurement. Simulation results verify the effectiveness of the proposed control scheme.

Keywords: Actuator saturation, adaptive fuzzy control, Stewart platform, trajectory shaping, flight simulator

377 Auto Regressive Tree Modeling for Parametric Optimization in Fuzzy Logic Control System

Authors: Arshia Azam, J. Amarnath, Ch. D. V. Paradesi Rao

Abstract:

The advantage of solving complex nonlinear problems with fuzzy logic methodologies is that experience or expert knowledge, described as a fuzzy rule base, can be embedded directly into the system for dealing with the problem. This paper focuses on the current limitations of appropriate and automated design of fuzzy controllers. The structure discovery and parameter adjustment of the branched T-S fuzzy model are addressed by a hybrid technique of type-constrained sparse tree algorithms. Simulation results for different system models are evaluated, and the identification error is observed to be minimal.

Keywords: Fuzzy logic, branched T-S fuzzy model, tree modeling, complex nonlinear system.

376 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework that comprehends the SSD end to end, from design to assembly, in-line inspection and in-line testing, and that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through intense reliability margin investigation, with focus on assembly process attributes, process equipment control, in-process metrology and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, the reliability prediction, specifically a solder joint simulation, is established. The SSDs are stratified into non-operating and operating tests, with focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and by containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to customers, optimizes development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which increases customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.

375 Studying the Possibility to Weld AA1100 Aluminum Alloy by Friction Stir Spot Welding

Authors: Ahmad K. Jassim, Raheem Kh. Al-Subar

Abstract:

Friction stir welding is a modern, environmentally friendly, solid-state joining process used to join relatively light families of materials. Recently, friction stir spot welding has been used instead of resistance spot welding and has received considerable attention from the automotive industry. It is an environmentally friendly process that eliminates heat and pollution. In this research, friction stir spot welding was used to study the possibility of welding AA1100 aluminium alloy sheet of 3 mm thickness by overlapping the edges of the sheets as a lap joint. The process was performed on a drilling machine instead of a milling machine. Different tool rotational speeds of 760, 1065, 1445, and 2000 RPM were applied, with manual and automatic compression, to study their effect on the quality of the welded joints. Heat generation, applied pressure, and depth of tool penetration were measured during the welding process. The results show that it is possible to weld AA1100 sheets; however, some surface defects occurred due to insufficient welding conditions. Moreover, the relationship between rotational speed, pressure, heat generation and tool penetration depth was established.

Keywords: Friction, spot, stir, environmental, sustainable, AA1100 aluminum alloy.

374 Grid Computing for the Bi-CGSTAB Applied to the Solution of the Modified Helmholtz Equation

Authors: E. N. Mathioudakis, E. P. Papadopoulou

Abstract:

The problem addressed herein is the efficient management of the Grid/Cluster intensive computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large and sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, through MPI programming tools, is realized on a SUN V240 cluster interconnected through a 100 Mbps and 1 Gbps Ethernet network, and its performance is presented through the included speedup measurements.
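A minimal sketch of the underlying solver step, assuming a simple 5-point finite-difference discretization of the modified Helmholtz problem and an ILU preconditioner in place of the paper's Hermite collocation matrix, red-black ordering and MPI pipeline:

```python
# Sketch: preconditioned Bi-CGSTAB for a modified Helmholtz problem  -Laplace(u) + k*u = f
# on the unit square with Dirichlet boundary conditions. A 5-point finite-difference
# discretization and an ILU preconditioner stand in for the paper's Hermite collocation
# matrix and red-black pipeline, which are not reproduced here.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, k = 100, 10.0                       # interior grid points per direction, shift k
h = 1.0 / (n + 1)
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = (sp.kron(I, T) + sp.kron(T, I)) / h**2 + k * sp.identity(n * n)
b = np.ones(n * n)                     # right-hand side f = 1

M_ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)        # incomplete LU preconditioner
M = spla.LinearOperator(A.shape, M_ilu.solve)

u, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info = {info}",
      " residual norm:", np.linalg.norm(b - A @ u))
```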

Keywords: Collocation, Preconditioned Bi-CGSTAB, MPI, Grid and DSM Systems.

373 Development and Validation of a HPLC Method for 6-Gingerol and 6-Shogaol in Joint Pain Relief Gel Containing Ginger (Zingiber officinale)

Authors: Tanwarat Kajsongkram, Saowalux Rotamporn, Sirinat Limbunruang, Sirinan Thubthimthed

Abstract:

A High Performance Liquid Chromatography (HPLC) method was developed and validated for the simultaneous estimation of 6-gingerol (6G) and 6-shogaol (6S) in a joint pain relief gel containing ginger extract. The chromatographic separation was achieved using a C18 column (150 x 4.6 mm i.d., 5 μm, Luna) with a mobile phase containing acetonitrile and water (gradient elution). The flow rate was 1.0 ml/min and the absorbance was monitored at 282 nm. The proposed method was validated in terms of analytical parameters such as specificity, accuracy, precision, linearity, range, limit of detection (LOD) and limit of quantification (LOQ), determined according to the International Conference on Harmonisation (ICH) guidelines. Linearity of 6G and 6S was obtained over the ranges 20-60 and 6-18 μg/ml, respectively. Good linearity was observed over the above-mentioned ranges, with linear regression equations Y = 11016x - 23778 for 6G and Y = 19276x - 19604 for 6S (x is the concentration of the analyte in μg/ml and Y is the peak area). The correlation coefficient was found to be 0.9994 for both markers. The LOD and LOQ were 0.8567 and 2.8555 μg/ml for 6G, and 0.3672 and 1.2238 μg/ml for 6S, respectively. The recoveries of 6G and 6S were 91.57 to 102.36% and 84.73 to 92.85% over the three spiked levels. The RSD values from repeated extractions were 3.43% for 6G and 3.09% for 6S. The validation of the developed method for precision, accuracy, specificity, linearity, and range gave well-accepted results.
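As a small worked example, the sketch below back-calculates analyte concentrations from peak areas using the calibration lines reported above; the peak-area values fed in are hypothetical.

```python
# Sketch: back-calculate analyte concentration (µg/ml) from HPLC peak area
# using the calibration lines reported in the abstract: Y = slope*x + intercept.
def concentration(peak_area: float, slope: float, intercept: float) -> float:
    """Invert Y = slope*x + intercept to get x = (Y - intercept)/slope."""
    return (peak_area - intercept) / slope

# Calibration constants from the abstract.
CAL = {"6-gingerol": (11016.0, -23778.0), "6-shogaol": (19276.0, -19604.0)}

# Hypothetical peak areas from one injection of the gel extract.
areas = {"6-gingerol": 395_000.0, "6-shogaol": 215_000.0}

for analyte, area in areas.items():
    slope, intercept = CAL[analyte]
    print(f"{analyte}: {concentration(area, slope, intercept):.2f} µg/ml")
```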

Keywords: Ginger, 6-gingerol, HPLC, 6-shogaol.

372 Some Computational Results on MPI Parallel Implementation of Dense Simplex Method

Authors: El-Said Badr, Mahmoud Moussa, Konstantinos Paparrizos, Nikolaos Samaras, Angelo Sifaleras

Abstract:

There are two major variants of the simplex algorithm: the revised method and the standard, or tableau, method. Today, all serious implementations are based on the revised method because it is more efficient for sparse linear programming problems. However, a number of applications lead to dense linear programs, so our aim in this paper is to present some computational results on a parallel implementation of the dense simplex method. Our implementation runs on an SMP cluster using the C programming language and the Message Passing Interface (MPI). Preliminary computational results on randomly generated dense linear programs support our approach.

Keywords: Linear Programming, MPI, Parallel Implementation, Simplex Algorithm.

371 Apply Super-SVA to SAR Imaging with Both Aperture Gaps and Bandwidth Gaps

Authors: Wenshuai Zhai, Yunhua Zhang

Abstract:

Synthetic aperture radar (SAR) imaging usually requires echo data collected continuously, pulse by pulse, over a certain bandwidth. In real situations, however, data collection or part of the signal spectrum can be interrupted for various reasons, i.e., there will be gaps in the spatial spectrum. In this case we need to find ways to fill the resulting gaps and obtain an image with the specified resolution. In this paper we introduce our work on applying the iterative spatially variant apodization (Super-SVA) technique to extrapolate the spatial spectrum in both the azimuth and range directions so as to fill the gaps and obtain a correct radar image.

Keywords: SAR imaging, sparse aperture, stepped frequency chirp signal, high resolution, Super-SVA

370 Restarted GMRES Method Augmented with the Combination of Harmonic Ritz Vectors and Error Approximations

Authors: Qiang Niu, Linzhang Lu

Abstract:

Restarted GMRES methods augmented with approximate eigenvectors are widely used for solving large sparse linear systems. Recently, a new scheme of augmenting with error approximations was proposed. The main aim of this paper is to develop a restarted GMRES method augmented with the combination of harmonic Ritz vectors and error approximations. We demonstrate that the resulting combined method gains the advantages of both approaches: (i) it effectively deflates the eigenvalues of small magnitude that may hamper the convergence of the method, and (ii) it partially recovers the global optimality lost due to restarting. The effectiveness and efficiency of the new method are demonstrated through various numerical examples.

Keywords: Arnoldi process, GMRES, Krylov subspace, systems of linear equations.

369 The Reconstruction of New Agegraphic and Gauss-Bonnet Dark Energy Models with a Special Power Law Expansion

Authors: V. Fayaz, F. Felegary

Abstract:

In this work we study the correspondence between the energy density of the new agegraphic and the Gauss-Bonnet dark energy models in a flat universe. We reconstruct Λ and ωΛ for these models under the special power-law expansion a(t) = a0 t^h0.

Keywords: Dark energy, new agegraphic, Gauss-Bonnet, late time universe

368 Symbolic Analysis of Large Circuits Using Discrete Wavelet Transform

Authors: Ali Al-Ataby, Fawzi Al-Naima

Abstract:

Symbolic Circuit Analysis (SCA) is a technique used to generate the symbolic expression of a network. It has become a well-established technique in circuit analysis and design. The symbolic expression of a network offers an excellent way to perform frequency response analysis, sensitivity computation, stability measurement, performance optimization, and fault diagnosis. Many approaches have been proposed in the area of SCA, offering different features and capabilities. Numerical interpolation methods are very common in this context, especially those using the Fast Fourier Transform (FFT). The aim of this paper is to present a method for SCA that relies on the Wavelet Transform (WT) as a mathematical tool to generate the symbolic expression for large circuits while minimizing the analysis time by reducing the number of computations.

Keywords: Numerical Interpolation, Sparse Matrices, Symbolic Analysis, Wavelet Transform.

367 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To portray present life, the virtual environment is generated through the presented scheme for rapid and efficient construction of 360° panoramic views. An acoustical heritage model and a crowd model are then incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. The keystone of this research, however, is a virtual walkthrough that shows virtual present life in 2D and virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray present life in the 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered in 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and evaluated through user-centred evaluation by navigating around the demonstration system. The results of the questionnaire-based evaluation show that the presented virtual walkthrough was successfully deployed through a multi-modal simulation, and that such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
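A minimal sketch of the classical Boid rules (separation, alignment, cohesion) that the crowd behaviour builds on; the 2D NumPy implementation, weights and neighbourhood radius are illustrative assumptions and do not reproduce the paper's enhanced model.

```python
# Sketch: one update step of the classical Boid flocking rules in 2D.
# Weights and radii are illustrative; the paper's enhanced model is not reproduced.
import numpy as np

def boids_step(pos, vel, radius=2.0, w_sep=0.05, w_ali=0.05, w_coh=0.01,
               dt=0.1, max_speed=1.5):
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        neigh = (dist > 0) & (dist < radius)
        if not neigh.any():
            continue
        sep = -d[neigh].sum(axis=0)                 # steer away from close neighbours
        ali = vel[neigh].mean(axis=0) - vel[i]      # match neighbours' average velocity
        coh = pos[neigh].mean(axis=0) - pos[i]      # move toward the local centre of mass
        new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
print("mean position after 100 steps:", pos.mean(axis=0))
```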

Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage.

366 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, the inferior myocardium was un-diagnosable because of high accumulation overlapping from the liver. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
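A minimal sketch of the described subtraction arithmetic, assuming the per-minute reconstructions are available as NumPy arrays; the variable names, frame indices and clipping to non-negative values are assumptions.

```python
# Sketch: time-subtraction arithmetic on per-minute SPECT reconstructions.
# `frames` is assumed to hold ten 1-minute reconstructions, shape (10, z, y, x).
import numpy as np

def time_subtraction(frames: np.ndarray) -> np.ndarray:
    early = frames[0:3].sum(axis=0)                   # ~0.5-2.5 min: liver dominates (index choice assumed)
    late = frames[5:10].sum(axis=0)                   # 5-10 min: liver + myocardium
    liver_only = np.clip(early - late, 0, None)       # approximate liver-only image
    corrected = np.clip(late - liver_only, 0, None)   # myocardium with liver suppressed
    return corrected

frames = np.random.poisson(5.0, size=(10, 32, 64, 64)).astype(float)  # toy data
print(time_subtraction(frames).shape)
```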

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector.

365 Optimal Sliding Mode Controller for Knee Flexion During Walking

Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem

Abstract:

This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.
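A minimal sketch of a conventional sliding mode control law applied to a pendulum-like shank model; the parameter values and gains are assumed, and the paper's multi-objective tuning of the controller is not reproduced.

```python
# Sketch: sliding mode control of knee flexion, modelling the shank as a damped
# pendulum:  J*th'' + b*th' + m*g*l*sin(th) = u.  Parameters and gains are assumed.
import numpy as np

J, b, m, g, l = 0.35, 0.08, 3.5, 9.81, 0.25   # inertia, damping, mass, gravity, CoM distance
lam, k, phi = 8.0, 12.0, 0.05                 # surface slope, switching gain, boundary layer

def smc_torque(th, dth, th_d, dth_d, ddth_d):
    e, de = th - th_d, dth - dth_d
    s = de + lam * e                                                   # sliding surface
    u_eq = J * (ddth_d - lam * de) + b * dth + m * g * l * np.sin(th)  # equivalent control
    return u_eq - k * np.tanh(s / phi)                                 # smoothed switching term

# Simulate tracking of a sinusoidal knee-angle reference with explicit Euler steps.
dt, T = 1e-3, 5.0
th, dth = 0.0, 0.0
for i in range(int(T / dt)):
    t = i * dt
    th_d, dth_d, ddth_d = 0.5 * np.sin(t), 0.5 * np.cos(t), -0.5 * np.sin(t)
    u = smc_torque(th, dth, th_d, dth_d, ddth_d)
    ddth = (u - b * dth - m * g * l * np.sin(th)) / J
    th, dth = th + dt * dth, dth + dt * ddth
print(f"final tracking error: {abs(th - 0.5 * np.sin(T)):.4f} rad")
```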

Keywords: Optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons.

364 An Evaluation of TIG Welding Parametric Influence on Tensile Strength of 5083 Aluminium Alloy

Authors: Lakshman Singh, Rajeshwar Singh, Naveen Kumar Singh, Davinder Singh, Pargat Singh

Abstract:

Tungsten Inert Gas (TIG) welding is a high-quality welding process used to weld thin metals and their alloys. 5083 aluminium alloys play an important role in the engineering and metallurgy fields because of their excellent corrosion resistance, ease of fabrication and high specific strength, coupled with the best combination of toughness and formability.

The TIG welding technique is one of the precise and fastest processes used in the aerospace, ship and marine industries. In this study, the TIG welding process was used to weld 5083 Al-alloy specimens with dimensions of 100 mm long x 15 mm wide x 5 mm thick, and the data were analyzed to evaluate the influence of the input parameters on the tensile strength of the welded joints. Welding current (I), gas flow rate (G) and welding speed (S) are the input parameters that affect the tensile strength of 5083 Al-alloy welded joints. As the welding speed increased, the tensile strength first increased up to an optimum value and then decreased with further increase in welding speed. The results of the study show that a maximum tensile strength of 129 MPa is obtained at a welding current of 240 A, a gas flow rate of 7 L/min and a welding speed of 98 mm/min. These are the optimum values of the input parameters, which help produce efficient weld joints with good mechanical properties such as tensile strength.

Keywords: 5083 Aluminium alloy, Gas flow rate, TIG welding, Welding current, Welding speed and Tensile strength.

363 A Hybrid CamShift and l1-Minimization Video Tracking Algorithm

Authors: Clark Van Dam, Gagan Mirchandani

Abstract:

The Continuously Adaptive Mean-Shift (CamShift) algorithm, incorporating scene depth information, is combined with the l1-minimization sparse-representation-based method to form a hybrid kernel- and state-space-based tracking algorithm. We take advantage of the increased efficiency of the former and the robustness-to-occlusion property of the latter. A simple interchange scheme transfers control between the algorithms based upon drift and occlusion likelihood, quantified by the projection of target candidates onto a depth map of the 2D scene obtained with a low-cost stereo vision webcam. The result is improved tracking, in terms of drift, over each algorithm individually in a challenging practical outdoor test case with multiple occlusions.
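A minimal sketch of the CamShift half of such a hybrid tracker, using OpenCV's built-in implementation; the input file, initial window and histogram settings are assumptions, and the l1-minimization branch, depth projection and interchange scheme are only indicated by a comment.

```python
# Sketch: the CamShift half of the hybrid tracker, using OpenCV's built-in
# implementation. The l1-minimization branch, the depth-map occlusion test and
# the interchange scheme described in the abstract are not reproduced here.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")            # hypothetical input file
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 80                 # assumed initial target window
track_window = (x, y, w, h)

# Hue histogram of the initial region serves as the target model.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # A real hybrid tracker would hand control to the l1 tracker here when the
    # back-projection mass inside the window (a drift/occlusion cue) drops.
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("CamShift", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```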

Keywords: CamShift, l1-minimization, particle filter, stereo vision, video tracking.

362 Reliability of Eyewitness Statements in Fire and Explosion Investigations

Authors: Jeff D. Colwell, Benjamin W. Knox

Abstract:

While fire and explosion incidents are often observed by eyewitnesses, the weight that fire investigators should place on those observations in their investigations is a complex issue. There is no doubt that eyewitness statements can be an important component of an investigation, particularly when other evidence is sparse, as is often the case when damage to the scene is severe. However, it is well known that eyewitness statements can be incorrect for a variety of reasons, including deception. In this paper, we review factors that can affect the complex processes associated with the perception, retention, and retrieval of an event. We then review the accuracy of eyewitness statements from unique criminal and civil incidents, including fire and explosion incidents, in which the accuracy of the statements could be independently evaluated. Finally, the motives for deceptive eyewitness statements are described, along with techniques that fire and explosion investigators can employ to increase the accuracy of the eyewitness statements they solicit.

Keywords: Explosion, eyewitness, fire, reliability.

361 Optimization of Process Parameters for Friction Stir Welding of Cast Alloy AA7075 by Taguchi Method

Authors: Dhairya Partap Sing, Vikram Singh, Sudhir Kumar

Abstract:

This investigation proposes the friction stir welding technique to solve fusion welding problems. The objectives of this investigation are the fabrication of an AA7075-10 wt.% silicon carbide (SiC) aluminium metal matrix composite and the optimization of the process parameters of friction stir welded AA7075-10 wt.% SiC composites. The composites were prepared by the mechanical stir casting process. Experiments were performed with four process parameters, namely tool rotational speed, weld speed, axial force and tool geometry, considering three levels of each. The quality characteristic considered is the joint efficiency (JE). The welding experiments were conducted using an L27 orthogonal array. The orthogonal array and design of experiments were used to determine the welding parameters that give the optimal JE. The joints fabricated using a rotational speed of 1500 rpm, a welding speed of 1.3 mm/sec, an axial force of 7 kN and a square tool geometry give the best results. The experimental results reveal that tool rotation speed, welding speed and axial force are the significant process parameters affecting the welding performance. The predicted optimal value of the percentage JE is 95.621. Confirmation tests were also carried out to verify the results.
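A minimal sketch of the larger-the-better signal-to-noise ratio and main-effect ranking commonly used in Taguchi analyses; the factor levels and joint-efficiency values below are hypothetical, not the paper's measurements.

```python
# Sketch: larger-the-better S/N ratio and main effects for a Taguchi analysis.
# The factor levels and joint-efficiency (JE) responses below are hypothetical.
import numpy as np
import pandas as pd

def sn_larger_is_better(y):
    """S/N = -10 * log10( mean(1/y^2) ) for larger-the-better responses."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# A few rows of an L27-style table: factor levels (1..3) and measured JE (%).
runs = pd.DataFrame({
    "rpm":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "speed": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "force": [1, 2, 3, 2, 3, 1, 3, 1, 2],
    "JE":    [82.1, 85.4, 80.9, 90.3, 93.8, 88.7, 87.5, 91.2, 86.0],
})
runs["SN"] = runs["JE"].apply(lambda v: sn_larger_is_better([v]))

# Main effect of each factor = mean S/N at each of its levels; the level with
# the highest mean S/N is the (predicted) optimal setting for that factor.
for factor in ("rpm", "speed", "force"):
    effect = runs.groupby(factor)["SN"].mean()
    print(factor, "best level:", int(effect.idxmax()),
          "delta:", round(effect.max() - effect.min(), 3))
```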

Keywords: Metal matrix composite, axial force, joint efficiency, rotational speed, traverse speed, tool geometry.

360 Evolutionary Feature Selection for Text Documents using the SVM

Authors: Daniel I. Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, we present three feature selection methods: Information Gain, Support Vector Machine feature selection (called SVM_FS) and Genetic Algorithm with SVM (called GA_SVM). We show that the best results were obtained with the GA_SVM method for a relatively small dimension of the feature vector.

Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, Genetic Algorithm, and Classification.

359 Feature Selection Methods for an Improved SVM Classifier

Authors: Daniel Morariu, Lucian N. Vintan, Volker Tresp

Abstract:

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, three feature selection methods are evaluated: Random Selection, Information Gain (IG) and Support Vector Machine feature selection (called SVM_FS). We show that the best results were obtained with the SVM_FS method for a relatively small dimension of the feature vector. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
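A minimal sketch of this kind of pipeline in scikit-learn, with mutual information standing in for Information Gain and a tiny in-memory corpus standing in for a real document collection:

```python
# Sketch: information-gain-style feature selection on sparse text vectors,
# followed by a linear SVM. Mutual information stands in for Information Gain,
# and the tiny in-memory corpus is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "the matrix is sparse and the solver is iterative",
    "krylov methods solve large sparse linear systems",
    "the striker scored a late goal in the match",
    "the team won the football match after extra time",
]
labels = [0, 0, 1, 1]            # 0 = numerical computing, 1 = sport (toy classes)

clf = make_pipeline(
    TfidfVectorizer(),                        # documents -> large sparse vectors
    SelectKBest(mutual_info_classif, k=10),   # keep the 10 highest-scoring features
    LinearSVC(),                              # linear SVM on the reduced vectors
)
clf.fit(docs, labels)
print(clf.predict(["a sparse iterative solver for linear systems"]))
```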

Keywords: Feature Selection, Learning with Kernels, Support Vector Machine, and Classification.

358 GPS Signal Correction to Improve Vehicle Location during Experimental Campaign

Authors: L. Della Ragione, G. Meccariello

Abstract:

In recent years, the progress of the automobile industry in Italy in the field of reducing emission values has been remarkable. Nevertheless, the evaluation and reduction of emissions remain a key problem, especially in cities, which account for more than 50% of the world's population. In this paper we deal with the problem of describing a quantitative approach for the reconstruction of GPS coordinates and altitude, in the context of a correlation study between driving cycles, emissions and geographical location, during an experimental campaign realized with instrumented cars.

Keywords: Air pollution, Driving cycles, GPS signal.

357 Producing Outdoor Design Conditions Based on the Dependency between Meteorological Elements: Copula Approach

Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura

Abstract:

It is common to use outdoor design weather data to select the air-conditioning capacity at the building design stage. The meteorological elements of outdoor design weather data are usually selected based on their exceedance frequency separately, while the dependency between the elements is not well considered. This means that the simultaneous occurrence probability of these elements is smaller than the original exceedance frequency, which may cause an overestimation when selecting the air-conditioning capacity. Therefore, the copula approach, which can capture the dependency between multivariate data, was used to model the joint distributions of the meteorological elements, such as air temperature and global solar radiation. We suggest a method of selecting more credible outdoor design conditions based on the specific simultaneous occurrence probability of these two elements. The hourly weather data at 12 noon from 2001 to 2010 in Tokyo, Japan are used to analyze the dependency structure and joint distribution; the Gaussian copula represents the dependence of the data best. By calculating the air temperature and global solar radiation at a specific simultaneous occurrence probability and by the conventional exceedance method, the results show that both the air temperature and the global solar radiation based on the simultaneous occurrence probability are lower than those based on the conventional method at the same probability.
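A minimal sketch of the Gaussian-copula calculation, assuming synthetic temperature/radiation data and empirical marginals; it only illustrates how a joint (simultaneous) exceedance probability differs from the separate exceedance levels.

```python
# Sketch: fit a Gaussian copula to paired temperature / solar-radiation data and
# compute the joint (simultaneous) exceedance probability of a design condition.
# The synthetic data and the empirical-marginal treatment are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 3600                                    # toy stand-in for 10 years of noon records
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
temp = 28 + 4 * z[:, 0]                     # degrees C (synthetic)
rad = np.clip(650 + 200 * z[:, 1], 0, None) # W/m^2 (synthetic)

# Normal scores from empirical ranks, then the copula correlation.
def normal_scores(x):
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

rho = np.corrcoef(normal_scores(temp), normal_scores(rad))[0, 1]
biv = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])

def joint_exceedance(t_design, r_design):
    u = np.mean(temp <= t_design)           # empirical marginal CDF values
    v = np.mean(rad <= r_design)
    c_uv = biv.cdf([stats.norm.ppf(u), stats.norm.ppf(v)])   # Gaussian copula C(u, v)
    return 1.0 - u - v + c_uv               # P(T > t and R > r)

t99, r99 = np.quantile(temp, 0.99), np.quantile(rad, 0.99)
print(f"separate 1% levels: T = {t99:.1f} C, R = {r99:.0f} W/m2")
print(f"joint exceedance probability: {joint_exceedance(t99, r99):.4f}")
```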

Keywords: Copula approach, Design weather database, energy conservation, HVAC.

356 Accelerating Sparse Matrix Vector Multiplication on Many-Core GPUs

Authors: Weizhi Xu, Zhiyong Liu, Dongrui Fan, Shuai Jiao, Xiaochun Ye, Fenglong Song, Chenggang Yan

Abstract:

Many-core GPUs provide high computing ability and substantial bandwidth; however, optimizing irregular applications like SpMV on GPUs is a difficult but meaningful task. In this paper, we propose a novel method to improve the performance of SpMV on GPUs. A new storage format called HYB-R is proposed to exploit the GPU architecture more efficiently. The COO portion of the matrix is partitioned recursively into an ELL portion and a COO portion in the process of creating the HYB-R format, to ensure that as many non-zeros as possible are stored in ELL format. How to partition the matrix is an important problem for the HYB-R kernel, so we also tune the partitioning parameters for higher performance. Experimental results show that our method can achieve better performance than the fastest kernel (HYB) in NVIDIA's SpMV library, with speedups of up to 17%.
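A CPU-side sketch of the basic HYB idea that HYB-R refines (rows up to a width K go into an ELL block, the overflow into COO); the CUDA kernels and the recursive repartitioning are not shown, and the matrix and the choice of K are illustrative.

```python
# Sketch (CPU side): split a sparse matrix into an ELL part of fixed row width K
# plus a COO part holding the overflow -- the basic HYB idea that HYB-R refines
# recursively. The GPU kernels themselves are not shown.
import numpy as np
import scipy.sparse as sp

def hyb_split(A: sp.csr_matrix, K: int):
    n_rows = A.shape[0]
    ell_cols = np.zeros((n_rows, K), dtype=np.int64)
    ell_vals = np.zeros((n_rows, K), dtype=A.dtype)
    coo_r, coo_c, coo_v = [], [], []
    for i in range(n_rows):
        cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
        vals = A.data[A.indptr[i]:A.indptr[i + 1]]
        ell_cols[i, :min(K, len(cols))] = cols[:K]
        ell_vals[i, :min(K, len(vals))] = vals[:K]
        coo_r.extend([i] * max(0, len(cols) - K))   # overflow entries go to COO
        coo_c.extend(cols[K:]); coo_v.extend(vals[K:])
    return (ell_cols, ell_vals), (np.array(coo_r), np.array(coo_c), np.array(coo_v))

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
K = int(np.percentile(np.diff(A.indptr), 90))       # row width covering ~90% of rows
(ell_c, ell_v), (r, c, v) = hyb_split(A, K)
print(f"K = {K}, ELL slots = {ell_v.size}, COO overflow entries = {v.size}")
```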

Keywords: GPU, HYB-R, Many-core, Performance Tuning, SpMV

355 Some Preconditioners for Block Pentadiagonal Linear Systems Based on New Approximate Factorization Methods

Authors: Xian Ming Gu, Ting Zhu Huang, Hou Biao Li

Abstract:

In this paper, in order to obtain a high-efficiency parallel algorithm for solving sparse block pentadiagonal linear systems, suitable for vector and parallel processors, stair matrices are used to construct parallel polynomial approximate inverse preconditioners. These preconditioners are appropriate when the desired target is to maximize parallelism. Moreover, some theoretical results about these preconditioners are presented, and how to construct preconditioners effectively for any nonsingular block pentadiagonal H-matrix is also described. In addition, the usefulness of these preconditioners is illustrated with some numerical experiments arising from the two-dimensional biharmonic equation.

Keywords: Parallel algorithm, Pentadiagonal matrix, Polynomial approximate inverse, Preconditioners, Stair matrix.

354 An Inflatable and Foldable Knee Exosuit Based on Intelligent Management of Biomechanical Energy

Authors: Jing Fang, Yao Cui, Mingming Wang, Shengli She, Jianping Yuan

Abstract:

Wearable robotics is a potential solution for aiding gait rehabilitation in patients with lower-limb dyskinesia, such as those affected by knee osteoarthritis or stroke. Many wearable robots have been developed in the form of rigid exoskeletons, but their bulky devices, high cost and control complexity hinder their popularity in the field of gait rehabilitation. Thus, the development of a portable, compliant and low-cost wearable robot for gait rehabilitation is necessary. Inspired by Chinese traditional folding fans and balloon inflators, the authors present an inflatable, foldable and variable-stiffness knee exosuit (IFVSKE) in this paper. The pneumatic actuator of the IFVSKE was fabricated in the shape of a folding fan using thermoplastic polyurethane (TPU) fabric materials. The geometric and mechanical properties of the IFVSKE were characterized experimentally. To assist the knee joint smartly, an intelligent control profile for the IFVSKE was proposed based on the concept of full-cycle management of the biomechanical energy during human movement. The biomechanical energy of the knee joint in a walking gait cycle can be collected and released to assist the joint motion simply by adjusting the inner pressure of the IFVSKE. Finally, a healthy subject walked with and without the IFVSKE to evaluate the assisting effects.

Keywords: Biomechanical energy management, gait rehabilitation, knee exosuit, wearable robotics.

353 Extended Arithmetic Precision in Meshfree Calculations

Authors: Edward J. Kansa, Pavel Holoborodko

Abstract:

Continuously differentiable radial basis functions (RBFs) are meshfree, converge faster as the dimensionality increases, and are theoretically spectrally convergent. When implemented on current single- and double-precision computers, such RBFs can suffer from ill-conditioning because the systems of equations that must be solved to find the expansion coefficients are full. However, the Advanpix extended-precision software package allows computer mathematics to resemble asymptotically ideal Platonic mathematics. Additionally, full systems with extended precision execute faster on graphics processing units and field-programmable gate arrays because no branching is needed. Sparse equation systems are fast for iterative solvers in only a very limited number of cases.
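A minimal sketch of the same idea in Python using mpmath in place of the MATLAB-based Advanpix toolbox: a full multiquadric RBF interpolation system, which can be badly conditioned in double precision, is solved directly at 50 decimal digits. The nodes, test function and shape parameter are assumptions.

```python
# Sketch: solve a full RBF interpolation system in extended precision with mpmath
# (the paper uses the Advanpix toolbox in MATLAB; mpmath illustrates the same idea).
from mpmath import mp, matrix, exp, lu_solve

mp.dps = 50                                  # ~50 significant decimal digits

# 1-D interpolation nodes and target function values (illustrative).
nodes = [mp.mpf(i) / 20 for i in range(21)]
f_vals = [exp(-3 * x) for x in nodes]

c = mp.mpf(2)                                # shape parameter of the multiquadric RBF
def phi(r):
    return (1 + (c * r) ** 2) ** mp.mpf("0.5")   # multiquadric basis function

# Full, symmetric interpolation matrix; often poorly conditioned in double
# precision for smooth RBFs, but solvable directly at 50 digits.
n = len(nodes)
A = matrix(n, n)
for i in range(n):
    for j in range(n):
        A[i, j] = phi(abs(nodes[i] - nodes[j]))

coeffs = lu_solve(A, matrix(f_vals))

# Evaluate the interpolant at a point not in the node set.
x = mp.mpf("0.517")
s = sum(coeffs[j] * phi(abs(x - nodes[j])) for j in range(n))
print("interpolant:", s, " true value:", exp(-3 * x))
```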

Keywords: Meshless spectrally convergent, partial differential equations, extended arithmetic precision, no branching.

352 Reconstruction of Binary Matrices Satisfying Neighborhood Constraints by Simulated Annealing

Authors: Divyesh Patel, Tanuja Srivastava

Abstract:

This paper considers the NP-hard problem of reconstructing binary matrices satisfying the exactly-1-4-adjacency constraint from their row and column projections. The problem is formulated as a maximization problem whose objective function measures how well the binary matrices satisfy the adjacency constraint. The maximization problem is solved by a simulated annealing algorithm, and experimental results are presented.
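A minimal sketch of a simulated-annealing reconstruction of this kind, with an assumed objective that rewards the exactly-1-4-adjacency property and penalizes projection mismatch; the weights and cooling schedule are illustrative, not the paper's formulation.

```python
# Sketch: simulated annealing for reconstructing a binary matrix from row and
# column sums while rewarding the exactly-1-4-adjacency property (each 1 having
# exactly one 1 among its 4-neighbours). Objective weights and the cooling
# schedule are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def adjacency_score(X):
    nb = np.zeros_like(X)
    nb[1:, :] += X[:-1, :]; nb[:-1, :] += X[1:, :]     # up / down neighbours
    nb[:, 1:] += X[:, :-1]; nb[:, :-1] += X[:, 1:]     # left / right neighbours
    return int(np.sum((X == 1) & (nb == 1)))           # ones with exactly one 1-neighbour

def objective(X, r, c, w=5.0):
    proj_err = np.abs(X.sum(1) - r).sum() + np.abs(X.sum(0) - c).sum()
    return adjacency_score(X) - w * proj_err           # to be maximized

def reconstruct(r, c, iters=20000, T0=2.0, alpha=0.9995):
    m, n = len(r), len(c)
    X = (rng.random((m, n)) < r.sum() / (m * n)).astype(int)
    best, best_val = X.copy(), objective(X, r, c)
    cur_val, T = best_val, T0
    for _ in range(iters):
        i, j = rng.integers(m), rng.integers(n)
        X[i, j] ^= 1                                    # propose a single-cell flip
        val = objective(X, r, c)
        if val >= cur_val or rng.random() < np.exp((val - cur_val) / T):
            cur_val = val
            if val > best_val:
                best, best_val = X.copy(), val
        else:
            X[i, j] ^= 1                                # reject: undo the flip
        T *= alpha
    return best, best_val

target = (rng.random((12, 12)) < 0.25).astype(int)      # toy instance
X, score = reconstruct(target.sum(1), target.sum(0))
err = np.abs(X.sum(1) - target.sum(1)).sum() + np.abs(X.sum(0) - target.sum(0)).sum()
print("projection error:", err, " objective:", score)
```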

Keywords: Discrete Tomography, exactly-1-4-adjacency, simulated annealing.

351 A Hybrid Recommender System based on Collaborative Filtering and Cloud Model

Authors: Chein-Shung Hwang, Ruei-Siang Fong

Abstract:

User-based collaborative filtering (CF), one of the most prevalent and efficient recommendation techniques, provides personalized recommendations to users based on the opinions of other users. Although the CF technique has been successfully applied in various applications, it suffers from serious sparsity problems. The cloud-model approach addresses the sparsity problem by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well with dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a hybrid approach that integrates the predictions of both the user-based CF and the cloud-model CF approaches. The experimental results show that the proposed hybrid approach can ameliorate the sparsity problem and provide improved prediction quality.
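A minimal sketch of the hybrid combination step, with a simple per-user/item average standing in for the cloud-model prediction; the toy rating matrix and the blending weight are assumptions.

```python
# Sketch: blend user-based CF predictions with a global-preference prediction.
# The cloud-model eigenvector is replaced by a simple user/item mean profile,
# so only the hybrid combination step of the abstract is illustrated.
import numpy as np

R = np.array([[5, 3, 0, 1],          # rows = users, cols = items, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4],
              [0, 1, 5, 4]], dtype=float)

def user_cf_predict(R, u, i, k=2):
    mask = R > 0
    means = R.sum(1) / np.maximum(mask.sum(1), 1)               # mean over rated items
    centered = np.where(mask, R - means[:, None], 0)
    norms = np.linalg.norm(centered, axis=1) + 1e-9
    sims = centered @ centered[u] / (norms * norms[u])          # cosine on centered ratings
    cand = [v for v in np.argsort(-sims) if v != u and R[v, i] > 0][:k]
    if not cand:
        return means[u]
    w = sims[cand]
    return means[u] + np.sum(w * (R[cand, i] - means[cand])) / (np.abs(w).sum() + 1e-9)

def global_predict(R, u, i):
    item_mean = R[:, i][R[:, i] > 0].mean() if (R[:, i] > 0).any() else R[R > 0].mean()
    user_mean = R[u][R[u] > 0].mean()
    return 0.5 * (item_mean + user_mean)     # stand-in for the cloud-model prediction

def hybrid_predict(R, u, i, lam=0.6):
    return lam * user_cf_predict(R, u, i) + (1 - lam) * global_predict(R, u, i)

print(f"predicted rating of user 0 for item 2: {hybrid_predict(R, 0, 2):.2f}")
```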

Keywords: Cloud model, Collaborative filtering, Hybrid recommender system

350 Boundary-Element-Based Finite Element Methods for Helmholtz and Maxwell Equations on General Polyhedral Meshes

Authors: Dylan M. Copeland

Abstract:

We present new finite element methods for Helmholtz and Maxwell equations on general three-dimensional polyhedral meshes, based on domain decomposition with boundary elements on the surfaces of the polyhedral volume elements. The methods use the lowest-order polynomial spaces and produce sparse, symmetric linear systems despite the use of boundary elements. Moreover, piecewise constant coefficients are admissible. The resulting approximation on the element surfaces can be extended throughout the domain via representation formulas. Numerical experiments confirm that the convergence behavior on tetrahedral meshes is comparable to that of standard finite element methods, and equally good performance is attained on more general meshes.

Keywords: Boundary elements, finite elements, Helmholtz equation, Maxwell equations.
