Search results for: pairing computation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 476

416 Numerical Computation of Sturm-Liouville Problem with Robin Boundary Condition

Authors: Theddeus T. Akano, Omotayo A. Fakinlede

Abstract:

The modelling of physical phenomena, such as the earth's free oscillations, the vibration of strings, the interaction of atomic particles, or the steady-state flow in a bar, gives rise to Sturm-Liouville (SL) eigenvalue problems. The boundary conditions of some systems, such as the convection-diffusion equation and electromagnetic and heat transfer problems, require a combination of Dirichlet and Neumann conditions; hence the incorporation of the Robin boundary condition into the analysis of Sturm-Liouville problems. This paper deals with the computation of the eigenvalues and eigenfunctions of generalized Sturm-Liouville problems with Robin boundary conditions using the finite element method. A numerical solution of the classical Sturm-Liouville problem is presented. The results agree with the exact solution, and higher precision is achieved with a larger number of elements.
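
As an illustration of the core computation, here is a minimal Python sketch (not the authors' code) of a linear finite element discretization of -u'' = λu on [0,1] with Robin conditions u'(0) = σ₀u(0) and u'(1) = -σ₁u(1); the σ values and mesh size are illustrative assumptions.

```python
# Hedged sketch: FEM eigenvalues of -u'' = lambda*u on [0,1] with Robin BCs.
import numpy as np
from scipy.linalg import eigh

n = 200                       # number of elements (accuracy improves with n)
h = 1.0 / n
nodes = n + 1

# Assemble linear-element stiffness and mass matrices (dense here for brevity)
K = np.zeros((nodes, nodes))
M = np.zeros((nodes, nodes))
ke = (1.0 / h) * np.array([[1, -1], [-1, 1]])
me = (h / 6.0) * np.array([[2, 1], [1, 2]])
for e in range(n):
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    M[np.ix_(idx, idx)] += me

# Robin conditions enter as boundary terms in the weak form
sigma0, sigma1 = 1.0, 1.0
K[0, 0] += sigma0
K[-1, -1] += sigma1

vals = eigh(K, M, eigvals_only=True)   # generalized problem K u = lambda M u
print(vals[:4])                        # lowest eigenvalues
```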

Keywords: Sturm-Liouville problem, Robin boundary condition, finite element method, eigenvalue problems.

415 Parallel Computation of Data Summation for Multiple Problem Spaces on Partitioned Optical Passive Stars Network

Authors: Khin Thida Latt, Mineo Kaneko, Yoichi Shinoda

Abstract:

In a Partitioned Optical Passive Stars (POPS) network, nodes and couplers become free from slot to slot during some computations, and these free couplers and nodes must be utilized efficiently for the system to be cost effective. Improving parallelism, we present a fast data summation algorithm for multiple problem spaces on POPS(g, g) with a smaller number of nodes for the case d = n = g. For the case d > n > g, we simulate the calculation of a large number of data items intended for a larger system with many nodes on a smaller system with fewer nodes. The algorithm is faster than the best known algorithm, and using a smaller number of nodes and groups makes the system low cost and practical.

Keywords: Partitioned optical passive stars network, parallel computing, optical computing, data summation

414 Parallel and Distributed Mining of Association Rule on Knowledge Grid

Authors: U. Sakthi, R. Hemalatha, R. S. Bhuvaneswaran

Abstract:

In a virtual organization, a Knowledge Discovery (KD) service comprises distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with grid services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. A case study describes the design and implementation of local and global mining of frequent item sets. Experiments were conducted on different configurations of the grid network, and the computation time was recorded for each operation. Analysis of the results across various grid configurations shows that the speedup in computation time is almost superlinear.
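
The count-distribution idea behind such local/global mining can be sketched as follows: each node counts candidate itemsets over its own data partition, and the local counts are merged into global supports. This toy sequential emulation does not model the paper's MPICH-G2/Globus machinery.

```python
# Hedged sketch of count distribution for parallel Apriori (toy data).
from itertools import combinations
from collections import Counter

partitions = [
    [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}],   # transactions on node 0
    [{"a", "b"}, {"a", "b", "c"}, {"c"}],        # transactions on node 1
]
min_support = 3

def local_counts(transactions, k):
    """Local mining: count k-itemset occurrences in one partition."""
    c = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            c[itemset] += 1
    return c

# Global mining: a reduction (here sequential) of all local counts
global_counts = Counter()
for part in partitions:
    global_counts.update(local_counts(part, 2))

frequent = {i: n for i, n in global_counts.items() if n >= min_support}
print(frequent)   # globally frequent 2-itemsets with their supports
```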

Keywords: Association rule, Grid computing, Knowledge grid, Mobility prediction.

413 A Pairing-based Blind Signature Scheme with Message Recovery

Authors: Song Han, Elizabeth Chang

Abstract:

Blind signatures enable users to obtain valid signatures for a message without revealing its content to the signer. This paper presents a new blind signature scheme, i.e., an identity-based blind signature scheme with message recovery. Due to the message recovery property, the new scheme requires less bandwidth than identity-based blind signatures with similar constructions. The scheme is based on modified Weil/Tate pairings over elliptic curves, and thus requires smaller key sizes for the same level of security compared to previous approaches not utilizing bilinear pairings. Security and efficiency analyses for the scheme are provided in this paper.
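
To make the blindness property concrete, here is a sketch of Chaum's classic RSA blind signature, which is NOT the paper's pairing-based scheme but the simplest construction showing blinding, signing, and unblinding; the key and blinding factor are toy values.

```python
# Chaum RSA blind signature sketch (illustrative only; tiny insecure key).
import hashlib
from math import gcd

p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))       # RSA private exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

m = h(b"pay 10 coins")
r = 123                                  # blinding factor, coprime to n
assert gcd(r, n) == 1

blinded = (pow(r, e, n) * m) % n         # user blinds the message
s_blind = pow(blinded, d, n)             # signer signs without seeing m
s = (s_blind * pow(r, -1, n)) % n        # user removes the blinding

assert pow(s, e, n) == m                 # ordinary RSA verification succeeds
print("valid blind signature:", pow(s, e, n) == m)
```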

Keywords: Blind Signature, Message Recovery, Pairings, Elliptic Curves, Blindness

412 On Speeding Up Support Vector Machines: Proximity Graphs Versus Random Sampling for Pre-Selection Condensation

Authors: Xiaohua Liu, Juan F. Beltran, Nishant Mohanchandra, Godfried T. Toussaint

Abstract:

Support vector machines (SVMs) are considered to be the best machine learning algorithms for minimizing the predictive probability of misclassification. However, their drawback is that, for large data sets, computing the optimal decision boundary is a time-consuming function of the size of the training set. Hence, several methods have been proposed to speed up the SVM algorithm. Here, three methods used to speed up the computation of SVM classifiers are compared experimentally on a musical genre classification problem. The simplest method pre-selects a random sample of the data before the application of the SVM algorithm. Two additional methods use proximity graphs to pre-select data that lie near the decision boundary: one uses k-Nearest Neighbor graphs and the other Relative Neighborhood Graphs.
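
The simplest of the three speed-ups, random-sample condensation, can be sketched as below; the dataset and sample fraction are illustrative stand-ins, not the paper's musical-genre data.

```python
# Hedged sketch: train an SVM on a random subsample instead of the full set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Condensation by random pre-selection: keep 10% of the training data
rng = np.random.default_rng(0)
idx = rng.choice(len(X_tr), size=len(X_tr) // 10, replace=False)

full = SVC().fit(X_tr, y_tr)
condensed = SVC().fit(X_tr[idx], y_tr[idx])
print("full:", full.score(X_te, y_te), "sampled:", condensed.score(X_te, y_te))
```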

Keywords: Machine learning, data mining, support vector machines, proximity graphs, relative-neighborhood graphs, k-nearest-neighbor graphs, random sampling, training data condensation.

411 Artificial Neural Network Development by means of Genetic Programming with Graph Codification

Authors: Daniel Rivero, Julián Dorado, Juan R. Rabuñal, Alejandro Pazos, Javier Pereira

Abstract:

The development of Artificial Neural Networks (ANNs) is usually a slow process in which the human expert has to test several architectures until finding the one that achieves the best results for a given problem. This work presents a new technique that uses Genetic Programming (GP) to generate ANNs automatically. To do this, the GP algorithm had to be modified to work with graph structures, so that ANNs can be developed. The technique also allows obtaining simplified networks that solve the problem with a small number of neurons. In order to measure the performance of the system and to compare its results with other ANN development methods based on Evolutionary Computation (EC) techniques, several tests were performed on problems drawn from some of the most widely used benchmark databases. The comparisons show that the system achieves results comparable with existing techniques and, in most cases, performs better than they do.

Keywords: Artificial Neural Networks, Evolutionary Computation, Genetic Programming.

410 Generating Speq Rules based on Automatic Proof of Logical Equivalence

Authors: Katsunori Miura, Kiyoshi Akama, Hiroshi Mabuchi

Abstract:

In the Equivalent Transformation (ET) computation model, a program is constructed by the successive accumulation of ET rules. A method based on meta-computation, by which a correct ET rule is generated, has been proposed. Although the method covers a broad range in the generation of ET rules, not all important ET rules are necessarily generated. More ET rules can be generated by supplementing the method with generation methods specialized for important ET rules. A Specialization-by-Equation (Speq) rule is one of those important rules: it describes a procedure in which two variables included in an atom conjunction are equalized due to predicate constraints. In this paper, we propose an algorithm that systematically and recursively generates Speq rules, and we discuss its effectiveness in the synthesis of ET programs. A Speq rule is generated based on the proof of a logical formula consisting of a given atom set and a dis-equality; the proof is carried out by utilizing some ET rules, and the rules ultimately obtained are used in generating Speq rules.

Keywords: Equivalent transformation, ET rule, Equation of two variables, Rule generation, Specialization-by-Equation rule

409 Network of Coupled Stochastic Oscillators and One-way Quantum Computations

Authors: Eugene Grichuk, Margarita Kuzmina, Eduard Manykin

Abstract:

A network of coupled stochastic oscillators is proposed for modeling a cluster of entangled qubits, which is exploited as a computational resource in one-way quantum computation schemes. The qubit model is designed as a stochastic oscillator formed by a pair of coupled limit cycle oscillators with chaotically modulated limit cycle radii and frequencies. The qubit simulates the behavior of the electric field of a polarized light beam and adequately imitates the states of a two-level quantum system. A cluster of entangled qubits can be associated with a beam of polarized light, the degree of light polarization being directly related to the degree of cluster entanglement. An oscillatory network imitating the qubit cluster is designed, and a system of equations for the network dynamics is written. Constructions of one-qubit gates are suggested, and changes in the cluster entanglement degree caused by measurements can be calculated exactly.
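
A pair of coupled limit-cycle oscillators, the building block of such a qubit model, can be sketched with Stuart-Landau-type equations; the parameters and the diffusive coupling form below are illustrative assumptions, not the paper's equations.

```python
# Hedged sketch: two coupled limit-cycle (Stuart-Landau-type) oscillators.
import numpy as np
from scipy.integrate import solve_ivp

mu, w1, w2, k = 1.0, 2.0, 2.6, 0.3     # growth rate, frequencies, coupling

def rhs(t, z):
    z1, z2 = z[0] + 1j * z[1], z[2] + 1j * z[3]
    dz1 = (mu - abs(z1)**2 + 1j * w1) * z1 + k * (z2 - z1)
    dz2 = (mu - abs(z2)**2 + 1j * w2) * z2 + k * (z1 - z2)
    return [dz1.real, dz1.imag, dz2.real, dz2.imag]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0, 0.0, 0.1])
r1 = np.hypot(sol.y[0, -1], sol.y[1, -1])   # radius settles near sqrt(mu)
r2 = np.hypot(sol.y[2, -1], sol.y[3, -1])
print("final limit-cycle radii:", r1, r2)
```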

Keywords: network of stochastic oscillators, one-way quantum computations, a beam of polarized light.

408 Efficient Aggregate Signature Algorithm and Its Application in MANET

Authors: Daxing Wang, Jikai Teng

Abstract:

An aggregate signature scheme can aggregate n signatures on n distinct messages from n distinct signers into a single signature, so n verification equations are reduced to one. Aggregate signatures are therefore well suited to Mobile Ad hoc Networks (MANETs). In this paper, we propose an efficient ID-based aggregate signature scheme requiring a constant number of pairing computations. Compared with existing ID-based aggregate signature schemes, this scheme greatly improves the efficiency of signature communication and verification. In addition, we apply our ID-based aggregate signature to an authenticated routing protocol to present a secure routing scheme. Our scheme not only provides sound authentication and a secure routing protocol in ad hoc networks, but also suits the nature of MANETs.
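
The collapse of n verifications into one pairing-based check can be illustrated with BLS aggregation, a different (non-ID-based) pairing scheme, assuming the py_ecc library's BLS API:

```python
# Hedged sketch: BLS signature aggregation over a pairing-friendly curve.
from py_ecc.bls import G2ProofOfPossession as bls

sks = [3 * (i + 1) + 1 for i in range(3)]      # toy secret keys
pks = [bls.SkToPk(sk) for sk in sks]
msgs = [b"route-update-%d" % i for i in range(3)]   # distinct messages

sigs = [bls.Sign(sk, m) for sk, m in zip(sks, msgs)]
agg = bls.Aggregate(sigs)                      # n signatures -> 1 signature

# A single aggregate check replaces n separate verifications
print(bls.AggregateVerify(pks, msgs, agg))     # True
```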

Keywords: Identity-based cryptography, Aggregate signature, Bilinear pairings, Authenticated routing scheme.

407 Effect of Non Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model

Authors: Gururaj S Punekar, N K Kishore Senior, H S Y Shastry

Abstract:

The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High voltage fields of varying non-uniformity are encountered in practice. Because CSM programs are case specific, simulation accuracy depends heavily on the user's (programmer's) experience. This work is an effort to understand CSM errors and to evolve guidelines for setting up accurate CSM models by relating non-uniformity factors to assignment factors. The results are for the six-point-charge model of a sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).
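
The essence of CSM can be sketched in a few lines: fictitious point charges inside the electrode are solved so that contour points on its surface match the applied potential, and the error is checked at independent points. The geometry below is an isolated sphere (no ground plane), so it is illustrative only.

```python
# Hedged CSM sketch: point charges inside a sphere at potential V.
import numpy as np

k = 1.0 / (4 * np.pi * 8.854e-12)   # Coulomb constant
R, V = 0.05, 1000.0                 # sphere radius (m), applied potential (V)
af = 1.5                            # assignment factor: contour/charge radius ratio

ang = np.linspace(0.1, np.pi - 0.1, 6)
contour = R * np.stack([np.sin(ang), np.cos(ang)], axis=1)        # on surface
charges_pos = (R / af) * np.stack([np.sin(ang), np.cos(ang)], axis=1)

# Potential coefficient matrix P[i, j] = k / |contour_i - charge_j|
P = k / np.linalg.norm(contour[:, None, :] - charges_pos[None, :, :], axis=2)
q = np.linalg.solve(P, np.full(len(ang), V))     # fictitious charge magnitudes

# Simulation error at check points placed between the contour points
chk_ang = 0.5 * (ang[:-1] + ang[1:])
chk = R * np.stack([np.sin(chk_ang), np.cos(chk_ang)], axis=1)
Pc = k / np.linalg.norm(chk[:, None, :] - charges_pos[None, :, :], axis=2)
print("max potential error (%):", 100 * np.max(np.abs(Pc @ q - V)) / V)
```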

Keywords: Assignment factor, Charge Simulation Method, High Voltage, Numerical field computation, Non-uniformity factor, Simulation errors.

406 Motion Area Estimated Motion Estimation with Triplet Search Patterns for H.264/AVC

Authors: T. Song, T. Shimamoto

Abstract:

In this paper, a fast motion estimation method for H.264/AVC named Triplet Search Motion Estimation (TS-ME) is proposed. Like traditional fast motion estimation methods and their improved variants, which restrict the search to a few selected candidate points to decrease computational complexity, the proposed algorithm separates the motion search process into several steps, but with some new features. First, the proposed algorithm searches for the true motion area using the proposed triplet patterns, instead of a few selected search points, to avoid dropping into a local minimum. Then, within the localized motion area, a novel three-step motion search algorithm is performed. The proposed search patterns are categorized into three rings on the basis of their distance from the search center, and these rings are selected adaptively, by referencing the surrounding motion vectors, to terminate the motion search process early. Computation reduction for the sub-pixel motion search is also discussed, considering the appearance probability of the sub-pixel motion vector. Simulation results show that motion estimation speed improves by a factor of up to 38 with the proposed algorithm compared to the H.264/AVC reference software, with negligible picture quality loss.
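
For readers unfamiliar with step-restricted block matching, here is a sketch of a classic three-step search with a SAD cost; the paper's triplet patterns and adaptive rings are more elaborate and are not reproduced here.

```python
# Hedged sketch: SAD-based three-step block-matching motion estimation.
import numpy as np

def sad(a, b):                         # sum of absolute differences
    return np.abs(a - b).sum()

def three_step_search(ref, cur, by, bx, bs=16):
    block = cur[by:by + bs, bx:bx + bs]
    mvy = mvx = 0
    for step in (4, 2, 1):             # coarse-to-fine refinement
        best_dy = best_dx = 0
        best = sad(ref[by + mvy:by + mvy + bs, bx + mvx:bx + mvx + bs], block)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + mvy + dy, bx + mvx + dx
                if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                    c = sad(ref[y:y + bs, x:x + bs], block)
                    if c < best:
                        best, best_dy, best_dx = c, dy, dx
        mvy, mvx = mvy + best_dy, mvx + best_dx
    return mvy, mvx

# A smooth synthetic frame shifted by (3, -2); the search should recover
# the motion vector (-3, 2) for the block at (24, 24).
yy, xx = np.mgrid[0:64, 0:64].astype(float)
ref = np.exp(-((xx - 32)**2 + (yy - 32)**2) / 60.0)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(three_step_search(ref, cur, by=24, bx=24))
```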

Keywords: Motion estimation, VLSI, image processing, search patterns

405 Materialized View Effect on Query Performance

Authors: Yusuf Ziya Ayık, Ferhat Kahveci

Abstract:

Database management systems currently provide various tools, such as backup and maintenance, as well as statistical information such as resource usage and security. In terms of query performance, this paper covers query optimization, views, indexed tables, pre-computation with materialized views, and query performance analysis, in which alternative query plans can be created and the least costly one selected to optimize a query. Indexes and views can be created on related table columns. The literature review for this study showed that, over time and despite the growing capabilities of database management systems, only database administrators are aware of the need to treat archival and transactional data types differently. These data may be constantly changing data used in everyday life, or they may come from completed questionnaires whose data input is finished. For both types of data the database offers the same capabilities; but, as shown in the findings section, instead of repeating similar heavy calculations that produce the same results for the same query over survey data, using the results of a materialized view is much simpler. In this study, this performance difference was observed quantitatively by considering the cost of the query.
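
The pre-computation idea can be sketched as follows; SQLite has no native materialized views, so the sketch emulates one with a summary table and compares the cost of recomputing versus rereading.

```python
# Hedged sketch: emulate a materialized view with a precomputed summary table.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE answers (question INT, score INT)")
con.executemany("INSERT INTO answers VALUES (?, ?)",
                [(i % 50, i % 5) for i in range(500_000)])

t0 = time.perf_counter()
rows = con.execute("SELECT question, AVG(score) FROM answers "
                   "GROUP BY question").fetchall()
t1 = time.perf_counter()

# 'Materialize' the aggregate once; later queries read the small table
con.execute("CREATE TABLE answers_mv AS "
            "SELECT question, AVG(score) AS avg_score FROM answers "
            "GROUP BY question")
t2 = time.perf_counter()
rows_mv = con.execute("SELECT * FROM answers_mv").fetchall()
t3 = time.perf_counter()
print(f"recompute: {t1 - t0:.4f}s  read materialized: {t3 - t2:.4f}s")
```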

Keywords: Materialized view, pre-computation, query cost, query performance.

404 Electromagnetic Wave Propagation Equations in 2D by Finite Difference Method

Authors: N. Fusun Oyman Serteller

Abstract:

In this paper, techniques to solve time-dependent electromagnetic wave propagation equations based on the Finite Difference Method (FDM) are proposed, and the results are compared with the Finite Element Method (FEM) in 2D while discussing some special simulation examples. Here, 2D dynamical wave equations for lossy media, even with a constant source, are discussed for establishing symbolic manipulation of wave propagation problems. The main objective of this contribution is to introduce a comparative study of two suitable numerical methods and to show that both methods can be applied effectively and efficiently to all types of wave propagation problems, both linear and nonlinear, by using symbolic computation. However, the results show that the FDM is more appropriate for solving the nonlinear cases in the symbolic solution. Furthermore, some specific complex-domain examples of the comparison of electromagnetic wave equations are considered. Calculations are performed with Mathematica, making some useful contributions to the program and leveraging symbolic evaluations of FEM and FDM.
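
A numerical (rather than symbolic) FDM update for the lossy 2D wave equation u_tt + γu_t = c²(u_xx + u_yy) can be sketched as below; the grid, damping, and initial pulse are illustrative, and zero boundaries are assumed.

```python
# Hedged sketch: explicit FDM time stepping for a lossy 2D wave equation.
import numpy as np

n, c, gamma = 101, 1.0, 0.1           # grid points per side, speed, loss
h = 1.0 / (n - 1)
dt = 0.4 * h / c                       # within the 2D CFL limit h / (c*sqrt(2))

u_prev = np.zeros((n, n))
u = np.zeros((n, n))
u[n // 2, n // 2] = 1e-3               # initial pulse at the center

for _ in range(500):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    # Discretized u_tt + gamma*u_t = c^2 * laplacian, solved for u_next
    u_next = (2 * u - (1 - gamma * dt / 2) * u_prev +
              (c * dt)**2 * lap) / (1 + gamma * dt / 2)
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    u_prev, u = u, u_next

print("peak amplitude after 500 steps:", np.abs(u).max())
```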

Keywords: Finite difference method, finite element method, linear-nonlinear PDEs, symbolic computation, wave propagation equations.

403 The Wine List Design by Upscale Restaurants

Authors: A. Oliveira-Brochado, R. Vinhas da Silva

Abstract:

This paper investigates the structure and content of the wine lists of upscale restaurants in Portugal (N=61). The respondents considered that a wine list should be easy to use and to modify, well-designed, modern, and varied. Respondents also stated that they revise the wine list on average six times per year. The restaurant owner, the restaurant manager, and the sommelier were the main persons in charge of wine list design. Among the most important reasons for selecting wines across most restaurants were to 'complement the menu' and to 'pair food with wine'. Restaurants also reported being relatively independent of suppliers and magazine evaluations. Moreover, this work reveals that restaurateurs regard the wine list as a strategic tool to sell wine as a complement to the menu, to improve customer satisfaction and loyalty, to increase restaurant value, and to support successful positioning.

Keywords: Portugal, restaurants, wine list design.

402 Primer Design with Specific PCR Product using Particle Swarm Optimization

Authors: Cheng-Hong Yang, Yu-Huei Cheng, Hsueh-Wei Chang, Li-Yeh Chuang

Abstract:

Before performing polymerase chain reactions (PCR), a feasible primer set is required, and many primer design methods have been proposed for designing one. However, the majority of these methods require a relatively long time to obtain an optimal solution, since large quantities of template DNA need to be analyzed, and the designed primer sets usually do not provide a specific PCR product. In recent years, evolutionary computation has been applied to PCR primer design and has yielded promising results. In this paper, a particle swarm optimization (PSO) algorithm is proposed to solve primer design problems associated with providing a specific product for PCR experiments. A test set of the gene CYP1A1, associated with a heightened lung cancer risk, was analyzed, and accuracy and running time were compared with those of a genetic algorithm (GA) and a memetic algorithm (MA). The comparison indicates that the proposed PSO method finds optimal or near-optimal primer sets and effective PCR products in a relatively short time.
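
A minimal PSO loop is sketched below on a toy objective; the paper's fitness would instead score primer properties such as melting temperature and GC content, which are not modeled here.

```python
# Hedged sketch: canonical particle swarm optimization on a sphere function.
import numpy as np

def fitness(x):                        # stand-in objective to minimize
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), fitness(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = fitness(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best fitness:", pbest_f.min())  # approaches 0
```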

Keywords: polymerase chain reaction (PCR), primer design, evolutionary computation, particle swarm optimization (PSO).

401 A Simplified Approach for Load Flow Analysis of Radial Distribution Network

Authors: K. Vinoth Kumar, M.P. Selvan

Abstract:

This paper presents a simple approach for load flow analysis of a radial distribution network. The proposed approach utilizes a forward and backward sweep algorithm, based on Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL), for evaluating the node voltages iteratively. In this approach, the computation of a branch current depends only on the current injected at the neighbouring node and the current in the adjacent branch. The branch current computation starts from the end nodes of the sub-lateral, lateral, and main lines and moves towards the root node, while the node voltage evaluation begins from the root node and moves towards the nodes located at the far ends of the main, lateral, and sub-lateral lines. The proposed approach has been tested using four radial distribution systems of different size and configuration and found to be computationally efficient.
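
The sweep procedure can be sketched on a tiny four-node radial feeder; the per-unit impedances and constant-power loads below are illustrative, not one of the paper's test systems.

```python
# Hedged sketch: forward-backward sweep load flow on a 4-node radial feeder.
import numpy as np

parent = [-1, 0, 1, 1]                  # topology: 0 (root) -> 1 -> {2, 3}
z = np.array([0, 0.02 + 0.04j, 0.03 + 0.05j, 0.025 + 0.045j])     # branch Z (pu)
s_load = np.array([0, 0.10 + 0.05j, 0.15 + 0.08j, 0.12 + 0.06j])  # loads (pu)

v = np.ones(4, dtype=complex)           # flat start; root held at 1.0 pu
for _ in range(20):                     # iterate until the sweeps settle
    i_branch = np.conj(s_load / v)      # backward sweep: load currents (KCL)...
    for node in (3, 2, 1):              # ...accumulated from leaves to root
        i_branch[parent[node]] += i_branch[node]
    for node in (1, 2, 3):              # forward sweep: voltage drops (KVL)
        v[node] = v[parent[node]] - z[node] * i_branch[node]

print(np.round(np.abs(v), 4))           # node voltage magnitudes
```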

Keywords: constant current load, constant impedance load, constant power load, forward–backward sweep, load flow analysis, radial distribution system.

400 Performance Analysis of Certificateless Signature for IKE Authentication

Authors: Nazrul M. Ahmad, Asrul H. Yaacob, Ridza Fauzi, Alireza Khorram

Abstract:

Elliptic curve-based certificateless signatures are slowly gaining attention due to their ability to retain the efficiency of identity-based signatures, eliminating the need for certificate management while not suffering from the inherent private key escrow problem. Generally, a cryptosystem based on elliptic curves offers equivalent security strength at smaller key sizes than a conventional cryptosystem such as RSA, which results in faster computation and efficient use of computing power, bandwidth, and storage. This paper proposes implementing a certificateless signature based on bilinear pairings to structure the framework of IKE authentication. We perform a comparative analysis of the certificateless signature scheme with the well-known RSA scheme and present experimental results for signing and verification execution times. By generalizing our observations, we discuss the different trade-offs involved in implementing IKE authentication using certificateless signatures.
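
A sign/verify timing comparison in the same spirit can be sketched with the `cryptography` package; plain ECDSA stands in for the elliptic-curve side here, since the paper's pairing-based certificateless scheme has no off-the-shelf implementation.

```python
# Hedged timing sketch: RSA-2048 vs. ECDSA P-256 sign and verify times.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

data = b"IKE authentication payload"
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ec_key = ec.generate_private_key(ec.SECP256R1())   # comparable security level

def bench(sign, verify, n=100):
    t0 = time.perf_counter()
    for _ in range(n):
        sig = sign()
    t1 = time.perf_counter()
    for _ in range(n):
        verify(sig)
    return (t1 - t0) / n, (time.perf_counter() - t1) / n

rsa_t = bench(lambda: rsa_key.sign(data, padding.PKCS1v15(), hashes.SHA256()),
              lambda s: rsa_key.public_key().verify(
                  s, data, padding.PKCS1v15(), hashes.SHA256()))
ec_t = bench(lambda: ec_key.sign(data, ec.ECDSA(hashes.SHA256())),
             lambda s: ec_key.public_key().verify(
                 s, data, ec.ECDSA(hashes.SHA256())))
print(f"RSA   sign {rsa_t[0]*1e3:.2f} ms, verify {rsa_t[1]*1e3:.2f} ms")
print(f"ECDSA sign {ec_t[0]*1e3:.2f} ms, verify {ec_t[1]*1e3:.2f} ms")
```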

Keywords: Certificateless signature, IPSec, RSA signature, IKE authentication.

399 Fingerprint Compression Using Contourlet Transform and Multistage Vector Quantization

Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, R. Sudhakar

Abstract:

This paper presents a new fingerprint coding technique based on the contourlet transform and multistage vector quantization. Wavelets have shown their ability to represent natural images that contain smooth areas separated by edges; however, they cannot efficiently take advantage of the fact that the edges usually found in fingerprints are smooth curves. This issue is addressed by directional transforms, known as contourlets, which have the property of preserving edges. The contourlet transform is an extension of the wavelet transform in two dimensions using nonseparable and directional filter banks. Computation and storage requirements are the major difficulty in implementing a vector quantizer: in the full-search algorithm, the computation and storage complexity is an exponential function of the number of bits used in quantizing each frame of spectral information. The storage requirement of multistage vector quantization is lower than that of full-search vector quantization. The coefficients of the contourlet transform are quantized by multistage vector quantization, and the quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with existing wavelet-based ones.
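
The multistage idea, in which a second small codebook quantizes the residual of the first, can be sketched as below; random vectors stand in for contourlet coefficients, and the codebook sizes are illustrative.

```python
# Hedged sketch: two-stage (multistage) vector quantization of coefficients.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                 # stand-in coefficient vectors

stage1 = KMeans(n_clusters=16, n_init=4, random_state=0).fit(X)
resid = X - stage1.cluster_centers_[stage1.labels_]
stage2 = KMeans(n_clusters=16, n_init=4, random_state=0).fit(resid)

# Reconstruction sums the two stages; storage is two 16-entry codebooks
# rather than one 256-entry codebook of comparable fidelity.
X_hat = (stage1.cluster_centers_[stage1.labels_] +
         stage2.cluster_centers_[stage2.labels_])
mse1 = np.mean((X - stage1.cluster_centers_[stage1.labels_])**2)
print("stage-1 MSE:", mse1, " two-stage MSE:", np.mean((X - X_hat)**2))
```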

Keywords: Contourlet Transform, Directional Filter bank, Laplacian Pyramid, Multistage Vector Quantization

398 A New Approach to Face Recognition Using Dual Dimension Reduction

Authors: M. Almas Anjum, M. Younus Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, improving recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction, down to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image, and a subset of low- to mid-frequency DCT coefficients that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including ORL, Yale, and the EME color database.
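
The dual reduction pipeline can be sketched in a few lines; the decimation factor and the retained coefficient block are illustrative, and a random array stands in for a face image.

```python
# Hedged sketch: decimation followed by low-frequency 2D DCT features.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
face = rng.random((128, 128))          # stand-in for a grayscale face image

decimated = face[::2, ::2]             # reduction 1: decimation to 64x64
coeffs = dctn(decimated, norm="ortho") # 2D DCT of the decimated image
features = coeffs[:12, :12].ravel()    # reduction 2: keep a low-frequency block

print(face.size, "->", features.size)  # 16384 -> 144 values per face
```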

Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.

397 Application of De-Laval Nozzle Transonic Flow Field Computation Approaches

Authors: A. Haddad, H. Kbab

Abstract:

A supersonic expansion cannot be achieved within a convergent-divergent nozzle if the flow velocity does not reach the speed of sound at the throat. Computing the flow field characteristics at the throat is thus essential to the thrust developed by the nozzle, and therefore to the aircraft or rocket it propels. Several approaches have been developed to describe the transonic expansion that takes place through the throat of a De-Laval convergent-divergent nozzle. They all yield good results but share a major shortcoming: their inability to describe the transonic flow field for nozzles having a small throat radius. The approach initially developed by Kliegel & Levine expands the velocity series in terms of the normalized throat radius added to unity, instead of solely the normalized throat radius or the traditional small disturbances theory approach. The present investigation applies these three approaches for different throat radii of curvature. The method using the normalized throat radius added to unity shows better results when applied to geometries incorporating small throat radii.
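
For context, the quasi-1D backbone of any De-Laval nozzle computation is the isentropic area-Mach relation; the sketch below solves it numerically for the supersonic branch (textbook theory, not the transonic series expansions the paper compares).

```python
# Hedged sketch: solve the isentropic area-Mach relation A/A* = f(M).
import numpy as np
from scipy.optimize import brentq

g = 1.4                                  # ratio of specific heats

def area_ratio(M):                       # A/A* as a function of Mach number
    return (1.0 / M) * ((2 + (g - 1) * M**2) / (g + 1))**((g + 1) / (2 * (g - 1)))

target = 4.0                             # area ratio in the divergent section
M_sup = brentq(lambda M: area_ratio(M) - target, 1.001, 10.0)
print(f"supersonic Mach at A/A* = {target}: {M_sup:.3f}")   # about 2.94
```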

Keywords: De-Laval nozzles, transonic calculations, transonic flow, supersonic nozzle.

396 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they can model many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties, such as irregularity and poor locality, that make their performance characteristics different from those of regular applications, so parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect performance in order to identify possible improvements, and the results are discussed in terms of which factors contribute to performance degradation.

Keywords: Graph computation, Graph500 benchmark, parallel architectures, parallel programming, workload characterization.

395 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence, and octagon-and-square-pattern block-based motion estimation is implemented in the inter-prediction process of H.264/AVC. Together these methods reduce bit rate and computational complexity while maintaining the quality of the video sequence. Experiments were conducted on different types of video sequences. The results substantially prove that the bit rate, computation time, and PSNR achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal quality gain of 0.28 dB and an average bit rate gain of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

394 Real-Time Fitness Monitoring with MediaPipe

Authors: Chandra Prayaga, Lakshmi Prayaga, Aaron Wade, Kyle Rank, Gopi Shankar Mallu, Sri Satya Harsha Pola

Abstract:

In today's tech-driven world, where connectivity shapes our daily lives, maintaining physical and emotional health is crucial. Athletic trainers play a vital role in optimizing athletes' performance and preventing injuries. However, a shortage of trainers impacts the quality of care. This study introduces a vision-based exercise monitoring system leveraging Google's MediaPipe library for precise tracking of bicep curl exercises and simultaneous posture monitoring. We propose a three-stage methodology: landmark detection, side detection, and angle computation. Our system calculates angles at the elbow, wrist, neck, and torso to assess exercise form. Experimental results demonstrate the system's effectiveness in distinguishing between good and partial repetitions and evaluating body posture during exercises, providing real-time feedback for precise fitness monitoring.
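
The angle-computation stage reduces to a joint angle from three landmarks; the sketch below computes the elbow angle from shoulder, elbow, and wrist points such as MediaPipe Pose would supply, with made-up coordinates.

```python
# Hedged sketch: joint angle at the elbow from three 2D pose landmarks.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) between segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    ba, bc = a - b, c - b
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Normalized image coordinates, as a pose estimator would report them
shoulder, elbow, wrist = (0.50, 0.30), (0.55, 0.50), (0.45, 0.62)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```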

Keywords: Physical health, athletic trainers, fitness monitoring, technology driven solutions, Google's MediaPipe, landmark detection, angle computation, real-time feedback.

393 Aerodynamic Prediction and Performance Analysis for Mars Science Laboratory Entry Vehicle

Authors: Tang Wei, Yang Xiaofeng, Gui Yewei, Du Yanxia

Abstract:

A complex lifting entry was selected for precise landing performance during the Mars Science Laboratory entry. This study aims to develop a three-dimensional numerical method for precise computation and a surface panel method for rapid engineering prediction. Detailed flow field analysis for the Mars exploration mission was performed by carrying out a series of fully three-dimensional Navier-Stokes computations. The static aerodynamic performance is then discussed, including surface pressure, lift and drag coefficients, and lift-to-drag ratio, for both the numerical and engineering methods. The computations show that the shock layer is thin because of the lower effective specific heat ratio, and that the results from both methods agree well with each other and are consistent with the reference data. The aerodynamic performance analysis shows that the CG location determines the trim characteristics and pitch stability, and that radial and axial shifts of the CG location can alter the capsule's lifting entry performance, which is of vital significance for the aerodynamic configuration design and internal instrument layout of the Mars entry capsule.
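
Rapid engineering prediction for hypersonic entry shapes is commonly done with Newtonian surface-panel theory, Cp = Cp_max sin²θ on windward panels; the sketch below integrates this over a hemispherical forebody as a stand-in for the paper's panel method.

```python
# Hedged sketch: Newtonian panel pressures and drag on a hemisphere.
import numpy as np

cp_max = 2.0                                  # classical Newtonian limit
theta = np.linspace(0.0, np.pi / 2, 2000)     # angle from the stagnation point
cp = cp_max * np.cos(theta)**2                # panel pressure coefficient

# Integrate panel pressures over the windward hemisphere (shadowed leeward
# panels carry Cp = 0) to obtain the drag coefficient on the frontal area.
dtheta = theta[1] - theta[0]
cd = np.sum(cp * np.cos(theta) * 2.0 * np.sin(theta)) * dtheta
print("stagnation Cp:", cp.max(), " hemisphere CD ~", round(cd, 3))  # ~1.0
```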

Keywords: Mars entry capsule, static aerodynamics, computational fluid dynamics, hypersonic.

392 Electron Spin Resonance of Conduction Electrons and Spin Waves Dynamics Investigations in Bi-2223 Superconductor for Decoding Pairing Mechanism

Authors: S. N. Ekbote, G. K. Padam, Manju Arora

Abstract:

Electron spin resonance (ESR) spectroscopic investigations of (Bi, Pb)2Sr2Ca2Cu3O10-x (Bi-2223) bulk samples were carried out in both the normal and superconducting states. A broad asymmetric resonance signal with side signals is obtained in the normal state, and all of them disappear in the superconducting state. The temperature and angular orientation effects on these signals suggest that the broad asymmetric signal arises from electron spin resonance of conduction electrons (CESR) and the side signals from exchange interactions as Platzman-Wolff type spin waves. The disappearance of CESR and spin waves in a superconducting state demonstrates the role of exchange interactions in Cooper pair formation.

Keywords: Bi-2223 superconductor, electron spin resonance of conduction electrons, electron spin resonance, exchange interactions, spin waves.

391 A Pairwise-Gaussian-Merging Approach: Towards Genome Segmentation for Copy Number Analysis

Authors: Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Jung Chen, Sun-Chong Wang, Li-Ching Wu, H.C. Lee

Abstract:

Segmentation, filtering out of measurement errors, and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some success, but they tend to be O(N^2) in either computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and requires far less memory, being O(N) in both computation time and memory requirement, and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical relation that the product of two Gaussians is another Gaussian (function). We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in about 1 second and for a 1.8 million-probe array in 9 seconds.
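
The Gaussian-product ingredient amounts to an O(1) precision-weighted merge of two segments, which can be sketched directly:

```python
# Hedged sketch: merging two Gaussian estimates via the product rule.
def merge_gaussians(m1, v1, m2, v2):
    """Mean and variance of the (normalized) product of two Gaussians."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)     # precisions add
    m = v * (m1 / v1 + m2 / v2)         # precision-weighted mean
    return m, v

# Two adjacent probe segments with similar copy-number level
print(merge_gaussians(0.52, 0.04, 0.48, 0.04))   # -> (0.5, 0.02)
```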

Keywords: Cancer, pathogenesis, chromosomal aberration, copy number variation, segmentation analysis.

390 Transform-Domain Rate-Distortion Optimization Accelerator for H.264/AVC Video Encoding

Authors: Mohammed Golam Sarwer, Lai Man Po, Kai Guo, Q.M. Jonathan Wu

Abstract:

In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, this mode selection process also makes encoding extremely complex, especially the computation of the rate-distortion cost function, which includes computing the sum of squared differences (SSD) between the original and reconstructed image blocks and context-based entropy coding of the block. In this paper, a transform-domain rate-distortion optimization accelerator based on a fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. This algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure for implementing the proposed accelerator is also presented. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The proposed method can easily be applied to many mobile video applications such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
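
The reason SSD can be evaluated in the transform domain at all is Parseval's relation for orthonormal transforms, which the following sketch demonstrates with an orthonormal 2D DCT (illustrative of the principle, not the paper's FSSD hardware algorithm):

```python
# Hedged sketch: pixel-domain SSD equals coefficient-domain SSD for an
# orthonormal transform (Parseval's relation).
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
orig = rng.random((4, 4))                       # original block
recon = orig + 0.01 * rng.standard_normal((4, 4))   # reconstructed block

ssd_pixel = np.sum((orig - recon)**2)
ssd_coeff = np.sum((dctn(orig, norm="ortho") - dctn(recon, norm="ortho"))**2)
print(np.allclose(ssd_pixel, ssd_coeff))        # True
```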

Keywords: Context-adaptive variable length coding (CAVLC), H.264/AVC, rate-distortion optimization (RDO), sum of squared difference (SSD).

389 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm

Authors: Ali Ridho Barakbah, Yasushi Kiyoki

Abstract:

This paper presents a new approach to image segmentation based on the Pillar-Kmeans algorithm. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to image segmentation after optimization by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far from each other as possible to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. The algorithm is thus able to optimize K-means clustering for image segmentation with respect to precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previous centroids, and then selects the data point with the maximum accumulated distance as the next initial centroid, distributing all initial centroids according to this metric. This paper evaluates the proposed approach by comparing it with the K-means and Gaussian Mixture Model algorithms across the RGB, HSV, HSL, and CIELAB color spaces. The experimental results confirm the effectiveness of the approach in improving segmentation quality with respect to both precision and computation time.
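
The accumulated-distance initialization can be sketched as follows and handed to K-means; random vectors stand in for pixel colors, and the full Pillar algorithm's outlier handling is omitted.

```python
# Hedged sketch: Pillar-style maximum-accumulated-distance initialization.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((3000, 3))                 # stand-in RGB pixels in [0, 1]
k = 5

centroids = [X[rng.integers(len(X))]]     # first centroid: a random point
acc = np.zeros(len(X))
for _ in range(k - 1):
    acc += np.linalg.norm(X - centroids[-1], axis=1)   # accumulated distance
    centroids.append(X[np.argmax(acc)])   # farthest accumulated point next

km = KMeans(n_clusters=k, init=np.array(centroids), n_init=1).fit(X)
print("inertia with pillar-style init:", km.inertia_)
```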

Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.

388 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain, and a spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it requires no mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the method; they show that the EFG method is efficient to implement and highly accurate. The method is used to analyze the static deflection of beams and of a plate with a hole.
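
A 1D sketch of the MLS shape functions at the heart of EFG is given below, with a linear basis p(x) = [1, x] and a compact-support weight; the weight function and support size are illustrative choices.

```python
# Hedged sketch: 1D Moving Least Squares shape functions, phi = p^T A^-1 B.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)

def weight(r):                 # compact-support weight, r = |x - xi| / d
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def mls_shape(x, d=0.25):
    w = weight(np.abs(x - nodes) / d)
    P = np.column_stack([np.ones_like(nodes), nodes])   # linear basis at nodes
    A = (P * w[:, None]).T @ P                          # moment matrix (2x2)
    B = (P * w[:, None]).T                              # 2 x n
    return np.array([1.0, x]) @ np.linalg.solve(A, B)   # n shape functions

phi = mls_shape(0.37)
u_nodes = 2.0 + 3.0 * nodes    # a linear field, reproduced exactly by MLS
print(phi.sum(), phi @ u_nodes)   # partition of unity: 1.0; value: 2 + 3*0.37
```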

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.

387 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN, partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed with Approximate Bayesian Computation (ABC), a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast, and simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used for comparison. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff prediction, so the associated uncertainty in predictions can be obtained, whereas MIKE URBAN provides just a point estimate. Based on the results of the analysis, the developed ABC framework appears to perform well for automatic calibration.
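
The ABC rejection idea behind such a framework can be sketched in a few lines (the document's framework is in R; Python is used here for consistency with the other sketches, and the one-parameter rainfall-runoff model is a toy stand-in, not MIKE URBAN):

```python
# Hedged sketch: ABC rejection sampling for a toy rainfall-runoff model.
import numpy as np

rng = np.random.default_rng(0)

def simulate(runoff_coeff):
    """Toy model: runoff is a scaled rainfall series plus noise."""
    rain = np.array([5.0, 12.0, 3.0, 8.0])
    return runoff_coeff * rain + rng.normal(0, 0.2, rain.size)

observed = simulate(0.6)                 # pretend this is measured runoff

eps, accepted = 1.0, []
for _ in range(20_000):
    theta = rng.uniform(0.0, 1.0)        # draw from the prior
    if np.linalg.norm(simulate(theta) - observed) < eps:
        accepted.append(theta)           # keep draws that reproduce the data

post = np.array(accepted)                # approximate posterior sample
print(f"posterior mean {post.mean():.3f}, 95% CI "
      f"({np.percentile(post, 2.5):.3f}, {np.percentile(post, 97.5):.3f})")
```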

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.
