Search results for: learning algorithms
1391 Automated Segmentation of ECG Signals using Piecewise Derivative Dynamic Time Warping
Authors: Ali Zifan, Mohammad Hassan Moradi, Sohrab Saberi, Farzad Towhidkhah
Abstract:
Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on Adaptive Piecewise Constant Approximation (APCA) and Piecewise Derivative Dynamic Time Warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
Keywords: Adaptive Piecewise Constant Approximation, Dynamic programming, ECG segmentation, Piecewise Derivative Dynamic Time Warping.
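As an aside for readers unfamiliar with the technique, the sketch below shows a generic derivative-based dynamic time warping alignment in Python with NumPy. It illustrates the warping step only, not the APCA/PDDTW segmentation pipeline described above; the derivative estimate, the absolute-difference local cost, and the synthetic example signals are assumptions made for the sketch.

```python
import numpy as np

def derivative(x):
    # Derivative estimate: average of the left slope and half of the central difference;
    # the endpoints copy their neighbours.
    x = np.asarray(x, dtype=float)
    d = np.empty_like(x)
    d[1:-1] = ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0
    d[0], d[-1] = d[1], d[-2]
    return d

def derivative_dtw(a, b):
    """DTW on derivative features; returns the total alignment cost and the warping path."""
    da, db = derivative(a), derivative(b)
    n, m = len(da), len(db)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(da[i - 1] - db[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return D[n, m], path[::-1]

# Example: align a template beat with a resampled (stretched) copy of itself.
t = np.linspace(0, 1, 80)
template = np.sin(2 * np.pi * t) * np.exp(-4 * t)
signal = np.interp(np.linspace(0, 1, 100), t, template)
cost, path = derivative_dtw(template, signal)
print(f"alignment cost: {cost:.3f}, path length: {len(path)}")
```

Warping on derivatives rather than raw amplitudes makes the alignment follow local waveform shape instead of absolute levels, which is the intuition behind derivative variants of DTW.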
PDF Downloads: 2066

1390 Unconventional Calculus Spreadsheet Functions
Authors: Chahid K. Ghaddar
Abstract:
The spreadsheet engine is exploited via a non-conventional mechanism to enable novel worksheet solver functions for computational calculus. The solver functions bypass inherent restrictions on built-in math and user-defined functions by taking variable formulas as a new type of argument while retaining purity and recursion properties. The enabling mechanism permits integration of numerical algorithms into worksheet functions for solving virtually any computational problem that can be modelled by formulas and variables. Several examples are presented for computing integrals, derivatives, and systems of differential-algebraic equations. Incorporation of the worksheet solver functions into the ubiquitous spreadsheet extends the utility of the latter as a powerful tool for computational mathematics.
Keywords: Calculus functions, nonlinear systems, differential algebraic equations, solvers, spreadsheet.
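A rough Python analogue of the central idea, a solver function that accepts a formula and a variable name as arguments, is sketched below. It is not the spreadsheet add-in described above; the formula-string interface, the use of eval over the math module, and Simpson's rule are illustrative choices.

```python
import math

def _as_function(formula, var):
    # Turn a formula string such as "sin(x)" into a callable of one variable.
    return lambda value: eval(formula, {"__builtins__": {}}, {**vars(math), var: value})

def integrate(formula, var, lower, upper, n=1000):
    """Numerically integrate a formula string in `var` over [lower, upper] with Simpson's rule."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of subintervals.
    f = _as_function(formula, var)
    h = (upper - lower) / n
    s = f(lower) + f(upper)
    s += 4 * sum(f(lower + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lower + i * h) for i in range(2, n, 2))
    return s * h / 3

def differentiate(formula, var, at, h=1e-5):
    """Central-difference derivative of a formula string at a point."""
    f = _as_function(formula, var)
    return (f(at + h) - f(at - h)) / (2 * h)

print(integrate("sin(x)", "x", 0.0, math.pi))   # ~2.0
print(differentiate("x**3", "x", 2.0))          # ~12.0
```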
PDF Downloads: 2458

1389 Biometric Methods and Implementation of Algorithms
Authors: Parvinder S. Sandhu, Iqbaldeep Kaur, Amit Verma, Samriti Jindal, Shailendra Singh
Abstract:
Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition, using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the first section, we present an analytical and comparative study of common biometric techniques; the performance of each is reviewed and then tabularized. The latter section involves the actual implementation of the techniques under consideration, which has been done using the state-of-the-art tool MATLAB. This tool helps to effectively portray the corresponding results and effects.
Keywords: Matlab, Recognition, Facial Vectors, Functions.
PDF Downloads: 3191

1388 A Kernel Based Rejection Method for Supervised Classification
Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy
Abstract:
In this paper we are interested in classification problems with a performance constraint on error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary labelled classification, a number of SVM-based methods with a rejection option have been proposed over the past few years. All of these methods use two thresholds on the SVM output. However, in previous works, we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper a new method for supervised classification with a rejection option is proposed. It consists of two different classifiers jointly optimized to minimize the rejection probability subject to a given constraint on error rate. This method uses a new kernel-based linear learning machine that we have recently presented. This learning machine is characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with rejection option is compared to an SVM-based rejection method proposed in recent literature. Experiments show the superiority of the proposed method.
Keywords: rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.
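For context, the two-threshold baseline that such methods are compared against can be written in a few lines with scikit-learn: predict a class only when the SVM decision value is confidently positive or negative, and reject in between (a Chow-style rule). This is a sketch of that baseline, not the jointly optimized pair of classifiers proposed above; the synthetic dataset and the threshold values are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
scores = clf.decision_function(X_te)

# Illustrative thresholds; in practice they are tuned to meet the error constraint.
t_low, t_high = -0.5, 0.5
accept = (scores <= t_low) | (scores >= t_high)   # confident on either side
pred = (scores >= t_high).astype(int)             # class 1 above t_high, class 0 below t_low

error = np.mean(pred[accept] != y_te[accept])
reject_rate = 1.0 - accept.mean()
print(f"error on accepted samples: {error:.3f}, rejection rate: {reject_rate:.3f}")
```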
PDF Downloads: 1444

1387 Exploiting Machine Learning Techniques for the Enhancement of Acceptance Sampling
Authors: Aikaterini Fountoulaki, Nikos Karacapilidis, Manolis Manatakis
Abstract:
This paper proposes an innovative methodology for Acceptance Sampling by Variables, a particular category of Statistical Quality Control dealing with the assurance of product quality. Our contribution lies in the exploitation of machine learning techniques to address the complexity and remedy the drawbacks of existing approaches. More specifically, the proposed methodology exploits Artificial Neural Networks (ANNs) to aid decision making about the acceptance or rejection of an inspected sample. For any type of inspection, ANNs are trained on data from the corresponding tables of a standard's sampling plan schemes. Once trained, ANNs can give closed-form solutions for any acceptance quality level and sample size, thus automating the reading of the sampling plan tables without any need to compromise with the values of the specific standard chosen each time. The proposed methodology provides enough flexibility to quality control engineers during the inspection of their samples, allowing the consideration of specific needs, while it also reduces the time and the cost required for these inspections. Its applicability and advantages are demonstrated through two numerical examples.
Keywords: Acceptance Sampling, Neural Networks, Statistical Quality Control.
PDF Downloads: 1694

1386 Effects of External and Internal Focus of Attention in Motor Learning of Children Cerebral Palsy
Authors: Morteza Pourazar, Fatemeh Mirakhori, Fazlolah Bagherzadeh, Rasool Hemayattalab
Abstract:
The purpose of this study was to examine the effects of external and internal focus of attention on the motor learning of children with cerebral palsy. The study involved 30 boys (7 to 12 years old) with CP type 1 who practiced throwing beanbags. The participants were randomly assigned to the internal focus, external focus, and control groups, and performed six 10-trial blocks with attentional focus reminders during a practice phase and no reminders during retention and transfer tests. Analysis of variance (ANOVA) with repeated measures on the last factor was used. Significant main effects were found for time and group; however, the interaction of time and group was not significant. Retention scores were significantly higher for the external focus group. The external focus group performed better than the other groups, while the internal focus and control groups' performance did not differ. The study concluded that motor skills in Spastic Hemiparetic Cerebral Palsy (SHCP) children could be enhanced by external attention.
Keywords: Cerebral Palsy, external attention, internal attention, throwing task.
PDF Downloads: 1553

1385 Determining Senses for Word Sense Disambiguation in Turkish
Authors: Zeynep Orhan, Zeynep Altan
Abstract:
Word sense disambiguation is an important intermediate stage for many natural language processing applications. The senses of an ambiguous word are the classification of usages for that specific word. This paper deals with methodologies for determining the senses of a given word when they cannot be obtained from an already available resource like WordNet. We offer a method that helps us to determine the sense boundaries gradually: first we decide on some features that are thought to be effective for the senses and divide the instances into two groups; then, according to the results of the evaluations, we continue dividing the instances gradually. In a second method we use pseudo-words: we devise artificial words according to some criteria and evaluate classification algorithms on these previously classified words.
Keywords: Word sense disambiguation, sense determination, pseudo words, sense granularity.
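The pseudo-word idea can be illustrated briefly: merge two unrelated real words into one artificial ambiguous word, keep the original word as the sense label, and measure how well a classifier recovers it from context. The toy English sentences, the bag-of-words features, and the Naive Bayes classifier below are assumptions for the sketch, not the setup used in the paper (which targets Turkish).

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# Toy contexts of two unrelated words that will be conflated into one pseudo-word.
contexts = [
    ("the pianist played a beautiful melody on the piano", "piano"),
    ("she practised scales on the old piano every night", "piano"),
    ("the grand piano filled the concert hall with sound", "piano"),
    ("he tuned the piano before the recital began", "piano"),
    ("the fisherman cast his net from the small boat", "boat"),
    ("they rowed the boat across the quiet lake", "boat"),
    ("the boat drifted slowly along the river bank", "boat"),
    ("waves rocked the wooden boat near the harbour", "boat"),
]

# Replace both words with the pseudo-word; the original word becomes the sense label.
texts = [re.sub(r"\b(piano|boat)\b", "pianoboat", sentence) for sentence, _ in contexts]
labels = [word for _, word in contexts]

X = CountVectorizer().fit_transform(texts)
scores = cross_val_score(MultinomialNB(), X, labels, cv=4)
print("mean disambiguation accuracy on pseudo-word contexts:", scores.mean())
```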
PDF Downloads: 1408

1384 Reimagining the Learning Management System as a “Third” Space
Authors: Christina Van Wingerden
Abstract:
This paper focuses on a sense of belonging, isolation, and the use of a learning management system as a “third space” for connection and community. Given student use of learning management systems (LMS) for courses on campuses, and moderate to high use of social media and hand-held devices, the author explores the possibilities of the LMS as a third space. The COVID-19 pandemic has exacerbated student experiences of isolation, and research indicates that students who experience a sense of belonging have a greater likelihood of academic retention and success. The impacts on students of an LMS designed for student employee orientation and training were examined through a mixed methods approach, including a survey, individual interviews, and focus groups. The sample involved 250-450 undergraduate student employees at a US northwestern university. The goal of the study was to find out the efficiency and effectiveness of the orientation information for a wide range of student employees from multiple student affairs departments. An unexpected finding emerged in the 2015 study and was noted again in the 2017 study: students reported feeling individually connected to the department, and further to the university, because of the LMS orientation. They stated they could see themselves as part of the university community and felt like they belonged. The orientation, through the LMS, was designed for and occurred online (asynchronously), prior to students traveling and beginning university life for the academic year. The students indicated that the connection and belonging resulted from some of the design features. With the onset of COVID-19 and prolonged sheltering in place in North America, as well as other parts of the world, students have been precluded from physically gathering to educate and learn; COVID-19 essentially paused face-to-face education in 2020. Media, governments, and higher education outlets have been reporting on widespread college student stress, isolation, loneliness, and sadness. In this context, the author conducted a current mixed methods study (online survey, online interviews) of students in advanced degree programs, such as Ph.D. and Ed.D. programs, specifically investigating isolation and sense of belonging. As part of the study, a prototype of a Canvas site was experienced by student interviewees to gauge their reactions to this prototype as a “third” space. Some preliminary findings of this study are presented. Doctoral students in the study affirmed the potential of the LMS as a third space for community and social academic connection.
Keywords: COVID-19, learning management systems, sense of belonging, third space.
PDF Downloads: 605

1383 An Optimization Algorithm Based on Dynamic Schema with Dissimilarities and Similarities of Chromosomes
Authors: Radhwan Yousif Sedik Al-Jawadi
Abstract:
Optimization is necessary for finding appropriate solutions to a range of real-life problems. In particular, genetic (or, more generally, evolutionary) algorithms have proved very useful in solving many problems for which analytical solutions are not available. In this paper, we present an optimization algorithm called Dynamic Schema with Dissimilarity and Similarity of Chromosomes (DSDSC), which is a variant of the classical genetic algorithm. This approach constructs new chromosomes from a schema and pairs of existing ones by exploring their dissimilarities and similarities. To show the effectiveness of the algorithm, it is tested and compared with the classical GA on 15 two-dimensional optimization problems taken from the literature. We have found that, in most cases, our method is better than the classical genetic algorithm.
Keywords: Genetic algorithm, similarity and dissimilarity, chromosome injection, dynamic schema.
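One way to picture the schema idea is sketched below: positions where two parent chromosomes agree form a schema, and new chromosomes are generated by copying the fixed genes and filling the wildcard positions at random. This is a generic sketch under a binary encoding, not the full DSDSC algorithm; the function names and the random-fill rule are assumptions.

```python
import random

def extract_schema(parent_a, parent_b):
    """Keep genes where the parents agree (their similarity); mark disagreements with '*'."""
    return [a if a == b else "*" for a, b in zip(parent_a, parent_b)]

def sample_from_schema(schema):
    """New chromosome: copy fixed genes, fill each wildcard position with a random bit."""
    return [gene if gene != "*" else random.randint(0, 1) for gene in schema]

random.seed(1)
p1 = [1, 0, 1, 1, 0, 0, 1, 0]
p2 = [1, 1, 1, 0, 0, 0, 1, 1]
schema = extract_schema(p1, p2)        # -> [1, '*', 1, '*', 0, 0, 1, '*']
offspring = [sample_from_schema(schema) for _ in range(4)]
print(schema)
for child in offspring:
    print(child)
```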
PDF Downloads: 1295

1382 Automated ECG Segmentation Using Piecewise Derivative Dynamic Time Warping
Authors: Ali Zifan, Sohrab Saberi, Mohammad Hassan Moradi, Farzad Towhidkhah
Abstract:
Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on Adaptive Piecewise Constant Approximation (APCA) and Piecewise Derivative Dynamic Time Warping (PDDTW). The results are tested on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
Keywords: Adaptive Piecewise Constant Approximation, Dynamic programming, ECG segmentation, Piecewise Derivative Dynamic Time Warping.
PDF Downloads: 2390

1381 The Construction of Interactive Computer Multimedia Instruction on “Basic Japanese Vocabulary”
Authors: Kongrit Jittangthammagul, Sakesun Yampinij, Thapanee Endoo, Nattapong Kramwong
Abstract:
The study entitled “The Construction of Interactive Computer Multimedia Instruction on Basic Japanese Vocabulary” aimed: 1) to construct the interactive computer multimedia instruction on Basic Japanese Vocabulary, 2) to find out the multimedia's quality, 3) to examine the students' satisfaction, and 4) to study the learning achievement in basic Japanese vocabulary. The sampling group used in this study was composed of 40 first-year students in the Educational Communications and Technology Department, Faculty of Industrial Education and Technology, King Mongkut's University of Technology Thonburi, in the academic year 2553 B.E. (2010). According to the research results, we found that: 1) the quality assessment by 3 mass media experts was 4.72 on average, or at a high level; 2) in terms of content, the evaluation by 3 experts was 4.81 on average, or at a high level; 3) in terms of achievement, there was a statistically significant difference between before and after the treatment at the .05 level; and 4) the satisfaction of students towards the interactive computer multimedia instruction on “Basic Japanese Vocabulary” was 4.35 on average, or at a high level.
Keywords: Interactive Computer Multimedia on Basic Japanese Vocabulary, Learning Achievement, Quality.
PDF Downloads: 1494

1380 Dissipation of Higher Mode using Numerical Integration Algorithm in Dynamic Analysis
Authors: Jin Sup Kim, Woo Young Jung, Minho Kwon
Abstract:
In general dynamic analyses, the lower-mode response is of interest; however, the higher modes of spatially discretized equations generally do not represent the real behavior and do not affect the global response much. Some implicit algorithms are therefore introduced to filter out the high-frequency modes using intended numerical error. The objective of this study is to introduce the P-method and the PC α-method and to compare them with the dissipation method and the Newmark method through stability analysis and a numerical example. The PC α-method gives more accuracy than the other methods because, being based on the α-method, it inherits the superior properties of the implicit α-method. In finite element analysis, the PC α-method is more useful than the other methods because it is an explicit scheme and it achieves second-order accuracy and numerical damping simultaneously.
Keywords: Dynamic, α-Method, P-Method, PC α-Method, Newmark method.
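For readers unfamiliar with this family of schemes, the sketch below implements the classical implicit Newmark-β integrator (constant average acceleration, β = 1/4, γ = 1/2) for a linear system, the baseline the abstract compares against. It is not the P-method or the PC α-method; the two-DOF example system and its parameters are arbitrary.

```python
import numpy as np

def newmark(M, C, K, F, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta integration of M u'' + C u' + K u = F(t), starting from rest."""
    ndof = M.shape[0]
    u, v = np.zeros(ndof), np.zeros(ndof)
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ u)        # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [u.copy()]
    for k in range(1, n_steps + 1):
        t = k * dt
        rhs = (F(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Two-DOF example with arbitrary mass, damping and stiffness matrices.
M = np.diag([2.0, 1.0])
K = np.array([[600.0, -200.0], [-200.0, 200.0]])
C = 0.02 * K
resp = newmark(M, C, K, lambda t: np.array([0.0, 10.0 * np.sin(5 * t)]), dt=0.01, n_steps=500)
print("peak displacement of DOF 2:", resp[:, 1].max())
```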
PDF Downloads: 3075

1379 Self-evolving Neural Networks Based On PSO and JPSO Algorithms
Authors: Abdussamad Ismail, Dong-Sheng Jeng
Abstract:
A self-evolution algorithm for optimizing neural networks using a combination of PSO and JPSO is proposed. The algorithm optimizes both the network topology and the parameters simultaneously, with the aim of achieving the desired accuracy with less complicated networks. The performance of the proposed approach is compared with conventional back-propagation networks on several synthetic functions, with better results in the case of the former. The proposed algorithm is also applied to a slope stability problem to estimate the critical factor of safety. Based on the results obtained, the proposed self-evolving network produced a better estimate of the critical safety factor than the conventional BPN network.
Keywords: Neural networks, Topology evolution, Particle swarm optimization.
PDF Downloads: 1805

1378 Compression of Semistructured Documents
Authors: Leo Galambos, Jan Lansky, Katsiaryna Chernik
Abstract:
EGOTHOR is a search engine that indexes the Web and allows us to search Web documents. Its hit list contains the URL and title of each hit, together with a snippet that tries to briefly show a match. The snippet can almost always be assembled by an algorithm that has full knowledge of the original document (mostly an HTML page). This implies that the search engine is required to store the full text of the documents as part of the index. Such a requirement leads us to pick an appropriate compression algorithm that would reduce the space demand. One solution could be to use common compression methods, for instance gzip or bzip2, but it might be preferable to develop a new method that takes advantage of the document structure, or rather, the textual character of the documents. There already exist special text compression algorithms and methods for the compression of XML documents. The aim of this paper is an integration of the two approaches to achieve an optimal compression ratio.
Keywords: Compression, search engine, HTML, XML.
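The general-purpose baselines named in the abstract are easy to measure; the sketch below compares zlib, gzip, and bzip2 compression ratios on a small synthetic HTML snippet using only the Python standard library. It says nothing about the structure-aware method the paper develops, and the sample document is purely illustrative.

```python
import bz2
import gzip
import zlib

# A small, repetitive HTML document standing in for a stored page.
body = b"<p>Some repetitive paragraph text for the example. </p>" * 50
html = b"<html><head><title>Example page</title></head><body>" + body + b"</body></html>"

for name, compress in (("zlib", zlib.compress),
                       ("gzip", gzip.compress),
                       ("bzip2", bz2.compress)):
    out = compress(html)
    print(f"{name:5s}: {len(html)} -> {len(out)} bytes (ratio {len(out) / len(html):.3f})")
```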
PDF Downloads: 1576

1377 New Coordinate System for Countries with Big Territories
Authors: Mohammed Sabri Ali Akresh
Abstract:
Modern technologies and developments in computing, the Global Positioning System (GPS), Geographic Information Systems (GIS), and total stations (TS) motivate this work. This paper presents a new proposal for a coordinate system based on harmonic equations, “United Projections”, which combines five projections (Mercator, Lambert, Russell, Lagrange, and a compound projection) in a zone coordinate system 14 degrees wide, with one degree of overlap between zones and two standard parallels for the zone from 10 S to 45 S. The paper also presents two cases: the first compares distances between the new coordinate system and UTM; the second creates a local coordinate system for the city of Sydney to measure distances directly from rectangular coordinates using the Mercator, Lambert, and UTM projections.
Keywords: Harmonic equations, coordinate system, projections, algorithms and parallels.
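Of the projections listed, Mercator is the simplest to state; a minimal forward transformation on a sphere is sketched below. The Earth radius and the central meridian are assumptions for the sketch, and none of the paper's “United Projections” blending is shown.

```python
import math

R = 6378137.0  # spherical Earth radius in metres (WGS84 semi-major axis used as a sphere)

def mercator_forward(lat_deg, lon_deg, central_meridian_deg=0.0):
    """Spherical Mercator: x = R * (lon - lon0), y = R * ln(tan(pi/4 + lat/2))."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - central_meridian_deg)
    x = R * dlon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# Example: Sydney (approx. 33.87 S, 151.21 E) against an assumed 151 E central meridian.
x, y = mercator_forward(-33.87, 151.21, central_meridian_deg=151.0)
print(f"x = {x:,.1f} m, y = {y:,.1f} m")
```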
PDF Downloads: 1845

1376 The Problem of Using the Calculation of the Critical Path to Solver Instances of the Job Shop Scheduling Problem
Authors: Marco Antonio Cruz-Chávez, Juan Frausto-Solís, Fernando Ramos-Quintana
Abstract:
A procedure commonly used in the Job Shop Scheduling Problem (JSSP) to evaluate the neighborhood functions used by non-deterministic algorithms is the calculation of the critical path in a digraph. This paper presents an experimental study of the computational cost incurred when the critical path is calculated in the solutions of large JSSP instances. The results indicate that if the critical path is used to generate neighborhoods in the meta-heuristics applied to the JSSP, an elevated computational cost exists, in spite of the fact that the calculation of the critical path in any digraph is of polynomial complexity.
Keywords: Job Shop, CPM, critical path, neighborhood, meta-heuristic.
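The polynomial-time step the abstract refers to, finding a critical (longest) path in a weighted digraph without cycles, can be sketched with a topological-order pass as below. The toy graph is illustrative and is not a JSSP disjunctive graph.

```python
from collections import defaultdict, deque

def critical_path(edges):
    """Longest path in a weighted DAG given as (u, v, weight) triples.
    Returns (length, path). Uses Kahn's topological ordering, so it runs in O(V + E)."""
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    queue = deque(n for n in nodes if indeg[n] == 0)
    dist = {n: 0 for n in nodes}
    pred = {n: None for n in nodes}
    while queue:
        u = queue.popleft()
        for v, w in graph[u]:
            if dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    end = max(dist, key=dist.get)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[end], path[::-1]

# Toy DAG: edge weights are activity durations.
edges = [("S", "A", 3), ("S", "B", 2), ("A", "C", 4), ("B", "C", 6), ("C", "T", 5)]
print(critical_path(edges))   # -> (13, ['S', 'B', 'C', 'T'])
```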
PDF Downloads: 2301

1375 A Model for Analyzing the Startup Dynamics of a Belt Transmission Driven by a DC Motor
Authors: Giovanni Incerti
Abstract:
In this paper the vibration of a synchronous belt drive during start-up is analyzed and discussed. Besides considering the belt elasticity, the model proposed here also takes into consideration the electromagnetic response of the DC motor. The solution of the motion equations is obtained by means of modal analysis in state space, which allows all equations to be decoupled without introducing the hypothesis of proportional damping. The mathematical model of the transmission and the solution algorithms have been implemented within computing software that allows the user to simulate the dynamics of the system and to evaluate the effects due to the elasticity of the belt branches and to the electromagnetic behavior of the DC motor. In order to show the details of the calculation procedure, the paper presents a case study developed with the aid of the above-mentioned software.
Keywords: Belt drive, Vibrations, Startup, DC motor.
PDF Downloads: 3098

1374 Evaluating 8D Reports Using Text-Mining
Authors: Benjamin Kuester, Bjoern Eilert, Malte Stonis, Ludger Overmeyer
Abstract:
Increasing quality requirements make reliable and effective quality management indispensable. This includes complaint handling, in which the 8D method is widely used. The 8D report, as a written documentation of the 8D method, is one of the key quality documents, as it internally secures the quality standards and acts as a communication medium to the customer. In practice, however, the 8D report is mostly faulty and of poor quality, and there is no quality control of 8D reports today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms, the presented system is able to uncover content-related and formal quality deficiencies and thus increases the quality of the complaint processing in the long term.
Keywords: 8D report, complaint management, evaluation system, text-mining.
PDF Downloads: 1021

1373 Scene Adaptive Shadow Detection Algorithm
Authors: Mohammed Ibrahim M, Anupama R.
Abstract:
Robustness is one of the primary performance criteria for an Intelligent Video Surveillance (IVS) system. One of the key factors in enhancing the robustness of dynamic video analysis is providing accurate and reliable means for shadow detection. If left undetected, shadow pixels may result in incorrect object tracking and classification, as they tend to distort localization and measurement information. Most of the algorithms proposed in the literature are computationally expensive, some to the extent of equalling the computational requirement of motion detection. In this paper, the homogeneity property of shadows is explored in a novel way for shadow detection. An adaptive division-image analysis (which highlights the homogeneity property of shadows), followed by a relatively simple projection histogram analysis for penumbra suppression, is the key novelty of our approach.
Keywords: homogeneity, penumbra, projection histogram, shadow correction
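A minimal NumPy illustration of the division-image idea follows: over shadowed background the ratio of the current frame to the background is roughly constant and below one, so thresholding that ratio separates shadow candidates from both the background and opaque foreground. The thresholds and the synthetic frames are assumptions, not the adaptive scheme of the paper.

```python
import numpy as np

def shadow_mask(frame, background, low=0.4, high=0.95, eps=1e-6):
    """Division image D = frame / background; pixels darkened by a roughly constant
    factor (low <= D < high) are labelled as shadow candidates."""
    ratio = frame.astype(float) / (background.astype(float) + eps)
    return (ratio >= low) & (ratio < high)

# Synthetic example: grey background, a dark foreground object, and its cast shadow.
rng = np.random.default_rng(0)
background = np.full((60, 80), 180.0) + rng.normal(0, 2, (60, 80))
frame = background.copy()
frame[20:35, 30:45] = 40.0    # opaque foreground object (too dark to be shadow)
frame[20:35, 45:60] *= 0.7    # cast shadow: background attenuated by a constant factor

mask = shadow_mask(frame, background)
print("shadow pixels detected:", int(mask.sum()), "of", 15 * 15, "true shadow pixels")
```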
PDF Downloads: 1901

1372 A GPU Based Texture Mapping Technique for 3D Models Using Multi-View Images
Authors: In Lee, Kyung-Kyu Kang, Jaewoon Lee, Dongho Kim
Abstract:
Previous algorithms for 3D model texture generation and mapping from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping per pixel directly in the GPU fragment shader, without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method needs more calculation on the GPU than previous works, but it shows real-time performance and the previously mentioned problems do not occur.
Keywords: Texture Mapping, Multi-view Images, Camera Calibration, GPU Shader.
PDF Downloads: 1946

1371 Optimal Planning of Ground Grid Based on Particle Swam Algorithm
Authors: Chun-Yao Lee, Yi-Xing Shen
Abstract:
This paper presents an application of particle swarm optimization (PSO) to grounding grid planning and compares it with the application of a genetic algorithm (GA). First, based on IEEE Std. 80, the cost function of the grounding grid and the constraints on ground potential rise, step voltage, and touch voltage are constructed to formulate the optimization problem of grounding grid planning. Second, GA and PSO algorithms for obtaining the optimal grounding grid solution are developed. Finally, a grounding grid planning case shows the superiority and availability of the PSO algorithm and the proposed grounding grid planning results in terms of cost and computational time.
Keywords: Genetic algorithm, particle swarm optimization, grounding grid.
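A bare-bones particle swarm optimizer is sketched below with a placeholder objective (the sphere function); in the application described above, the objective would instead be the IEEE Std. 80 grounding-grid cost with the voltage constraints handled, e.g., by penalties. All hyperparameters are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` over a box; `bounds` is a list of (low, high) pairs per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                   # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Placeholder objective standing in for a grounding-grid cost model.
best_x, best_val = pso(lambda p: float(np.sum(p**2)), bounds=[(-5, 5)] * 4)
print("best solution:", np.round(best_x, 4), "cost:", round(best_val, 6))
```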
PDF Downloads: 2077

1370 A Survey on Facial Feature Points Detection Techniques and Approaches
Authors: Rachid Ahdid, Khaddouj Taifi, Said Safi, Bouzid Manaut
Abstract:
Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction, and face recognition. The majority of facial feature point detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article, selected approaches to facial feature detection are gathered and described. This overview focuses on the class of research exploiting facial feature point detection to represent the facial surface of a two-dimensional or three-dimensional face. In the conclusion, we discuss the advantages and disadvantages of the presented algorithms.
Keywords: Facial feature points, face recognition, facial feature tracking, two-dimensional data, three-dimensional data.
PDF Downloads: 1680

1369 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study presents the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting (IDW) model. ML-IDW leverages ML-NNs’ processing capabilities, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW’s ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW. The performance of the proposed model is compared against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. Our results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting a lower mean square error in regression and a higher F1 score in classification.
Keywords: Deep Learning, Multi-Layer Neural Networks, Gradient Descent, Spatial Interpolation, Inverse Distance Weighting.
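For reference, the classical single-layer IDW baseline that ML-IDW builds upon is a few lines of NumPy, shown below. The power parameter and the toy sample points are assumptions, and none of the multi-layer, learnable components of ML-IDW are represented.

```python
import numpy as np

def idw_interpolate(known_xy, known_z, query_xy, power=2.0, eps=1e-12):
    """Classical inverse distance weighting: a weighted average of known values with
    weights 1 / d**power; a query that coincides with a sample returns (almost) that sample."""
    known_xy, known_z, query_xy = map(np.asarray, (known_xy, known_z, query_xy))
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w @ known_z) / w.sum(axis=1)

# Toy example: four sample points with values, interpolated at two query locations.
samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])
queries = np.array([[0.5, 0.5], [0.9, 0.1]])
print(idw_interpolate(samples, values, queries))   # first query is equidistant -> mean 2.5
```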
PDF Downloads: 32

1368 Efficient Time Synchronization in Wireless Sensor Networks
Authors: Shehzad Ashraf Ch., Aftab Ahmed Khan, Zahid Mehmood, Muhammad Ahsan Habib, Qasim Mehmood
Abstract:
Energy efficiency is the key requirement in wireless sensor networks, as sensors are small, cheap, and deployed in very large numbers over a large geographical area, so there is no question of replacing the batteries of the sensors once deployed. Different ways can be used for energy-efficient transmission, including multi-hop algorithms, collaborative communication, cooperative communication, beamforming, routing algorithms, and phase, frequency, and time synchronization. The paper reviews the need for time synchronization and proposes a BFS-based synchronization algorithm to achieve energy efficiency. The efficiency of our protocol has been tested and verified by simulation.
Keywords: time synchronization, sensor networks, energy efficiency, breadth first search
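A toy version of the BFS idea, propagating clock offsets level by level over a breadth-first spanning tree rooted at a reference node, is sketched below. The example graph, the pairwise offsets, and the function names are illustrative, not the protocol proposed above.

```python
from collections import deque

def bfs_time_sync(adjacency, pairwise_offset, root):
    """Breadth-first propagation of clock offsets from a reference node.

    adjacency: dict node -> list of neighbour nodes.
    pairwise_offset[(u, v)]: measured offset of v's clock relative to u's (e.g. obtained
    from a two-way message exchange); offsets to the root accumulate along BFS tree edges.
    """
    offset = {root: 0.0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in offset:                 # first (shallowest) path to v wins
                offset[v] = offset[u] + pairwise_offset[(u, v)]
                queue.append(v)
    return offset

# Small example network; node 0 is the reference (sink) node.
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
pairwise_offset = {(0, 1): 0.4, (0, 2): -0.2, (1, 3): 0.1, (2, 3): 0.7,
                   (1, 0): -0.4, (2, 0): 0.2, (3, 1): -0.1, (3, 2): -0.7}
print(bfs_time_sync(adjacency, pairwise_offset, root=0))   # offsets relative to node 0
```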
PDF Downloads: 1721

1367 Evaluation of Classification Algorithms for Road Environment Detection
Authors: T. Anbu, K. Aravind Kumar
Abstract:
Road environment information is needed accurately for applications such as road maintenance and virtual 3D city modeling. Mobile laser scanning (MLS) efficiently produces dense point clouds of huge areas, from which the road and its environment can be modeled in detail. Objects such as buildings, cars, and trees are an important part of road environments. Different methods have been developed for the detection of such objects, but there is still a lack of accuracy due to problems of illumination, environmental changes, and multiple objects with the same features. In this work, the comparison between different classifiers, such as multiclass SVM, kNN, and multiclass LDA, for road environment detection is analyzed. Finally, kNN with the LBP feature improved the classification accuracy to 93.3%, higher than the other classifiers.
Keywords: Classifiers, feature extraction, mobile-based laser scanning, object location estimation.
PDF Downloads: 773

1366 Conception of a Reliable, Low Cost and Autonomous Explorative Hovercraft
Authors: S. Burgalat, L. Teilhac, A. Brand, E. Chastel, M. Jumeline
Abstract:
The paper presents the actual benefits and drawbacks of a multidirectional autonomous hovercraft conceived with limited resources and designed for indoor exploration. Recent developments in the field have led to the appearance of very powerful autonomous systems capable of heavy computation and of exploration in complex unknown environments. They usually rely on very complex algorithms and high-precision (and high-cost) sensors, and sometimes have heavy computational loads with complex data fusion. These systems are usually powerful but come at a certain price, and the benefits may not be worth the cost, especially considering their hardware limitations and their power consumption. The present approach is to build a compromise between cost, power consumption, and precision of results.
Keywords: Hovercraft, Indoor Exploration, Autonomous, Multidirectional, Wireless Control.
PDF Downloads: 2223

1365 A Method for Solving a Bi-Objective Transportation Problem under Fuzzy Environment
Authors: Sukhveer Singh, Sandeep Singh
Abstract:
A bi-objective fuzzy transportation problem, with the objectives of minimizing the total fuzzy cost and the fuzzy time of transportation without assigning priorities to them, is considered. To the best of our knowledge, there is no method in the literature to find efficient solutions of the bi-objective transportation problem under uncertainty. In this paper, a bi-objective transportation problem in an uncertain environment has been formulated, and an algorithm has been proposed to find its efficient solutions. The proposed algorithm avoids degeneracy and gives the optimal solution faster than other existing algorithms for the given uncertain transportation problem.
Keywords: Transportation problem, efficient solution, ranking function, fuzzy transportation problem.
PDF Downloads: 1355

1364 IBFO_PSO: Evaluating the Performance of Bio-Inspired Integrated Bacterial Foraging Optimization Algorithm and Particle Swarm Optimization Algorithm in MANET Routing
Authors: K. Geetha, P. Thangaraj, C. Rasi Priya, C. Rajan, S. Geetha
Abstract:
This paper presents the performance of the Integrated Bacterial Foraging Optimization and Particle Swarm Optimization (IBFO_PSO) technique in MANET routing. BFO is a bio-inspired algorithm that simulates the foraging behavior of bacteria and is effectively applied to improving routing performance in MANETs. The results prove that PSO integrated with BFO reduces routing delay, energy consumption, and communication overhead.
Keywords: Ant Colony Optimization, Bacterial Foraging Optimization, Hybrid Routing Intelligent Algorithm, Naturally inspired algorithms, Particle Swarm Optimization.
PDF Downloads: 2729

1363 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Efficient matrix-vector multiplication with diagonal sparse matrices is pivotal in a multitude of computational domains, ranging from scientific simulations to machine learning workloads. When encoded in the conventional Diagonal (DIA) format, these matrices often induce computational overheads due to extensive zero-padding and non-linear memory accesses, which can hamper the computational throughput, and elevate the usage of precious compute and memory resources beyond necessity. The ’DIA-Adaptive’ approach, a methodological enhancement introduced in this paper, confronts these challenges head-on by leveraging the advanced parallel instruction sets embedded within Machine Learning Units (MLUs). This research presents a thorough analysis of the DIA-Adaptive scheme’s efficacy in optimizing Sparse Matrix-Vector Multiplication (SpMV) operations. The scope of the evaluation extends to a variety of hardware architectures, examining the repercussions of distinct thread allocation strategies and cluster configurations across multiple storage formats. A dedicated computational kernel, intrinsic to the DIA-Adaptive approach, has been meticulously developed to synchronize with the nuanced performance characteristics of MLUs. Empirical results, derived from rigorous experimentation, reveal that the DIA-Adaptive methodology not only diminishes the performance bottlenecks associated with the DIA format but also exhibits pronounced enhancements in execution speed and resource utilization. The analysis delineates a marked improvement in parallelism, showcasing the DIA-Adaptive scheme’s ability to adeptly manage the interplay between storage formats, hardware capabilities, and algorithmic design. The findings suggest that this approach could set a precedent for accelerating SpMV tasks, thereby contributing significantly to the broader domain of high-performance computing and data-intensive applications.
Keywords: Adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication.
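To make the zero-padding issue concrete, the sketch below is a plain NumPy matrix-vector product for the standard DIA layout (an array of diagonal offsets plus a column-aligned, padded data array). It is only the textbook kernel, not the DIA-Adaptive scheme or MLU code described above.

```python
import numpy as np

def dia_matvec(offsets, data, x):
    """y = A @ x for a matrix stored in DIA format.

    offsets[k] is the diagonal index (0 = main, >0 = super-, <0 = sub-diagonal), and
    data[k, :] holds that diagonal padded to length n and aligned by column, so that
    data[k, j] = A[j - offsets[k], j]; padded positions outside the valid range are skipped.
    """
    n = len(x)
    y = np.zeros(n)
    for off, diag in zip(offsets, data):
        cols = np.arange(max(0, off), min(n, n + off))   # valid columns of this diagonal
        rows = cols - off
        y[rows] += diag[cols] * x[cols]
    return y

# Tridiagonal example, checked against the dense product.
n = 6
offsets = np.array([-1, 0, 1])
data = np.zeros((3, n))
data[0, :-1] = -1.0    # sub-diagonal values occupy columns 0..n-2; last entry is padding
data[1, :] = 2.0       # main diagonal
data[2, 1:] = -1.0     # super-diagonal values occupy columns 1..n-1; first entry is padding
x = np.arange(1.0, n + 1.0)

A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
print(dia_matvec(offsets, data, x))
print(A @ x)   # identical result
```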
PDF Downloads: 233

1362 Orthogonal Functions Approach to LQG Control
Authors: B. M. Mohan, Sanjeeb Kumar Kar
Abstract:
In this paper, a unified approach via block-pulse functions (BPFs) or shifted Legendre polynomials (SLPs) is presented to solve the linear-quadratic-Gaussian (LQG) control problem. A recursive algorithm is also proposed to solve the above problem via BPFs. By using the elegant operational properties of orthogonal functions (BPFs or SLPs), these computationally attractive algorithms are developed. To demonstrate the validity of the proposed approaches, a numerical example is included.
Keywords: Linear quadratic Gaussian control, linear quadratic estimator, linear quadratic regulator, time-invariant systems, orthogonal functions, block-pulse functions, shifted Legendre polynomials.
PDF Downloads: 1858