Search results for: improved sparrow search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9452

8732 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system and to present a simulation study of MPPT for photovoltaic systems using the perturb and observe algorithm and the incremental conductance algorithm. Maximum power point tracking plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be used to track the MPP and keep the system operating at it. MATLAB/Simulink is used to establish a model of a photovoltaic system with MPPT functionality. The system is developed by combining models of a solar PV module and a DC-DC boost converter, and it is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system tracks the maximum power point accurately.
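
Both trackers share the same loop: sample the operating point, decide which way to move the reference voltage, repeat. Below is a minimal sketch of the perturb and observe decision rule; the function name, fixed step size, and return convention are illustrative assumptions rather than details from the paper.

```python
def perturb_and_observe(v, i, v_prev, p_prev, step=0.01):
    """One P&O iteration: perturb the reference voltage, observe power."""
    p = v * i
    dv = v - v_prev
    if p >= p_prev:
        direction = 1.0 if dv >= 0 else -1.0   # power rose: keep perturbing the same way
    else:
        direction = -1.0 if dv >= 0 else 1.0   # power fell: reverse the perturbation
    return v + direction * step, p             # new reference voltage, current power
```

The incremental conductance method replaces the power comparison with the condition dI/dV = -I/V, which holds exactly at the MPP and additionally indicates on which side of the peak the operating point lies.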

Keywords: incremental conductance algorithm, perturb and observe algorithm, photovoltaic system, simulation results

Procedia PDF Downloads 556
8731 Designing a Corpus Database to Enhance the Learning of Old English Language

Authors: Raquel Mateo Mendaza, Carmen Novo Urraca

Abstract:

The current paper presents the elaboration of a corpus database that aligns two different corpora in order to simplify the search for information both for researchers and for students of Old English. This database comprises the information contained in two main reference corpora, namely the Dictionary of Old English Corpus (DOEC), compiled at the University of Toronto, and the York-Toronto-Helsinki Parsed Corpus of Old English (YCOE). The first provides information on all surviving texts written in the Old English language. The latter offers the syntactic and morphological annotation of several texts included in the DOEC. Although both corpora are closely related, as the YCOE includes the DOE source text identifier, the main problem detected is that there is no alignment of texts that allows whole fragments to be searched and further analysed in terms of morphology and syntax. The database proposed in this paper gathers all this information and presents it in a simple, more accessible, visual, and educational way. The alignment of fragments has been done in an automated way; however, some problems emerged during the creation process, particularly related to the lack of correspondence in the division of fragments. For this reason, it has been necessary to revise all entries manually in order to obtain a reliable, high-quality product and to carefully indicate the gaps encountered in these corpora. All in all, this database contains more than 60,000 entries corresponding to the DOE fragments annotated by the YCOE. The main strength of the resulting product lies in its research and teaching implications for the study of Old English. The use of this database will help researchers and students in the study of different aspects of the language, such as inflectional morphology, the syntactic behaviour of given words, or translation studies, among others. By searching for words or fragments, the annotated information on morphology and syntax is displayed automatically, automating and speeding up the retrieval of data.

Keywords: alignment, corpus database, morphosyntactic analysis, Old English

Procedia PDF Downloads 134
8730 Heart Murmurs and Heart Sounds Extraction Using an Algorithm Process Separation

Authors: Fatima Mokeddem

Abstract:

The phonocardiogram (PCG) is a physiological signal that reflects the mechanical activity of the heart. It is a promising tool for researchers in this field because it is rich in indications and useful information for medical diagnosis. PCG segmentation is a basic step in benefiting from this signal. Therefore, this paper presents an algorithm that separates heart sounds and heart murmurs, in case the latter exist, in order to use them in several applications and in heart sound analysis. The separation process presented here is founded on three essential steps: filtering, envelope detection, and heart sound segmentation. The algorithm separates the PCG signal into S1 and S2 and extracts cardiac murmurs.
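
As a minimal sketch of the first two stages, band-pass filtering followed by Hilbert-envelope detection; the 25-400 Hz band and the filter order are assumptions, since the abstract does not specify them:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pcg_envelope(pcg, fs):
    """Band-pass filter a PCG record, then take its Hilbert envelope."""
    b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, pcg)         # zero-phase filtering
    envelope = np.abs(hilbert(filtered))   # analytic-signal magnitude
    return filtered, envelope
```

Segmentation would then threshold this envelope to locate the S1 and S2 lobes; samples between them that still carry energy are candidates for murmurs.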

Keywords: phonocardiogram signal, filtering, envelope detection, murmurs, heart sounds

Procedia PDF Downloads 140
8729 Association Rules Mining and Document-Oriented NoSQL in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to recent technology for manipulating voluminous and unstructured data sets from multiple sources, and NoSQL has emerged to handle the problem of unstructured data. Association rules mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases. The problem of finding association dependencies is well suited to MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
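
The support-counting core of Apriori maps naturally onto MapReduce: mappers emit candidate itemsets per transaction and reducers sum the counts. A toy single-process sketch of the two phases is shown below; the data and helper names are illustrative, and the authors' Hadoop/MongoDB pipeline is not reproduced here.

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transaction, k):
    # Emit (candidate k-itemset, 1) pairs for one transaction (document).
    return [(itemset, 1) for itemset in combinations(sorted(transaction), k)]

def reduce_phase(pairs, min_support_count):
    # Sum the counts per itemset and keep the frequent ones.
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return {s: c for s, c in counts.items() if c >= min_support_count}

# Toy transactions, standing in for documents pulled from MongoDB.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c"}]
pairs = [p for t in transactions for p in map_phase(t, 2)]
print(reduce_phase(pairs, 2))   # {('a', 'c'): 2, ('b', 'c'): 2}
```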

Keywords: Apriori, association rules mining, big data, data mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 162
8728 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model contributes to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still studying it to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed, with 80% of the dataset used for training and 20% for testing. Following the split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, both algorithms are trained with their default parameters and evaluated on the testing data. In the second phase, grid search (used to tune and select optimum parameters) and 5-fold cross-validation produce the final trained model, which is then evaluated on the testing set. The experimental work is applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two, LASSO regression achieved the better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, MMRE of 0.0491, MMER of 0.0551, MdMRE of 0.0593, MdMER of 0.063, and MSE of 0.0007. The results imply that the LASSO-trained model is the most acceptable and offers higher estimation performance than existing results in the literature.
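
The two-phase training scheme maps directly onto scikit-learn. A compact sketch under stated assumptions follows: the data are a synthetic stand-in for the 21-project story point dataset, and the alpha grid is an arbitrary example.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((100, 5))                                  # stand-in features
y = X @ np.array([2.0, 1.0, 0.5, 0.0, 0.0]) + 0.05 * rng.standard_normal(100)

X = MinMaxScaler().fit_transform(X)                       # normalize the dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1: default parameters, evaluated on the test split.
print(Lasso().fit(X_tr, y_tr).score(X_te, y_te))

# Phase 2: grid search with 5-fold cross-validation.
grid = {"alpha": [0.001, 0.01, 0.1, 1.0]}
lasso = GridSearchCV(Lasso(), grid, cv=5).fit(X_tr, y_tr)
enet = GridSearchCV(ElasticNet(), {**grid, "l1_ratio": [0.2, 0.5, 0.8]}, cv=5).fit(X_tr, y_tr)
print(lasso.best_params_, lasso.score(X_te, y_te), enet.score(X_te, y_te))
```

Metrics such as MMRE would then be computed on the held-out predictions, e.g. mean(|y - y_hat| / y).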

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 71
8727 Design of a Cryogenic System for a Dilution Refrigerator with Cavity and Superconducting Magnet

Authors: Ki Woong Lee

Abstract:

The Center for Axion and Precision Physics Research is searching for dark matter using 12-tesla superconducting magnets. A dilution refrigerator is used for the search experiments, together with superconducting magnets and superconducting cavities. The dilution refrigerator requires a stable cryogenic environment using liquid helium. Accordingly, a cryogenic system for a stable supply of liquid helium is to be established. This cryogenic system covers the liquefaction, supply, storage, and purification of liquid helium. This article presents the basic design, construction, and operation plans for building the cryogenic system.

Keywords: cryogenic system, dilution refrigerator, superconducting magnet, helium recovery system

Procedia PDF Downloads 121
8726 Stochastic Programming and C-SOMGA: Animal Ration Formulation

Authors: Pratiksha Saxena, Dipti Singh, Neha Khanna

Abstract:

A self-organizing migrating genetic algorithm (C-SOMGA) is developed for animal diet formulation. This paper presents animal diet formulation using stochastic programming and a genetic algorithm. Tri-objective models for cost minimization, shelf-life maximization, and nutrient maximization are developed, and these objectives are achieved by a combination of stochastic programming and C-SOMGA. Stochastic programming is used to introduce nutrient variability into the animal diet. The self-organizing migrating genetic algorithm provides an exact and quick solution and presents an innovative approach towards the successful application of soft computing techniques in the area of animal diet formulation.
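
Stripped of the stochastic and evolutionary layers, ration formulation is at heart a constrained optimization over feed fractions. A deterministic linear programming core is sketched below with made-up costs and nutrient contents; the paper's data, its tri-objective formulation, and the C-SOMGA solver are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize feed cost subject to minimum nutrient requirements.
# All costs and nutrient contents are illustrative, not the paper's data.
cost = np.array([0.8, 1.2, 0.5])              # cost per kg of three feeds
nutrients = np.array([[0.30, 0.45, 0.10],     # protein fraction per feed
                      [2.8, 3.2, 2.0]])       # energy (Mcal/kg) per feed
required = np.array([0.18, 2.6])              # minimums per kg of ration

res = linprog(cost,
              A_ub=-nutrients, b_ub=-required,   # nutrients >= required
              A_eq=[[1, 1, 1]], b_eq=[1],        # fractions sum to 1
              bounds=[(0, 1)] * 3)
print(res.x, res.fun)   # optimal feed fractions and minimum cost
```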

Keywords: animal feed ration, feed formulation, linear programming, stochastic programming, self-organizing migrating genetic algorithm, C-SOMGA technique, shelf life maximization, cost minimization, nutrient maximization

Procedia PDF Downloads 442
8725 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A person's disability can negatively influence his or her escape time, and this becomes even more important when people from this target group live alone. This research addresses the fire safety of such people's buildings by means of probabilistic methods. For this purpose, fire safety is assessed by modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for the safety analysis, and a limit state function has been developed according to a time-line evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to simulate smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been assigned to each variable of indeterministic nature. To quantify safety and reliability for the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to compute the beta index. Sensitivity analysis has been carried out to identify the most important and effective parameters for the fire safety of the at-risk groups. Results showed that the area of openings and the distance to egress exits are the more important factors in buildings, and that the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities it is less important. The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped than those with the other disabilities considered.
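
The reliability computation reduces to estimating the probability that the limit state g = ASET - RSET (available minus required safe egress time) falls below zero, then converting it to a beta index. A crude Monte Carlo sketch with invented input distributions follows; the paper derives its inputs from the B-Risk model and uses harmony search rather than plain sampling.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 100_000

# Illustrative distributions; the paper's actual inputs come from the
# B-Risk two-zone smoke model and are not given in the abstract.
aset = rng.lognormal(mean=np.log(180), sigma=0.25, size=n)   # available time [s]
rset = rng.normal(loc=120, scale=30, size=n)                 # required time [s]

g = aset - rset                 # limit state: failure (casualties) when g < 0
pf = np.mean(g < 0)             # probability of failure
beta = -norm.ppf(pf)            # safety (beta) index
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```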

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 103
8724 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique

Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian

Abstract:

Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruptions, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. In general, learning-based methods perform poorly on training datasets with long-tail distributions, and fault prediction is a classic instance of this because of the scarcity of failure data. To overcome this problem, a new oversampling method was employed to augment the data, and then an improved CNN-LSTM with a shortcut connection was built to learn more effective features. The shortcut transmits the output of an earlier CNN layer, which is weighted-fused with the output of the next layer and used as the input of the LSTM model. Finally, a detailed empirical comparison of six prediction methods on a public dataset is presented and discussed. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient algorithm for predicting HDD failure in supercomputing.
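
A sketch of the network shape described, in PyTorch, with all layer sizes invented: the first convolution block's output is weighted-fused with the second block's output and fed to the LSTM.

```python
import torch
import torch.nn as nn

class CNNLSTMShortcut(nn.Module):
    """Sketch of a CNN-LSTM with a weighted shortcut; sizes are illustrative."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv1 = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(32, 32, kernel_size=3, padding=1)
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learned fusion weight
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # healthy / will fail

    def forward(self, x):                  # x: (batch, time, SMART features)
        x = x.transpose(1, 2)              # -> (batch, features, time)
        h1 = torch.relu(self.conv1(x))
        h2 = torch.relu(self.conv2(h1))
        fused = self.alpha * h1 + (1 - self.alpha) * h2   # weighted shortcut
        out, _ = self.lstm(fused.transpose(1, 2))
        return self.head(out[:, -1])       # classify from the last time step

model = CNNLSTMShortcut(n_features=12)
print(model(torch.randn(8, 10, 12)).shape)   # torch.Size([8, 2])
```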

Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction

Procedia PDF Downloads 79
8723 Complete Enumeration Approach for Calculation of Residual Entropy for Diluted Spin Ice

Authors: Yuriy A. Shevchenko, Konstantin V. Nefedev

Abstract:

We consider antiferromagnetic systems of Ising spins located at the sites of hexagonal, triangular, and pyrochlore lattices. Such systems can be diluted to a certain concentration level by randomly replacing the magnetic spins with nonmagnetic ones. Quite recently, we calculated the density of states (DOS) of such systems by the Wang-Landau method and, based on the obtained data, computed the dependence of the residual entropy (the entropy at a temperature tending to zero) on the dilution concentration for quite large systems (more than 2000 spins). In the current study, we obtained the same data for small systems (fewer than 20 spins) by a complete enumeration of all possible magnetic configurations and compared the result with the result for large systems. The shape of the curve remains unchanged in both cases, but the specific values of the residual entropy differ because of the finite-size effect.
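
For systems this small, complete enumeration is a direct loop over all 2^N configurations; the residual entropy per spin is the logarithm of the ground-state count divided by N. A minimal sketch follows, where the bond list is a single antiferromagnetic triangle rather than one of the paper's lattices.

```python
import itertools
import numpy as np

def residual_entropy(bonds, n_spins):
    """Residual entropy per spin by complete enumeration.

    bonds: list of antiferromagnetic (i, j) pairs, with E = sum s_i * s_j.
    Feasible only for small systems, as in the paper's < 20-spin case.
    """
    best_e, ground_states = None, 0
    for spins in itertools.product((-1, 1), repeat=n_spins):
        e = sum(spins[i] * spins[j] for i, j in bonds)
        if best_e is None or e < best_e:
            best_e, ground_states = e, 1
        elif e == best_e:
            ground_states += 1
    return np.log(ground_states) / n_spins

# A frustrated antiferromagnetic triangle: 6 of 8 states are ground states.
print(residual_entropy([(0, 1), (1, 2), (0, 2)], 3))   # ln(6)/3 ~ 0.597
```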

Keywords: entropy, pyrochlore, spin ice, Wang-Landau algorithm

Procedia PDF Downloads 264
8722 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed, John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and the corresponding eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA(R) learning algorithm to enhance convergence, which is essential for moving the MCA algorithm towards practical applications. The learning rate parameter is also examined: it has a direct effect on the convergence of the weight vector and on the error level, so a suitable choice ensures fast convergence of the algorithm. MCA is then performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence results achieved.
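
A toy version of the underlying idea: a normalized anti-Oja iteration that converges to the eigenvector of the smallest eigenvalue of the sample covariance. The update below is the textbook minor component rule, not the authors' MCA(R) modification, and the learning rate is an arbitrary illustrative value.

```python
import numpy as np

def mca_minor_eigenvector(X, eta=0.01, epochs=1000, seed=0):
    """Anti-Oja MCA iteration with renormalization for stability."""
    rng = np.random.default_rng(seed)
    C = np.cov(X, rowvar=False)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        w -= eta * (C @ w - (w @ C @ w) * w)   # descend instead of ascend
        w /= np.linalg.norm(w)                 # keep the weight vector unit-norm
    return w

X = np.random.default_rng(1).standard_normal((500, 4)) * np.array([3.0, 2.0, 1.0, 0.3])
print(np.round(mca_minor_eigenvector(X), 2))   # roughly +/-[0, 0, 0, 1]
```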

Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis

Procedia PDF Downloads 451
8721 Analysis of Airborne Data Using Range Migration Algorithm for the Spotlight Mode of Synthetic Aperture Radar

Authors: Peter Joseph Basil Morris, Chhabi Nigam, S. Ramakrishnan, P. Radhakrishna

Abstract:

This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data using the Range Migration Algorithm (RMA) for the spotlight mode of operation. Unlike the polar format algorithm (PFA), RMA mitigates space-variant defocusing and geometric distortion effects, since it does not assume that the illuminating wave-fronts are planar. This facilitates the use of RMA for imaging scenarios involving severe differential range curvatures, enabling the imaging of larger scenes at fine resolution and at shorter ranges with low center frequencies. The RMA for the spotlight mode of SAR is analyzed in this paper using airborne data. Pre-processing operations, viz. range de-skew and motion compensation to a line, are performed on the raw data before it is fed to the RMA component. The various stages of the RMA, viz. 2D matched filtering, along-track Fourier transform, and Stolt interpolation, are analyzed to find the performance limits and the dependence of the resolution of the final image on the imaging geometry. The ability of RMA to compensate for severe differential range curvatures in the two-dimensional spatial frequency domain is also illustrated.

Keywords: range migration algorithm, spotlight SAR, synthetic aperture radar, matched filtering, Stolt interpolation

Procedia PDF Downloads 241
8720 Analysis of Productivity and Poverty Status among Users of Improved Sorghum Varieties in Kano State, Nigeria

Authors: Temitope Adefunsho Olatoye, Julius Olabode Elega

Abstract:

Raising agricultural productivity is an important policy goal for governments and development agencies, and it is central to growth, income distribution, improved food security, and poverty alleviation among practitioners. This study analyzed the productivity and poverty status of users of improved sorghum varieties in Kano State, Nigeria. A multistage sampling technique was adopted to select 131 sorghum farmers who were users of improved sorghum varieties. The data collected were analyzed using both descriptive (frequency distribution and percentage) and inferential (productivity index and FGT model) statistics. The socioeconomic results showed a mean age of 40 years, with about 93.13% of the sorghum farmers being male. The majority (82.44%) of the farmers were married, most had Qur'anic education, and the mean farm size was 3.6 ha. Furthermore, the mean farming experience of the sorghum farmers in the study area was 19 years, with an average monthly income of about ₦48,794. The productivity index showed a ratio of 192,977 kg/ha, while the poverty status results show that 62.88% of the farmers were in the non-poor category, 21.21% were poor, and 15.91% were very poor. The incidence of poverty for sorghum farmers was 16%, indicating that poverty is prevalent in the study area. Based on these findings, it is recommended that seed companies facilitate the spread of improved sorghum varieties, as they have an impact on the productivity and poverty status of sorghum farmers in the study area.
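
The FGT family behind these poverty figures is a one-line formula: P_alpha = (1/N) * sum over the poor of ((z - y_i)/z)^alpha, where z is the poverty line. A small sketch with invented incomes follows; the survey data of the 131 farmers and the actual poverty line are not reproduced here.

```python
import numpy as np

def fgt(incomes, z, alpha):
    """Foster-Greer-Thorbecke index P_alpha: alpha=0 gives the incidence
    (headcount), alpha=1 the poverty gap, alpha=2 the severity."""
    incomes = np.asarray(incomes, dtype=float)
    gap = np.where(incomes < z, (z - incomes) / z, 0.0)
    return float(np.mean(np.where(incomes < z, gap ** alpha, 0.0)))

# Illustrative monthly incomes in naira and an assumed poverty line.
y = [20_000, 35_000, 48_000, 60_000, 75_000]
print(fgt(y, 40_000, 0))   # 0.4: two of five fall below the line
```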

Keywords: Foster Greer Thorbecke model, improved sorghum varieties, productivity, poverty status

Procedia PDF Downloads 73
8719 The Use of Voice in the Online Public Access Catalog as a Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to everyone. Communication between humans and computers is usually carried out via text; with the development of technology, it can now also be conducted by voice, much like communication between human beings. This provides an easy facility for many people, especially those with special needs. In this work, voice search technology is applied to searching the book collections in the OPAC (Online Public Access Catalog), so library visitors can find the books they need faster and more easily. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server first downloads all the book data available in its database and converts it into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON into arrays, index construction, and analysis of the results. The aim is to make searching much faster than the usual OPAC search, because otherwise the data would be fetched from the database for every search request. A data update menu is provided so that users can perform their own data updates and obtain the latest information.
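
The parse-then-index step amounts to building an inverted index over the downloaded JSON, against which the transcribed voice query is matched. A toy sketch with invented records:

```python
import json
from collections import defaultdict

# Toy catalog records, standing in for the JSON downloaded from the server.
records = json.loads('[{"id": 1, "title": "Old English Grammar"},'
                     ' {"id": 2, "title": "English for Librarians"}]')

index = defaultdict(set)                       # word -> record ids
for rec in records:
    for word in rec["title"].lower().split():  # parse titles into word arrays
        index[word].add(rec["id"])

def search(query_text):
    """Look up a (voice-transcribed) query against the prebuilt index."""
    sets = [index[w] for w in query_text.lower().split() if w in index]
    hits = set.intersection(*sets) if sets else set()
    return [r for r in records if r["id"] in hits]

print(search("english grammar"))   # -> the Old English Grammar record
```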

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 344
8718 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering

Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining

Abstract:

DNA microarray technology is used to analyze thousands of gene expression profiles simultaneously, a task that is very important for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by a Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on Agglomerative Hierarchical Clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
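
A generic version of the SVD step: project the genes-by-conditions matrix onto its leading singular structure to obtain normalized coordinates for subsequent (bi)clustering. The rank and the row-centering are assumptions; the Mixed-Clustering and Lift phases are not reproduced.

```python
import numpy as np

def svd_normalize(expr, k=2):
    """Embed genes and conditions via the top-k singular structure."""
    centered = expr - expr.mean(axis=1, keepdims=True)   # center each gene
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    gene_coords = U[:, :k] * s[:k]         # coordinates for genes
    cond_coords = Vt[:k, :].T * s[:k]      # coordinates for conditions
    return gene_coords, cond_coords

expr = np.random.default_rng(3).random((100, 20))   # stand-in for lymphoma data
g, c = svd_normalize(expr)
print(g.shape, c.shape)   # (100, 2) (20, 2)
```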

Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)

Procedia PDF Downloads 278
8717 Optimization of Steel Moment Frame Structures Using Genetic Algorithm

Authors: Mohammad Befkin, Alireza Momtaz

Abstract:

Structural design is a challenging aspect of every project due to limitations in dimensions, the functionality of the structure, and, more importantly, the budget allocated for construction. This research study investigates optimized designs for three steel moment frame buildings with different numbers of stories using a genetic algorithm. The number and length of spans and the height of each floor were constant in all three buildings. The structures are designed according to the AISC code within the provisions of plastic design with allowable stress values. The genetic algorithm for optimization is implemented in MATLAB, while the buildings are modeled in OpenSees and connected to the MATLAB code to perform the optimization iterations. Finally, the designs resulting from the genetic algorithm were compared with the analysis of the buildings in ETABS. The results demonstrated that the structural elements suggested by the code utilize their full capacity, indicating the desirable efficiency of the produced code.
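
A stripped-down genetic loop of the kind described, with a stand-in fitness in place of the OpenSees analysis; the population size, mutation scale, and capacity check are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(sections):
    """Illustrative objective: total member weight plus a penalty when a
    stand-in capacity check fails. A real run would query OpenSees here."""
    demand_ratio = 50.0 / sections                       # hypothetical check
    penalty = 1e3 * np.clip(demand_ratio - 1.0, 0, None).sum()
    return sections.sum() + penalty

pop = rng.uniform(10, 100, size=(40, 6))                 # 40 designs, 6 member sizes
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]               # truncation selection
    pairs = rng.integers(0, 20, size=(20, 2))
    children = 0.5 * (parents[pairs[:, 0]] + parents[pairs[:, 1]])  # crossover
    children += rng.normal(0.0, 2.0, children.shape)     # mutation
    children = np.clip(children, 10, 100)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print(np.round(best, 1))   # sections settle near the minimum feasible size
```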

Keywords: genetic algorithm, structural analysis, steel moment frame, structural design

Procedia PDF Downloads 118
8716 Collective Movement between Two Lego EV3 Mobile Robots

Authors: Luis Fernando Pinedo-Lomeli, Rosa Martha Lopez-Gutierrez, Jose Antonio Michel-Macarty, Cesar Cruz-Hernandez, Liliana Cardoza-Avendaño, Humberto Cruz-Hernandez

Abstract:

Robots work in industry and services performing repetitive or dangerous tasks; however, when flexible movement capabilities and complex tasks are required, the use of many robots is needed. Productivity can also be improved by reducing the time needed to perform tasks. In recent years, a lot of effort has been invested in research and development on the collective control of mobile robots. This interest is justified by the many advantages of having two or more robots collaborating on a particular task, for example: cleaning toxic waste, transporting and manipulating objects, exploration and surveillance, and search and rescue. In this work, a study of the collective movement of mobile robots is presented, and a collision-avoidance solution is developed. The solution is built on a communication implementation that allows coordinated movements along different paths while avoiding obstacles.

Keywords: synchronization, communication, robots, LEGO

Procedia PDF Downloads 432
8715 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data

Authors: Qiuxiao Chen, Yan Hou, Ning Wu

Abstract:

The classic Douglas-Peucker Algorithm (DPA) has deficiencies such as a high risk of deleting key nodes by mistake, high complexity, time consumption, and relatively slow execution speed, so a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data is proposed. For each curve, the basic element of linear vector data, the deletion costs of all its middle nodes are calculated, and the minimum deletion cost is compared with a pre-defined threshold. If the former is greater than or equal to the latter, all remaining nodes are retained and the curve's compression is finished. Otherwise, the node with the minimal deletion cost is deleted, the deletion costs of its two neighbors are updated, and the same loop is repeated on the compressed curve until termination. In several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. The experimental results showed that DCA outperformed DPA in both compression accuracy and execution efficiency.
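
A direct transcription of that loop, using triangle area as a stand-in deletion cost since the abstract does not define the cost function; for clarity this sketch recomputes every cost on each pass, whereas DCA updates only the two neighbors of a deleted node.

```python
def triangle_area(a, b, c):
    # Area spanned by points a, b, c: the assumed cost of deleting b.
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def dca_compress(points, threshold):
    """Repeatedly delete the middle node with minimal deletion cost until
    the minimum cost reaches the threshold."""
    pts = list(points)
    while len(pts) > 2:
        costs = [triangle_area(pts[k-1], pts[k], pts[k+1])
                 for k in range(1, len(pts) - 1)]
        m = min(range(len(costs)), key=costs.__getitem__)
        if costs[m] >= threshold:
            break                  # all remaining nodes are retained
        del pts[m + 1]             # remove the cheapest middle node
    return pts

line = [(0, 0), (1, 0.05), (2, 0.1), (3, 2.0), (4, 0.1), (5, 0)]
print(dca_compress(line, threshold=0.5))   # keeps the sharp vertex at (3, 2.0)
```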

Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost

Procedia PDF Downloads 251
8714 Subspace Rotation Algorithm for Implementing Restricted Hopfield Network as an Auto-Associative Memory

Authors: Ci Lin, Tet Yeap, Iluju Kiringa

Abstract:

This paper introduces the subspace rotation algorithm (SRA) to train the Restricted Hopfield Network (RHN) as an auto-associative memory. The subspace rotation algorithm is a gradient-free subspace tracking approach based on the singular value decomposition (SVD). In comparison with Backpropagation Through Time (BPTT) for training the RHN, it is observed that SRA always converges to the optimal solution, whereas BPTT cannot achieve the same performance when the model becomes complex and the number of patterns is large. The AUTS case study showed that the RHN model trained by SRA achieves a better-structured attraction basin with a larger radius (in general) than the Hopfield Network (HNN) model trained by the Hebbian learning rule. By learning 10,000 patterns from the MNIST dataset with RHN models with different numbers of hidden nodes, it is observed that several components can be adjusted to achieve a balance between recovery accuracy and noise resistance.

Keywords: Hopfield neural network, restricted Hopfield network, subspace rotation algorithm, Hebbian learning rule

Procedia PDF Downloads 117
8713 Teacher Professional Development Programs on K-12 Engineering Education: A Systematic Review of the Literature

Authors: Canan Mesutoglu, Evrim Baran

Abstract:

Teachers have a prominent role in facilitating the place of engineering in K-12 classrooms. This study addresses the need to understand how teacher professional development (PD) programs focusing on K-12 engineering education can be designed and delivered more effectively, since a systematic review of the literature on such programs can offer ideas and recommendations. The purpose of this study is to systematically synthesize the peer-reviewed articles published on K-12 engineering education teacher professional development programs. The methodology comprised four phases: search, selection, coding, and synthesis. The search phase covered articles published between 2000 and 2016; a comprehensive database search was conducted and inclusion criteria were applied. This was followed by an evaluation of article quality with a checklist and, finally, analysis of the results. The results revealed two categories of themes: 1) five themes related to the overarching agenda of the PD programs, and 2) five themes related to the instructional techniques of the PD programs. Finally, core elements were generated to guide the design and delivery of teacher PD programs for K-12 engineering education. The results provide a conceptual basis for future research and practice on teacher PD programs for K-12 engineering education.

Keywords: core elements, K-12 engineering education, systematic literature review, teacher professional development programs

Procedia PDF Downloads 323
8712 Sequential Pattern Mining from Medical Record Data with the Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)

Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim

Abstract:

This research was conducted at the Bolo Primary Health Care (PHC) in Bima Regency. The purpose of the research is to find the association patterns formed in the medical record database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the method used for the analysis, with transaction data generated from the Patient_ID, Check_Date, and diagnosis fields. Sequential Pattern Discovery Using Equivalent Classes (SPADE) is a sequential pattern mining algorithm that finds frequent sequences of transactions using a vertical database layout and a sequence join process. The output of the SPADE algorithm is a set of frequent sequences, which are then used to form rules; this technique finds association patterns between item combinations. Based on the sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the Patient_ID, Check_Date, and diagnosis data at the Bolo PHC.
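
The vertical layout and temporal join at the heart of SPADE can be shown in a few lines. The records below are invented; item id-lists map each diagnosis to (sequence id, event id) pairs, and two patterns are joined by requiring the second event to occur later within the same sequence.

```python
from collections import defaultdict

# Invented patient visit sequences: sequence id -> ordered visits (itemsets).
sequences = {
    1: [("cough",), ("fever", "cough"), ("typhoid",)],
    2: [("fever",), ("typhoid",)],
    3: [("cough",), ("typhoid",)],
}

idlists = defaultdict(set)          # vertical layout: item -> {(sid, eid)}
for sid, visits in sequences.items():
    for eid, items in enumerate(visits):
        for item in items:
            idlists[item].add((sid, eid))

min_support = 2                     # sequences that must contain the pattern
freq1 = {i for i, l in idlists.items()
         if len({sid for sid, _ in l}) >= min_support}

# Temporal (sequence) join: a -> b is frequent if b follows a often enough.
for a in freq1:
    for b in freq1:
        sids = {sa for sa, ea in idlists[a] for sb, eb in idlists[b]
                if sa == sb and eb > ea}
        if len(sids) >= min_support:
            print(f"{a} -> {b} (support {len(sids)})")
```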

Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm

Procedia PDF Downloads 401
8711 RFID Based Indoor Navigation with Obstacle Detection Based on A* Algorithm for the Visually Impaired

Authors: Jayron Sanchez, Analyn Yumang, Felicito Caluyo

Abstract:

A visually impaired individual may use a cane, a guide dog, or assistance from another person. This study implemented RFID technology consisting of a low-cost RFID reader and passive RFID tag cards, where the passive tag cards served as checkpoints for the visually impaired user, who was guided by audio output from the system while traversing the path. The study used an ultrasonic sensor to detect static obstacles, and the system generated an alternate path based on the A* algorithm to avoid them. Alternate paths were also generated in case the visually impaired user strayed outside the intended path to the destination. The A* algorithm generated the shortest path to the destination by calculating the total cost of movement and then selecting the smallest movement cost as the successor to the current tag card. Several trials were conducted to determine the effect of obstacles on the traversal time of the visually impaired user, and a dependent-sample t-test was applied for the statistical analysis. Based on the analysis, obstacles along the path introduced delays while requesting the alternate path because of the transmission delay from the laptop to the device via the ZigBee modules.
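
A compact A* over a 4-connected grid of checkpoints illustrates the path cost logic described; the grid, Manhattan heuristic, and unit movement costs are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected grid of checkpoints; 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1                       # unit movement cost
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],     # obstacles flagged by the ultrasonic sensor
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))   # detours around the blocked row
```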

Keywords: A* algorithm, RFID technology, ultrasonic sensor, ZigBee module

Procedia PDF Downloads 409
8710 RFID and Intelligence: A Smart Authentication Method for Blind People

Authors: V. Vishu, R. Manimegalai

Abstract:

This work combines intelligence and radio frequency identification (RFID) to provide an enhanced authentication method for visually challenged people. The main goal is to provide improved authentication by combining the Advanced Encryption Standard (AES) algorithm with intelligence: the encryption key is generated as a combination of intelligent information from sensors and tag values. The main challenges are security, privacy, and cost. Besides, the method was designed to evaluate the amount of interaction between sensors and its influence on the mental and physical states of visually challenged people. The proposal is to apply these ideas to support independent living and assist users towards a good quality of life.
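
A minimal sketch of the key-generation idea: hash the fused sensor and tag material into a 256-bit AES key and use it for authenticated encryption. SHA-256 as the key-derivation step and the sensor/tag strings are assumptions (a production design would use a proper KDF); the AESGCM API is from the `cryptography` package.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(sensor_reading: bytes, tag_value: bytes) -> bytes:
    # Fuse sensor and tag material into a 256-bit AES key.
    # SHA-256 as the KDF is an assumption; the paper does not name one.
    return hashlib.sha256(sensor_reading + tag_value).digest()

key = derive_key(b"temp=36.5;motion=low", b"TAG-0451")   # hypothetical inputs
aes = AESGCM(key)
nonce = os.urandom(12)
token = aes.encrypt(nonce, b"authentication challenge", None)
assert aes.decrypt(nonce, token, None) == b"authentication challenge"
```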

Keywords: AES, encryption, intelligence, smart key

Procedia PDF Downloads 241
8709 A Novel Gateway Location Algorithm for Wireless Mesh Networks

Authors: G. M. Komba

Abstract:

An Internet Gateway (IGW) has more capability than a simple Mesh Router (MR) and is responsible for routing most of the traffic from Mesh Clients (MCs) to the Internet backbone; however, IGWs are more expensive. Choosing strategic locations for the IGWs in a Backbone Wireless Mesh (BWM) is critical to the Wireless Mesh Network (WMN), and the placement of IGWs can improve a number of performance-related problems. In this paper, we propose a novel algorithm, namely the New Gateway Location Algorithm (NGLA), which aims to achieve four objectives: decreasing network cost, minimizing delay, optimizing throughput capacity, and installing as few IGWs as possible. Different from existing algorithms, the NGLA incrementally identifies IGWs and allocates mesh routers (MRs) to them, and it is guaranteed to find a feasible IGW placement with the minimum possible number of IGWs while consistently preserving all Quality of Service (QoS) requirements. Simulation results show that the NGLA outperforms other algorithms by a large margin in the number of IGWs, placing 40% fewer IGWs and gaining 80% in throughput. Furthermore, the NGLA is easy to implement and can be employed for BWMs.

Keywords: Wireless Mesh Network, Gateway Location Algorithm, Quality of Service, BWM

Procedia PDF Downloads 371
8708 Clutter Suppression Based on Singular Value Decomposition and Fast Wavelet Algorithm

Authors: Ruomeng Xiao, Zhulin Zong, Longfa Yang

Abstract:

To address the problem that a target signal is difficult to detect in a strong ground clutter environment, this paper proposes a clutter suppression algorithm based on the combination of singular value decomposition and the Mallat fast wavelet algorithm. The method first carries out singular value decomposition on the radar echo data matrix, achieving an initial separation of target and clutter through threshold processing of the singular values. It then carries out wavelet decomposition on the echo data to find the target location, and adopts the discard method to select the appropriate decomposition layers for reconstructing the target signal, which ensures minimum loss of target information while suppressing the clutter. Verification on measured data shows that the method has a significant effect on target extraction under low SCR and that target reconstruction can be achieved without prior information on the target position. The method also provides a certain enhancement of the output SCR compared with the traditional single-wavelet processing method.
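
The two stages chain together as below: zero the dominant singular values that carry the strong clutter, then wavelet-decompose the residual and discard the layer that does not carry the target. The energy cutoff, wavelet, level, and choice of discarded layer are all illustrative; the paper's Mallat implementation and thresholds are not reproduced.

```python
import numpy as np
import pywt

def suppress_clutter(echo, clutter_energy=0.9, wavelet="db4", level=3):
    """SVD hard-thresholding followed by wavelet layer selection."""
    U, s, Vt = np.linalg.svd(echo, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_clutter = int(np.searchsorted(energy, clutter_energy)) + 1
    s[:n_clutter] = 0.0                    # drop clutter-dominated components
    residual = (U * s) @ Vt
    coeffs = pywt.wavedec(residual, wavelet, level=level, axis=-1)
    coeffs[0] = np.zeros_like(coeffs[0])   # discard the approximation layer
    return pywt.waverec(coeffs, wavelet, axis=-1)

echo = np.random.default_rng(0).standard_normal((64, 256))  # stand-in echoes
print(suppress_clutter(echo).shape)                          # (64, 256)
```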

Keywords: clutter suppression, singular value decomposition, wavelet transform, Mallat algorithm, low SCR

Procedia PDF Downloads 118
8704 Improving Readability for Tweet Contextualization Using Bipartite Graphs

Authors: Amira Dhokar, Lobna Hlaoua, Lotfi Ben Romdhane

Abstract:

Tweet contextualization (TC) is a new task that aims to answer questions of the form 'What is this tweet about?' The task was conceived as an extension of an earlier area called multi-document summarization (MDS), which consists of generating a summary from many sources. In both TC and MDS, the summary should ideally contain the most relevant information on the topic being discussed in the source texts (for MDS) or related to the query (for TC). Besides being informative, a summary should be coherent, i.e., well written, readable, and grammatically compact. Hence, coherence is an essential characteristic for producing comprehensible texts. In this paper, we propose a new approach to improve readability and coherence for tweet contextualization based on bipartite graphs. The main idea of our proposed method is to reorder the sentences in a given paragraph by combining the detection of the most expressive words with the HITS (Hyperlink-Induced Topic Search) algorithm to build a coherent context.
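
On a bipartite sentence-word graph, HITS is a short power iteration: sentences act as hubs and expressive words as authorities. Ordering sentences by hub score, as sketched below, is a simplified stand-in for the paper's reordering step; the incidence matrix is a toy example.

```python
import numpy as np

def hits_scores(adj, iters=50):
    """Power iteration for HITS on a bipartite incidence matrix.
    Rows are sentences (hubs), columns are expressive words (authorities)."""
    hubs = np.ones(adj.shape[0])
    auth = np.ones(adj.shape[1])
    for _ in range(iters):
        auth = adj.T @ hubs
        auth /= np.linalg.norm(auth)
        hubs = adj @ auth
        hubs /= np.linalg.norm(hubs)
    return hubs, auth

# Toy incidence matrix: 3 sentences x 4 expressive words.
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1]], dtype=float)
hubs, _ = hits_scores(adj)
print(np.argsort(-hubs))   # sentence ordering by hub score
```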

Keywords: bipartite graphs, readability, summarization, tweet contextualization

Procedia PDF Downloads 193
8706 Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles

Authors: Angelo Lerro, Manuela Battipede, Piero Gili, Alberto Brandl

Abstract:

Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are hard to meet due to the generally restricted space and allowable weight for the aircraft systems, limiting their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure significant quantities such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it yields a simplified inertial and air data system with fewer external devices; in fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits that would be gained by implementing this system in UAV applications. Simplifying the entire ADAHRS architecture reduces the overall cost while improving safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm on the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, gaining a significant improvement in terms of versatility.
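
The virtual sensor idea reduces to a regression from a handful of pressure and inertial channels to an aerodynamic angle. A toy sketch with synthetic data follows; the real Smart-ADAHRS inputs, architecture, and training data are proprietary and not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Invented channels standing in for flight test instrumentation:
# [dynamic pressure, static pressure, pitch rate, pitch angle, normal accel].
X = rng.random((2000, 5))
alpha = 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 0.1, 2000)  # toy AoA relation

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:1500], alpha[:1500])                  # train the virtual sensor
print("test R^2:", round(model.score(X[1500:], alpha[1500:]), 3))
```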

Keywords: aerodynamic angles, air data system, flight test, neural network, unmanned aerial vehicle, virtual sensor

Procedia PDF Downloads 221
8705 Research on Control Strategy of Differential Drive Assisted Steering of Distributed Drive Electric Vehicle

Authors: J. Liu, Z. P. Yu, L. Xiong, Y. Feng, J. He

Abstract:

Exploiting the independence, accuracy, and controllability of the driving/braking torques of a distributed drive electric vehicle, a control strategy for differential drive assisted steering was designed. Firstly, the assist curve for different speeds and steering wheel torques was developed, and the differential torques were distributed to the right and left front wheels. Then the steering return assist control algorithm was designed. Finally, a joint simulation was conducted in CarSim/Simulink. The results indicate that the differential drive assisted steering algorithm provides sufficient steering assist at low speed and improves steering portability. As the speed increases, the provided steering assist decreases. With the control algorithm, the stiffness of the steering system increases with speed, which preserves the driver's road feeling. The control algorithm for differential drive assisted steering effectively avoids understeer at low speed.
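
A heavily simplified sketch of the two pieces described: a speed-fading assist curve and the left/right torque split whose differential creates the assisting yaw moment. Every constant and the curve shape are invented placeholders, not the paper's tuned assist map.

```python
def assist_torque(speed_mps, driver_torque_nm, k0=6.0, v0=8.0):
    """Illustrative assist curve: large assist at low speed, fading as
    speed grows to preserve road feel. Constants are assumptions."""
    return k0 * driver_torque_nm / (1.0 + (speed_mps / v0) ** 2)

def front_wheel_torques(total_drive_nm, assist_nm):
    # A differential between the left and right front wheels produces a
    # yaw moment that assists the steering motion.
    return total_drive_nm / 2 - assist_nm / 2, total_drive_nm / 2 + assist_nm / 2

print(front_wheel_torques(200.0, assist_torque(5.0, 3.0)))
```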

Keywords: differential assisted steering, control strategy, distributed drive electric vehicle, driving/braking torque

Procedia PDF Downloads 478
8704 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for the efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of time series data, based on a likelihood ratio test procedure. The design, implementation, and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures, and an approach to accurately approximate the threshold of the MCUSUM is provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, which is designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detecting a gradual change in the mean. The algorithm is then applied to randomly generated time series data with a gradual linear trend in the mean to demonstrate its usefulness.
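
For contrast with the proposed MCUSUM, the standard one-sided CUSUM it is benchmarked against fits in a few lines; the reference value k and threshold h below are conventional illustrative settings, and the paper's drift-specific modification is not reproduced.

```python
import numpy as np

def cusum_drift(x, mu0, k=0.5, h=5.0):
    """One-sided CUSUM for an upward change in mean: alarm when the
    cumulative statistic crosses the threshold h."""
    s, alarms = 0.0, []
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - mu0) - k)   # accumulate evidence above mu0 + k
        if s > h:
            alarms.append(t)
            s = 0.0                        # restart after signaling
    return alarms

rng = np.random.default_rng(7)
x = rng.standard_normal(300)
x[150:] += np.linspace(0, 2.0, 150)        # gradual linear drift in the mean
print(cusum_drift(x, mu0=0.0))             # alarms cluster after the drift onset
```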

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 299
8703 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering

Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli

Abstract:

Clustering is a versatile instrument for analyzing collections of data, providing insights into the underlying structure of a dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by introducing partial memberships (values in the continuous interval [0, 1]) of instances in clusters. Several fuzzy clustering algorithms have been devised, such as FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, and PCM. Each of these algorithms has its own advantages and drawbacks, so no single algorithm performs best on all datasets. In this paper, we experimentally compare the FCM, GK, and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model, which exploits the advantages and diminishes the drawbacks of each algorithm. Secondly, we experimentally evaluate the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
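
For reference, the FCM stage alternates two closed-form updates, memberships and centroids. A textbook sketch follows (not the hybrid model itself; the fuzzifier m = 2 and random initialization are the usual defaults):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

X = np.vstack([np.random.default_rng(k).normal(mu, 0.3, size=(50, 2))
               for k, mu in enumerate((0.0, 3.0, 6.0))])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 1))   # near (0,0), (3,3), (6,6), up to ordering
```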

Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model

Procedia PDF Downloads 514