Search results for: algorithm symbol recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5240

4010 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method

Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi

Abstract:

Composite patching is a common way of reinforcing cracked pipes and cylinders. The effect of composite patch reinforcement on the fracture parameters of a cracked pipe depends on a variety of parameters such as the number of layers and the angle, thickness, and material of each layer. Stacking sequence optimization of the composite patch is therefore crucial for cracked-pipe applications. In this study, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed to obtain the optimal stacking sequence for a composite patch with minimum weight and maximum resistance to crack propagation. The optimization is carried out for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and the Pareto front for each type of crack are presented. The proposed algorithm is validated against results collected from the existing literature.
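As a rough illustration of how such a coupled loop can be organised, the sketch below evolves stacking sequences against two objectives, with the FEM call replaced by a placeholder surrogate; ply angles, population sizes, and the fem_sif() stub are assumptions, not the paper's model.

```python
# Minimal sketch of a multi-objective GA coupled to an FEM evaluator (hypothetical stub).
# The real study couples MOGA with a finite element model of the cracked pipe;
# here fem_sif() is a placeholder standing in for that FEM call.
import random

PLY_ANGLES = [0, 45, -45, 90]          # candidate fibre orientations (assumed)
MAX_LAYERS = 12

def fem_sif(stacking):
    """Placeholder for the FEM run returning a crack-driving quantity (e.g. SIF)."""
    # Purely illustrative surrogate: more layers and more 90-degree plies reduce it.
    return 100.0 / (1 + len(stacking) + sum(1 for a in stacking if a == 90))

def objectives(stacking):
    weight = len(stacking)             # objective 1: patch weight (proxy: layer count)
    sif = fem_sif(stacking)            # objective 2: crack-driving quantity to minimise
    return weight, sif

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(ind, objectives(ind)) for ind in pop]
    return [ind for ind, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

def mutate(stacking):
    child = list(stacking)
    if random.random() < 0.5 and len(child) > 1:
        child.pop(random.randrange(len(child)))                       # remove a ply
    elif len(child) < MAX_LAYERS:
        child.insert(random.randrange(len(child) + 1), random.choice(PLY_ANGLES))
    else:
        child[random.randrange(len(child))] = random.choice(PLY_ANGLES)
    return child

random.seed(0)
population = [[random.choice(PLY_ANGLES) for _ in range(random.randint(2, MAX_LAYERS))]
              for _ in range(30)]
for _ in range(100):                                  # evolutionary loop
    population += [mutate(random.choice(population)) for _ in range(30)]
    population = pareto_front(population)             # keep the non-dominated set

for ind in pareto_front(population):
    print(ind, objectives(ind))
```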

Keywords: multi-objective optimization, Pareto front, composite patch, cracked pipe

Procedia PDF Downloads 312
4009 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using a binary-based evaluation. If the transmission is judged secure, the two-hop transmission of big data resumes; otherwise, the attackers are prevented by the cooperative jamming scheme and the data is then transmitted over the two hops.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 491
4008 Spectrum Allocation in Cognitive Radio Using Monarch Butterfly Optimization

Authors: Avantika Vats, Kushal Thakur

Abstract:

This paper presents the problem formulation, development, and application of Monarch Butterfly Optimization (MBO), rather than a Genetic Algorithm (GA), for channel allocation in cognitive radio. This approach offers a satisfactory way of obtaining the accessible spectrum for both classes of users, i.e., primary users (PUs) and secondary users (SUs). The proposed optimization procedure is based on a nature-inspired metaheuristic algorithm. In MBO, all monarch butterfly individuals are located in two distinct lands: Southern Canada and the northern USA (Land 1), and Mexico (Land 2). The positions of the monarch butterflies are updated in two ways. First, offspring are generated (position updating) by the migration operator, which can be adjusted by the migration ratio. This is followed by tuning the positions of the remaining butterflies by means of the butterfly adjusting operator. To keep the population unaltered and minimize fitness evaluations, the total number of butterflies newly produced by these two operators stays equal to the original population size. The results clearly demonstrate the ability of the MBO technique to find better objective values than the genetic algorithm on these problems.
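A minimal sketch of the two MBO operators described above is given below, applied to a generic minimisation problem; the channel-allocation encoding, parameter values, and the fitness function are illustrative assumptions.

```python
# Minimal sketch of Monarch Butterfly Optimization on a generic minimisation problem.
# The migration ratio P splits the population into Land 1 and Land 2, the migration
# operator mixes parents across lands, and the adjusting operator perturbs Land 2.
# The channel-allocation encoding of the paper is not reproduced here.
import random

def fitness(x):                       # illustrative objective (sphere function)
    return sum(v * v for v in x)

DIM, NP, P, BAR, GENS = 5, 20, 5 / 12, 5 / 12, 100
random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for t in range(GENS):
    pop.sort(key=fitness)
    best = pop[0]
    n1 = int(round(P * NP))           # size of Land 1
    land1, land2 = pop[:n1], pop[n1:]

    new_land1 = []
    for _ in land1:                   # migration operator
        child = []
        for d in range(DIM):
            src = random.choice(land1) if random.random() * 1.2 <= P else random.choice(land2)
            child.append(src[d])
        new_land1.append(child)

    new_land2 = []
    for _ in land2:                   # butterfly adjusting operator
        child = []
        for d in range(DIM):
            if random.random() <= P:
                child.append(best[d])                 # move toward the best butterfly
            else:
                val = random.choice(land2)[d]
                if random.random() > BAR:             # occasional small random walk
                    val += random.uniform(-1, 1) / (t + 1)
                child.append(val)
        new_land2.append(child)

    pop = new_land1 + new_land2       # population size stays constant

print("best fitness:", fitness(min(pop, key=fitness)))
```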

Keywords: cognitive radio, channel allocation, monarch butterfly optimization, evolutionary computation

Procedia PDF Downloads 76
4007 A New Heuristic Algorithm for Maximizing the Total Demand of Nodes and the Number of Covered Nodes Simultaneously

Authors: Ehsan Saghehei, Mahdi Eghbali

Abstract:

The maximal covering location problem (MCLP) was originally developed to determine a set of facility locations that maximizes the total customer demand serviced by the facilities within a predetermined critical service criterion. However, in problems where the differences between node demands, or in the number of nodes covered by each node, are large, standard methods of solving the MCLP may ignore these differences. In this paper, a heuristic solution is proposed based on ranking the demand of each node and the number of nodes covered by each node according to a predetermined critical value. The output of this method maximizes the total demand of the nodes and the number of covered nodes simultaneously. Furthermore, the solution algorithm is described through an example, and its results are compared with the Greedy and Lagrangian algorithms. Results of the algorithm on larger problem sizes, compared with other methods, are also provided. A summary and future work conclude the paper.
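The sketch below illustrates the ranking idea on a toy instance: candidate sites are scored by the demand and by the number of nodes they would newly cover, and opened greedily; the data and the lexicographic weighting are hypothetical, not taken from the paper.

```python
# Minimal sketch of a ranking-based covering heuristic (illustrative data and weights).
# Candidate facility sites are ranked by the total demand they would cover and by the
# number of nodes they cover; sites are opened greedily until the budget p is reached.

demand = {1: 40, 2: 10, 3: 25, 4: 5, 5: 30, 6: 15}          # node demands (assumed)
covers = {                                                   # node -> nodes within the
    1: {1, 2, 3}, 2: {1, 2}, 3: {1, 3, 4, 5},                # critical service distance
    4: {3, 4}, 5: {3, 5, 6}, 6: {5, 6},
}
p = 2                                                        # facilities to open

opened, covered = [], set()
for _ in range(p):
    def score(site):
        newly = covers[site] - covered
        return (sum(demand[n] for n in newly), len(newly))   # rank by demand, then count
    site = max((s for s in covers if s not in opened), key=score)
    opened.append(site)
    covered |= covers[site]

print("open facilities:", opened)
print("covered demand:", sum(demand[n] for n in covered), "nodes covered:", len(covered))
```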

Keywords: heuristic solution, maximal covering location problem, ranking, set covering

Procedia PDF Downloads 573
4006 A Generalized Space-Efficient Algorithm for Quantum Bit String Comparators

Authors: Khuram Shahzad, Omar Usman Khan

Abstract:

Quantum bit string comparators (QBSC) operate on two sequences of n-qubits, enabling the determination of their relationships, such as equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any n-qubit length has long posed a challenge, as they have a high-cost footprint and lead to quantum delays. Comparators that are efficient are associated with inputs of fixed length. As a result, comparators without a generalized circuit cannot be employed at a higher level, though they are well-suited for problems with limited size requirements. In this paper, we introduce a generalized design for the comparison of two n-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
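For reference, the sketch below captures classically what a two-ancilla comparator has to compute for two n-bit inputs, with two flag bits encoding greater-than and less-than; it is a behavioural model, not the quantum circuit proposed in the paper.

```python
# Classical sketch of the relation a two-ancilla n-qubit comparator must realise:
# two flag bits (gt, lt) encode a > b, a < b, or equality when both are 0.
# This is a behavioural reference, not the quantum circuit of the paper.

def compare_bitstrings(a: str, b: str):
    """Scan from the most significant bit; the first differing position decides."""
    assert len(a) == len(b)
    gt = lt = 0
    for x, y in zip(a, b):              # MSB first
        if x != y:
            gt, lt = (1, 0) if x > y else (0, 1)
            break
    return gt, lt                        # (0, 0) means equal

for a, b in [("1010", "1001"), ("0110", "0110"), ("0011", "1000")]:
    print(a, b, compare_bitstrings(a, b))
```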

Keywords: quantum comparator, quantum algorithm, space-efficient comparator, comparator

Procedia PDF Downloads 17
4005 On Dynamic Chaotic S-BOX Based Advanced Encryption Standard Algorithm for Image Encryption

Authors: Ajish Sreedharan

Abstract:

Security in the transmission and storage of digital images is important in today's image communications and confidential video conferencing. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. The Advanced Encryption Standard (AES) is a well-known block cipher that has several advantages for data encryption; however, it is not well suited to real-time applications. This paper presents modifications to the Advanced Encryption Standard that provide a higher level of security and better image encryption. The modifications are made by adjusting the ShiftRow transformation and using a dynamic chaotic S-box. In standard AES, the SubBytes, ShiftRows, and MixColumns steps by themselves would provide no security because they do not use the key. In the dynamic chaotic S-box based AES, the SubBytes step does provide security because the S-box is constructed from the key. Experimental results verify that the proposed modification to the image cryptosystem is highly secure from a cryptographic viewpoint. The results also show that, compared with the original AES encryption algorithm, the modified algorithm gives better encryption results in terms of security against statistical attacks.
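A minimal sketch of one way to build such a key-dependent S-box from a chaotic logistic map is shown below; the key derivation and map parameters are assumptions, and the paper's exact construction may differ.

```python
# Minimal sketch of a key-dependent (dynamic) S-box driven by a chaotic logistic map.
# The key seeds the map's initial condition; iterates are ranked to produce a
# permutation of 0..255. The exact construction in the paper may differ.
import hashlib

def chaotic_sbox(key: bytes):
    # Derive an initial condition in (0, 1) from the key.
    h = int.from_bytes(hashlib.sha256(key).digest(), "big")
    x = (h % (10**8)) / 10**8 or 0.5
    r = 3.99                                   # logistic-map parameter in the chaotic regime
    seq = []
    for _ in range(256):
        x = r * x * (1 - x)
        seq.append(x)
    # Rank the chaotic iterates to obtain a permutation of the byte values.
    order = sorted(range(256), key=lambda i: seq[i])
    sbox = [0] * 256
    for value, position in enumerate(order):
        sbox[position] = value
    return sbox

sbox = chaotic_sbox(b"16-byte-demo-key")
inv = [0] * 256
for i, v in enumerate(sbox):
    inv[v] = i                                  # inverse S-box for decryption
print(sbox[:8], "is a permutation:", sorted(sbox) == list(range(256)))
```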

Keywords: advanced encryption standard (AES), on dynamic chaotic S-BOX, image encryption, security analysis, ShiftRow transformation

Procedia PDF Downloads 437
4004 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs the discrete wavelet transform, DWT, of a suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image and thereby leave valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of the image regions but also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition, PCA-EVD, is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection, SATS, a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.
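The sketch below illustrates the front end of this pipeline, DWT of the image, block features drawn from all four sub-bands, PCA-style reduction, and lexicographic sorting of the feature rows, on synthetic data; block sizes and thresholds are assumptions, and the SATS verification stage is omitted.

```python
# Minimal sketch of the feature-extraction front end: DWT of the image, overlapping
# blocks taken from all four sub-bands, PCA dimension reduction, lexicographic sort,
# and neighbour comparison. The SATS verification stage of the paper is not included.
import numpy as np
import pywt

def block_features(img, block=8, step=4):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    bands = np.stack([cA, cH, cV, cD])            # use all four sub-bands, not only cA
    feats, positions = [], []
    h, w = cA.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            feats.append(bands[:, y:y + block, x:x + block].ravel())
            positions.append((y, x))
    return np.array(feats), positions

def find_duplicates(img, n_components=8, threshold=1e-3, min_offset=8):
    feats, pos = block_features(img)
    feats -= feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    reduced = feats @ vt[:n_components].T         # PCA-style dimension reduction
    order = np.lexsort(reduced.T[::-1])           # lexicographic sort of feature rows
    matches = []
    for i, j in zip(order[:-1], order[1:]):       # only adjacent rows need comparing
        if np.linalg.norm(reduced[i] - reduced[j]) < threshold:
            dy = pos[i][0] - pos[j][0]; dx = pos[i][1] - pos[j][1]
            if dy * dy + dx * dx >= min_offset ** 2:
                matches.append((pos[i], pos[j]))
    return matches

demo = np.random.randint(0, 256, (64, 64))
demo[40:56, 40:56] = demo[8:24, 8:24]             # plant a duplicated region
print(len(find_duplicates(demo)), "candidate duplicated block pairs")
```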

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 230
4003 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions

Authors: Ali Al Moussaoui

Abstract:

An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task to find out how the facilitatory effect of phonological priming (PP) affects the speed of visual word recognition in Arabic as lexicality (wordhood) and phonological overlap positions (POP) vary. The experiment falls in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition. The experiment also departs from the research on the Arabic language in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed on PsychoPy application. Participants had to decide if a target (e.g., bayn ‘between’) preceded by a prime (e.g., bayt ‘house’) is a word or not. There were 4 conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled for in which there were 3 overlap positions between the primes and the targets: initial (e.g., asad ‘lion’ and asaf ‘sorrow’), final (e.g., kattab ‘cause to write’ 2sg-mas and rattab ‘organize’ 2sg-mas), or two-segmented (e.g., namle ‘ant’ and naħle ‘bee’). There were 96 trials, 24 in each condition, using a within-subject design. The results show that concerning (1), the highest average reaction time (RT) is that in NN, followed firstly by NW and finally by WW. There is statistical significance only between the pairs NN-NW and NN-WW. Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP in the first place and the initial POP in the last place. The difference between the two-segmented and the initial overlap is significant, while other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which intersects with several previous findings. Participants are found to be more sensitive to the final phonological overlap than the initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory’s stress on the onset overlap position and, instead, give more weight to final overlap, and even heavier weight to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words but not when stimuli (at least the primes and at most both the primes and targets) are nonwords. It also shows that the two-segmented priming is the most influential in LDM in Arabic.

Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition

Procedia PDF Downloads 186
4002 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data in order to extract helpful, predictive information, and it is a field of research applied to many types of problems. Within data mining, classification is an important technique for categorizing different kinds of data. Diabetes is one of the most common diseases. This paper applies different classification techniques to a diabetes dataset using the Waikato Environment for Knowledge Analysis (WEKA) and determines which algorithm is most suitable. The best classification algorithm on the diabetic data is Naïve Bayes, with an accuracy of 76.31% and a model build time of 0.06 seconds.
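An equivalent experiment can be sketched with scikit-learn's Gaussian Naïve Bayes in place of the WEKA GUI, as below; the file name and label column are hypothetical, and the accuracy will not exactly reproduce the 76.31% reported from WEKA.

```python
# Equivalent sketch of the Naive Bayes experiment using scikit-learn instead of the
# WEKA GUI. The file "diabetes.csv" (Pima-style dataset, "Outcome" label column) is
# a hypothetical path.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

data = pd.read_csv("diabetes.csv")
X = data.drop(columns=["Outcome"])
y = data["Outcome"]

model = GaussianNB()
scores = cross_val_score(model, X, y, cv=10)       # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.4f} (+/- {scores.std():.4f})")
```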

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 147
4001 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area and, at the same time, maximizing the network lifetime. This is due to limitation in sensing angle and battery power of the directional sensors. This problem gets more complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set could satisfy coverage requirements of all the targets. Several experiments are conducted to evaluate the performance of the proposed algorithm. The results demonstrated that the algorithms were able to contribute to solving the problem.

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 415
4000 The Effects of Goal Setting and Feedback on Inhibitory Performance

Authors: Mami Miyasaka, Kaichi Yanaoka

Abstract:

Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. The go/no-go task is a cognitive task to measure inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors obtained in response to a no-go stimulus indicated inhibitory impairment. To examine the effect of goal-setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, in one go/no-go task, no information about children’s scores was provided; however, scores were revealed for the other type of go/no-go tasks. The results revealed a significant interaction between goal setting and feedback. However, three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicated that goal setting was effective for improving the performance of the go/no-go task only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. 
This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.

Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control

Procedia PDF Downloads 104
3999 Determining the Performance of Data Mining Algorithms in Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the most common causes of disability and mortality; it is the fourth leading cause of death in the world and the third according to some other sources. Only one third of patients with ischemic stroke fully recover, one third are left with permanent disability, and one third die. The use of predictive models for stroke therefore has a vital role in reducing the complications and costs related to this disease. The aim of this study was to identify the influential factors and predict ischemic stroke with the help of data mining (DM) methods. The present study was a descriptive-analytic study. The population consisted of 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability had been confirmed. The study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the created model, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Creating predictive models with this algorithm can therefore play a significant role in decreasing the mortality and disability caused by ischemic stroke.
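As a rough analogue, the sketch below trains a decision tree on a checklist-style table with scikit-learn; CHAID itself is an SPSS Modeler algorithm, so DecisionTreeClassifier stands in for it, and the file and column names are hypothetical.

```python
# Analogous sketch with scikit-learn's DecisionTreeClassifier (CHAID itself is an
# SPSS Modeler algorithm and is not available in scikit-learn). Column names and the
# file "stroke_checklist.csv" are hypothetical placeholders for the study's checklist.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("stroke_checklist.csv")
features = ["anemia", "diabetes_mellitus", "hyperlipidemia",
            "transient_ischemic_attack", "coronary_artery_disease", "atherosclerosis"]
X, y = data[features], data["ischemic_stroke"]

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10, random_state=0)
print("10-fold CV accuracy:", cross_val_score(tree, X, y, cv=10).mean())

tree.fit(X, y)                                     # feature importances as a rough proxy
for name, imp in sorted(zip(features, tree.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```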

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 176
3998 Performance Analysis and Multi-Objective Optimization of a Kalina Cycle for Low-Temperature Applications

Authors: Sadegh Sadeghi, Negar Shabani

Abstract:

From a thermal point of view, zeotropic mixtures are likely to be more efficient than azeotropic fluids in low-temperature thermodynamic cycles due to their suitable boiling characteristics. In this study, the performance of a low-temperature Kalina cycle with an R717/water working fluid, as used in several existing power plants, is mathematically investigated. To analyze the behavior of the cycle, mass conservation, energy conservation, and exergy balance equations are presented. Given the similar molar masses of R717 (17.03 g/mol) and water (18.01 g/mol), there is no need to alter the size of Kalina system components such as the turbine and pump. To optimize the cycle energy and exergy efficiencies simultaneously, a constrained multi-objective optimization is carried out using an Artificial Bee Colony algorithm. The main motivation behind using this algorithm lies in its robustness, reliability, remarkable precision, and high-speed convergence rate in dealing with complicated constrained multi-objective problems. Convergence rates of the algorithm for calculating the optimal energy and exergy efficiencies are presented. Subsequently, due to the importance of the exergy concept in Kalina cycles, the exergy destruction occurring in the components is computed. Finally, the impacts of pressure, temperature, mass fraction, and mass flow rate on the energy and exergy efficiencies are studied in detail.

Keywords: artificial bee colony algorithm, binary zeotropic mixture, constrained multi-objective optimization, energy efficiency, exergy efficiency, Kalina cycle

Procedia PDF Downloads 154
3997 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The AE technique is a well-established tool for discriminating between the damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for the discrimination between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. The methodology presented in this work was used to refine the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and only slightly perturbed. The supervised classifier constructed based on the learning database was able to predict the labels of the classified signals.
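The two-step analysis can be sketched as below: unsupervised clustering of AE features with the number of classes chosen by a validity score, followed by a supervised classifier trained on the labelled clusters; the synthetic features and the choice of k-means and a random forest are assumptions, not the paper's exact tools.

```python
# Minimal sketch of the two-step analysis: unsupervised clustering of AE features
# (number of classes chosen by silhouette score), then a supervised classifier trained
# on the labelled clusters. The features below are synthetic stand-ins for typical AE
# descriptors (amplitude, energy, duration, counts, peak frequency).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 5)) + np.repeat(np.arange(3), 200)[:, None] * 3.0

X = StandardScaler().fit_transform(features)

best_k, best_labels = None, None
for k in range(2, 7):                                  # unsupervised class search
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if best_k is None or score > best_score:
        best_k, best_score, best_labels = k, score, labels
print("selected number of signal classes:", best_k)

# The labelled clusters form a learning database for a supervised classifier that can
# later tag new AE signals automatically.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, best_labels)
print("training accuracy of the supervised classifier:", clf.score(X, best_labels))
```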

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 156
3996 A Greedy Alignment Algorithm Supporting Medication Reconciliation

Authors: David Tresner-Kirsch

Abstract:

Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
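A minimal sketch of the greedy matching step is shown below, using a plain string-similarity score as a stand-in for the full feature set (edit distances, concept normalisation, RxNorm synonyms); the medication lists and threshold are illustrative.

```python
# Minimal sketch of greedy alignment by string similarity. The full system also uses
# edit distances, concept normalisation and RxNorm synonym search; difflib's ratio is
# a stand-in for that richer similarity score. Medication lists are illustrative.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def greedy_align(list_a, list_b, threshold=0.6):
    pairs = sorted(((similarity(a, b), a, b) for a in list_a for b in list_b), reverse=True)
    used_a, used_b, aligned = set(), set(), []
    for score, a, b in pairs:                     # take the best remaining pair each time
        if score < threshold:
            break
        if a not in used_a and b not in used_b:
            aligned.append((a, b, round(score, 2)))
            used_a.add(a); used_b.add(b)
    unmatched = ([a for a in list_a if a not in used_a],
                 [b for b in list_b if b not in used_b])
    return aligned, unmatched

hospital = ["atorvastatin 20 mg", "metformin 500 mg", "lisinopril 10 mg"]
home = ["Lipitor 20mg", "metformin hydrochloride 500 mg", "sertraline 50 mg"]
aligned, unmatched = greedy_align(hospital, home)
print(aligned)      # brand/generic pairs like Lipitor/atorvastatin need RxNorm synonyms
print(unmatched)
```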

Keywords: clinical decision support, medication reconciliation, natural language processing, RxNorm

Procedia PDF Downloads 286
3995 An Efficient Robot Navigation Model in a Multi-Target Domain amidst Static and Dynamic Obstacles

Authors: Michael Ayomoh, Adriaan Roux, Oyindamola Omotuyi

Abstract:

This paper presents an efficient robot navigation model in a multi-target domain amidst static and dynamic workspace obstacles. The problem is that of developing an optimal algorithm to minimize the total travel time of a robot as it visits all target points within its task domain amidst unknown workspace obstacles and finally returns to its initial position. In solving this problem, a classical algorithm was first developed to compute the optimal number of paths to be travelled by the robot amidst the network of paths. The principle of shortest distance between the robot and the targets was used to compute the target visitation order amidst workspace obstacles. An algorithm premised on the standard polar coordinate system was developed to determine the length of obstacles encountered by the robot, allowing a geometrical estimation of the total surface area occupied by an obstacle, especially when it is classified as a relevant obstacle, i.e., one that lies between the robot and its potential visitation point. A stochastic model was developed and used to estimate the likelihood of a dynamic obstacle bumping into the robot's navigation path, and finally, the navigation/obstacle avoidance algorithm was hinged on the hybrid virtual force field (HVFF) method. Significant modelling constraints herein include the choice of navigation path to selected target points, the possible presence of static obstacles along a desired navigation path, the likelihood of encountering a dynamic obstacle along the robot's path, and the chance of that obstacle remaining in position as a static obstacle, resulting in re-routing after routing. The proposed algorithm demonstrated a high potential for optimal solutions in terms of efficiency and effectiveness.

Keywords: multi-target, mobile robot, optimal path, static obstacles, dynamic obstacles

Procedia PDF Downloads 281
3994 Application of Chinese Remainder Theorem to Find The Messages Sent in Broadcast

Authors: Ayubi Wirara, Ardya Suryadinata

Abstract:

Improper application of the RSA algorithm scheme can create vulnerability to attacks. The attack considered here exploits the relationship between broadcast messages sent to users and fixed polynomial functions that belong to each user. The attack is carried out by applying the Chinese Remainder Theorem to obtain a general polynomial equation with a single common modulus. Forming this general polynomial is the first step toward recovering the original message; the resulting equation can then be solved using Coppersmith's theorem.
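The simplest instance of this idea is the classic low-exponent broadcast case, sketched below: the same message encrypted under e = 3 with three coprime moduli is recovered by CRT recombination and an integer cube root. The general case with related messages additionally needs Coppersmith's theorem and is not reproduced here; the moduli below are demo primes, not real RSA moduli.

```python
# Sketch of the simplest broadcast case (identical message, public exponent e = 3,
# three coprime moduli): the Chinese Remainder Theorem combines the ciphertexts and an
# integer cube root recovers the message.

def crt(residues, moduli):
    """Combine x = r_i (mod n_i) into a single residue modulo the product of the n_i."""
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for r, n in zip(residues, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)       # pow(a, -1, n) is the modular inverse
    return x % N, N

def icbrt(n):
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
moduli = [2 ** 61 - 1, 2 ** 89 - 1, 2 ** 107 - 1]     # pairwise coprime demo moduli
message = int.from_bytes(b"hi", "big")
ciphertexts = [pow(message, e, n) for n in moduli]

combined, _ = crt(ciphertexts, moduli)                # combined = message**3 exactly,
recovered = icbrt(combined)                           # because message**3 < n1*n2*n3
print(recovered.to_bytes(2, "big"))
```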

Keywords: RSA algorithm, broadcast message, Chinese Remainder Theorem, Coppersmith’s theorem

Procedia PDF Downloads 342
3993 Battery Grading Algorithm in 2nd-Life Repurposing LI-Ion Battery System

Authors: Ya L. V., Benjamin Ong Wei Lin, Wanli Niu, Benjamin Seah Chin Tat

Abstract:

This article introduces a methodology that improves the reliability and cyclability of a 2nd-life Li-ion battery system repurposed as an energy storage system (ESS). Most of the 2nd-life retired battery systems in the market have a module/pack-level state-of-health (SOH) indicator, which is utilized for guiding an appropriate depth-of-discharge (DOD) in the application of the ESS. Due to the lack of cell-level SOH indication, the different degradation behaviors among the various cells cannot be identified upon reaching retired status; in the end, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application requirements of reliability and cyclability. The proposed battery grading algorithm, using a non-invasive methodology, is able to detect outlier cells based on historical voltage data and to calculate cell-level historical maximum temperature data using a semi-analytic methodology. In this way, each individual battery cell in the 2nd-life battery system can be graded in terms of SOH on the basis of its historical voltage fluctuation and estimated historical maximum temperature variation. These grades have corresponding DOD grades in the application of the repurposed ESS, enhancing system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.

Keywords: battery grading algorithm, 2nd-life repurposing battery system, semi-analytic methodology, reliability and cyclability

Procedia PDF Downloads 204
3992 Recognising and Managing Haematoma Following Thyroid Surgery: Simulation Teaching is Effective

Authors: Emily Moore, Dora Amos, Tracy Ellimah, Natasha Parrott

Abstract:

Postoperative haematoma is a well-recognised complication of thyroid surgery with an incidence of 1-5%. Haematoma formation causes progressive airway obstruction, necessitating emergency bedside haematoma evacuation in up to ¼ of patients. ENT UK, BAETS and DAS have developed consensus guidelines to improve perioperative care, recommending that all healthcare staff interacting with patients undergoing thyroid surgery should be trained in managing post-thyroidectomy haematoma. The aim was to assess the effectiveness of a hybrid simulation model in improving clinician’s confidence in dealing with this surgical emergency. A hybrid simulation was designed, consisting of a standardised patient wearing a part-task trainer to mimic a post-thyroidectomy haematoma in a real patient. The part-task trainer was an adapted C-spine collar with layers of silicone representing the skin and strap muscles and thickened jelly representing the haematoma. Both the skin and strap muscle layers had to be opened in order to evacuate the haematoma. Boxes have been implemented into the appropriate post operative areas (recovery and surgical wards), which contain a printed algorithm designed to assist in remembering a sequence of steps for haematoma evacuation using the ‘SCOOP’ method (skin exposure, cut sutures, open skin, open muscles, pack wound) along with all the necessary equipment to open the front of the neck. Small-group teaching sessions were delivered by ENT and anaesthetic trainees to members of the multidisciplinary team normally involved in perioperative patient care, which included ENT surgeons, anaesthetists, recovery nurses, HCAs and ODPs. The DESATS acronym of signs and symptoms to recognise (difficulty swallowing, EWS score, swelling, anxiety, tachycardia, stridor) was highlighted. Then participants took part in the hybrid simulation in order to practice this ‘SCOOP’ method of haematoma evacuation. Participants were surveyed using a Likert scale to assess their level of confidence pre- and post teaching session. 30 clinicians took part. Confidence (agreed/strongly agreed) in recognition of post thyroidectomy haematoma improved from 58.6% to 96.5%. Confidence in management improved from 27.5% to 89.7%. All participants successfully decompressed the haematoma. All participants agreed/strongly agreed, that the sessions were useful for their learning. Multidisciplinary team simulation teaching is effective at significantly improving confidence in both the recognition and management of postoperative haematoma. Hybrid simulation sessions are useful and should be incorporated into training for clinicians.

Keywords: thyroid surgery, haematoma, teaching, hybrid simulation

Procedia PDF Downloads 97
3991 Automatic Teller Machine System Security by Using Mobile SMS Code

Authors: Husnain Mushtaq, Mary Anjum, Muhammad Aleem

Abstract:

The main objective of this paper is to develop high security for Automatic Teller Machines (ATMs). In this system, banks collect the mobile numbers of their customers and then send a code to each customer's mobile number. In most countries, existing ATMs use a magnetic card reader: the customer is identified by inserting an ATM card whose magnetic stripe holds unique information such as the card number, together with some security limitations. By entering a personal identification number, the customer is first authenticated and then gains access to the bank account in order to withdraw cash or use other services provided by the bank. Card fraud is another problem: once a user's bank card is lost and the password is stolen, or a criminal simply steals a customer's card and PIN, the criminal can withdraw all the cash in a very short time, causing great financial losses to the customer; this type of fraud has increased worldwide. To resolve this problem, we propose a solution that combines a mobile SMS code with the ATM PIN code in order to improve the verification security of customers using the ATM system and increase confidence in banking.

Keywords: PIN, inquiry, biometric, magnetic strip, iris recognition, face recognition

Procedia PDF Downloads 366
3990 Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP) for Recovering Signal

Authors: Israa Sh. Tawfic, Sema Koc Kayhan

Abstract:

Given a large sparse signal, the goal is to reconstruct the signal precisely and accurately from as few measurements as possible. Although this is possible in theory, the difficulty lies in building an algorithm that reconstructs with both accuracy and efficiency. This paper proposes a new, proven method for reconstructing sparse signals that combines a method called Least Support Orthogonal Matching Pursuit (LS-OMP) with the theory of Partially Known Support (PKS), giving a new method called Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP). The method relies on a greedy algorithm to compute the support, which depends on the number of iterations; to make it faster, PKLS-OMP adds the idea of partially known support to the algorithm. The method recovers the original signal efficiently, simply, and accurately provided the sampling matrix satisfies the Restricted Isometry Property (RIP). Simulation results also show that it outperforms many algorithms, especially for compressible signals.
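A minimal sketch of the underlying idea, orthogonal matching pursuit seeded with a partially known support, is given below; it illustrates the principle rather than the paper's exact selection rule, and the dictionary and sparsity level are synthetic.

```python
# Minimal sketch of OMP seeded with a partially known support: the known indices are
# included from the start, and the greedy loop only has to find the remaining ones.
import numpy as np

def omp_with_known_support(A, y, sparsity, known=()):
    support = list(known)                      # start from the partially known support
    residual = y.copy()
    if support:
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    while len(support) < sparsity:
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0              # do not re-select existing atoms
        support.append(int(np.argmax(correlations)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 128, 6
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random matrix, likely to satisfy the RIP
true_support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n); x_true[true_support] = rng.normal(size=k)
y = A @ x_true

x_hat = omp_with_known_support(A, y, sparsity=k, known=true_support[:2])  # 2 indices known
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```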

Keywords: compressed sensing, least support orthogonal matching pursuit, partially known support, restricted isometry property, signal reconstruction

Procedia PDF Downloads 244
3989 A Fast Algorithm for Electromagnetic Compatibility Estimation for Radio Communication Network Equipment in a Complex Electromagnetic Environment

Authors: C. Temaneh-Nyah

Abstract:

Electromagnetic compatibility (EMC) is the ability of Radio Communication Equipment (RCE) to operate with a desired quality of service in a given Electromagnetic Environment (EME) without creating harmful interference with other RCE. This paper presents an algorithm that improves the simulation speed of estimating the EMC of RCE in a complex EME, based on a stage-by-stage frequency-energy filtering criterion. The algorithm considers different interference types, including blocking and intermodulation. It consists of the following steps: a simplified energy criterion, where filtering is based on comparing the free-space interference level to the industrial noise; a frequency criterion, which checks whether the characteristics of the interfering emissions overlap with the characteristics of the receiver's channels; and lastly a detailed energy criterion, where the real channel interference level is compared to the noise level. In each of these stages, some interference cases are filtered out by the relevant criterion. This reduces the total number of pairwise and other combinations of RCE involved in the tedious detailed energy analysis and thus improves simulation speed.

Keywords: electromagnetic compatibility, electromagnetic environment, simulation of communication network

Procedia PDF Downloads 219
3988 A Metaheuristic Approach for the Pollution-Routing Problem

Authors: P. Parthiban, Sonu Rajak, R. Dhanalakshmi

Abstract:

This paper presents an Ant Colony Optimization (ACO) approach combined with a Speed Optimization Algorithm (SOA) to solve the Vehicle Routing Problem (VRP) with environmental considerations, well known as the Pollution-Routing Problem (PRP). The problem consists of routing a number of vehicles to serve a set of customers and determining the fuel consumption, driver wages, and vehicle speed on each route segment, while respecting capacity constraints and time windows. Since the VRP is NP-hard, the PRP is also NP-hard, and such problems require metaheuristics. The proposed solution method consists of two stages. Stage one solves a Vehicle Routing Problem with Time Windows (VRPTW) using ACO; in the second stage, the SOA is run on the resulting VRPTW solution. Given a vehicle route, the SOA finds the optimal speed on each arc of the route to minimize an objective function comprising fuel consumption costs and driver wages. The proposed algorithm was tested on benchmark problems, and the preliminary results show that it can provide good solutions within reasonable computational time.
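The speed-optimisation stage can be sketched as below for a fixed route: each arc's speed is chosen to minimise fuel cost plus driver wages, ignoring time-window constraints; the fuel model and all coefficients are illustrative, not the calibrated PRP model.

```python
# Minimal sketch of the speed-optimisation step on a fixed route: for each arc, choose
# the speed that minimises fuel cost plus driver wages for that arc. The fuel model and
# all coefficients below are illustrative placeholders.

FUEL_PRICE = 1.4          # currency per litre (assumed)
WAGE = 0.0022             # driver wage per second (assumed, roughly 8 per hour)

def fuel_litres(distance_km, speed_kmh):
    """Toy consumption curve (litres) that grows with speed in the 40-100 km/h range."""
    return distance_km * (0.05 + 2.0 / speed_kmh + 0.00002 * speed_kmh ** 2)

def arc_cost(distance_km, speed_kmh):
    travel_time_s = distance_km / speed_kmh * 3600
    # Fuel cost rises with speed, wage cost falls with speed, so the total is U-shaped.
    return FUEL_PRICE * fuel_litres(distance_km, speed_kmh) + WAGE * travel_time_s

def optimise_speeds(route_arcs_km, speeds=range(40, 101, 5)):
    plan = []
    for d in route_arcs_km:
        best = min(speeds, key=lambda v: arc_cost(d, v))   # one decision per arc
        plan.append((d, best, round(arc_cost(d, best), 2)))
    return plan

route = [12.0, 35.5, 8.2, 21.0]                            # arc lengths of one vehicle route
plan = optimise_speeds(route)
for d, v, c in plan:
    print(f"{d:5.1f} km at {v} km/h -> cost {c}")
print("total:", round(sum(c for _, _, c in plan), 2))
```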

Keywords: ant colony optimization, CO2 emissions, speed optimization, vehicle routing

Procedia PDF Downloads 361
3987 Obstacle Classification Method Based on 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

In this paper, a method is proposed that uses only a LIDAR system to classify an obstacle and determine its type by establishing a LIDAR-based database of obstacle classes. Existing LIDAR systems have an advantage in recognizing obstructions for an autonomous vehicle in terms of accuracy and shorter recognition time; however, they have difficulty determining the type of obstacle, so accurate path planning based on obstacle type has not been possible. To overcome this problem, a method was previously proposed for classifying obstacle type based on existing LIDAR using the width of the obstacle material, but width measurement alone was not sufficient to improve accuracy. In this research, the width data is used for a first classification; a database of LIDAR intensity data for the four major obstacle materials found on the road is created; the measured LIDAR intensity data is compared to that of the actual obstacle materials; and the obstacle type is determined by finding the material with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that, although the data quality is lower than that of 3D LIDAR, it is possible to classify obstacle materials using 2D LIDAR.
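The two-stage decision can be sketched as below: a coarse width gate first, then a similarity match of the measured intensity against a per-material reference database; every reference value in the sketch is a hypothetical placeholder.

```python
# Minimal sketch of the two-stage decision: a coarse width check first, then a
# similarity match of the measured intensity profile against a per-material intensity
# database. All reference values below are hypothetical placeholders.
import numpy as np

# Mean/std of LIDAR return intensity per material (assumed reference database).
INTENSITY_DB = {
    "metal (vehicle)": (180.0, 15.0),
    "concrete":        (120.0, 20.0),
    "plastic":         (90.0, 18.0),
    "fabric/pedestrian": (60.0, 12.0),
}

def classify(width_m, intensities):
    # Stage 1: width gives a first split between narrow and wide obstacles.
    candidates = ([m for m in INTENSITY_DB if m != "metal (vehicle)"]
                  if width_m < 1.0 else list(INTENSITY_DB))
    # Stage 2: highest similarity (lowest z-distance) between the measured mean
    # intensity and each material's reference distribution.
    mean_i = float(np.mean(intensities))
    def z_distance(material):
        mu, sigma = INTENSITY_DB[material]
        return abs(mean_i - mu) / sigma
    best = min(candidates, key=z_distance)
    return best, z_distance(best)

scan = np.random.default_rng(2).normal(loc=62, scale=10, size=50)   # simulated returns
print(classify(width_m=0.6, intensities=scan))
```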

Keywords: obstacle, classification, database, LIDAR, segmentation, intensity

Procedia PDF Downloads 352
3986 Cluster-Based Multi-Path Routing Algorithm in Wireless Sensor Networks

Authors: Si-Gwan Kim

Abstract:

Small, low-power sensors with sensing, signal processing, and wireless communication capabilities are suitable for wireless sensor networks. Due to limited resources and battery constraints, the complex routing algorithms used for ad-hoc networks cannot be employed in sensor networks. In this paper, we propose node-disjoint, multi-path, hexagon-based routing algorithms for wireless sensor networks. We present the details of the algorithm and compare it with other works. Simulation results show that the proposed scheme achieves better performance in terms of efficiency and message delivery ratio.

Keywords: clustering, multi-path, routing protocol, sensor network

Procedia PDF Downloads 405
3985 Learning from Small Amount of Medical Data with Noisy Labels: A Meta-Learning Approach

Authors: Gorkem Algan, Ilkay Ulusoy, Saban Gonul, Banu Turgut, Berker Bakbak

Abstract:

Computer vision systems recently made a big leap thanks to deep neural networks. However, these systems require correctly labeled large datasets in order to be trained properly, which is very difficult to obtain for medical applications. Two main reasons for label noise in medical applications are the high complexity of the data and conflicting opinions of experts. Moreover, medical imaging datasets are commonly small, which makes each sample very important in learning. As a result, if not handled properly, label noise significantly degrades performance. Therefore, a label-noise-robust learning algorithm that makes use of the meta-learning paradigm is proposed in this article. The proposed solution is tested on a retinopathy of prematurity (ROP) dataset with a very high label noise of 68%. Results show that the proposed algorithm significantly improves classification performance in the presence of noisy labels.

Keywords: deep learning, label noise, robust learning, meta-learning, retinopathy of prematurity

Procedia PDF Downloads 162
3984 Development of Star Image Simulator for Star Tracker Algorithm Validation

Authors: Zoubida Mahi

Abstract:

A successful satellite mission in space requires a reliable attitude and orbit control system to command, control, and position the satellite in appropriate orbits. Several sensors are used for attitude control, such as magnetic sensors, earth sensors, horizon sensors, gyroscopes, and solar sensors. The star tracker is the most accurate of these sensors, and it is able to offer high-accuracy attitude control without the need for prior attitude information. There are mainly three approaches in star sensor research: digital simulation, hardware-in-the-loop simulation, and field tests of star observation. In the digital simulation approach, all of the processes are done in software, including star image simulation. Hence, it is necessary to develop star image simulation software that can simulate real space environments and various star sensor configurations. In this paper, we present a new stellar image simulation tool that is used to test and validate star sensor algorithms; the developed tool allows stellar images to be simulated with several types of noise, such as background noise, Gaussian noise, Poisson noise, and multiplicative noise, and with several scenarios that exist in space, such as the presence of the moon, optical system problems, illumination, and false objects. On the other hand, we present a new star extraction algorithm based on a new centroid calculation method. We compared our algorithm with other star extraction algorithms from the literature, and the results obtained show the star extraction capability of the proposed algorithm.
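A minimal sketch of both halves, simulating stars as Gaussian point-spread functions with background and Poisson noise, then extracting centroids by a weighted mean, is given below; it uses the standard centre-of-mass centroid, not the paper's new centroid formula.

```python
# Minimal sketch of a star-image simulator (Gaussian point-spread function plus
# background and Poisson noise) and a centre-of-mass centroid extraction.
import numpy as np

def simulate_star_image(size=128, stars=((32.3, 40.7, 800), (90.1, 70.4, 500)),
                        sigma=1.5, background=20.0, seed=0):
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.full((size, size), background)
    for cy, cx, flux in stars:                  # add each star as a Gaussian PSF
        img += flux * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    rng = np.random.default_rng(seed)
    return rng.poisson(img).astype(float)       # photon (Poisson) noise

def extract_centroids(img, threshold=None, window=7):
    threshold = threshold if threshold is not None else img.mean() + 5 * img.std()
    half = window // 2
    centroids = []
    for y, x in np.argwhere(img > threshold):
        y0, y1 = max(y - half, 0), min(y + half + 1, img.shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, img.shape[1])
        patch = img[y0:y1, x0:x1]
        if img[y, x] < patch.max():             # keep only the local maximum of each window
            continue
        py, px = np.mgrid[y0:y1, x0:x1]
        w = patch - patch.min()
        centroids.append((float((w * py).sum() / w.sum()),
                          float((w * px).sum() / w.sum())))
    return centroids

image = simulate_star_image()
print(extract_centroids(image))                 # should land near (32.3, 40.7), (90.1, 70.4)
```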

Keywords: star tracker, star simulation, star detection, centroid, noise, scenario

Procedia PDF Downloads 97
3983 Comparative Study Using WEKA for Red Blood Cells Classification

Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as anemia; abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA, an open-source suite of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the Support Vector Machine, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.

Keywords: K-nearest neighbors algorithm, radial basis function neural network, red blood cells, support vector machine

Procedia PDF Downloads 411
3982 A New Internal Architecture Based On Feature Selection for Holonic Manufacturing System

Authors: Jihan Abdulazeez Ahmed, Adnan Mohsin Abdulazeez Brifcani

Abstract:

This paper suggests a new internal architecture for a holon based on a feature selection model that combines the Bees Algorithm (BA) with an Artificial Neural Network (ANN). The BA is used to generate features, while the ANN is used as a classifier to evaluate the produced features. The proposed system is applied to the Wine data set, and the statistical results show that it is effective and able to choose informative features with high accuracy.

Keywords: artificial neural network, bees algorithm, feature selection, Holon

Procedia PDF Downloads 457
3981 A Simple Algorithm for Real-Time 3D Capturing of an Interior Scene Using a Linear Voxel Octree and a Floating Origin Camera

Authors: Vangelis Drosos, Dimitrios Tsoukalos, Dimitrios Tsolis

Abstract:

We present a simple algorithm for capturing a 3D scene (focused on the usage of mobile device cameras in the context of augmented/mixed reality) by using a floating origin camera solution and storing the resulting information in a linear voxel octree. Data is derived from cloud points captured by a mobile device camera. For the purposes of this paper, we assume a scene of fixed size (known to us or determined beforehand) and a fixed voxel resolution. The resulting data is stored in a linear voxel octree using a hashtable. We commence by briefly discussing the logic behind floating origin approaches and the usage of linear voxel octrees for efficient storage. Following that, we present the algorithm for translating captured feature points into voxel data in the context of a fixed origin world and storing them. Finally, we discuss potential applications and areas of future development and improvement to the efficiency of our solution.
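A minimal sketch of the storage side is given below: camera-relative points are shifted by the accumulated floating-origin offset, quantised on a fixed grid, and stored in a hashtable keyed by a Morton (linear octree) code; the scene size, depth, and per-voxel counting payload are assumptions.

```python
# Minimal sketch of the storage side: camera-relative points are shifted by a floating
# origin offset, quantised to a fixed voxel grid of known extent, and stored in a
# hashtable keyed by the voxel's Morton (octree) code. Scene size and resolution are
# fixed up front, as assumed in the paper.

SCENE_SIZE = 16.0          # metres, cube edge of the known scene
DEPTH = 8                  # octree depth -> 2**DEPTH voxels per axis
CELLS = 2 ** DEPTH
VOXEL = SCENE_SIZE / CELLS

def morton3(x: int, y: int, z: int) -> int:
    """Interleave the bits of three voxel indices into a linear octree key."""
    code = 0
    for i in range(DEPTH):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

class LinearVoxelOctree:
    def __init__(self):
        self.cells = {}                         # hashtable: Morton code -> point count

    def insert(self, point, origin_offset):
        # Floating origin: the camera stays near (0,0,0); world position is recovered
        # by adding the accumulated origin offset before quantisation.
        world = [p + o + SCENE_SIZE / 2 for p, o in zip(point, origin_offset)]
        idx = [int(c // VOXEL) for c in world]
        if all(0 <= i < CELLS for i in idx):    # ignore points outside the fixed scene
            key = morton3(*idx)
            self.cells[key] = self.cells.get(key, 0) + 1

octree = LinearVoxelOctree()
offset = (2.0, 0.0, -1.5)                       # accumulated shift of the floating origin
for p in [(0.11, 0.52, 0.93), (0.12, 0.53, 0.94), (-3.0, 1.0, 2.0)]:
    octree.insert(p, offset)
print(len(octree.cells), "occupied voxels")
```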

Keywords: voxel, octree, computer vision, XR, floating origin

Procedia PDF Downloads 133