Search results for: RLS identification algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6333

4953 Intelligent Rescheduling Trains for Air Pollution Management

Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar

Abstract:

Timetable optimization is needed for the rescheduling and routing of trains in real time. Trains are scheduled in parallel with road transport vehicles to the same destination, but as the number of trains is restricted by a single track, customers usually opt for road transport instead. Air pollution increases as the density of vehicles on the road grows, so the use of an alternate mode of transport like rail helps in reducing it. This paper aims at attracting passengers to rail transport by proper rescheduling of trains using a hybrid of the stop-skip algorithm and an iterative convex programming algorithm. Bidirectional rescheduling of trains is achieved on a single track with dynamic dual times and varying stops. Introducing more trains attracts customers to use rail transport more frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).

Keywords: air pollution, AODV, re-scheduling, WSNs

Procedia PDF Downloads 361
4952 Target and Biomarker Identification Platform to Design New Drugs against Aging and Age-Related Diseases

Authors: Peter Fedichev

Abstract:

We studied fundamental aspects of aging to develop a mathematical model of the gene regulatory network. We show that aging manifests itself as an inherent instability of the gene network, leading to an exponential accumulation of regulatory errors with age. To validate our approach, we studied age-dependent omics data, such as transcriptomes and metabolomes, of different model organisms and humans. We built a computational platform based on our model to identify targets and biomarkers of aging in order to design new drugs against aging and age-related diseases. As biomarkers of aging, we chose the rate of aging and the biological age, since they completely determine the state of the organism. Since the rate of aging changes rapidly in response to external stress, this kind of biomarker can be useful as a tool for quantitative efficacy assessment of drugs, their combinations, dose optimization, chronic toxicity estimation, personalized therapy selection, clinical endpoint achievement (within clinical research), and death risk assessment. Based on our model, we propose a method of target identification for further interventions against aging and age-related diseases. Being a biotech company, we offer a complete pipeline to develop an anti-aging drug candidate.

Keywords: aging, longevity, biomarkers, senescence

Procedia PDF Downloads 274
4951 Identification of Novel Differentially Expressed and Co-Expressed Genes between Tumor and Adjacent Tissue in Prostate Cancer

Authors: Luis Enrique Bautista-Hinojosa, Luis A. Herrera, Cristian Arriaga-Canon

Abstract:


Keywords: transcriptomics, co-expression, cancer, biomarkers

Procedia PDF Downloads 75
4950 Decision Trees Constructing Based on K-Means Clustering Algorithm

Authors: Loai Abdallah, Malik Yousef

Abstract:

A domain space for the data should reflect the actual similarity between objects, since objects belonging to the same cluster usually share common traits even though their geometric distance might be relatively large. In general, the Euclidean distance between data points represented by a large number of features does not capture the actual relation between those points. In this study, we propose a new method to construct a different space, based on clustering, that forms a new distance metric. The new distance space is based on ensemble clustering (EC): the EC distance is defined by tracking the membership of the points over multiple runs of a clustering algorithm. Over this distance, we train a decision tree classifier (DT-EC). The results obtained by applying DT-EC to 10 datasets confirm our hypothesis that embedding the EC space as a distance metric improves performance.
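
A minimal sketch of the ensemble-clustering embedding, assuming k-means as the base clusterer and co-membership frequency as the similarity (the number of runs, cluster counts, and dataset below are illustrative, not from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Run k-means several times with different seeds and cluster counts.
runs = [KMeans(n_clusters=k, n_init=5, random_state=s).fit(Xtr)
        for s in range(5) for k in (2, 4, 8)]

def ec_features(A):
    # Represent each point by its co-membership with the training points
    # across all clustering runs: a similarity-to-training embedding.
    labels_A = np.stack([km.predict(A) for km in runs])    # (runs, nA)
    labels_T = np.stack([km.predict(Xtr) for km in runs])  # (runs, nT)
    # co[i, j] = fraction of runs in which A[i] and Xtr[j] share a cluster
    return (labels_A[:, :, None] == labels_T[:, None, :]).mean(axis=0)

clf = DecisionTreeClassifier(random_state=0).fit(ec_features(Xtr), ytr)
print("EC-space decision tree accuracy:", clf.score(ec_features(Xte), yte))
```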

Keywords: ensemble clustering, decision trees, classification, K nearest neighbors

Procedia PDF Downloads 191
4949 Flexible Arm Manipulator Control for Industrial Tasks

Authors: Mircea Ivanescu, Nirvana Popescu, Decebal Popescu, Dorin Popescu

Abstract:

This paper addresses the control problem of a class of hyper-redundant arms. In order to avoid discrepancies between the mathematical model and the actual dynamics, the dynamic model with uncertain parameters of this class of manipulators is inferred. A procedure to design a feedback controller that stabilizes the uncertain system is proposed. A PD boundary control algorithm is used to drive the manipulator to the desired position. This controller is easy to implement from the point of view of measurement techniques and actuation. Numerical simulations verify the effectiveness of the presented methods. In order to verify the suitability of the control algorithm, a platform with a 3D flexible manipulator has been employed for testing. Experimental tests on this platform illustrate the applications of the techniques developed in the paper.

Keywords: distributed model, flexible manipulator, observer, robot control

Procedia PDF Downloads 321
4948 The Effect of Feature Selection on Pattern Classification

Authors: Chih-Fong Tsai, Ya-Han Hu

Abstract:

The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) so that the classifier performs better than it would without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies examine the effect of applying different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. Moreover, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets.
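
A sketch of one cell of such an experiment, using scikit-learn's mutual information score as the IG-style filter (the dataset, the number of selected features, and the classifier settings are our assumptions):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(mutual_info_classif, k=10)  # IG-style filter

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("MLP", MLPClassifier(max_iter=1000, random_state=0)),
                  ("SVM", SVC())]:
    # Selection happens inside the CV folds to avoid leakage.
    pipe = make_pipeline(StandardScaler(), selector, clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"IG + {name}: {scores.mean():.3f}")
```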

Keywords: data mining, feature selection, pattern classification, dimensionality reduction

Procedia PDF Downloads 669
4947 Adaptive Filtering in Subbands for Supervised Source Separation

Authors: Bruna Luisa Ramos Prado Vasques, Mariane Rembold Petraglia, Antonio Petraglia

Abstract:

This paper investigates MIMO (Multiple-Input Multiple-Output) adaptive filtering techniques for the application of supervised source separation in the context of convolutive mixtures. From the observation that there is correlation among the signals of the different mixtures, an improvement in the NSAF (Normalized Subband Adaptive Filter) algorithm is proposed in order to accelerate its convergence rate. Simulation results with mixtures of speech signals in reverberant environments show the superior performance of the proposed algorithm with respect to the performances of the NLMS (Normalized Least-Mean-Square) and conventional NSAF, considering both the convergence speed and SIR (Signal-to-Interference Ratio) after convergence.
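
The NSAF itself requires an analysis/synthesis filter bank; as a minimal runnable reference point, here is the NLMS baseline the paper compares against (filter length, step size, and the toy channel are illustrative):

```python
import numpy as np

def nlms(x, d, M=64, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt an M-tap filter so that w * x tracks d."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]        # most recent M input samples
        y = w @ u                           # filter output
        e[n] = d[n] - y                     # a-priori error
        w += mu * e[n] * u / (u @ u + eps)  # normalized update
    return w, e

# Toy supervised setup: d is x passed through an unknown 64-tap channel.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10.0)
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d)
print("final error power:", np.mean(e[-1000:] ** 2))
```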

Keywords: adaptive filtering, multi-rate processing, normalized subband adaptive filter, source separation

Procedia PDF Downloads 435
4946 Retraction Free Motion Approach and Its Application in Automated Robotic Edge Finishing and Inspection Processes

Authors: M. Nemer, E. I. Konukseven

Abstract:

In this paper, a motion generation algorithm for a six-degrees-of-freedom (DoF) robotic hand in a static environment is presented. The method is developed to generate end-effector paths for edge finishing and inspection processes by utilizing the CAD model of the considered workpiece. Moreover, the proposed algorithm may be extended to other similar manufacturing processes. A software package programmed in the application programming interface (API) of SolidWorks generates tool path data for the robot. The proposed method significantly simplifies the given problem, resulting in a reduction in the CPU time needed to generate the path, and offers an efficient overall solution. The ABB IRB2000 robot is chosen for executing the generated tool path.

Keywords: CAD-based tools, edge deburring, edge scanning, offline programming, path generation

Procedia PDF Downloads 284
4945 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems that can be used on heterogeneous platforms. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method achieved, at best, the same runtime as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. Both the standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on multi-GPU clusters. The article presents an approach to minimize the communication between parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, asynchronous calculations and communications, and load balancing between the CPU and GPU, the solution of large linear systems becomes scalable. The algorithm is implemented with the combined use of MPI, OpenMP, and CUDA. We show that an almost optimal speedup may be reached on 8 CPUs/2 GPUs (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
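
A single-synchronization-point pipelined CG can be sketched in NumPy. The recurrences below follow the well-known Ghysels-Vanroose formulation, which may differ in detail from the authors' variant; the test matrix is synthetic:

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-8, maxit=500):
    """Pipelined CG: the two dot products of classical CG are fused into one
    reduction per iteration, and the SpMV q = A @ w can be overlapped with it."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    alpha = gamma_old = 1.0
    for i in range(maxit):
        gamma = r @ r                 # both reductions happen together ...
        delta = w @ r
        q = A @ w                     # ... and this SpMV can overlap them
        beta = 0.0 if i == 0 else gamma / gamma_old
        alpha = gamma / (delta - beta * gamma / alpha) if i else gamma / delta
        z = q + beta * z
        s = w + beta * s
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z
        gamma_old = gamma
        if np.sqrt(gamma) < tol:
            break
    return x, i

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)       # symmetric positive definite test matrix
b = rng.standard_normal(200)
x, iters = pipelined_cg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```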

Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm

Procedia PDF Downloads 165
4944 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components: minimizing the number of unassigned pairings and ensuring fairness to crew members. There are two measures of fairness, the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function, and since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation: the problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered; the restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and adds them to the RLMP for the next iteration; when no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem generates feasible rosters for each crew member: a separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method to handle the resulting non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-threading is used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed whose objective is to assign a personalized roster to each crew member while minimizing the number of unassigned pairings and ensuring fairness. The algorithm has been put into production at a major airline in China, and numerical experiments show that it performs well.
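
A miniature of the column-generation loop described above: a set-partitioning restricted master with quadratic fairness costs solved with scipy's LP, and a brute-force pricing step standing in for the labeling algorithm (all instance data are invented):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

hours = np.array([10.0, 7.0, 12.0, 5.0])   # fly-hours per pairing (toy data)
P, K = len(hours), 2                       # 4 pairings, 2 crew members
target = hours.sum() / K                   # fair share per crew member
BIG = 1e4                                  # artificial-column penalty

def col_cost(subset):
    # Quadratic fairness penalty around the expected average fly-hours.
    return (hours[list(subset)].sum() - target) ** 2

def column(subset, k):
    a = np.zeros(P + K)
    a[list(subset)] = 1.0                  # pairing-coverage rows
    a[P + k] = 1.0                         # one roster per crew member
    return a

cols = [(frozenset(), k) for k in range(K)]          # start with empty rosters
art = [np.eye(P + K)[i] for i in range(P)]           # artificials for coverage

while True:
    A = np.column_stack([column(s, k) for s, k in cols] + art)
    c = np.array([col_cost(s) for s, _ in cols] + [BIG] * P)
    res = linprog(c, A_eq=A, b_eq=np.ones(P + K), bounds=(0, None),
                  method="highs")
    pi, sigma = res.eqlin.marginals[:P], res.eqlin.marginals[P:]
    best = None
    for k in range(K):                               # pricing subproblems
        for r in range(1, P + 1):
            for s in itertools.combinations(range(P), r):
                fs = frozenset(s)
                if (fs, k) in cols:
                    continue
                rc = col_cost(fs) - pi[list(s)].sum() - sigma[k]
                if best is None or rc < best[0]:
                    best = (rc, fs, k)
    if best is None or best[0] > -1e-9:              # no improving column
        break
    cols.append(best[1:])

print("objective:", res.fun)
for (s, k), v in zip(cols, res.x):
    if v > 0.5:
        print(f"crew {k}: pairings {sorted(s)}, hours {hours[list(s)].sum()}")
```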

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 146
4943 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool developed to predict the targets of a set of microRNAs starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information about binding sites in the 3'UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein, a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the underlying ComiR database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with information related to the coding regions is more efficient in predicting microRNA targets than the algorithm trained with 3'UTR information. On the other hand, we show that 3'UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that both predictions, from the 3'UTR and coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from only one experimental approach, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool on data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model alongside the 3'UTR-based one.

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 451
4942 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. The algorithm and circuit design have been verified on several datasets to generate correct outputs.
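
The amplitude dynamics that such a circuit implements can be sketched with a plain statevector simulation (NumPy rather than Q#; the marked indices and register size are arbitrary stand-ins for valid low-cost cycles):

```python
import numpy as np

n = 6                       # 2^6 = 64 basis states, e.g. encoded edge sets
N = 2 ** n
marked = {13, 42}           # indices the oracle accepts (toy stand-ins)

psi = np.full(N, 1 / np.sqrt(N))        # Hadamard on every qubit
oracle = np.where(np.isin(np.arange(N), list(marked)), -1.0, 1.0)

# Optimal number of Grover iterations is about (pi/4) * sqrt(N / M).
iters = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))
for _ in range(iters):
    psi = oracle * psi                   # phase flip on marked states
    psi = 2 * psi.mean() - psi           # inversion about the mean (diffusion)

prob = sum(psi[i] ** 2 for i in marked)
print(f"{iters} Grover iterations, success probability {prob:.3f}")
```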

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 190
4941 Scalable Learning of Tree-Based Models on Sparsely Representable Data

Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou

Abstract:

Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn.org, the most popular open source Python machine learning library.
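
A sketch of the core idea, assuming a CSC layout: a sparsity-aware splitter touches only the stored non-zeros of a feature and treats the implicit zeros as one block, so the scan costs O(nnz) rather than O(n) (function and variable names are ours, not scikit-learn's internals):

```python
import numpy as np
from scipy.sparse import csc_matrix

def candidate_thresholds(X_csc, feature):
    """Gather one feature's values by touching only its non-zero entries."""
    start, end = X_csc.indptr[feature], X_csc.indptr[feature + 1]
    values = X_csc.data[start:end]           # non-zero entries only
    nnz_rows = X_csc.indices[start:end]      # which samples they belong to
    # The implicit zeros form one extra group; a real splitter scans the
    # sorted non-zeros once and handles the zero block in O(1).
    uniq = np.unique(np.concatenate([values, [0.0]]))
    return (uniq[:-1] + uniq[1:]) / 2, nnz_rows

rng = np.random.default_rng(0)
dense = rng.random((8, 5)) * (rng.random((8, 5)) < 0.3)   # ~70% zeros
X = csc_matrix(dense)
thresholds, rows = candidate_thresholds(X, feature=2)
print(thresholds)
```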

Keywords: big data, sparsely representable data, tree-based models, scalable learning

Procedia PDF Downloads 263
4940 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems

Authors: Bruno Trstenjak, Dzenana Donko

Abstract:

Data mining and classification of objects is the process of data analysis using various machine learning techniques, applied today in many fields of research. This paper presents a hybrid classification model improved with expert knowledge. The hybrid model integrates several machine learning techniques (information gain, K-means, and case-based reasoning) together with the expert's knowledge, which is used to determine the importance of features. The paper presents the model's algorithm and the results of a case study in which the emphasis was put on achieving the maximum classification accuracy without reducing the number of features.

Keywords: case based reasoning, classification, expert's knowledge, hybrid model

Procedia PDF Downloads 367
4939 Non-Candida Albicans Candida: Virulence Factors and Species Identification in India

Authors: Satender Saraswat, Dharmendra Prasad Singh, Rajesh Kumar Verma, Swati Sarswat

Abstract:

Background and Purpose: The predominant cause of candidiasis used to be Candida albicans, but the spectrum has shifted towards non-Candida albicans Candida (NCAC) species (Candida species other than C. albicans). NCAC species, earlier considered non-pathogenic or minimally virulent, are now considered a primary cause of morbidity and mortality in the immunocompromised. With NCAC spp. gaining weight in clinical cases, this study was conducted to determine the prevalence of NCAC spp. in different clinical specimens and to assess some of their virulence factors. Material and Methods: Routine samples submitted for bacterial culture and sensitivity that showed colony characteristics like Candida on blood agar and microscopic features resembling Candida spp. were processed further. Candida isolates were tested by chlamydospore formation, biochemical tests including sugar fermentation and sugar assimilation tests, growth at 42°C, colony colour on HiCrome™ Candida Differential Agar, the HiCandida Identification Kit, and VITEK-2 Compact. Virulence factors such as adherence to buccal epithelial cells (ABEC), biofilm formation, haemolytic activity, and production of coagulase enzyme were also tested. Results: The mean age of the patients was 38.46 years, with a male-female ratio of 1.36:1. In total, 137 Candida isolates were recovered: 45.3% from urine, 19.7% from vaginal swabs, and 13.9% from oropharyngeal swabs. 55 (40.1%) isolates of C. albicans and 82 (59.9%) of NCAC spp. were identified, with C. tropicalis (23.4%) the commonest among NCAC. C. albicans (3; 50%) was the commonest species in cases of candidemia. Haemolysin production (85.5%) and ABEC (78.2%) were the major virulence factors in C. albicans. C. tropicalis (59.4%) and C. dubliniensis (50%) showed maximum ABEC. Biofilm-forming capacity was higher in C. tropicalis (78.1%) than in C. albicans (67%). Conclusion: This study suggests that prevalence and virulence vary with geographical location, even within a subcontinent. It clearly demarcates the emergence of NCAC species and their predominance in different body fluids. Identification of Candida to species level should become routine in all laboratories.

Keywords: ABEC, NCAC, non-Candida albicans Candida, Vitek-2TM compact

Procedia PDF Downloads 134
4938 Radio Frequency Identification (RFID) Cost-Effective, Location-Based System for Managing Construction Materials

Authors: Mourad Bakouka, Abdelaziz Rabehi

Abstract:

Companies need logistics and transportation that can adapt to the changing nature of construction sites so they can react quickly when needed. A study was conducted to develop a way to locate and track materials on construction sites; the resulting system is an RFID/GPS integration. The study also reports how the platform has been used in construction, finding many advantages to using it, including reductions in both time and costs as well as improved management of materials orders. For example, the time in which a project could start up was shortened from two weeks to three days with just a single digital order. For now, widespread adoption of the technology is still limited, largely due to an overall lack of awareness and the difficulty of connecting to it. However, as more companies embrace it in construction, the technology is expected to become ubiquitous. The developed platform provides contractors and construction managers with real-time information about the status of materials and work, allowing them to better manage the workflow in a project. The study sheds new light on this subject and on the growing use of smart tools in building construction.

Keywords: materials management, internet of things (IoT), radio frequency identification (RFID), construction site, supply chain management

Procedia PDF Downloads 82
4937 1G2A IMU/GPS Integration Algorithm for Land Vehicle Navigation

Authors: O. Maklouf, Ahmed Abdulla

Abstract:

A general decline in the cost, size, and power requirements of electronics is accelerating the adoption of integrated GPS/INS technologies in consumer applications such as land vehicle navigation. Researchers are looking for ways to eliminate additional components from product designs. One possibility is to drop one or more of the relatively expensive gyroscopes from microelectromechanical system (MEMS) versions of inertial measurement units (IMUs). For land vehicle use, the most important sensors are the vertical gyro, which senses the heading of the vehicle, and two horizontal accelerometers for determining its velocity. This paper presents a simplified integration algorithm for a strapdown partial IMU (ParIMU)/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity, and heading. In the present approach, we neglect earth rotation and gravity variations because of the poor gyroscope sensitivities of the low-cost IMU and the relatively small area of the trajectory.
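
A minimal sketch of the 1-gyro/2-accelerometer (1G2A) dead-reckoning step under the paper's simplifications, with earth rotation and gravity variations neglected (the sensor data and rates below are synthetic):

```python
import numpy as np

def par_imu_2d(gyro_z, acc_xy, dt, v0=(0.0, 0.0), psi0=0.0):
    """2-D strapdown dead reckoning: one vertical gyro gives yaw rate,
    two horizontal accelerometers give body-frame (ax, ay)."""
    psi, v = psi0, np.array(v0, float)
    pos = [np.zeros(2)]
    for wz, (ax, ay) in zip(gyro_z, acc_xy):
        psi += wz * dt                              # heading update
        c, s = np.cos(psi), np.sin(psi)
        R = np.array([[c, -s], [s, c]])             # body -> navigation frame
        v = v + R @ np.array([ax, ay]) * dt         # velocity update
        pos.append(pos[-1] + v * dt)                # position update
    return np.array(pos)

# Toy run: constant forward acceleration while turning gently.
T, dt = 600, 0.01
traj = par_imu_2d(np.full(T, 0.02), np.tile([0.5, 0.0], (T, 1)), dt)
print("final position:", traj[-1])
```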

Keywords: GPS, ParIMU, INS, Kalman filter

Procedia PDF Downloads 516
4936 Distribution Network Optimization by Optimal Placement of Photovoltaic-Based Distributed Generation: A Case Study of the Nigerian Power System

Authors: Edafe Lucky Okotie, Emmanuel Osawaru Omosigho

Abstract:

This paper examines the impact of introducing distributed energy generation (DEG) technology into the Nigerian power system as an alternative means of energy generation at the distribution end, using the Otovwodo 15 MVA, 33/11 kV injection substation as a case study. The overall idea is to increase the generated energy in the system, improve the voltage profile, and reduce system losses. A photovoltaic-based distributed energy generator (PV-DEG) was considered and optimally placed in the network using a genetic algorithm (GA) in the MATLAB/Simulink environment. The simulation results show that the dynamic performance of the network was optimized by DEG-grid integration.

Keywords: distributed energy generation (DEG), genetic algorithm (GA), power quality, total load demand, voltage profile

Procedia PDF Downloads 84
4935 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of the large blocks starts with ingot casting, followed by open die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet simulations usually impose a constant temperature on the assumption that it has homogenized after some holding time. To be close to the experiment, the real distribution of the temperature through the specimen is therefore needed before the mechanical loading. Thus, we present a robust algorithm that computes the temperature gradient within the specimen, representing the real temperature distribution before deformation; most numerical simulations assume a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can occur there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product, so identifying the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, and developing a technique to predict its onset remains challenging. In this perspective, we propose, in addition to the algorithm for obtaining the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. In the simulations, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 80
4934 Rounding Technique's Application in Schnorr Signature Algorithm: Known Partially Most Significant Bits of Nonce

Authors: Wenjie Qin, Kewei Lv

Abstract:

In 1996, Boneh and Venkatesan proposed the Hidden Number Problem (HNP) and proved that the most significant bits (MSBs) of the computational Diffie-Hellman key exchange scheme and related schemes are unpredictable bits. They also gave a lattice rounding technique to solve the HNP in the non-uniform model. In this paper, we put forward a new concept, the Schnorr-MSB-HNP. We reduce the problem of recovering a Schnorr signature private key from a few consecutive most significant bits of the random nonce (used at each signature generation) to the Schnorr-MSB-HNP, and then use the rounding technique to solve the Schnorr-MSB-HNP. We conclude that if there is a 'miraculous box' which takes the random nonce as input and outputs the 2 log log q most significant bits of the nonce (q a prime), the signature private key can be obtained by choosing 2 log q signature messages at random. Thus we obtain an attack on the Schnorr signature private key.
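
To make the role of the nonce k concrete, here is a toy Schnorr signing/verification sketch over an intentionally tiny group (p = 23, q = 11). It is not the attack itself, only the signature equation s = k + xe mod q, in which known MSBs of k leak linear information about the private key x:

```python
import hashlib
import secrets

p, q, g = 23, 11, 2                       # toy parameters; g has order q mod p
x = secrets.randbelow(q - 1) + 1          # private key
y = pow(g, x, p)                          # public key

def H(r, m):
    return int.from_bytes(hashlib.sha256(f"{r}|{m}".encode()).digest(), "big") % q

def sign(m):
    k = secrets.randbelow(q - 1) + 1      # per-signature nonce: must stay secret
    r = pow(g, k, p)
    e = H(r, m)
    s = (k + x * e) % q                   # s leaks x linearly once k is known
    return e, s

def verify(m, e, s):
    r = (pow(g, s, p) * pow(y, -e, p)) % p   # g^s * y^-e == g^k
    return H(r, m) == e

e, s = sign("hello")
print(verify("hello", e, s))
```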

Keywords: rounding technique, most significant bits, Schnorr signature algorithm, nonce, Schnorr-MSB-HNP

Procedia PDF Downloads 233
4933 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation

Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran

Abstract:

The increasing importance of wind power demands precise knowledge of wind resources, and methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs, which makes it essential to determine appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature, and other geological or atmospheric parameters is required. Among all the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law, and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, named GA-NN, for vertical extrapolation of wind speed. The model is simple in the sense that it does not require any parametric estimates such as the wind shear coefficient, roughness length, or atmospheric stability, and it is also reliable compared with other methods. The model uses measured wind speeds at 10 m, 20 m, and 30 m heights to estimate wind speeds up to 100 m. A good agreement is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN and the power law further prove the feasibility of the proposed method.
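
For reference, the power-law baseline against which such models are compared can be written in a few lines (the measured speeds and heights below are invented):

```python
import numpy as np

def power_law_extrapolate(v1, h1, v2, h2, h_target):
    """Fit the shear exponent alpha from two measured heights, then apply
    the power law v(h) = v_ref * (h / h_ref) ** alpha."""
    alpha = np.log(v2 / v1) / np.log(h2 / h1)   # shear exponent from data
    return v2 * (h_target / h2) ** alpha, alpha

v, a = power_law_extrapolate(v1=5.1, h1=10.0, v2=5.9, h2=30.0, h_target=100.0)
print(f"alpha = {a:.3f}, estimated wind speed at 100 m = {v:.2f} m/s")
```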

Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning

Procedia PDF Downloads 490
4932 Cryptography and Cryptosystem a Panacea to Security Risk in Wireless Networking

Authors: Modesta E. Ezema, Chikwendu V. Alabekee, Victoria N. Ishiwu, Ifeyinwa NwosuArize, Chinedu I. Nwoye

Abstract:

The advent of wireless networking in computing technology cannot be overemphasized: it opened up easy access to information resources, made networking easier, and brought internet access to our doorsteps. Despite all this, it also brought problems that cause havoc in today's overall information security, as cyber criminals will always compromise the integrity of a message that is not encrypted or that is encrypted with a weak algorithm. To address this, this study focuses on cryptosystems and cryptography, which ensure end-to-end encrypted messaging. Various cryptographic algorithms, as well as the techniques and applications of cryptography for efficiency, are considered in the work; present and future applications of cryptography are discussed, and quantum cryptography is presented as the current and future direction in the development of cryptography. An empirical study was conducted to collect data from network users.

Keywords: algorithm, cryptography, cryptosystem, network

Procedia PDF Downloads 349
4931 A Futuristic Look at American Indian Nationhood: Zits in Sherman Alexie’s Flight

Authors: Shaimaa Alobaidi

Abstract:

The presentation examines how urbanization opens possibilities for American Indian characters like Zits in Alexie's Flight to explore new definitions of their tribal self-identification. Zits travels in time and views the world from different bodies, ages, and races; his journeys end with different perspectives on the idea of nationhood as an American Indian. He is an example of Vine Deloria's statement that "urban Indians have become the cutting edge of the new Indian nationalism" (248). Flight is chosen because the moment Zits leaves the real world for time-traveling adventures is very critical; it is a moment of rage that ends in the mass murder of many Anglo-Americans. The paper focuses on the turning point when he returns to his body with new opportunities toward his existence among the majority of Anglo-Americans, who cannot help but see him as an American Indian minority in need of help and assistance. Characters such as Zits attempt to outlive alienation, and Alexie gives new definitions of their ethnic nationhood. Futuristic does not mean the very far, unpredictable future; it is rather a near, potential future for American Indian teenagers like Zits, Arnold, and Coyote Springs, the band in Reservation Blues, all revolutionary personalities in Alexie's works. They will be analyzed as Gerald Vizenor's "postindian warriors," who have the ability to identify Indigenous nationalism in a post-colonial context.

Keywords: alienation, self-identification, nationhood, urbanization, postindian

Procedia PDF Downloads 166
4930 Black Box Model and Evolutionary Fuzzy Control Methods of Coupled-Tank System

Authors: S. Yaman, S. Rostami

Abstract:

In this study, a black-box model of the coupled-tank system is obtained by using fuzzy sets, and the derived model is tested via an adaptive neuro-fuzzy inference system (ANFIS). In order to achieve better control performance, the parameters of three different controller types, a classical proportional-integral-derivative (PID) controller, a fuzzy PID controller, and the function tuner method, are tuned by an evolutionary computation method, the genetic algorithm. All tuned controllers are applied to the fuzzy model of the coupled-tank experimental setup and analyzed under different reference input values. According to the results, the function tuner method demonstrates better robust control performance and guarantees closed-loop stability.
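
A minimal sketch of GA-based gain tuning, using a generic first-order plant as a stand-in for the fuzzy coupled-tank model (population size, GA operators, plant constants, and saturation limits are our assumptions):

```python
import numpy as np
rng = np.random.default_rng(0)

def step_cost(gains, T=400, dt=0.05):
    """Integral absolute error of a unit-step response for a first-order
    plant dy/dt = (-y + u)/tau, a stand-in for the coupled-tank model."""
    kp, ki, kd = gains
    y = i_err = prev_err = 0.0
    tau, cost = 5.0, 0.0
    for _ in range(T):
        err = 1.0 - y
        i_err += err * dt
        u = kp * err + ki * i_err + kd * (err - prev_err) / dt
        u = max(-10.0, min(10.0, u))        # actuator saturation
        prev_err = err
        y += dt * (-y + u) / tau
        cost += abs(err) * dt
    return cost

# Minimal real-coded GA: elitism, blend crossover, Gaussian mutation.
pop = rng.uniform(0, 10, size=(40, 3))
for gen in range(60):
    fit = np.array([step_cost(p) for p in pop])
    elite = pop[np.argsort(fit)[:10]]
    children = []
    while len(children) < 30:
        a, b = elite[rng.integers(10, size=2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.3, 3)   # crossover + mutation
        children.append(np.clip(child, 0, 20))
    pop = np.vstack([elite, children])

best = pop[np.argmin([step_cost(p) for p in pop])]
print("tuned (Kp, Ki, Kd):", np.round(best, 2))
```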

Keywords: function tuner method (FTM), fuzzy modeling, fuzzy PID controller, genetic algorithm (GA)

Procedia PDF Downloads 310
4929 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection that enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., data that barely reduce the confidence intervals of the estimated parameters and thus can be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating that the integrated data approach is less precise than the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
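
The notion of an irrelevant data set can be illustrated with a sensitivity computation on a simplified exponential recovery model (the model form, parameters, and noise level are illustrative, not the paper's reaction-diffusion model):

```python
import numpy as np

# Simplified recovery curve f(t) = A * (1 - exp(-t / tau)).
A, tau, sigma = 1.0, 2.0, 0.02          # "true" parameters, noise std
t = np.linspace(0.05, 20, 200)

# Sensitivities df/dA and df/dtau evaluated at the true parameters.
S = np.column_stack([1 - np.exp(-t / tau),
                     -A * t / tau**2 * np.exp(-t / tau)])

def tau_std(mask):
    F = S[mask].T @ S[mask] / sigma**2       # Fisher information matrix
    return np.sqrt(np.linalg.inv(F)[1, 1])   # Cramer-Rao bound on std(tau)

full = np.ones(len(t), dtype=bool)
# Drop the half of the time points where the tau-sensitivity is smallest:
# if they are "irrelevant", the confidence interval barely widens.
keep = np.abs(S[:, 1]) >= np.median(np.abs(S[:, 1]))
print(f"std(tau), all points:        {tau_std(full):.4f}")
print(f"std(tau), informative half:  {tau_std(keep):.4f}")
```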

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 278
4928 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria

Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick

Abstract:

Heavy metals are released into sediments in the aquatic environment from both natural and anthropogenic sources, and they are considered a worldwide issue due to their deleterious ecological risks and disruption of the food chain. In this study, sediment samples were collected at three major sites (Awoye, Abereke, and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V, and Zn were determined by Atomic Absorption Spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The risks posed by heavy metals in the sediment were estimated by potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16, and 25.32 ± 4.83 µg/g at the Abereke, Awoye, and Ayetoro sites, respectively. The PMF resulted in the identification of four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%), and Mn sources (8%); oil exploration activities and industrial wastes are thus the major sources contributing heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr, and Cd (40 ≤ Ei ≤ 80), classifying the sites as moderate risk. The integrated risk values of Awoye, Abereke, and Ayetoro are 231.2, 234.0, and 236.4, respectively, suggesting that the study areas carry a moderate ecological risk. The study showed the suitability of the PMF receptor model for source identification of heavy metals in sediments. Intensive anthropogenic activities and natural sources can discharge large amounts of heavy metals into the study area, which may increase the heavy metal content of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.
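
A sketch of the factorization step, using scikit-learn's NMF as an unweighted stand-in for PMF (real PMF additionally weights residuals by measurement uncertainties; the data below are synthetic):

```python
import numpy as np
from sklearn.decomposition import NMF

# Factor the (samples x metals) concentration matrix X into non-negative
# source contributions G and source profiles F, X ~ G @ F.
rng = np.random.default_rng(0)
metals = ["As", "Cd", "Cr", "Cu", "Fe", "Mn", "Ni", "Pb", "V", "Zn"]
true_F = rng.random((4, len(metals)))              # 4 source profiles
true_G = rng.random((60, 4))                       # contributions per sample
X = true_G @ true_F + 0.01 * rng.random((60, len(metals)))

model = NMF(n_components=4, init="nndsvda", max_iter=2000, random_state=0)
G = model.fit_transform(X)                         # source contributions
F = model.components_                              # source profiles
share = G.sum(axis=0) / G.sum()                    # rough % per source
for k, s in enumerate(share):
    print(f"source {k}: {100 * s:.1f}%")
```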

Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks

Procedia PDF Downloads 22
4927 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for INS (inertial navigation system) errors in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical data source and to improve the stability of the positioning. The main process to compensate the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, centralized and decentralized filters, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated by simulation, supposing that the aircraft flies over nine different trajectories with a precise geophysical DB and sensors. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories generated better navigation results through the combination of gravity gradient and terrain data. Also, the centralized filter generally showed more stable results, because the weight allocation in the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
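
The measurement update at the heart of the centralized combination can be sketched as a standard EKF step in which the gravity-gradient and terrain readings are stacked into one measurement vector (the map functions and noise values below are synthetic):

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update: h(x) looks up the geophysical database
    at the predicted position, z is the onboard sensor reading."""
    H = H_jac(x)                         # Jacobian of the map lookup
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2-D position state with two stacked measurements (synthetic maps).
grav = lambda x: np.array([0.3 * x[0] + 0.1 * x[1]])     # gravity-gradient map
terr = lambda x: np.array([50 + 2.0 * np.sin(x[0])])     # terrain height map
h = lambda x: np.concatenate([grav(x), terr(x)])
H_jac = lambda x: np.array([[0.3, 0.1], [2.0 * np.cos(x[0]), 0.0]])
R = np.diag([0.01, 4.0])                                 # sensor noise

x, P = np.array([1.0, 2.0]), np.eye(2) * 25.0            # INS-predicted state
z = h(np.array([1.4, 1.7])) + np.array([0.01, 0.5])      # true pos + noise
x, P = ekf_update(x, P, z, h, H_jac, R)
print("corrected position:", x)
```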

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 349
4926 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies; it makes the court system accountable by ensuring that justice is done in accordance with the laws. Equally important is privacy, a fundamental human right (Article 12 of the Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test of the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is also very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate); consequently, many Swiss courts only publish a fraction of their decisions. An automated anonymization system reduces these costs substantially, freeing capacity for publishing court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base of publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identification. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model plus an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium; both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
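
A sketch of the NER-replacement step, using an off-the-shelf German spaCy model as a stand-in for the legal-domain model the project trains (requires the de_core_news_sm model, installable via "python -m spacy download de_core_news_sm"; the category-plus-identifier replacement scheme shown here is our illustration):

```python
import spacy

nlp = spacy.load("de_core_news_sm")

def anonymize(text):
    """Replace each recognized entity with its category and a stable
    per-document identifier, so repeated mentions stay linkable."""
    doc = nlp(text)
    ids, out, last = {}, [], 0
    for ent in doc.ents:
        key = (ent.label_, ent.text)
        ids.setdefault(key, f"{ent.label_}_{len(ids) + 1}")
        out.append(text[last:ent.start_char])
        out.append(f"[{ids[key]}]")
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(anonymize("Hans Muster aus Zürich klagte gegen die Example AG."))
```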

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 148
4925 Automatic Number Plate Recognition System Based on Deep Learning

Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used in safety, security, and commercial applications, and several methods and techniques are being developed to achieve better accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS), together with an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. To recognize the number of the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolution layers, two max-pooling layers, and six fully connected layers. The model was trained on a number-image database on the Jetson TX2 NVIDIA target and achieved an accuracy of 95.84%.
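
A Keras sketch of the described topology, four convolution layers, two max-pooling layers, and six fully connected layers; the filter counts, kernel sizes, input size, and 36-class output (0-9, A-Z) are our assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ocr_cnn(input_shape=(32, 32, 1), n_classes=36):
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                       # 1st of 2 pooling layers
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                       # 2nd pooling layer
        layers.Flatten(),
        layers.Dense(512, activation="relu"),        # six fully connected
        layers.Dense(256, activation="relu"),        # layers in total,
        layers.Dense(128, activation="relu"),        # counting the softmax
        layers.Dense(64, activation="relu"),         # output layer
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

build_ocr_cnn().summary()
```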

Keywords: ANPR, CS, CNN, deep learning, NPL

Procedia PDF Downloads 306
4924 Unsteady Three-Dimensional Adaptive Spatial-Temporal Multi-Scale Direct Simulation Monte Carlo Solver to Simulate Rarefied Gas Flows in Micro/Nano Devices

Authors: Mirvat Shamseddine, Issam Lakkis

Abstract:

We present an efficient, three-dimensional parallel multi-scale Direct Simulation Monte Carlo (DSMC) algorithm for the simulation of unsteady rarefied gas flows in micro/nanosystems. The algorithm employs a novel spatiotemporal adaptivity scheme. The scheme performs a fully dynamic multi-level grid adaption based on the gradients of flow macro-parameters and an automatic temporal adaptation. The computational domain consists of a hierarchical octree-based Cartesian grid representation of the flow domain and a triangular mesh for the solid object surfaces. The hybrid mesh, combined with the spatiotemporal adaptivity scheme, allows for increased flexibility and efficient data management, rendering the framework suitable for efficient particle-tracing and dynamic grid refinement and coarsening. The parallel algorithm is optimized to run DSMC simulations of strongly unsteady, non-equilibrium flows over multiple cores. The presented method is validated by comparing with benchmark studies and then employed to improve the design of micro-scale hotwire thermal sensors in rarefied gas flows.

Keywords: DSMC, oct-tree hierarchical grid, ray tracing, spatial-temporal adaptivity scheme, unsteady rarefied gas flows

Procedia PDF Downloads 299