Search results for: Function Approximation Technique (FAT)
1242 A Formal Approach for Proof Constructions in Cryptography
Authors: Markus Kaiser, Johannes Buchmann
Abstract:
In this article, we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' Formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this article shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
Keywords: prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' Formula, Miller-Rabin primality test.
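For orientation, a minimal, unverified Python sketch of the Miller-Rabin probabilistic primality test whose formally verified implementation is reported above is shown below; this is not the Isabelle/HOL implementation described in the abstract, and the function and parameter names are illustrative only.

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test: False if n is composite,
    True if n is probably prime (error probability <= 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # probably prime

# Example: miller_rabin(561) -> False (Carmichael number), miller_rabin(104729) -> True
```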
1241 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow quickly, the pressure on spectrum utilization has risen steadily. Cognitive radio has emerged as a technology to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. A primary user follows two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one based on the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in the real environment, as it can tolerate some amount of sensing errors.
Keywords: Cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), Fast Fourier transform (FFT), signal to noise ratio (SNR).
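As a rough illustration of the energy detection step described above (not the authors' implementation; the sample count, primary-user waveform and threshold below are arbitrary assumptions), a minimal NumPy sketch:

```python
import numpy as np

def energy_detector(samples: np.ndarray, threshold: float) -> bool:
    """Declare the primary user present if the average signal energy exceeds the threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return bool(energy > threshold)

# Toy Monte Carlo: estimate probability of detection (PD) and false alarm (PF).
rng = np.random.default_rng(0)
n, trials, noise_var, threshold = 1024, 2000, 1.0, 1.2
pd_hits = pf_hits = 0
for _ in range(trials):
    noise = rng.normal(0, np.sqrt(noise_var / 2), n) + 1j * rng.normal(0, np.sqrt(noise_var / 2), n)
    signal = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))   # assumed PU waveform
    pf_hits += energy_detector(noise, threshold)              # H0: noise only
    pd_hits += energy_detector(signal + noise, threshold)     # H1: PU active
print(f"PF ~ {pf_hits / trials:.3f}, PD ~ {pd_hits / trials:.3f}")
```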
1240 Multi-Disciplinary Optimisation Methodology for Aircraft Load Prediction
Authors: Sudhir Kumar Tiwari
Abstract:
The paper demonstrates a methodology that can be used at an early design stage of any conventional aircraft. This research activity assesses the feasibility of deriving a methodology for aircraft load estimation during the various phases of design of a transport category aircraft, utilizing the potential of commercial finite element analysis software, which may drive significant time savings. The early design phase has limited data, and quickly changing configurations result in the handling of a large number of load cases. It is useful to idealize the aircraft as a connection of beams, which can be very accurately modelled using finite element analysis (beam elements). This research explores the correct approach towards idealizing an aircraft using beam elements. FEM techniques such as inertia relief were studied for implementation during the course of the work. The correct boundary condition technique is envisaged for the generation of shear force, bending moment and torque diagrams for the aircraft. The possible applications of this approach to the aircraft design process have been investigated.
Keywords: Multi-disciplinary optimization, aircraft load, finite element analysis, Stick Model.
1239 A Comparison of Marginal and Joint Generalized Quasi-likelihood Estimating Equations Based On the Com-Poisson GLM: Application to Car Breakdowns Data
Authors: N. Mamode Khan, V. Jowaheer
Abstract:
In this paper, we apply and compare two generalized estimating equation approaches to the analysis of car breakdowns data in Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to the factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of the explanatory factors has been a challenging problem. In this paper, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car based on a study performed in Mauritius over a year. We remark that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using the Com-Poisson regression model. We use two types of quasi-likelihood estimation approaches to estimate the parameters of the model: the marginal and joint generalized quasi-likelihood estimating equation approaches. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution in modelling the under-dispersed count responses recorded in this study.
Keywords: Breakdowns, under-dispersion, Com-Poisson, generalized linear model, marginal quasi-likelihood estimation, joint quasi-likelihood estimation.
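For reference, the Conway-Maxwell-Poisson (Com-Poisson) probability mass function underlying the regression model has the standard form, with ν > 1 corresponding to under-dispersion:

```latex
P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda,\nu)},
\qquad
Z(\lambda,\nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}},
\qquad y = 0,1,2,\dots
```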
1238 Effects of Upstream Wall Roughness on Separated Turbulent Flow over a Forward Facing Step in an Open Channel
Authors: S. M. Rifat, André L. Marchildon, Mark F. Tachie
Abstract:
The effect of upstream surface roughness on the flow over a smooth forward facing step in an open channel was investigated using a particle image velocimetry technique. Three different upstream surface topographies consisting of a hydraulically smooth wall, 36-grit sandpaper and sand grains were examined. Apart from the wall roughness conditions, all other upstream flow characteristics were kept constant. It was observed that upstream roughness decreased the approach velocity by 2% and 10% but increased the turbulence intensity by 14% and 35% at the wall-normal distance corresponding to the top plane of the step, compared to the smooth upstream condition. The results showed that roughness decreased the reattachment lengths by 14% and 30% compared to the smooth upstream condition. Although the magnitudes of the maximum positive and negative Reynolds shear stress in the separated and reattached region were 0.02Ue for all the cases, the physical size of both the maximum and minimum contour levels decreased with increasing upstream roughness.
Keywords: Forward facing step, open channel, separated and reattached turbulent flows, wall roughness.
1237 Forming the Differential-Algebraic Model of Radial Power Systems for Simulation of both Transient and Steady-State Conditions
Authors: Saleh A. Al-Jufout
Abstract:
This paper presents a procedure for forming the mathematical model of radial electric power systems for simulation of both transient and steady-state conditions. The research idea is based on the nodal voltage technique and on differentiation of Kirchhoff's current law (KCL) applied to each non-reference node of the radial system; as a result, the nodal voltages are calculated by solving a system of algebraic equations. The currents of the electric power system components are determined by solving their respective differential equations. Transforming the three-phase coordinate system into a Cartesian coordinate system in the model decreased the overall number of equations by one third. The use of the Cartesian coordinate system does not neglect the DC component during transient conditions, but restricts the model's implementation to symmetrical modes of operation only. An example of the input data for a four-bus radial electric power system has been calculated.
Keywords: Mathematical Modelling, Radial Power System, Steady-State, Transients.
1236 Influence of Different Thicknesses on Mechanical and Corrosion Properties of α-C:H Films
Authors: S. Tunmee, P. Wongpanya, I. Toda, X. L. Zhou, Y. Nakaya, N. Konkhunthot, S. Arakawa, H. Saitoh
Abstract:
Hydrogenated amorphous carbon (α-C:H) films were deposited on p-type Si (100) substrates at different thicknesses by the radio frequency plasma enhanced chemical vapor deposition technique (rf-PECVD). Raman spectra display asymmetric diamond-like carbon (DLC) peaks, representative of the α-C:H films. The decrease of the ID/IG intensity ratio revealed that the sp3 content rises at different thicknesses of the α-C:H films. In terms of mechanical properties, the high hardness and elastic modulus values showed the elastic and plastic deformation behaviors related to the sp3 content in the amorphous carbon films. Electrochemical measurements showed that the α-C:H films exhibited excellent corrosion resistance in air-saturated 3.5 wt.% NaCl solution at pH 2 and room temperature. Increasing the thickness affected the small sp2 clusters in the matrix, restricting the transfer and exchange of electrons. The deposited α-C:H films exhibited excellent mechanical properties and corrosion resistance.
Keywords: Thickness, Mechanical properties, Electrochemical corrosion properties, α-C:H film.
1235 Synthesis, Characterization and Coating of the Zinc Oxide Nanoparticles on Cotton Fabric by Mechanical Thermo-Fixation Techniques to Impart Antimicrobial Activity
Authors: Imana Shahrin Tania, Mohammad Ali
Abstract:
The present study reports the synthesis, characterization and application of nano-sized zinc oxide (ZnO) particles on a cotton fabric surface. The aim of the investigation is to impart antimicrobial activity to textile cloth. The nanoparticles are synthesized by a wet chemical method from zinc sulphate and sodium hydroxide. SEM (scanning electron micrograph) images are taken to demonstrate the surface morphology of the nanoparticles. XRD analysis is done to determine the crystal size of the nanoparticles. With confirmation of nanoparticle formation, the cotton woven fabric is treated with ZnO nanoparticles by the mechanical thermo-fixation (pad-dry-cure) technique. To increase the wash durability of the nano-treated fabric, an acrylic binder is used as a fixing agent. The treated fabric shows up to 90% bacterial reduction for S. aureus (Staphylococcus aureus) and 87% for E. coli (Escherichia coli), which is appreciable for bacteria-protective clothing.
Keywords: Nanoparticle, zinc oxide, cotton fabric, antibacterial activity, binder.
1234 Structure and Properties of Meltblown Polyetherimide as High Temperature Filter Media
Authors: Gajanan Bhat, Vincent Kandagor, Daniel Prather, Ramesh Bhave
Abstract:
Polyetherimide (PEI), an engineering plastic with very high glass transition temperature and excellent chemical and thermal stability, has been processed into a controlled porosity filter media of varying pore size, performance, and surface characteristics. A special grade of the PEI was processed by melt blowing to produce microfiber nonwovens suitable as filter media. The resulting microfiber webs were characterized to evaluate their structure and properties. The fiber webs were further modified by hot pressing, a post processing technique, which reduces the pore size in order to improve the barrier properties of the resulting membranes. This ongoing research has shown that PEI can be a good candidate for filter media requiring high temperature and chemical resistance with good mechanical properties. Also, by selecting the appropriate processing conditions, it is possible to achieve desired filtration performance from this engineering plastic.
Keywords: Nonwovens, melt blowing, polyetherimide, filter media, microfibers.
1233 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model
Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy
Abstract:
A complex-valued neural network is a neural network which consists of complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. In neural networks, the generalized mean neuron model (GMN) is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the Generalized Mean Neuron model in a complex-valued neural network that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a Generalized Mean Neuron model in the complex plane for signal processing over a real-valued neural network. We have studied and stated various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error on a Generalized Mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex valued neural network, Generalized Mean neuron model, Signal processing.
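The aggregation in the generalized mean neuron model builds on the ordinary generalized (power) mean. As a point of reference only (the cited model additionally introduces weights and a bias, and in this paper operates over complex-valued inputs), the real-valued power mean of inputs x_1, ..., x_n is

```latex
M_r(x_1,\dots,x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^{\,r}\right)^{1/r},
\qquad r \neq 0,
```

with r = 1 giving the arithmetic mean and r = -1 the harmonic mean.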
1232 Optimal Design of Multimachine Power System Stabilizers Using Improved Multi-Objective Particle Swarm Optimization Algorithm
Authors: Badr M. Alshammari, T. Guesmi
Abstract:
In this paper, the concept of a non-dominated sorting multi-objective particle swarm optimization with local search (NSPSO-LS) is presented for the optimal design of multimachine power system stabilizers (PSSs). The controller design is formulated as an optimization problem in order to shift the system electromechanical modes into a pre-specified region of the s-plane. A composite set of objective functions comprising the damping factor and the damping ratio of the undamped and lightly damped electromechanical modes is considered. The performance of the proposed optimization algorithm is verified for the 3-machine 9-bus system. Simulation results based on eigenvalue analysis and nonlinear time-domain simulation show the potential and superiority of the NSPSO-LS algorithm in tuning PSSs over a wide range of loading conditions and large disturbances compared to the classic PSO technique and genetic algorithms.
Keywords: Multi-objective optimization, particle swarm optimization, power system stabilizer, low frequency oscillations.
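A common way of expressing the objectives mentioned in the abstract above (stated here generically, since the paper's exact formulation is not reproduced in the abstract) is in terms of the real part σ_i and damping ratio ζ_i of each electromechanical eigenvalue λ_i = σ_i ± jω_i:

```latex
\zeta_i = \frac{-\sigma_i}{\sqrt{\sigma_i^{2}+\omega_i^{2}}},
\qquad
J_1=\sum_{\sigma_i \ge \sigma_0}\left(\sigma_0-\sigma_i\right)^2,
\qquad
J_2=\sum_{\zeta_i \le \zeta_0}\left(\zeta_0-\zeta_i\right)^2,
```

where σ_0 and ζ_0 define the target D-shaped region of the s-plane into which the modes are shifted by minimizing J_1 and J_2.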
1231 A Reconfigurable Distributed Multiagent System Optimized for Scalability
Authors: Summiya Moheuddin, Afzel Noore, Muhammad Choudhry
Abstract:
This paper proposes a novel solution for optimizing the size and communication overhead of a distributed multiagent system without compromising performance. The proposed approach addresses the challenges of scalability, especially when the multiagent system is large. A modified spectral clustering technique is used to partition a large network into logically related clusters. Agents are assigned to monitor dedicated clusters rather than monitor each device or node. The proposed scalable multiagent system is implemented using JADE (Java Agent DEvelopment Framework) for a large power system. The performance of the proposed topology-independent decentralized multiagent system and the scalable multiagent system is compared by comprehensively simulating different fault scenarios. The time taken for reconfiguration, the overall computational complexity, and the communication overhead incurred are computed. The results of these simulations show that the proposed scalable multiagent system uses fewer agents efficiently, makes faster decisions to reconfigure when a fault occurs, and incurs significantly less communication overhead.
Keywords: Multiagent system, scalable design, spectral clustering, reconfiguration.
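A minimal sketch of the partitioning step described above is given below, assuming the network is available as an adjacency (affinity) matrix and using stock scikit-learn spectral clustering rather than the modified variant proposed by the authors; the toy network is illustrative only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy adjacency matrix of a small network (1 = nodes are connected).
adjacency = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Partition the network into 2 logically related clusters; one agent would
# then monitor each cluster instead of monitoring every node.
model = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0)
labels = model.fit_predict(adjacency)
print(labels)   # e.g. [0 0 0 1 1 1]
```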
1230 Hydrodynamic Modeling of Infinite Reservoir using Finite Element Method
Authors: M. A. Ghorbani, M. Pasbani Khiavi
Abstract:
In this paper, the dam-reservoir interaction is analyzed using a finite element approach. The fluid is assumed to be incompressible, irrotational and inviscid. The assumed boundary conditions are that the interface of the dam and reservoir is vertical and the bottom of the reservoir is rigid and horizontal. The governing equation for these boundary conditions is implemented in the developed finite element code considering the horizontal and vertical earthquake components. The weighted residual standard Galerkin finite element technique with 8-node elements is used to discretize the equation, which produces a symmetric matrix equation for the dam-reservoir system. A new boundary condition is proposed for the truncating surface of the unbounded fluid domain to represent the energy dissipation in the reservoir through radiation in the infinite upstream direction. The Sommerfeld and perfect damping boundary conditions are also implemented at the truncated boundary for comparison with the proposed far-end boundary. The results are compared with an analytical solution to demonstrate the accuracy of the proposed formulation and of the other truncated boundary conditions in modeling the hydrodynamic response of an infinite reservoir.
Keywords: Reservoir, finite element, truncated boundary, hydrodynamic pressure.
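For context, the classical Sommerfeld radiation condition often imposed on the truncated far-end boundary of the reservoir (one of the conditions the proposed boundary is compared against; the exact form used in the paper is not reproduced in the abstract) takes the form

```latex
\frac{\partial p}{\partial n} = -\frac{1}{C}\,\frac{\partial p}{\partial t},
```

where p is the hydrodynamic pressure, n the outward normal to the truncation surface, and C the speed of pressure waves in water.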
1229 Mathematical Modeling and Analysis of Forced Vibrations in Micro-Scale Microstretch Thermoelastic Simply Supported Beam
Authors: Geeta Partap, Nitika Chugh
Abstract:
The present paper deals with the flexural vibrations of homogeneous, isotropic, generalized micropolar microstretch thermoelastic thin Euler-Bernoulli beam resonators due to an exponential time-varying load. Both axial ends of the beam are assumed to be under simply supported conditions. The governing equations have been solved analytically by applying the Laplace transform technique twice, with respect to the time and space variables respectively. The inversion of the Laplace transform in the time domain has been performed by using the calculus of residues to obtain the deflection. The analytical results have been numerically analyzed with the help of MATLAB software for a magnesium-like material. Graphical representations and interpretations are discussed for the deflection of the beam under the simply supported boundary condition and for distinct considered values of time and space. The obtained results are easy to implement for engineering analysis and the design of resonators (sensors), modulators and actuators.
Keywords: Microstretch, deflection, exponential load, Laplace transforms, residue theorem, simply supported.
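The flexural behaviour referred to above builds on the classical Euler-Bernoulli relation; in its uncoupled, purely elastic form (i.e. without the micropolar, microstretch and thermal coupling terms treated in the paper) it reads

```latex
EI\,\frac{\partial^{4} w(x,t)}{\partial x^{4}} + \rho A\,\frac{\partial^{2} w(x,t)}{\partial t^{2}} = q(x,t),
```

where w is the transverse deflection, EI the flexural rigidity, ρA the mass per unit length, and q the applied transverse load.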
1228 Solving Bus Terminal Location Problem Using Genetic Algorithm
Authors: S. Babaie-Kafaki, R. Ghanbari, S.H. Nasseri, E. Ardil
Abstract:
Bus network design is an important problem in public transportation. The main step in this design is determining the number of required terminals and their locations. This is a special type of facility location problem, a large-scale combinatorial optimization problem that requires a long time to be solved. The genetic algorithm (GA) is a search and optimization technique which works on the evolutionary principles of natural chromosomes. Specifically, the evolution of chromosomes due to the action of crossover, mutation and natural selection based on Darwin's survival-of-the-fittest principle is artificially simulated to constitute a robust search and optimization procedure. In this paper, we first state the problem as a mixed integer programming (MIP) problem. Then we design new crossover and mutation operators for the bus terminal location problem (BTLP). We tested the different parameters of the genetic algorithm (for a sample problem) and obtained the optimal parameters for solving the BTLP by numerical trial and error.
Keywords: Bus networks, Genetic algorithm (GA), Location problem, Mixed integer programming (MIP).
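A minimal, generic GA sketch for a binary terminal-selection encoding is given below. It is not the authors' MIP formulation or their problem-specific crossover and mutation operators, and the cost model (a fixed opening cost per terminal plus the distance from each site to its nearest open terminal) is an assumption for illustration only.

```python
import random

random.seed(1)
N_SITES, POP, GENS, FIXED_COST = 12, 40, 60, 5.0
coords = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_SITES)]

def cost(chrom):
    """Fixed cost of open terminals + distance from every site to its nearest open terminal."""
    open_idx = [i for i, bit in enumerate(chrom) if bit]
    if not open_idx:
        return float("inf")
    dist = sum(min(((x - coords[j][0])**2 + (y - coords[j][1])**2) ** 0.5 for j in open_idx)
               for x, y in coords)
    return FIXED_COST * len(open_idx) + dist

def tournament(pop):
    return min(random.sample(pop, 3), key=cost)

pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_SITES)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:              # bit-flip mutation
            k = random.randrange(N_SITES)
            child[k] ^= 1
        new_pop.append(child)
    pop = new_pop

best = min(pop, key=cost)
print("open terminals:", [i for i, b in enumerate(best) if b], "cost:", round(cost(best), 2))
```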
1227 Aircraft Selection Problem Using Decision Uncertainty Distance in Fuzzy Multiple Criteria Decision Making Analysis
Authors: C. Ardil
Abstract:
Aircraft have different capabilities and specifications according to the required strategic goals and objectives in operations. With various types on the market with different aircraft characteristics, it becomes difficult to select a suitable aircraft for certain operations and requirements. The entropy weighting method (EWM) is a useful, highly consistent, and reliable method for obtaining the weights of the criteria and is worth integrating with the decision uncertainty distance (DUD) method, which is more applicable and requires less computation than other methods. An illustrative example is presented to demonstrate the validity and usability of the proposed methodology. The ranking results match those of the distance-based approach, the technique for order preference by similarity to ideal solution (TOPSIS), which shows the robustness of the entropy-DUD hybrid method. Validity analysis shows that the proposed hybrid multiple criteria decision-making analysis (MCDMA) methodology is quantitatively stable and reliable.
Keywords: aircraft selection, decision uncertainty distance (DUD), multiple criteria decision making analysis, MCDMA, TOPSIS
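A compact sketch of the entropy weighting step (EWM) is shown below with an arbitrary, hypothetical decision matrix; the DUD and TOPSIS ranking steps of the paper are not reproduced here.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weighting method: rows = alternatives (aircraft), columns = criteria.
    All criteria are assumed to be benefit-type and strictly positive."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                      # column-wise normalisation
    k = 1.0 / np.log(m)
    E = -k * (P * np.log(P)).sum(axis=0)       # entropy of each criterion
    d = 1.0 - E                                # degree of divergence
    return d / d.sum()                         # entropy weights

# Hypothetical decision matrix: 4 aircraft x 3 criteria (payload, range, reliability).
X = np.array([[25., 4500., 0.92],
              [30., 5200., 0.88],
              [22., 4800., 0.95],
              [28., 5000., 0.90]])
print(entropy_weights(X).round(3))
```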
1226 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Authors: Ahmed Elrewainy
Abstract:
Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, the “endmembers”, contribute to the spectra of the pixels in different amounts called “abundances”. Unmixing the data cube is an important task for determining the endmembers present in the cube when analyzing these images. Unsupervised unmixing is done with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. This optimization problem is solved using the proximal method of iterative thresholding. The l1-norm basis pursuit optimization problem was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
Keywords: Basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets.
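A minimal NumPy sketch of the iterative (soft-)thresholding proximal scheme for the l1-regularised form of the problem is given below; the dictionary, regularisation weight and pixel spectrum are random stand-ins rather than hyperspectral data, and the exact problem variant solved in the paper may differ.

```python
import numpy as np

def ista(D, y, lam=0.1, step=None, iters=1000):
    """Iterative soft-thresholding for  min_a  0.5*||D a - y||^2 + lam*||a||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold (prox of l1)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 120))                  # stand-in dictionary (e.g. spectral library)
a_true = np.zeros(120)
a_true[[3, 40, 77]] = [0.6, 0.3, 0.1]           # sparse abundances
y = D @ a_true + 0.01 * rng.normal(size=50)     # observed pixel spectrum
a_hat = ista(D, y)
print(np.flatnonzero(np.abs(a_hat) > 0.05))     # indices of recovered abundances
```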
1225 Efficacy of Recovery Tech Virtual Reality Rehabilitation System for Shoulder Impingement Syndrome
Authors: Kasra Afsahi, Maryam Soheilifar, Nazanin Vahed, Omid Seyed Esmaeili, S. Hossein Hosseini
Abstract:
The most common cause of shoulder pain occurs when the rotator cuff tendons become trapped under the bony area of the shoulder. This pilot study was performed to evaluate the feasibility of virtual reality (VR)-based rehabilitation of shoulder impingement syndrome in athletes. Three consecutive patients with subacromial impingement syndrome were enrolled. The participants were rehabilitated five times a week for four weeks, 20 sessions in total, with each session lasting 60 minutes. In addition to the conventional rehabilitation program, a 10-minute game-based virtual reality exercise was administered. Primary outcome measures were range of motion evaluated with a goniometer, pain sensation, disability intensity using the Disabilities of the Arm, Shoulder and Hand Questionnaire, muscle strength using a dynamometer, pain threshold with an algometer, and level of satisfaction. There were significant improvements in range of motion, pain sensation, disability, pain threshold and muscle strength compared to baseline (P < 0.05). There were no major adverse effects. This study showed the usefulness of VR therapy as an adjunct to conventional physiotherapy in improving function in patients with shoulder impingement syndrome.
Keywords: Shoulder impingement syndrome, VR therapy, feasibility, rehabilitation.
1224 Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data
Authors: Phumelele P. Kubheka, Pius A. Owolawi, Gbolahan Aiyetoro
Abstract:
Twitter is one of the most popular social media platforms, where users share their opinions on different subjects. Twitter can be considered a great source for text mining due to the high volumes of data generated through the platform daily. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African telcos. Results from this study show that LSI is much faster than LDA. However, LDA yields better results, with topic coherence higher by 8% for the best-performing model in this experiment. A higher topic coherence score indicates better performance of the model.
Keywords: Big data, latent Dirichlet allocation, latent semantic indexing, Telco, topic modeling, Twitter.
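A minimal sketch of LDA topic extraction on a handful of toy tweets using scikit-learn is given below; the study's own pipeline, dataset and coherence evaluation are not reproduced, the example tweets are invented stand-ins, and other libraries (e.g. gensim) could equally be used.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in tweets about telcos (not the South African dataset used in the study).
tweets = [
    "network coverage is terrible today no signal at all",
    "great data bundle promotion thanks for the fast internet",
    "customer service kept me on hold for an hour",
    "upgraded my contract and the new router is fast",
    "billing error again this is the third time this month",
    "no signal in my area since yesterday please fix the network",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)            # document-term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```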
1223 Mining Association Rules from Unstructured Documents
Authors: Hany Mahgoub
Abstract:
This paper presents a system for discovering association rules from collections of unstructured documents called EART (Extract Association Rules from Text). The EART system treats text only, not images or figures. EART discovers association rules amongst keywords labeling the collection of textual documents. The main characteristic of EART is that the system integrates XML technology (to transform unstructured documents into structured documents) with an information retrieval scheme (TF-IDF) and a data mining technique for association rule extraction. EART depends on word features to extract association rules. It consists of four phases: a structure phase, an index phase, a text mining phase and a visualization phase. Our work depends on the analysis of the keywords in the extracted association rules, through the co-occurrence of the keywords in one sentence of the original text and the existence of the keywords in one sentence without co-occurrence. Experiments were applied to a collection of scientific documents selected from MEDLINE related to the outbreak of the H5N1 avian influenza virus.
Keywords: Association rules, information retrieval, knowledge discovery in text, text mining.
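As a rough illustration of keyword-level association rules with support and confidence (a generic sketch with invented toy keyword sets, not the EART system's XML/TF-IDF pipeline):

```python
from itertools import combinations
from collections import Counter

# Each document represented by its set of extracted keywords (toy data).
docs = [
    {"h5n1", "avian", "influenza", "outbreak"},
    {"h5n1", "influenza", "vaccine"},
    {"avian", "influenza", "outbreak"},
    {"h5n1", "avian", "influenza"},
]

item_count = Counter(k for d in docs for k in d)
pair_count = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

min_support, min_conf, n = 0.5, 0.7, len(docs)
for pair, c in pair_count.items():
    support = c / n
    if support < min_support:
        continue
    a, b = tuple(pair)
    for x, y in ((a, b), (b, a)):
        confidence = c / item_count[x]       # P(y | x)
        if confidence >= min_conf:
            print(f"{x} -> {y}  (support={support:.2f}, confidence={confidence:.2f})")
```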
1222 Computer Verification in Cryptography
Authors: Markus Kaiser, Johannes Buchmann
Abstract:
In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' Formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
Keywords: prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' Formula, Miller-Rabin primality test.
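In conventional mathematical notation, the Bayes' Formula whose computer proof is reported above has the familiar form

```latex
P(A_i \mid B) \;=\; \frac{P(B \mid A_i)\,P(A_i)}{\sum_{j} P(B \mid A_j)\,P(A_j)},
```

for a partition {A_j} of the sample space with P(A_j) > 0 and P(B) > 0.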
1221 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System
Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae
Abstract:
The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. The proliferation of electrical devices in close working proximity can negatively influence each other's performance. The degradation in performance is due to electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control-network of the AC-DC transfer measurement system. Remedial measures for reducing measurement errors and failures of a range of industrial devices due to EMI have been explored. The AC-DC transfer measurement system was analysed for common-mode (CM) EMI effects. Further investigation of the coupling path, as well as more accurate identification of the noise propagation mechanism, has been outlined. To prevent the occurrence of common-mode ground loops, which were identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed and validated. This mitigation technique has been explored to reduce EMI effectively.
Keywords: CM, EMI, GPIB, ground loops.
1220 An Advanced Nelder Mead Simplex Method for Clustering of Gene Expression Data
Authors: M. Pandi, K. Premalatha
Abstract:
The DNA microarray technology concurrently monitors the expression levels of thousands of genes during significant biological processes and across related samples. A better understanding of functional genomics is obtained by extracting the patterns hidden in gene expression data. This is handled by clustering, which reveals natural structures and identifies interesting patterns in the underlying data. In the proposed work, clustering of gene expression data is done through an Advanced Nelder Mead (ANM) algorithm. The Nelder Mead (NM) method is a method designed for the optimization process. In the Nelder Mead method, the vertices of a triangle are considered as the solutions. Many operations are performed on this triangle to obtain a better result. In the proposed work, the reflection and expansion operations are eliminated and a new operation called spread-out is introduced. The spread-out operation increases the global search area and thus provides a better result in optimization. The spread-out operation gives three points, and the best among these three points is used to replace the worst point. The experimental results are analyzed with optimization benchmark test functions and gene expression benchmark datasets. The results show that ANM outperforms NM on both benchmarks.
Keywords: Spread out, simplex, multi-minima, fitness function, optimization, search area, monocyte, solution, genomes.
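For orientation, the standard Nelder Mead baseline that the ANM algorithm modifies can be run on a benchmark function with SciPy as sketched below; the proposed spread-out operation is not part of this stock implementation, and the Rosenbrock function is used only as an illustrative benchmark.

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Standard (unmodified) Nelder-Mead simplex search on the Rosenbrock benchmark.
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
result = minimize(rosen, x0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
print(result.x.round(4), result.fun)   # should approach the global minimum at [1, 1, 1, 1, 1]
```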
1219 A Pairwise-Gaussian-Merging Approach: Towards Genome Segmentation for Copy Number Analysis
Authors: Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Jung Chen, Sun-Chong Wang, Li-Ching Wu, H.C. Lee
Abstract:
Segmentation, filtering out of measurement errors and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some successes in the past, but they tend to be O(N²) in either computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and much less memory-hungry (O(N) in both computation time and memory requirement) and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical relation that the product of two Gaussians is another Gaussian (function). We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on the genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in ~1 second and a 1.8 million-probe array in 9 seconds.
Keywords: Cancer, pathogenesis, chromosomal aberration, copy number variation, segmentation analysis.
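The "product of two Gaussians is another Gaussian" relation at the heart of SAD is the standard identity: for two normal densities, the (unnormalised) product is again Gaussian, with

```latex
\mathcal{N}(x;\mu_1,\sigma_1^{2})\,\mathcal{N}(x;\mu_2,\sigma_2^{2})
\;\propto\;
\mathcal{N}\!\left(x;\;
\frac{\mu_1/\sigma_1^{2}+\mu_2/\sigma_2^{2}}{1/\sigma_1^{2}+1/\sigma_2^{2}},\;
\frac{1}{1/\sigma_1^{2}+1/\sigma_2^{2}}\right),
```

which is what allows two neighbouring probe measurements with normally distributed errors to be merged into a single Gaussian estimate.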
1218 Analytical Based Truncation Principle of Higher-Order Solution for a x^(1/3) Force Nonlinear Oscillator
Authors: Md. Alal Hosen
Abstract:
In this paper, an analytical technique based on a modified harmonic balance method has been developed to determine higher-order approximate periodic solutions of a conservative nonlinear oscillator for which the elastic force term is proportional to x^(1/3). Usually, a set of nonlinear algebraic equations is solved in this method. However, analytical solutions of these algebraic equations are not always possible, especially in the case of a large oscillation. In this article, different parameters of the same nonlinear problems are found, for which the power series produces the desired results even for large oscillations. We find that the modified harmonic balance method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. Besides these, a suitable truncation formula is found, whose solution gives better results than existing solutions. The method is mainly illustrated by the x^(1/3) force nonlinear oscillator, but it is also useful for many other nonlinear problems.
Keywords: Approximate solutions, Harmonic balance method, Nonlinear oscillator, Perturbation.
1217 Parallel Distributed Computational Microcontroller System for Adaptive Antenna Downlink Transmitter Power Optimization
Authors: K. Prajindra Sankar, S.K. Tiong, S.P. Johnny Koh
Abstract:
This paper presents a tested research concept that implements a complex evolutionary algorithm, the genetic algorithm (GA), in a multi-microcontroller environment. A Parallel Distributed Genetic Algorithm (PDGA) is employed in the adaptive beamforming technique to reduce the power usage of an adaptive antenna at a WCDMA base station. An adaptive antenna has a dynamic beam that requires a more advanced beamforming algorithm, such as a genetic algorithm, which demands heavy computation and memory space. Microcontrollers are low-resource platforms that are normally not associated with GAs, which are typically resource intensive. The aim of this project was to design a cooperative multiprocessor system, by expanding the role of small-scale PIC microcontrollers, to optimize WCDMA base station transmitter power. Implementation results have shown that the PDGA multi-microcontroller system returned optimal transmitted power compared to the conventional GA.
Keywords: Microcontroller, Genetic Algorithm, Adaptive antenna, Power optimization.
1216 Simulation on Influence of Environmental Conditions on Part Distortion in Fused Deposition Modelling
Authors: Anto Antony Samy, Atefeh Golbang, Edward Archer, Alistair McIlhagger
Abstract:
Fused Deposition Modelling (FDM) is one of the additive manufacturing techniques that has become highly attractive in the industrial and academic sectors. However, parts fabricated through FDM are highly susceptible to geometrical defects such as warpage, shrinkage, and delamination that can severely affect their function. Among the thermoplastic polymer feedstocks for FDM, semi-crystalline polymers are highly prone to part distortion due to polymer crystallization. In this study, the influence of FDM processing conditions such as chamber temperature and print bed temperature on the induced thermal residual stress and resulting warpage is investigated using a 3D transient thermal model for a semi-crystalline polymer. The thermo-mechanical properties and the viscoelasticity of the polymer, as well as the crystallization physics which considers the crystallinity of the polymer, are coupled with the evolving temperature gradient of the print model. From the results it was observed that increasing the chamber temperature from 25 °C to 75 °C leads to a 3.3% decrease in residual stress and a 0.4% increase in warpage, while decreasing the bed temperature from 100 °C to 60 °C resulted in a 27% increase in residual stress and a significant rise of 137% in warpage. The simulated warpage data are validated by comparing them with the measured warpage values of the samples using 3D scanning.
Keywords: Finite Element Analysis, FEA, Fused Deposition Modelling, residual stress, warpage.
1215 Elman Neural Network for Diagnosis of Unbalance in a Rotor-Bearing System
Authors: S. Sendhilkumar, N. Mohanasundaram, M. Senthilkumar, S. N. Sivanandam
Abstract:
The operational life of rotating machines has to be extended using a predictive condition maintenance tool. Among the various condition monitoring techniques, vibration analysis is the most widely used technique in industry. Signals are extracted for evaluating the condition of the machine; further diagnostics are carried out with the detected signals to extend the life of the machine. With the help of the detected signals, further interpretations are made to predict the occurrence of defects. To study the problem of defects, a test rig with various possibilities of defects is constructed and experiments are performed considering the unbalanced condition. Further, this paper presents an approach for fault diagnosis of the unbalance condition using an Elman neural network and frequency-domain vibration analysis. Amplitudes with variation in acceleration are fed to the Elman neural network to classify the fault or no-fault condition. The Elman network is trained, validated and tested with experimental readings. The results illustrate the effectiveness of the Elman network for the rotor-bearing system.
Keywords: Elman neural network, fault detection, rotating machines, unbalance, vibration analysis.
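A minimal sketch of an Elman-type classifier for vibration amplitude sequences is given below using PyTorch, whose torch.nn.RNN is an Elman (simple recurrent) network with tanh units; the data, network size and training loop are random stand-ins, not the experimental readings or architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ElmanClassifier(nn.Module):
    """Elman (simple recurrent) network: fault vs. no-fault from an amplitude sequence."""
    def __init__(self, n_features=1, hidden=16, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, nonlinearity="tanh", batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        _, h_n = self.rnn(x)              # h_n: (1, batch, hidden) final hidden state
        return self.fc(h_n.squeeze(0))    # class logits

# Random stand-in data: 32 sequences of 100 amplitude samples each.
x = torch.randn(32, 100, 1)
y = torch.randint(0, 2, (32,))
model = ElmanClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                        # a few illustrative training steps
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
print(float(loss))
```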
1214 The Buffer Gas Influence Rate on Absolute Cu Atoms Density with regard to Deposition
Authors: S. Sobhanian, H. Naghshara, N. Sadeghi, S. Khorram
Abstract:
The absolute density of Cu atoms in the Cu (2S1/2 ← 2P1/2) ground state has been measured by the Resonance Optical Absorption (ROA) technique in a DC magnetron sputtering deposition with argon. We measured these densities under a variety of operating conditions: pressure from 0.6 μbar to 14 μbar, input power from 10 W to 200 W, and N2 fraction from 0% to 100%. For measuring the gas temperature, we used the simulation of N2 rotational spectra with a special computer code. The absolute number density of Cu atoms decreases with increasing N2 percentage of the buffer gas under all conditions of this work. However, the deposition rate does not decrease in the same manner. The deposition rate variation is very small and within the accuracy limit of the quartz balance measuring equipment. We therefore conclude that the decrease in the absolute number density of Cu atoms in the magnetron plasma does not have a large effect on the deposition rate, because the diffusion of Cu atoms into the chamber volume and the deviation of Cu atoms from the direct path (towards the substrate) decrease with increasing N2 percentage of the buffer gas. This is because of the lower mass of N2 compared to argon.
Keywords: Deposition rate, Resonance Optical Absorption, Sputtering.
1213 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioural Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, a new method is proposed for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of households' satisfactory state (i.e. satisfaction) is defined in a satisfaction string. The duration of the satisfactory state for a given period is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence, it is impossible to measure or to know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e. corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.
Keywords: Heisenberg Uncertainty Principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality.
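The Weber-Fechner law invoked in the final step above has the familiar logarithmic form

```latex
S = k \,\ln\!\frac{I}{I_0},
```

where S is the perceived sensation, I the stimulus intensity, I_0 the threshold stimulus, and k a constant; in this setting it is used to map observed uncertainties onto households' perceived uncertainties.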