Search results for: prefix code decomposing.
699 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment
Authors: David M. Jenkins Jr.
Abstract:
Scale Time Offset Robust Modulation (STORM) [1]–[3] is a high-bandwidth waveform design that adds time scale to embedded reference modulations that use only time delay [4]. In an environment where each user has a specific delay and scale, the STORM processor facilitates identification of the user with the highest signal power and that user's phase; both parameters are required by an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated in a direct sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception about STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes: the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the receiver requires the same temporal resolution before and after the application of STORM. The bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.
Keywords: Pseudonoise coded communication, cyclic codes, code division multiaccess.
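A minimal numpy sketch of the I/Q code construction just described; the chip length, delay, and scale values are illustrative, and a random +/-1 sequence stands in for a real spreading code:

import numpy as np

def storm_iq_codes(n_chips=127, delay=5, scale=1.02, seed=0):
    # Quadrature code = time-delayed, time-scaled copy of the in-phase
    # code, per the STORM construction above.
    rng = np.random.default_rng(seed)
    i_code = rng.choice([-1.0, 1.0], size=n_chips)
    t = np.arange(n_chips)
    # Resample the in-phase code at delayed, rescaled chip instants;
    # indices wrap because the spreading code is periodic.
    q_code = np.sign(np.interp((t - delay) * scale % n_chips, t, i_code))
    q_code[q_code == 0] = 1.0          # guard against exact zero crossings
    return i_code, q_code

i_code, q_code = storm_iq_codes()      # same chip rate on both branches, so
                                       # the receiver's resolution is unchanged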
698 Direct Simulation Monte Carlo (DSMC) Algorithm – A Comparison of Mathematica Code with FLUENT 6.2 for Low Knudsen Number
Authors: Nabeel A. Qazi, Absaar ul Jabbar, Khalid Parvez
Abstract:
A code has been developed in Mathematica using the Direct Simulation Monte Carlo (DSMC) technique. The code was tested on 2-D air flow around a circular cylinder, and the same geometry and flow properties were used in FLUENT 6.2 for comparison. The results obtained from the Mathematica simulation showed significant agreement with the FLUENT calculations, providing insight into the particle nature of fluid flows.
Keywords: DSMC algorithm, non-continuum gas flows, Monte Carlo methods.
697 Anticipation of Bending Reinforcement Based on Iranian Concrete Code Using Meta-Heuristic Tools
Authors: Seyed Sadegh Naseralavi, Najmeh Bemani
Abstract:
In this paper, the concrete design codes of the United States, New Zealand, Mexico, Italy, India, Canada, Hong Kong, and Britain, together with the Euro Code, are compared with the Iranian concrete design code. First, an Adaptive Neuro Fuzzy Inference System (ANFIS) is used to determine which codes correlate most strongly with the ninth issue of the Iranian national building regulations. Two prediction methods are then used to compare the codes: an Artificial Neural Network (ANN) and multi-variable regression. The results show that the ANN performs better. Prediction is carried out using only the tensile steel ratio, ignoring the compression steel ratio.
Keywords: Concrete design code, prediction method, artificial neural network, multi-variable regression, adaptive neuro fuzzy inference system.
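For the regression side of the comparison, a minimal sketch of what the multi-variable regression reduces to with a single predictor is shown below; the paired steel-ratio samples are invented for illustration and are not the paper's data:

import numpy as np

# Hypothetical paired samples: reinforcement ratio required by the Iranian
# code (x) vs. the ratio required by another national code (y).
x = np.array([0.004, 0.008, 0.012, 0.016, 0.020])
y = np.array([0.0042, 0.0079, 0.0125, 0.0158, 0.0205])

# With one predictor (the tensile steel ratio), multi-variable regression
# reduces to ordinary least squares on the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = slope * 0.010 + intercept   # anticipated ratio for a new section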
696 Using the V-Sphere Code for the Passive Scalar in the Wake of a Bluff Body
Authors: Y. Obikane, T. Nemoto, K. Ogura, M. Iwata, K. Ono
Abstract:
The objective of this research was to determine the diffusion properties of vehicle wakes on the road using the V-Sphere Code. The diffusion coefficient and the height of the wake were estimated with the LES option and a third-order MUSCL scheme. We evaluated the code by examining the changes in the moments of the Reynolds stress along the mean streamline. The results show that at the leading part of a bluff body LES has some advantages over RANS, since the changes in the strain rates are larger there. The diffusion coefficient estimated from the computed (non-dimensional) Reynolds stress was about 0.96 times the mean velocity.
Keywords: Wake, bluff body, V-CAD, turbulence diffusion.
695 JaCoText: A Pretrained Model for Java Code-Text Generation
Authors: Jessica Lòpez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri
Abstract:
Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged in automatic programming language generation: translating natural language instructions into programming code. Although well-known pretrained language generation models have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a Transformer-based model that generates Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we draw on findings from the state of the art to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) combine unimodal and bimodal data during training, and (4) scale the input and output lengths during fine-tuning. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.
Keywords: Java code generation, Natural Language Processing, Sequence-to-sequence Models, Transformers Neural Networks.
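The generation step such a sequence-to-sequence model performs can be sketched with the Hugging Face transformers API as below; the checkpoint id is a placeholder for illustration, not a published model:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "org/jacotext-base" is a hypothetical checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("org/jacotext-base")
model = AutoModelForSeq2SeqLM.from_pretrained("org/jacotext-base")

nl = "concatenate two strings and return the result"
inputs = tokenizer(nl, return_tensors="pt", truncation=True, max_length=256)
# Beam search over the decoder emits the Java code token by token.
out = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(out[0], skip_special_tokens=True))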
694 Biometric Technology in Securing the Internet Using Large Neural Network Technology
Authors: B. Akhmetov, A. Doszhanova, A. Ivanov, T. Kartbayev, A. Malygin
Abstract:
The article examines methods for protecting citizens' personal data on the Internet using biometric identity authentication technology. It notes the potential danger posed by the loss of biometric template databases. To eliminate the threat of compromised biometric templates, the use of large and extra-large neural networks is proposed; on the one hand these authenticate a person by his biometrics with high reliability, and on the other hand they make the person's biometrics unavailable for observation and understanding. The article also describes in detail the transformation of personal biometric data into an access code. Requirements are formulated for the biometrics-to-code converter for its work with the images of the 'Insider', a 'Stranger', and all 'Strangers'. The effect of the dimension of the neural networks on the quality of the biometrics-to-code converters is analyzed.
Keywords: Biometric security technologies, conversion of personal biometric data into an access code, electronic signature, large neural networks, quality of biometrics-to-code converters, e-government.
693 Performance Evaluation of One and Two Dimensional Prime Codes for Optical Code Division Multiple Access Systems
Authors: Gurjit Kaur, Neena Gupta
Abstract:
In this paper, we analyze and compare the performance of various coding schemes. The basic 1D prime sequence codes are unique in only one dimension, the time slots, whereas 2D coding techniques are unique not only in their time slots but also in their wavelengths. In this research, we evaluate and compare, on a single platform, the performance of 1D and 2D coding techniques constructed using the prime sequence coding pattern for an Optical Code Division Multiple Access (OCDMA) system. The analysis shows that the 2D prime code supports fewer active users than 1D codes, but it has a larger code family and is the most secure of the codes compared. The performance of all these codes is analyzed in terms of the number of active users supported at a Bit Error Rate (BER) of 10^-9.
Keywords: CDMA, OCDMA, BER, OOC, PC, EPC, MPC, 2-D PC/PC, λc, λa.
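A minimal sketch of the standard 1D prime-sequence code construction (for a prime p, one pulse per group of p chips, placed at slot (i·j) mod p):

def prime_codes(p=5):
    # 1D prime sequence codes: for each i in GF(p), place one pulse per
    # group of p chips at slot (i * j) mod p, giving length p*p, weight p.
    codes = []
    for i in range(p):
        word = [0] * (p * p)
        for j in range(p):
            word[j * p + (i * j) % p] = 1
        codes.append(word)
    return codes

codes = prime_codes(5)   # p code words, each of length 25 and weight 5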
692 Four Phase Methodology for Developing Secure Software
Authors: Carlos Gonzalez-Flores, Ernesto Liñan-García
Abstract:
This paper presents a simple and robust approach for developing secure software. The four-phase methodology consists of developing the non-secure software in phase one and then devoting one phase to each of the secure development types (i.e., self-protected software, secure code transformation, and the secure shield). Our methodology first requires determining and understanding the security level needed for the software. The methodology proposes the use of several teams to accomplish this task: one Software Engineering Development Team, a Compiler Team, a Specification and Requirements Testing Team, and, for each of the secure software development types, three teams of Secure Software Developers, three teams of Code Breakers, and three teams of Intrusion Analysts. These teams interact with each other and make decisions to produce secure software code protected against the required level of intruder.
Keywords: Secure Software, Four Phase Methodology, Software Engineering, Code Breakers, Intrusion Analysis.
691 Enhancing the Network Security with Gray Code
Authors: Thomas Adi Purnomo Sidhi
Abstract:
Nowadays, networking is an essential need in almost every part of daily human activity, and people can seamlessly connect to others through the Internet. With advancing technology, however, our personal data can be accessed ever more easily, so security is one of the main concerns in delivering the best network. This paper proposes a method that provides more options for security. The research aims to improve network security by focusing on the physical layer, the first layer of the OSI model, which consists of the basic networking hardware transmission technologies. Using an observational method, the research produces a schematic design for enhancing network security through a gray code converter.
Keywords: Network, network security, gray code, physical layer.
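The gray code conversion at the heart of such a scheme is a one-line bit transformation; a minimal Python sketch of both directions:

def binary_to_gray(b: int) -> int:
    # Adjacent values differ in exactly one bit after this mapping.
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    b = 0
    while g:              # fold the running XOR back down
        b ^= g
        g >>= 1
    return b

assert all(gray_to_binary(binary_to_gray(v)) == v for v in range(256))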
690 Investigation of the Capability of RELAP5 to Solve Complex Fuel Geometry
Authors: D. Abdelrazek, M. Naguib Aly, A. A. Badawi, Asmaa G. Abo Elnour, A. A. El-Kafas
Abstract:
This work is developed within IAEA Coordinated Research Program 1496, “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal-hydraulic computational methods and tools for operation and safety analysis of research reactors”.
The study investigates the capability of the RELAP5/Mod3.4 code to handle complex fuel geometry. Its results are compared to those of PARET, a code commonly used in thermal-hydraulic analysis of research reactors and belonging to the MTR-PC group of codes.
The WWR-SM reactor at the Institute of Nuclear Physics (INP) in the Republic of Uzbekistan is simulated using both PARET and RELAP5 at steady state. Results from the two codes are compared.
The RELAP5 code succeeded in modeling the complex fuel geometry, whereas the PARET code required additional calculations to obtain the final result. Although the final PARET results are more accurate, the small differences between the two sets of results make the RELAP5 code recommendable for complex fuel assemblies.
Keywords: Complex fuel geometry, PARET, RELAP5, WWR-SM reactor.
689 Refinement of Object-Z Specifications Using Morgan's Refinement Calculus
Authors: Mehrnaz Najafi, Hassan Haghighi
Abstract:
Morgan's refinement calculus (MRC) is one of the well-known methods that allow the formality present in a program specification to be carried all the way to code. Object-Z (OZ), on the other hand, is an extension of Z that adds support for classes and objects. There are a number of methods for obtaining code from OZ specifications, which can be categorized into refinement and animation methods. As far as we know, only one refinement method refines OZ specifications into code; however, it does not have fine-grained refinement rules and thus cannot be automated. Existing animation methods, in turn, do not present their mapping rules formally and do not support the mapping of several important OZ constructs, such as all cases of operation expressions and most constructs in the global paragraph. In this paper, with the aim of providing an automatic path from OZ specifications to code, we propose an approach that maps OZ specifications into their counterparts in MRC so that the fine-grained refinement rules of MRC can be used. Having counterparts of our specifications in MRC, we can refine them into code automatically using MRC tools such as RED. Further advantages of our work are that the mapping rules are proposed formally, all important Object-Z constructs are supported, and dynamic instantiation of objects is considered even though OZ itself does not cover this facility.
Keywords: Formal method, formal specification, formal program development, Morgan's Refinement Calculus, Object-Z.
688 JConqurr - A Multi-Core Programming Toolkit for Java
Authors: G.A.C.P. Ganegoda, D.M.A. Samaranayake, L.S. Bandara, K.A.D.N.K. Wimalawarne
Abstract:
With the popularity of multi-core and many-core architectures, there is a great demand for software frameworks that support parallel programming methodologies. In this paper we introduce JConqurr, an Eclipse toolkit that is easy to use and provides robust support for flexible parallel programming. JConqurr is a multi-core and many-core programming toolkit for Java capable of supporting common parallel programming patterns, including task, data, divide-and-conquer, and pipeline parallelism. The toolkit uses an annotation and directive mechanism to convert sequential code into parallel code. In addition, we propose a novel mechanism to achieve parallelism using graphics processing units (GPUs). Experiments with common parallelizable algorithms have shown that our toolkit can easily and efficiently convert sequential code to parallel code, achieving significant performance gains.
Keywords: Multi-core, parallel programming patterns, GPU, Java, Eclipse plugin, toolkit.
687 Metaheuristics Methods (GA and ACO) for Minimizing the Length of Freeman Chain Code from Handwritten Isolated Characters
Authors: Dewi Nasien, Habibollah Haron, Siti Sophiayati Yuhaniz
Abstract:
This paper presents a comparison of two metaheuristic algorithms, Genetic Algorithm (GA) and Ant Colony Optimization (ACO), for producing the Freeman chain code (FCC). The main problem in representing characters by FCC is that the length of the code depends on the starting point. Isolated characters, especially upper-case ones, usually have branches that make the traversal process difficult, and FCC construction using one continuous route has not been widely explored; this motivates our use of population-based metaheuristics. The experimental results show that the route length obtained with GA is better than with ACO, whereas ACO has a better computation time than GA.
Keywords: Handwriting recognition, feature extraction, Freeman chain code, Genetic Algorithm, Ant Colony Optimization.
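A minimal sketch of Freeman chain code extraction from a pixel path, using the usual 8-connected directions; the length of exactly this code is what the GA/ACO route choice aims to minimize:

# 8-connected Freeman directions, indexed 0..7 counter-clockwise from east.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(points):
    # Chain code of a pixel path given as successive (row, col) pairs;
    # consecutive points must be 8-neighbours.
    return [DIRS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(points, points[1:])]

# An L-shaped stroke as a toy example.
path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(freeman_chain_code(path))   # [6, 6, 0, 0]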
686 Calculus-based Runtime Verification
Authors: Xuan Qi, Changzhi Zhao
Abstract:
In this paper, a uniform calculus-based approach is provided for synthesizing monitors that check, at runtime, correctness properties specified in a large variety of logics, including future- and past-time logics, interval logics, state machines, and parameterized temporal logics. We present a calculus mechanism to synthesize monitors from the logical specification for the incremental analysis of execution traces during testing and real runs. The monitor detects both good and bad prefixes of a particular kind, namely those that are informative for the property under investigation. We elaborate the procedure of using the calculus as monitors.
Keywords: Calculus, Eagle logic, monitor synthesis, runtime verification.
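A toy monitor in the spirit of this approach, checking the invariant "never error" and reporting a verdict as soon as a trace prefix becomes informative; this is a drastically simplified stand-in for a calculus-synthesized monitor:

def make_monitor(bad=lambda event: event == "error"):
    # Incremental monitor for "never bad": consumes the trace event by
    # event and reports a verdict once a prefix is informative.
    state = {"verdict": "inconclusive"}
    def step(event):
        if state["verdict"] == "inconclusive" and bad(event):
            state["verdict"] = "violated"   # informative bad prefix found
        return state["verdict"]
    return step

monitor = make_monitor()
for event in ["init", "work", "error", "work"]:
    print(event, monitor(event))   # verdict flips to 'violated' at 'error'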
685 The Effect of Program Type on Mutation Testing: Comparative Study
Authors: B. Falah, N. E. Abakouy
Abstract:
Due to its high computational cost, mutation testing has long been neglected by researchers. Recently, many cost- and mutant-reduction techniques have been developed, improved, and experimented with, but few of them consider the possibility of reducing the cost of mutation testing based on the program type of the application under test. This paper is a comparative study of four operator selection techniques (mutant sampling, class-level operators, method-level operators, and selection of all operators) based on the program code type of each application under test. It aims at finding an alternative approach that reveals the effect of code type on the mutation score. The results of our experiment show that the program code type can affect the mutation score and that programs using polymorphism are best suited for mutation testing.
Keywords: Equivalent mutant, killed mutant, mutation score, mutation testing, program code type.
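A minimal illustration of a method-level mutation operator and the kill check behind a mutation score, using Python's ast module; the operator and subject here are illustrative, not those of the study:

import ast

class ArithMutator(ast.NodeTransformer):
    # A method-level mutation operator: replace '+' with '-'.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

src = "def total(a, b):\n    return a + b"
tree = ast.fix_missing_locations(ArithMutator().visit(ast.parse(src)))
mutant_src = ast.unparse(tree)          # Python 3.9+

scope = {}
exec(mutant_src, scope)
killed = scope["total"](2, 3) != 5      # the test kills this mutant
print(mutant_src, "| killed:", killed)  # mutation score = killed / generated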
684 Programming Aid Tool for Detecting Common Mistakes of Novice Programmers in OpenMP Code
Authors: Jae Young Park, Seung Wook Lee, Jong Tae Kim
Abstract:
OpenMP is an API for the shared-memory multiprocessor parallel programming model. Novice OpenMP programmers often produce code with human errors that the compiler cannot find. We investigated how the compiler copes with the common mistakes that can occur in OpenMP code, using the latest version (4.4.3) of GCC, and found that GCC compiled the faulty codes without any errors or warnings. In this paper a programming aid tool for OpenMP programs is presented. It can check 12 common mistakes that novice programmers commit when programming with OpenMP. We demonstrate that the tool detects the various common mistakes that GCC fails to detect.
Keywords: Parallel programming, OpenMP, programming aid.
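Two illustrative, regex-based checks of the kind such a tool might apply are sketched below in Python; these heuristics are assumptions for illustration, not the paper's actual 12 rules:

import re

CHECKS = [
    (re.compile(r"#pragma\s+omp\s+parallel\b(?![^\n]*reduction)[^\n]*\n"
                r"[^\n]*\+="),
     "accumulation in a parallel region without a reduction clause"),
    (re.compile(r"#pragma\s+omp\s+critical[^\n]*\n(?![^\n]*[{;])"),
     "critical directive not followed by a statement or block"),
]

def audit(c_source: str):
    # Return the messages of every check whose pattern fires on the source.
    return [message for pattern, message in CHECKS
            if pattern.search(c_source)]

print(audit("#pragma omp parallel\nsum += a[i];"))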
683 A Mapping Approach of Code Generation for ARINC653-Based Avionics Software
Authors: Lu Zou, Dianfu MA, Ying Wang, Xianqi Zhao
Abstract:
Avionics software architecture has transitioned from a federated architecture to integrated modular avionics (IMA). ARINC 653 (Avionics Application Standard Software Interface) is a software specification for space and time partitioning in safety-critical avionics real-time operating systems. Methods have been proposed to transform abstract avionics application logic functions into executable models, but with little consideration of the code-generation input and output models specific to the ARINC 653 platform or of inner-task synchronous dynamic interaction order sequences. In this paper, we propose an AADL-based model-driven design methodology that automatically generates Cµ executable models on the ARINC 653 platform from the ARINC 653 architecture, defined as AADL653, in order to facilitate the development of avionics software built on an ARINC 653 OS. The paper presents the mapping rules between AADL653 elements and Cµ language elements, defines the code generation rules, and designs an automatic Cµ code generator. We then use a case study to illustrate our approach, and conclude with related work and future research directions.
Keywords: IMA, ARINC653, AADL653, code generation.
682 Development of Low-Cost OCDMA Encoder Based on Arrayed Waveguide Gratings (AWGs) and Optical Switches
Authors: Mohammad Syuhaimi Ab-Rahman, Boon Chuan Ng, Norshilawati Mohamad Ibrahim, Sahbudin Shaari
Abstract:
This paper describes the development of a 16-port optical code division multiple access (OCDMA) encoder prototype based on an Arrayed Waveguide Grating (AWG) and optical switches. It can potentially provide high security for data transmission, since all data is transmitted in binary code form. The output signals from the AWG are coded with a binary code and fed to optical switches before the signal modulates the carrier and is transmitted to the receiver. The 16-port encoder uses 16 double-pole double-throw (DPDT) toggle switches to switch the polarity of the voltage source between +5 V and -5 V for the 16 optical switches. When +5 V is applied, an optical switch gives code '1', and vice versa. The experimental results show that the insertion loss, crosstalk, uniformity, and optical signal-to-noise ratio (OSNR) of the developed prototype are <12 dB, 9.77 dB, <1.63 dB, and ≥20 dB, respectively.
Keywords: AWG, encoder, OCDMA, optical switch.
681 A Simplified Single Correlator Rake Receiver for CDMA Communications
Authors: K. Murali Krishna, Abhijit Mitra, C. Ardil
Abstract:
This paper presents a single-correlator RAKE receiver for direct sequence code division multiple access (DS-CDMA) systems. Conventional RAKE receivers use multiple correlators to despread the multipath signals and then align and combine those signals in a later stage before making a bit decision. The simplified receiver structure presented here uses a single correlator and a single code sequence generator to recover the multipaths. Modified Walsh-Hadamard codes are used for data spreading, providing better decorrelation properties for the multipath signals. The main advantage of this receiver structure is that it requires only a single correlator and a single code generator, in contrast to the conventional RAKE receiver concept with multiple correlators. The results show that the proposed receiver achieves better bit error rates than the conventional one when more than one multipath is present.
Keywords: RAKE receiver, code division multiple access, Modified Walsh-Hadamard codes, single correlator.
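A toy numpy/scipy sketch of the idea: one code replica from a single generator, slid over delay hypotheses to pick up each path. Cyclic shifts stand in for true multipath delays, and plain Walsh-Hadamard rows stand in for the paper's modified codes:

import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                       # rows are length-8 Walsh-Hadamard codes
code = H[3].astype(float)

bit = 1.0
# Toy two-path channel: the second path is a 2-chip delayed, weaker echo.
rx = bit * code + 0.5 * bit * np.roll(code, 2)

# One correlator, one code generator: slide the same replica over delay
# hypotheses instead of running a bank of correlators.
for d in range(4):
    print(d, np.dot(rx, np.roll(code, d)) / len(code))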
680 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golomb-Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI implementation of the Fast Efficient Lossless Image Compression System (FELICS) algorithm is proposed to provide lossless image compression, implemented in simulation-oriented VLSI (Very Large Scale Integration). The performance of lossless compression is analyzed so as to reduce the image size without losing image quality, and the FELICS algorithm is then implemented in the VLSI domain. FELICS uses a simplified adjusted binary code for image compression, and the compressed image is converted to pixels for the VLSI implementation. These choices achieve high processing speed while minimizing area and power: the simplified adjusted binary code reduces the number of arithmetic operations, yielding high processing speed. A color-difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular, pipelined data flow organized in four stages. With two-level parallelism, consecutive pixels can be classified into even and odd samples, with an individual hardware engine dedicated to each; this method can be further enhanced by multilevel parallelism.
Keywords: Image compression, pixel, compression ratio, adjusted binary code, Golomb-Rice code, high definition display, VLSI implementation.
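The two entropy coders named in the title are easy to sketch. Below is plain truncated binary (FELICS' adjusted binary is this code with the short words rotated to the centre of the range, a detail omitted here) and Golomb-Rice coding:

def truncated_binary(x: int, m: int) -> str:
    # Code for x in [0, m); the shorter codewords use one bit less.
    k = m.bit_length() - 1             # floor(log2 m)
    u = (1 << (k + 1)) - m             # number of short (k-bit) codewords
    if x < u:
        return format(x, f"0{k}b") if k else ""
    return format(x + u, f"0{k + 1}b")

def golomb_rice(n: int, k: int) -> str:
    # Divisor 2**k: unary quotient, '0' separator, k-bit remainder (k >= 1).
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

print(truncated_binary(3, 5), golomb_rice(9, 2))   # 110 11001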
679 Determining Cluster Boundaries Using Particle Swarm Optimization
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from the raw data alone, but interpretation through visual inspection is error-prone and can be very tedious. Several techniques exist for automatically detecting clusters among the code vectors found by SOMs, but they generally do not take the distribution of the code vectors into account; this may lead to unsatisfactory clustering and poorly defined cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The application of our method to unlabeled call data from a mobile phone operator demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method correspond well to boundary detection through visual inspection of the code vectors and through the k-means algorithm.
Keywords: Particle swarm optimization, self-organizing maps, clustering, data mining.
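A generic PSO loop of the kind applied here is sketched below; the fitness used is a toy function, whereas in the paper it would be built from the SOM U-matrix so that particles settle on cluster boundaries:

import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    # Plain global-best PSO with standard velocity and position updates.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

print(pso_minimize(lambda p: np.sum(p ** 2), dim=2))   # toy fitness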
678 The Concentration Analysis of CO2 Using ALOHA Code for Kuosheng Nuclear Power Plant
Authors: W. S. Hsu, Y. Chiang, H. C. Chen, J. R. Wang, S. W. Chen, J. H. Yang, C. Shih
Abstract:
Not only radioactive materials but also the ordinary chemicals stored in a power plant can pose a risk to nearby residents. In this research, the ALOHA code was used to perform a concentration analysis of CO2 storage burst and leakage conditions for the Kuosheng nuclear power plant (NPP), based on the Final Safety Analysis Report (FSAR) and related data. Additionally, the ALOHA results were compared with the R.G. 1.78 failure criteria in order to confirm control room habitability. The comparison shows that the ALOHA result for the burst case, 0.923 g/m3, was below the criterion; however, the result for the leakage case was 11.3 g/m3.
Keywords: BWR, ALOHA, habitability, Kuosheng.
677 Performance Evaluation of the OCDM/WDM Technique for Optical Packet Switches
Authors: V. Eramo, L. Piazzo, M. Listanti, A. Germoni, A. Cianfrani
Abstract:
The performance of the Optical Code Division Multiplexing/Wavelength Division Multiplexing (OCDM/WDM) technique for optical packet switches is investigated, studying the impact of the impairments due to both multiple access interference and beat noise. The packet loss probability due to output packet contentions is evaluated as a function of the main switch and traffic parameters when Gold coherent optical codes are adopted. The packet loss probability of the OCDM/WDM switch can reach 10^-9 when M=16 wavelengths, Gold codes of length L=511, and only 24 wavelength converters are used in the switch.
Keywords: Optical code division multiplexing, bufferless optical packet switch, performance evaluation.
676 System Performance Comparison of Turbo and Trellis Coded Optical CDMA Systems
Authors: M. Kulkarni, R. K. Sinha, D. R. Bhaskar
Abstract:
In this paper, we compare the performance of Turbo and Trellis coded optical code division multiple access (OCDMA) systems. The comparison of the two codes is accomplished by employing optical orthogonal codes (OOCs), and the Bit Error Rate (BER) performance is compared by varying the code weights of the address codes employed by the system. We consider the effects of optical multiple access interference (OMAI), thermal noise, and avalanche photodiode (APD) detector noise, and analyze the system with and without a double optical hard limiter (DHL). The simulation results show that a clearer, more distinct comparison between the Trellis and Turbo coded systems can be drawn at lower code weights of the optical orthogonal codes for a fixed number of users. The BER performance of the Turbo coded system is better than that of the Trellis coded system for all code weights considered in the simulation; nevertheless, the Trellis coded OCDMA system is better than the uncoded OCDMA system. Trellis coded OCDMA can be used where decoding time must be kept low, bandwidth is limited, and high reliability is not a crucial factor, as in local area networks; its hardware is also less complex than that of the Turbo coded system, and it can be used without significant modification of existing chipsets. Turbo coded OCDMA can instead be employed where high reliability is needed and bandwidth is not a limiting factor.
Keywords: Avalanche photodiode, optical code division multiple access, optical multiple access interference, Trellis coded modulation, Turbo code.
675 A Study on Early Prediction of Fault Proneness in Software Modules using Genetic Algorithm
Authors: Parvinder S. Sandhu, Sunil Khullar, Satpreet Singh, Simranjit K. Bains, Manpreet Kaur, Gurvinder Singh
Abstract:
The fault-proneness of a software module is the probability that the module contains faults. Different techniques have been proposed to predict the fault-proneness of modules, including statistical methods, machine learning techniques, neural network techniques, and clustering techniques. The aim of the proposed study is to explore whether metrics available in the early lifecycle (requirement metrics), metrics available in the late lifecycle (code metrics), and their combination can be used to identify fault-prone modules with a Genetic Algorithm technique. The approach has been tested on real-time defect datasets of NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults, compared with the commonly used code-based model.
Keywords: Genetic Algorithm, fault proneness, software fault, software quality.
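A plain GA skeleton of the kind such a study would drive is sketched below; the chromosome here maximizes a toy fitness, whereas in the study it would encode a fault-proneness predictor over requirement and/or code metrics, with prediction accuracy as the fitness:

import random

def evolve(fitness, length, pop=30, gens=60, pmut=0.02, seed=1):
    # Plain GA: truncation selection, one-point crossover, bit-flip mutation.
    rnd = random.Random(seed)
    P = [[rnd.randint(0, 1) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        parents = P[: pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rnd.sample(parents, 2)
            cut = rnd.randrange(1, length)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rnd.random() < pmut) for bit in child])
        P = parents + children
    return max(P, key=fitness)

print(evolve(fitness=sum, length=20))   # toy fitness: maximize ones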
674 Skew Cyclic Codes over Fq + uFq + … + u^(k-1)Fq
Abstract:
This paper studies a special class of linear codes, called skew cyclic codes, over the ring R = Fq + uFq + … + u^(k-1)Fq, where q is a prime power. A Gray map ɸ from R to Fq and a Gray map ɸ' from R^n to Fq^n are defined, as well as an automorphism Θ over R. It is proved that the images of skew cyclic codes over R under ɸ' and Θ are cyclic codes over Fq, and that they preserve the duality relation.
Keywords: Skew cyclic code, Gray map, automorphism, cyclic code.
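For the binary case q = 2, k = 2 (the ring F2 + uF2 with u^2 = 0), the classical Gray map and its coordinatewise extension can be sketched as follows; this standard map is an assumption for illustration and is not necessarily the paper's exact ɸ:

def gray_map(a, b):
    # Classical Gray map on F2 + u*F2 (u^2 = 0):
    # phi(a + u*b) = (b, a + b) over F2.
    return (b % 2, (a + b) % 2)

def gray_map_word(word):
    # Coordinatewise extension phi' from R^n to F2^(2n).
    out = []
    for a, b in word:
        out.extend(gray_map(a, b))
    return out

print(gray_map_word([(1, 0), (0, 1), (1, 1)]))   # [0, 1, 1, 1, 1, 0]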
673 A New Source Code Auditing Algorithm for Detecting LFI and RFI in PHP Programs
Authors: Seyed Ali Mir Heydari, Mohsen Sayadiharikandeh
Abstract:
Static analysis of source code is used to audit web applications for vulnerabilities. In this paper, we propose a new algorithm that analyzes PHP source code to detect potential LFI and RFI vulnerabilities. In our approach, we first define patterns for finding functions that have the potential to be abused through unhandled user input; more precisely, we use regular expressions as a fast and simple method of defining detection patterns. Since inclusion functions can also be used safely, many false positives (FPs) can occur. The first cause of these FPs is that the function does not use a user-supplied variable as an argument, so we extract a list of user-supplied variables to identify vulnerable lines of code. Furthermore, since a vulnerability can spread among variables, for example through multi-level assignment, we also extract the hidden user-supplied variables. We use the resulting list to decrease the false positives of our method. Finally, since there are ways to guard inclusion functions against this vulnerability, we define additional patterns to detect them and further decrease our false positives.
Keywords: User-supplied variables, hidden user-supplied variables, PHP vulnerabilities.
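A heavily simplified Python sketch of the first stage (pattern matching plus a first-level taint check); the patterns are illustrative assumptions, not the paper's rule set:

import re

INCLUDE_CALL = re.compile(
    r"\b(include|include_once|require|require_once)\s*\(?\s*"
    r"([^;]*\$\w+[^;]*)\)?\s*;")
USER_INPUT = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b")

def audit_php(source: str):
    # Flag include-family calls whose argument involves a variable; raise
    # the severity when user input reaches the call directly. This checks
    # first-level taint only; the paper also tracks hidden user-supplied
    # variables through assignment chains.
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        match = INCLUDE_CALL.search(line)
        if match:
            severity = "high" if USER_INPUT.search(match.group(2)) else "review"
            findings.append((lineno, severity, line.strip()))
    return findings

print(audit_php('include($_GET["page"] . ".php");'))   # -> high severity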
672 Unit Testing with Déjà-Vu Objects
Authors: Sharareh Afsharian, Andrea Bei, Marco Bianchi
Abstract:
In this paper we introduce a new unit testing technique based on what we call déjà-vu objects. Déjà-vu objects replace the real objects used by classes under test, allowing the execution of isolated unit tests. A déjà-vu object can observe and record the behaviour of a real object during real sessions and then replace it during unit tests, returning the previously recorded results. The déjà-vu object technique is therefore useful when a bottom-up development and testing strategy is adopted: it can increase test portability and test source code readability, while reducing both the time programmers spend developing test code and the risk of incompatibility when switching between déjà-vu and production code.
Keywords: Bottom-up testing approach, integration test, test portability, unit test.
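A minimal Python sketch of the record/replay idea; the class and method names here are hypothetical:

class DejaVu:
    # Record/replay stand-in for a collaborator: in record mode it
    # delegates to the real object and stores results; in replay mode it
    # answers from the recording, isolating the class under test.
    def __init__(self, real=None):
        self.real, self.tape = real, {}

    def __getattr__(self, name):
        def call(*args):
            key = (name, args)
            if self.real is not None:          # record during a real session
                self.tape[key] = getattr(self.real, name)(*args)
            return self.tape[key]              # replay during the unit test
        return call

class RealRates:
    def convert(self, amount, currency):       # imagine a slow remote call
        return round(amount * {"EUR": 0.9}[currency], 2)

recorder = DejaVu(RealRates())
recorder.convert(100, "EUR")                   # behaviour recorded

replayer = DejaVu()
replayer.tape = recorder.tape
assert replayer.convert(100, "EUR") == 90.0    # isolated, no real object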
671 Performance Comparison of Particle Swarm Optimization with Traditional Clustering Algorithms used in Self-Organizing Map
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining: it can reveal structure in data sets through data visualization that is otherwise hard to detect from the raw data alone. However, interpretation through visual inspection is error-prone and can be very tedious. Several techniques exist for automatically detecting clusters among the code vectors found by a SOM, but they generally do not take the distribution of the code vectors into account; this may lead to unsatisfactory clustering and poorly defined cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of an adaptive heuristic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from a SOM. The application of our method to several standard data sets demonstrates its feasibility. The PSO algorithm utilizes the so-called U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method compare very favorably to boundary detection through the traditional algorithms, namely k-means and a hierarchy-based approach, which are normally used to interpret the output of a SOM.
Keywords: Cluster boundaries, clustering, code vectors, data mining, particle swarm optimization, self-organizing maps, U-matrix.
670 Application of Biometrics to Obtain High Entropy Cryptographic Keys
Authors: Sanjay Kanade, Danielle Camara, Dijana Petrovska-Delacretaz, Bernadette Dorizzi
Abstract:
In this paper, a two-factor scheme is proposed to generate cryptographic keys directly from biometric data which, unlike passwords, are strongly bound to the user. The hash value of the reference iris code is used as a cryptographic key, and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, much higher than in any comparable system. The most important and distinctive feature of this scheme is that it regenerates the reference iris code when provided with a genuine iris sample and the correct user password. Since iris codes obtained from two images of the same eye are not exactly the same, error-correcting codes (a Hadamard code and a Reed-Solomon code) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of the system is evaluated on two publicly available iris biometrics databases, the CBS and ICE databases. The operating point of the system (the values of the False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error correction capacity (ts) of the Reed-Solomon codes; e.g., on the ICE database, at ts = 15, the FAR is 0.096% and the FRR is 0.76%.
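A toy sketch of the key-regeneration idea, with the Reed-Solomon/Hadamard decoding step omitted and all names hypothetical:

import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The reference iris code is locked by XOR with an error-correcting
# codeword (Reed-Solomon/Hadamard in the paper; the ECC itself is omitted
# here), and the key is simply the hash of the reference code.
reference_iris = secrets.token_bytes(32)       # stand-in for a real iris code
codeword = secrets.token_bytes(32)             # would be an ECC codeword
lock = xor(reference_iris, codeword)           # stored helper data
key = hashlib.sha256(reference_iris).hexdigest()

# At authentication, a fresh sample close to the reference unlocks a noisy
# codeword; ECC decoding (not shown) would remove the few differing bits,
# regenerating the exact reference code and hence the same key.
fresh = bytearray(reference_iris)
fresh[0] ^= 0x01                               # a one-bit acquisition error
noisy_codeword = xor(bytes(fresh), lock)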