Search results for: source code semantics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4322

4322 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have been seeing an increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the nodes of the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
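As a rough illustration of the semantic-based feature selection described above, the following minimal Python sketch prunes a code property graph down to nodes whose code text contains vulnerability-relevant tokens, plus a small neighbourhood around them, before any embedding step. The graph representation (a networkx graph with a 'code' attribute per node) and the token list are illustrative assumptions, not the authors' SCEVD model.

```python
import networkx as nx

# Hypothetical vulnerability-relevant indicators; SCEVD learns its own selection.
RISKY_TOKENS = {"strcpy", "memcpy", "sprintf", "gets", "malloc", "free"}

def prune_code_graph(cpg: nx.DiGraph, hops: int = 2) -> nx.DiGraph:
    """Keep nodes whose code text mentions a risky token, plus their k-hop neighbourhood."""
    seeds = {n for n, data in cpg.nodes(data=True)
             if any(tok in data.get("code", "") for tok in RISKY_TOKENS)}
    keep, frontier = set(seeds), set(seeds)
    for _ in range(hops):
        nxt = set()
        for n in frontier:
            nxt.update(cpg.predecessors(n))
            nxt.update(cpg.successors(n))
        keep |= nxt
        frontier = nxt
    return cpg.subgraph(keep).copy()
```

The pruned subgraph, rather than the full graph, would then be embedded and passed to the classifier, which is the part the model learns.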

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 138
4321 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
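To make the path-context idea concrete, the sketch below extracts simplified (leaf, path, leaf) triples from a Python AST; it is a toy stand-in for the Java/C++ extraction described above (real Code2Vec paths go through the lowest common ancestor of the two leaves), and all names are illustrative.

```python
import ast
import itertools

def leaf_paths(tree):
    """Collect (leaf_token, path_from_root) pairs for identifier/constant leaves."""
    pairs = []
    def walk(node, path):
        label = type(node).__name__
        if isinstance(node, (ast.Name, ast.Constant, ast.arg)):
            token = getattr(node, "id", None) or getattr(node, "arg", None) \
                    or repr(getattr(node, "value", label))
            pairs.append((str(token), path + [label]))
            return
        for child in ast.iter_child_nodes(node):
            walk(child, path + [label])
    walk(tree, [])
    return pairs

def path_contexts(source):
    """Yield simplified Code2Vec-style (leaf, connecting_path, leaf) triples."""
    pairs = leaf_paths(ast.parse(source))
    for (a, pa), (b, pb) in itertools.combinations(pairs, 2):
        yield a, "^".join(reversed(pa)) + "|" + "v".join(pb), b

for ctx in list(path_contexts("def f(x):\n    return x + 1"))[:3]:
    print(ctx)
```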

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 90
4320 Calculation of Detection Efficiency of Horizontal Large Volume Source Using EXVol Code

Authors: M. Y. Kang, Euntaek Yoon, H. D. Choi

Abstract:

To calculate the full energy (FE) absorption peak efficiency for an arbitrary volume sample, we developed and verified the EXVol (Efficiency calculator for EXtended Voluminous source) code, which is based on the effective solid angle method. EXVol can describe the source area as a non-uniform three-dimensional (x, y, z) source and decompose it into several sets of volume units. Users can equally divide the (x, y, z) coordinate system to calculate the detection efficiency at a specific position of a cylindrical volume source. By determining the detection efficiency for differential volume units, the total radiative absolute distribution and the correction factor of the detection efficiency can be obtained from the nondestructive measurement of the source. In order to check the performance of the EXVol code, a Si ingot of 20 cm in diameter and 50 cm in height was used as a source. The detector was moved in the collimation geometry to calculate the detection efficiency at a specific position, and the results were compared with the experimental values. In this study, the performance of the EXVol code was extended to obtain the detection efficiency distribution at a specific position in a large volume source.
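As a crude illustration of the effective solid angle idea, the sketch below divides a cylindrical source into volume elements and averages a simple 1/(4πd²) geometric factor toward a point detector; the dimensions and the point-detector simplification are placeholders, not the EXVol algorithm.

```python
import numpy as np

def averaged_geometric_factor(radius_cm=10.0, height_cm=50.0,
                              det_pos=(0.0, 0.0, 60.0), n=(20, 20, 50)):
    """Volume-weighted average of 1/(4*pi*d^2) over a voxelized cylinder,
    a crude stand-in for the per-voxel efficiency toward a point detector."""
    nr, nphi, nz = n
    total = weight = 0.0
    for r in np.linspace(radius_cm / (2 * nr), radius_cm, nr):
        for phi in np.linspace(0.0, 2 * np.pi, nphi, endpoint=False):
            for z in np.linspace(0.0, height_cm, nz):
                x, y = r * np.cos(phi), r * np.sin(phi)
                d2 = (x - det_pos[0])**2 + (y - det_pos[1])**2 + (z - det_pos[2])**2
                dv = r                      # cylindrical volume element is proportional to r
                total += dv / (4 * np.pi * d2)
                weight += dv
    return total / weight

print(averaged_geometric_factor())
```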

Keywords: attenuation, EXVol, detection efficiency, volume source

Procedia PDF Downloads 170
4319 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
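To make the stencil terminology concrete, here is a minimal Jacobi-style 5-point stencil sweep in plain Python/NumPy. It only illustrates the computation pattern that such a framework generates and distributes; the DSL, the ORWL back-end, temporal blocking, and the Pluto coupling are not represented.

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of a 5-point averaging stencil over the interior cells."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

grid = np.zeros((8, 8))
grid[0, :] = 100.0            # fixed boundary condition on one edge
for _ in range(50):           # iterate the sweep (Jacobi traversal)
    grid = jacobi_step(grid)
print(grid.round(1))
```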

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 111
4318 Semantic Data Schema Recognition

Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia

Abstract:

The subject covered in this paper aims at assisting the user in their data quality approach. The goal is to better extract, mix, interpret and reuse data. It deals with the semantic schema recognition of a data source. This enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning it to a category and possibly a sub-category; secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. These links detected between columns offer a better understanding of the source and of the alternatives for correcting data. This approach allows automatic detection of a large number of syntactic and semantic anomalies.
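A minimal sketch of the column-categorization step described above, using regular-expression value patterns as categories; the category list and the 80% threshold are illustrative assumptions, not the authors' method or knowledge base.

```python
import re

# Hypothetical value categories; a real system would use a much richer knowledge base.
CATEGORIES = {
    "email":  re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "date":   re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "number": re.compile(r"^-?\d+(\.\d+)?$"),
}

def categorize_column(values, threshold=0.8):
    """Assign a column the category matched by at least `threshold` of its non-empty values."""
    values = [str(v) for v in values if v]
    for name, pattern in CATEGORIES.items():
        if values and sum(bool(pattern.match(v)) for v in values) / len(values) >= threshold:
            return name
    return "unknown"

print(categorize_column(["a@b.com", "c@d.org", "x@y.net"]))   # -> email
print(categorize_column(["2021-01-01", "2021-02-03"]))        # -> date
```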

Keywords: schema recognition, semantic data profiling, meta-categorisation, semantic dependencies inter columns

Procedia PDF Downloads 404
4317 Simulation of 140 kV X-Ray Tube by MCNP4C Code

Authors: Amin Sahebnasagh, Karim Adinehvand, Bakhtiar Azadbakht

Abstract:

In this study, we used the Monte Carlo code MCNP4C, which is a general-purpose method, for the simulation. For the electron source and electric field, a disc source with a 0.05 cm radius directed at the anode is used; the radius of the disc source represents the focal spot of the X-ray tube, which here is 0.05 cm. In this simulation, the anode is tungsten with a density of 18.9 g/cm3, and the angle of the anode is 18°. We simulated the X-ray tube for 140 kV. To increase the speed of data acquisition, we use the F5 tally. By determining the exact position of the F5 tally in the program, the outputs are acquired. In this spectrum, the start point is about 0.02 MeV, the absorption edges are about 0.06 MeV and 0.07 MeV, and the average energy is about 0.05 MeV.

Keywords: x-spectrum, simulation, Monte Carlo, MCNP4C code

Procedia PDF Downloads 631
4316 The Evaluation Model for the Quality of Software Based on Open Source Code

Authors: Li Donghong, Peng Fuyang, Yang Guanghua, Su Xiaoyan

Abstract:

Using open source code is a popular method of software development, so how to evaluate the quality of such software becomes increasingly important. This paper introduces an evaluation model. The model evaluates quality along four dimensions: technology, production, management, and development. Each dimension includes many indicators. The weight of each indicator can be modified according to the purpose of the evaluation. The paper also introduces a method for using the model. The evaluation result can provide good advice for assessing or purchasing the software.
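The abstract describes a weighted, multi-dimension indicator model; a minimal sketch of such a weighted aggregation follows, with the four dimension names taken from the abstract and all indicators, scores, and weights as placeholder values.

```python
# Placeholder indicator scores (0-10); weights within a dimension sum to 1.
scores = {
    "technology":  {"maturity": 8, "portability": 6},
    "production":  {"defect_density": 7, "test_coverage": 5},
    "management":  {"issue_response": 9},
    "development": {"community_activity": 6, "release_cadence": 7},
}
weights = {
    "technology":  {"maturity": 0.6, "portability": 0.4},
    "production":  {"defect_density": 0.5, "test_coverage": 0.5},
    "management":  {"issue_response": 1.0},
    "development": {"community_activity": 0.5, "release_cadence": 0.5},
}
dimension_weights = {"technology": 0.3, "production": 0.3,
                     "management": 0.2, "development": 0.2}

def quality_score(scores, weights, dimension_weights):
    """Weighted sum of indicators within each dimension, then across dimensions."""
    return sum(dim_w * sum(scores[d][k] * weights[d][k] for k in scores[d])
               for d, dim_w in dimension_weights.items())

print(round(quality_score(scores, weights, dimension_weights), 2))
```

Modifying the weights according to the purpose of the evaluation, as the abstract suggests, amounts to changing the two weight tables.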

Keywords: evaluation model, software quality, open source code, evaluation indicator

Procedia PDF Downloads 367
4315 Code-Switching and Code-Mixing in Ogba-English Bilingual Conversations

Authors: Ben-Fred Ohia

Abstract:

Code-switching and code-mixing are linguistic behaviours that arise in a bilingual situation. They require speakers in a conversation to decide which code they should use to utter particular phrases or words in the course of their utterance. Every human society is characterized by the existence of diverse linguistic varieties. The speakers of these varieties at some point have various degrees of contact with the non-speakers of their variety, and one of the outcomes of this linguistic contact is code-switching or code-mixing. The work discusses the nature of code-switching and code-mixing in Ogba-English bilinguals’ speech. It provides a detailed explanation of the concepts of code-switching and code-mixing, explains their typology, and describes their manifestation in Ogba-English bilingual speakers’ speech. The findings reveal that code-switching and code-mixing are functionally motivated and triggered by various conversational contexts.

Keywords: bilinguals, code-mixing, code-switching, Ogba

Procedia PDF Downloads 165
4314 Developing a Modified Version of KIVA-3V, Enabling Gaseous Injections

Authors: Hossein Keshtkar, Ali Nasiri Toosi

Abstract:

With the growing concerns about the environmental pollution caused by gasoline and the need for a more widely available fuel source, natural gas is finding its way into automotive engines. But before this can happen industrially, simulations of natural gas direct injection need to take place to maximize and optimize power output. KIVA is one of the most powerful tools when it comes to engine simulation. Widely accepted by both researchers and the industry, KIVA, an open-source code, offers great in-depth simulation and analysis. KIVA can compute the complex phenomena which can occur inside the chamber before, during, and after ignition. One downside of KIVA is its inability to simulate gaseous injections, making it useful only for liquid fuels. In this study, we developed a numerical code to enable the simulation of gaseous injection within the KIVA code. By introducing our code as a subroutine, we modified the original KIVA program. To ensure the correct application of gaseous fuel injection using our modified KIVA code, we simulated two different cases and compared them with their experimental data. We conclude that the simulation results of our modified version of KIVA are very close to those measured experimentally.

Keywords: gaseous injections, KIVA, natural gas direct injection, numerical code, simulation

Procedia PDF Downloads 269
4313 Static Analysis Deployment Model for Code Quality on Research and Development Projects of Software Development

Authors: Jeong-Hyun Park, Young-Sik Park, Hyo-Teag Jung

Abstract:

This paper presents a static analysis deployment model for code quality on R&D projects of SW development. The proposed model includes the scope of R&D projects and an index for static analysis of source code, an operation model and execution process, and the environments and infrastructure system for R&D projects of SW development. The static analysis results of a pilot project are presented as a case study based on the proposed deployment model and environment, together with strategic considerations for successful operation of the proposed static analysis deployment model for R&D projects of SW development. The proposed static analysis deployment model will be adapted and improved continuously to upgrade the quality of R&D projects and increase customer satisfaction with the developed source code and products.

Keywords: static analysis, code quality, coding rules, automation tool

Procedia PDF Downloads 501
4312 The Translation of Code-Switching in African Literature: Comparing the Two German Translations of Ngugi wa Thiong’o’s "Petals of Blood"

Authors: Omotayo Olalere

Abstract:

The relevance of code-switching for intercultural communication through literary translation cannot be overemphasized. The translation of code-switching and its implications for translation studies have been examined in the context of African literature. In these cases, code-switching was examined in the more general terms of its usage in source texts and not particularly in Ngugi’s novels and their translations. In addition, the functions of translation and code-switching in the lyrics of some popular African songs have been studied, but such work is related more to oral performance than to written literature. As such, little has been done on the German translation of code-switching in African works. This study intends to fill this lacuna by examining the concept of code-switching in the German translations of Ngugi’s Petals of Blood. The aim is to highlight the significance of code-switching as a phenomenon in this African (Ngugi’s) novel written in English and to focus on its representation in the two German translations. The target texts to be used are Verbrannte Blueten and Land der flammenden Blueten. “Abrogation” as a concept will play an important role in the analysis of the data. Findings will show that the ideology of a translator plays a huge role in representing the concept of “abrogation” in the translation of code-switching in the selected source text. The study will contribute to knowledge in translation studies by bringing to the limelight the need to foreground aspects of language contact in translation theory and practice, particularly in the African context. Relevant translation theories adopted for the study include Bandia’s (2008) postcolonial theory of translation and Snell-Hornby’s (1988) cultural translation theory.

Keywords: code switching, german translation, ngugi wa thiong’o, petals of blood

Procedia PDF Downloads 65
4311 Contextual Senses of Ambiguous Words Based on Cognitive Semantics

Authors: Madhavi

Abstract:

All linguistic units are context-dependent. They occur in particular settings, from which they derive much of their import, and are recognized by speakers as distinct entities only through a process of abstraction. Most words have several concepts associated with them and convey a number of meanings in different contexts in any language. For instance, there are different uses of the word good as an adjective in English. The adjective good expresses many senses, such as (1) ‘high quality of someone or something’, (2) ‘efficient’, (3) ‘virtuous’, and (4) ‘reliable’. These senses will be analyzed using the cognitive semantics framework. The context has the power to insulate one meaning from all the other meanings in communication. This paper will provide a cognitive semantic analysis. The basic tenet of cognitive semantics is that the sense of a word is the way we conceptualize it. Our conceptualization is based on the physical experience we go through. Cognitive semantics tries to capture this conceptualization in terms of categories like schema, frame, and domain. Cognitive semantics is a subfield of cognitive linguistics, which studies language creation, learning, and usage with reference to human cognition. The semantic structure is a conceptual structure related to the concepts which are the elements of reason and constitute the meanings of words and linguistic expressions. Cognitive semantics studies how our mind arrives at the meaning of a word, how it perceives meaning from the environment through the senses, and how it maps this onto the knowledge that already exists in our mind through experience. In the present paper, the senses are further classified into categories.

Keywords: cognitive, contexts, semantics, senses

Procedia PDF Downloads 203
4310 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph

Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao

Abstract:

As buffer overflows are among the most dangerous vulnerabilities, effective detection of buffer overflow vulnerabilities is extremely necessary. Traditional detection methods are not accurate enough and consume too many resources to cope with today's complex and enormous code environments. In order to resolve the above problems, we propose a method for buffer overflow detection based on the Abstract syntax tree, Control flow graph, and Data dependency graph (BodeACD) in C/C++ programs with source code. Firstly, BodeACD constructs buffer overflow function samples that are available on GitHub, then represents them as code representation sequences, which fuse the control flow, data dependency, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns for vulnerability detection through deep learning. The results of the experiments show that BodeACD has increased precision and recall by 6.3% and 8.5% respectively compared with the latest methods, which can effectively improve vulnerability detection and reduce the false-positive and false-negative rates.
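As a toy illustration of fusing syntactic structure with control-flow and data-dependency information into one "code representation sequence", the Python sketch below serializes an AST and tags definitions, uses, and control nodes; it is not the BodeACD pipeline, and the tagging scheme is an assumption.

```python
import ast

def fused_sequence(source):
    """Serialize an AST into a token sequence, tagging variable definitions/uses
    (a crude data-dependency signal) and control-flow nodes."""
    seq, defs = [], set()
    for node in ast.walk(ast.parse(source)):
        seq.append(type(node).__name__)
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defs.add(node.id)
                seq.append(f"DEF:{node.id}")
            elif node.id in defs:
                seq.append(f"USE:{node.id}")      # def -> use link becomes visible
        if isinstance(node, (ast.If, ast.While, ast.For)):
            seq.append("CTRL")                     # control-flow marker
    return seq

print(fused_sequence("x = input()\nif x:\n    y = x + '!'"))
```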

Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning

Procedia PDF Downloads 156
4309 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller

Authors: N. Pitcheswara Rao

Abstract:

In the process of conveying information, there is a chance of the signal being corrupted, which leads to erroneous bits in the message. The message may contain single, double, or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code, such as the Reed-Solomon code, to address multiple errors. However, it can take at least a couple dozen cycles of latency to complete the first correction and runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use a Reed-Muller code.
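To make the error-correction terminology concrete, the sketch below encodes 4 data bits with the classic Hamming(7,4) code and corrects a single flipped bit. It illustrates the single-error-correcting behaviour discussed above, not the Reed-Muller code the authors adopt for the SDRAM controller.

```python
def hamming74_encode(d):
    """Encode 4 data bits as the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single flipped bit (if any) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]     # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]     # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]     # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3         # 1-based position of the erroneous bit, 0 if none
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                            # inject a single-bit error
assert hamming74_decode(word) == [1, 0, 1, 1]
```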

Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code

Procedia PDF Downloads 409
4308 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller

Authors: Pitcheswara Rao Nelapati

Abstract:

In the process of conveying information, there is a chance of the signal being corrupted, which leads to erroneous bits in the message. The message may contain single, double, or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code, such as the Reed-Solomon code, to address multiple errors. However, it can take at least a couple dozen cycles of latency to complete the first correction and runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use a Reed-Muller code.

Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code

Procedia PDF Downloads 407
4307 A Resistance-Based Comparative Study between the Iranian Concrete Design Code and Some Worldwide Ones

Authors: Seyed Sadegh Naseralavi, Najmeh Bemani

Abstract:

In most countries, such as Iran, design must inevitably be carried out according to the native code. Since the Iranian concrete code is not implemented in structural design software, most engineers in this country analyze structures using commercial software but design the structural members manually. This point motivated us to establish a correspondence between the Iranian code and some other well-known ones, as a convenience for engineers. Finally, this paper proposes so-called interpretation charts, which help specify the position of the Iranian code in comparison with some worldwide ones.

Keywords: beam, concrete code, strength, interpretation charts

Procedia PDF Downloads 507
4306 Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels

Authors: Marwa Ben Abdessalem, Amin Zribi, Ammar Bouallègue

Abstract:

In this paper, a Joint Source Channel (JSC) coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes: one compresses a correlated source, and the second protects it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, in which the source decoder and the channel decoder run in parallel and exchange extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit-Error Rate (BER) in the case of transmission over an Additive White Gaussian Noise (AWGN) channel, and for different source and channel rate parameters. We emphasize how JSC LDPC presents a performance tradeoff depending on the channel state and on the source correlation. We show that JSC LDPC is an efficient solution for a relatively low Signal-to-Noise Ratio (SNR) channel, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel.
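The sketch below shows only the evaluation harness implied above: a Monte Carlo BER measurement for uncoded BPSK over an AWGN channel, which is the kind of baseline a JSC LDPC system would be compared against. It does not implement LDPC encoding or belief-propagation decoding.

```python
import numpy as np

def ber_bpsk_awgn(snr_db, n_bits=200_000, seed=0):
    """Monte Carlo bit-error rate of uncoded BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                       # map 0 -> +1, 1 -> -1
    noise_std = np.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10.0)))
    received = symbols + noise_std * rng.normal(size=n_bits)
    return np.mean((received < 0).astype(int) != bits)

for snr in (0, 2, 4, 6, 8):
    print(f"{snr} dB -> BER {ber_bpsk_awgn(snr):.4f}")
```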

Keywords: AWGN channel, belief propagation, joint source channel coding, LDPC codes

Procedia PDF Downloads 344
4305 A Guide to User-Friendly Bash Prompt: Adding Natural Language Processing Plus Bash Explanation to the Command Interface

Authors: Teh Kean Kheng, Low Soon Yee, Burra Venkata Durga Kumar

Abstract:

In 2022, as the world becomes increasingly computer-related, more individuals are attempting to study coding on their own or in school, because they have discovered the value of learning to code and the benefits it will provide them. But learning to code is difficult for most people. Even senior programmers with a decade of experience still need help from online sources while coding. The reason is that coding is not like talking to other people: it has a specific syntax needed to make the computer understand what we want it to do, so coding will be hard for people who have had no prior contact with the field. Coding is hard. Learning bash through the bash prompt is harder still: the prompt is just an empty box waiting for the user to tell the computer what to do, and without referring to the internet, a new user will not know what can be done with it. From here, we can conclude that the bash prompt is not user-friendly for new users who are learning bash. Our goal in writing this paper is to give an idea for implementing a user-friendly bash prompt in Ubuntu OS using Artificial Intelligence (AI) to lower the threshold of learning bash, so that users can use their own words and concepts to write and learn bash code.
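A toy sketch of the idea: map a natural-language request to a known bash command by lexical similarity, one of the similarity notions listed in the keywords. The command catalogue and the use of difflib are illustrative assumptions, not the system proposed in the paper.

```python
import difflib

# A tiny hypothetical catalogue of descriptions -> bash commands.
COMMANDS = {
    "list files in the current directory": "ls -la",
    "show the current working directory": "pwd",
    "search for text inside files": "grep -r 'PATTERN' .",
    "show disk usage of this folder": "du -sh .",
}

def suggest_command(query):
    """Return the bash command whose description is lexically closest to the query."""
    ratio = lambda d: difflib.SequenceMatcher(None, query.lower(), d).ratio()
    best = max(COMMANDS, key=ratio)
    return COMMANDS[best], round(ratio(best), 2)

print(suggest_command("how do I list every file here"))
```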

Keywords: user-friendly, bash code, artificial intelligence, threshold, semantic similarity, lexical similarity

Procedia PDF Downloads 123
4304 Code-Switching in Facebook Chatting Among Maldivian Teenagers

Authors: Aaidha Hammad

Abstract:

This study examines the phenomenon of code-switching among teenagers in the Maldives while they carry out conversations on Facebook in the form of “Facebook chatting”. The study aims at evaluating the frequency of code-switching and investigates between which languages code-switching occurs. Besides, the study identifies the types of words that are most often code-switched and the triggers for code-switching. The methodology used in this study is a mixed method combining qualitative and quantitative approaches. In this regard, the chat log of a group conversation among 10 teenagers was collected and analyzed. A questionnaire was also administered online to 24 different teenagers from different corners of the Maldives. The age of the teenagers ranged between 16 and 19 years. The findings of the current study revealed that while Maldivian teenagers chat on Facebook, they very often code-switch, and these switches are most commonly between Dhivehi and English, although some other languages are also used to some extent. The study also identified the different types of words that are often code-switched among the teenagers. Most importantly, it explored the different reasons behind code-switching among Maldivian teenagers in Facebook chatting.

Keywords: code-switching, Facebook, Facebook chatting, Maldivian teenagers

Procedia PDF Downloads 229
4303 Event Monitoring Based On Web Services for Heterogeneous Event Sources

Authors: Arne Koschel

Abstract:

This article discusses event monitoring options for heterogeneous event sources as they are found in today's heterogeneous distributed information systems. It follows the central assumption that a fully generic event monitoring solution cannot provide complete support for event monitoring; instead, event-source-specific semantics, such as certain event types or support for certain event monitoring techniques, have to be taken into account. Following from this, the core result of the work presented here is the extension of a configurable event monitoring (Web) service for a variety of event sources. A service approach allows us to trade genericity for the exploitation of source-specific characteristics. It thus delivers results for the areas of SOA, Web services, CEP and EDA.
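To illustrate the ECA (event-condition-action) style of rules mentioned in the keywords, here is a minimal, generic rule-dispatch sketch; the event format and the sample rule are placeholders, not the configuration model of the service described above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event_type: str
    condition: Callable[[dict], bool]   # C: predicate over the event
    action: Callable[[dict], None]      # A: reaction to run

rules = [
    EcaRule("order_created",
            condition=lambda e: e.get("amount", 0) > 1000,
            action=lambda e: print("ALERT: large order", e["id"])),
]

def dispatch(event):
    """Run the action of every rule whose type matches and whose condition holds."""
    for rule in rules:
        if rule.event_type == event.get("type") and rule.condition(event):
            rule.action(event)

dispatch({"type": "order_created", "id": 42, "amount": 2500})
```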

Keywords: event monitoring, ECA, CEP, SOA, web services

Procedia PDF Downloads 720
4302 Code Switching: A Case Study Of Lebanon

Authors: Wassim Bekai

Abstract:

Code switching, as its name states, is the alternation between two or more languages in one sentence. Speakers tend to use code switching in their speech for better clarification of their message to the receiver. It is commonly used in socioculturally diverse countries such as Lebanon because of the various cultures that have come across its lands through history, considering that Lebanon is geographically located in the heart of the world and hence between many cultures and languages. In addition, Lebanon was occupied by Turkish authorities for about 400 years and later came under the French mandate; both of these powers imposed their languages in official papers and in the Lebanese educational system. In this paper, the importance of code switching in the Lebanese workplace is examined, stressing the efficiency and productivity resulting from code switching in the workplace (factories and universities, among other places), in addition to exploring the social, educational, religious, and cultural factors behind this phenomenon in Lebanon.

Keywords: code switching, Lebanon, cultural factors

Procedia PDF Downloads 261
4301 Features of Testing of the Neural Network Biometrics-Code Converter with Correlations between Bits of the Output Code

Authors: B. S. Akhmetov, A. I. Ivanov, T. S. Kartbayev, A. Y. Malygin, K. Mukapil, S. D. Tolybayev

Abstract:

The article examines the testing of the neural network biometrics-code converter. It determines the main reasons that prevent the use of the classical binomial law, adopted in the works of foreign researchers, for describing the distribution of Hamming distances of "Alien" code responses.
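As a small numerical illustration of the quantity under discussion, the sketch below computes the distribution of Hamming distances between a reference code and random "Alien" responses and compares it with the binomial law. The independent, uniform bits assumed here are exactly the idealization that, according to the article, breaks down when the output code bits are correlated.

```python
import numpy as np
from math import comb

n_bits, n_samples = 64, 20_000
rng = np.random.default_rng(1)

reference = rng.integers(0, 2, n_bits)
aliens = rng.integers(0, 2, (n_samples, n_bits))          # idealized: independent bits
distances = (aliens != reference).sum(axis=1)

empirical = np.bincount(distances, minlength=n_bits + 1) / n_samples
binomial = np.array([comb(n_bits, k) * 0.5 ** n_bits for k in range(n_bits + 1)])
print("max |empirical - binomial| =", float(np.abs(empirical - binomial).max()))
```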

Keywords: biometrics, testing, neural network, converter of biometrics-code, Hamming's measure

Procedia PDF Downloads 1119
4300 Optical Multicast over OBS Networks: An Approach Based on Code-Words and Tunable Decoders

Authors: Maha Sliti, Walid Abdallah, Noureddine Boudriga

Abstract:

In this work, we present an optical multicasting approach based on optical code-words. Our approach associates, in the edge node, an optical code-word with a group multicast address. In the core node, a set of tunable decoders is used to send traffic data to multiple destinations based on the received code-word. The use of code-words, which correspond to the combination of an input port and a set of output ports, allows the implementation of an optical switching matrix. At the reception of a burst, it is delayed in an optical memory, and the received optical code-word is split to a set of tunable optical decoders. When it matches a configured code-word, the delayed burst is switched to a set of output ports.
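A schematic sketch of the code-word lookup described above: each configured code-word selects the set of output ports to which a delayed burst is forwarded. The data structures are purely illustrative; the actual mechanism is optical (tunable decoders and an optical memory), not a software table.

```python
# Hypothetical switching table: code-word -> output ports of a multicast group.
SWITCHING_TABLE = {
    "1011001": {2, 5, 7},
    "0110110": {1, 3},
}

def switch_burst(code_word, burst):
    """Forward the burst to every output port configured for the received code-word."""
    ports = SWITCHING_TABLE.get(code_word)
    if ports is None:
        return []                      # no decoder matched: the burst is not multicast
    return [(port, burst) for port in sorted(ports)]

print(switch_burst("1011001", b"payload"))
```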

Keywords: optical multicast, optical burst switching networks, optical code-words, tunable decoder, virtual optical memory

Procedia PDF Downloads 587
4299 Software Evolution Based Activity Diagrams

Authors: Zine-Eddine Bouras, Abdelouaheb Talai

Abstract:

During the last two decades, the software evolution community has intensively tackled the software merging issue, whose main objective is to merge different versions of software in a consistent way in order to obtain a new version. Well-established approaches, mainly based on dependence analysis techniques, have been used to provide suitable solutions. These approaches operate on the source code or on software architectures. However, such solutions are expensive due to the complexity and size involved. In this paper, we overcome this problem by operating at a higher level of abstraction. The objective of this paper is to investigate software merging at the level of UML activity diagrams, which is a new and interesting issue. Its purpose is to merge activity diagrams instead of source code. The proposed approach, based on dependence analysis techniques, is illustrated through an appropriate case study.

Keywords: activity diagram, activity diagram slicing, dependency analysis, software merging

Procedia PDF Downloads 309
4298 Searching Linguistic Synonyms through Parts of Speech Tagging

Authors: Faiza Hussain, Usman Qamar

Abstract:

Synonym-based searching is recognized to be a complicated problem, as text mining from unstructured web data is challenging. Finding useful information which matches the user's need among the bulk of web pages is a cumbersome task. In this paper, a novel and practical synonym retrieval technique is proposed for addressing this problem. For the replacement of terms while preserving semantics, user intent is taken into consideration in realizing the technique. Parts-of-speech tagging is applied for pattern generation of the query, and a thesaurus was formed and used for this experiment. In comparison with non-context-based searching, context-based searching proved to be a more efficient approach when dealing with linguistic semantics. This approach is very beneficial for intent-based searching. Finally, results and future dimensions are presented.
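A minimal sketch of the pipeline described above using NLTK: tokenize and POS-tag the query, then look up synonyms filtered by each word's tagged part of speech. It assumes the NLTK data packages (punkt, averaged_perceptron_tagger, wordnet) are installed, and it uses WordNet in place of the purpose-built thesaurus mentioned in the abstract.

```python
import nltk
from nltk.corpus import wordnet as wn

POS_MAP = {"NN": wn.NOUN, "VB": wn.VERB, "JJ": wn.ADJ, "RB": wn.ADV}

def synonyms_for_query(query):
    """POS-tag the query and return synonyms matching each content word's tag."""
    result = {}
    for word, tag in nltk.pos_tag(nltk.word_tokenize(query)):
        wn_pos = POS_MAP.get(tag[:2])
        if wn_pos is None:
            continue
        lemmas = {l.name().replace("_", " ")
                  for s in wn.synsets(word, pos=wn_pos) for l in s.lemmas()}
        lemmas.discard(word)
        if lemmas:
            result[word] = sorted(lemmas)[:5]
    return result

print(synonyms_for_query("buy a cheap car"))
```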

Keywords: natural language processing, text mining, information retrieval, parts-of-speech tagging, grammar, semantics

Procedia PDF Downloads 292
4297 Contextual Distribution for Textual Alignment

Authors: Yuri Bizzoni, Marianne Reboul

Abstract:

Our program compares French and Italian translations of Homer’s Odyssey, from the XVIth to the XXth century. We show how distributional semantics systems can be used both to improve alignment between different French translations and to improve alignment between the Greek text and a French translation. Although we focus on French examples, the techniques we present are completely language independent.
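A stripped-down sketch of the distributional idea: represent each sentence as a bag-of-words vector and align sentences of two translations by cosine similarity. Real systems use trained distributional vectors and sequence constraints; the vectorization and the greedy matching here are illustrative only.

```python
from collections import Counter
from math import sqrt

def vector(sentence):
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na, nb = sqrt(sum(v * v for v in a.values())), sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def align(source_sents, target_sents):
    """Greedy alignment: each source sentence is paired with its most similar target."""
    return {s: max(target_sents, key=lambda t: cosine(vector(s), vector(t)))
            for s in source_sents}

print(align(["le héros rentre enfin chez lui"],
            ["le héros revient enfin chez lui", "la mer était calme"]))
```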

Keywords: classical receptions, computational linguistics, distributional semantics, Homeric poems, machine translation, translation studies, text alignment

Procedia PDF Downloads 418
4296 Statistical Study and Simulation of 140 kV X-Ray Tube by Monte Carlo

Authors: Mehdi Homayouni, Karim Adinehvand, Bakhtiar Azadbakht

Abstract:

In this study, we used the Monte Carlo code MCNP4C, which is a general-purpose method, for the simulation. For the electron source and electric field, a disc source with a 0.05 cm radius directed at the anode is used; the radius of the disc source represents the focal spot of the X-ray tube, which here is 0.05 cm. In this simulation, the anode is tungsten with a density of 18.9 g/cm3, and the angle of the anode is 18°. We simulated the X-ray tube for 140 kV. To increase the speed of data acquisition, we use the F5 tally. By determining the exact position of the F5 tally in the program, the outputs are acquired. In this spectrum, the start point is about 0.02 MeV, the absorption edges are about 0.06 MeV and 0.07 MeV, and the average energy is about 0.05 MeV.

Keywords: X-spectrum, simulation, Monte Carlo, tube

Procedia PDF Downloads 706
4295 Rest API Based System-level Test Automation for Mobile Applications

Authors: Jisoo Song

Abstract:

Today’s mobile applications communicate with servers more and more in order to access external services or information. Also, server-side code changes are more frequent than client-side code changes in a mobile application, and these frequent changes lead to an increase in testing cost. To reduce costs, UI-based test automation can be one of the solutions; it is a common automation technique in system-level testing. However, it can be unsuitable for mobile applications. When you automate tests based on UI elements for mobile applications, there are some limitations, such as the overhead of script maintenance or the difficulty of finding invisible defects that UI elements cannot represent. To overcome these limitations, we present a new automation technique based on Rest API. You can automate system-level tests through test scripts that you write; these scripts call a series of Rest APIs in a user’s action sequence. This technique does not require testers to know the internal implementation details, only the inputs and expected outputs of the Rest APIs. You can easily modify test cases by modifying Rest API input values and also find problems that might not be evident at the UI level by validating output values. For example, when an application receives price information from a payment server and the user cannot see it at the UI level, Rest API based scripts can check whether the price information is correct or not. More than 10 mobile applications at our company are tested automatically based on Rest API scripts whenever application source code, mostly server source code, is built. We find defects right away by setting a script as a build job in the CI server; the build job starts when application code builds are completed. This presentation will also include field cases from our company.
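A minimal sketch of such a Rest API level test, written with the requests library and assert-style checks; the endpoints, payloads, and expected fields are hypothetical, not the authors' internal services.

```python
import requests

BASE_URL = "https://api.example.com"    # hypothetical server under test

def test_price_is_returned_for_valid_item():
    """Call the APIs in the same sequence a user action would trigger,
    then validate fields that are not visible at the UI level."""
    session = requests.Session()
    login = session.post(f"{BASE_URL}/login", json={"user": "tester", "pw": "secret"})
    assert login.status_code == 200

    quote = session.get(f"{BASE_URL}/items/123/price")
    assert quote.status_code == 200
    body = quote.json()
    assert body["currency"] == "USD"
    assert body["price"] > 0            # e.g. price info the UI never displays

if __name__ == "__main__":
    test_price_is_returned_for_valid_item()
```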

Keywords: case studies at SK Planet, introduction of rest API based test automation, limitations of UI based test automation

Procedia PDF Downloads 428
4294 Aspects of Semantics of Standard British English and Nigerian English: A Contrastive Study

Authors: Chris Adetuyi, Adeola Adeniran

Abstract:

The concept of meaning is a complex one in language study when cultural features are added. This is mandatory because language cannot be completely separated from culture; language and culture complement each other. When there are two varieties of a language in a society, i.e. two varieties functioning side by side in a speech community, there is a tendency to compare one variety with the other. There is, therefore, the need to make a comparative linguistic study of the varieties of such languages. In this paper, a semantic contrastive study is made between Standard British English (SBE) and Nigerian English (NE). The semantic study is limited to certain aspects of semantics: semantic extension (kinship terms, metaphors), semantic shift (the lexical items considered are ‘drop’, ‘befriend’, ‘dowry’ and ‘escort’), acronyms (NEPA, JAMB, NTA), linguistic borrowing or loan words (Seriki, Agbada, Eba, Dodo, Iroko), and coinages (long leg, bush meat, bottom power and juju). In the study of these aspects of the semantics of SBE and NE lexical terms, contrastive statements are made, and problem areas and a hierarchy of difficulties are highlighted with a view to bringing out the areas of difference. The study will also serve as a guide for further contrastive studies in other areas of language.

Keywords: aspect, British, English, Nigeria, semantics

Procedia PDF Downloads 330
4293 Code Refactoring Using Slice-Based Cohesion Metrics and AOP

Authors: Jagannath Singh, Durga Prasad Mohapatra

Abstract:

Software refactoring is essential for maintaining software quality. It is usual practice that we first design the software and then go on to coding. But after coding is completed, if the requirements change slightly or our expected output is not achieved, then we change the code. For each small code change, we cannot change the design. In the course of time, due to these small changes made to the code, the software design decays. Software refactoring is used to restructure the code in order to improve the design and quality of the software. In this paper, we propose an approach for performing code refactoring. We use slice-based cohesion metrics to identify the target methods which require refactoring. After identifying the target methods, we use program slicing to divide each target method into two parts. Finally, we use the concepts of Aspects to adjust the code structure so that the external behaviour of the original module does not change.
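To make the metric concrete: given the backward slice (a set of statement numbers) for each output variable of a method, classic slice-based cohesion measures such as Tightness and Coverage can be computed as below. The slices are toy inputs; obtaining real slices requires a program slicing tool, and the refactoring threshold would be chosen by the approach described above.

```python
def tightness(slices, n_statements):
    """Fraction of statements that appear in every output variable's slice."""
    return len(set.intersection(*slices)) / n_statements

def coverage(slices, n_statements):
    """Average fraction of the method covered by each slice."""
    return sum(len(s) for s in slices) / (len(slices) * n_statements)

# Toy example: a 10-statement method with two output variables.
slices = [{1, 2, 3, 4, 7, 9}, {1, 2, 5, 6, 7, 10}]
print("tightness =", tightness(slices, 10))   # low overlap -> refactoring candidate
print("coverage  =", coverage(slices, 10))
```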

Keywords: software refactoring, program slicing, AOP, cohesion metrics, code restructure, AspectJ

Procedia PDF Downloads 488