Search results for: code embeddings

1277 Protection of Stakeholders under the Transitional Commercial Code of Eritrea: Comparative Analysis with the 2018 Company Law of the People's Republic of China

Authors: Hayle Makda Gebru

Abstract:

Companies are indispensable to society. They are the building blocks of development in a country, producing a continuous stream of goods and services for the people and, in turn, paying the taxes that sustain the national economy. For companies to function properly, their relationships with their stakeholders must be secure. The major stakeholders are suppliers, consumers, employees, creditors, etc. The law plays an important role in strengthening the relationships between these different stakeholders. If the law fails to keep pace with these relationships, both the company and its stakeholders remain unprotected, and the potential benefits are prejudiced. This paper makes a comparative analysis of the types and formation of companies under the Transitional Commercial Code of Eritrea (TCrCE) and the Company Law of the People's Republic of China. In particular, the paper addresses the legal lacuna in the TCrCE on handling the failure of shareholders to pay the promised capital. The methodology of the study is an analysis of the two countries' laws in light of practical cases. Drawing on the problems observed on the ground in real cases, this paper calls on Eritrea to update its outdated Commercial Code to give proper protection to stakeholders.

Keywords: companies, company law of the People's Republic of China, transitional commercial code of Eritrea, protection of stakeholders, failure to pay the promised capital

Procedia PDF Downloads 69
1276 Frequency of Nosocomial Infections in a Tertiary Hospital in Isfahan, Iran

Authors: Zahra Tolou-Ghamari

Abstract:

Objective: Health care-associated infections with multiresistant pathogens are rising globally. It is well known that nosocomial infections increase hospital stay, morbidity, mortality, and disability. Therefore, the aim of this study was to define the occurrence of nosocomial infections in a tertiary hospital in Isfahan, Iran. Materials and Methods: The data were extracted from the official database of hospital nosocomial infection records, comprising 9152 rows. For each patient, the reported infections were coded by number, e.g., UTI-SUTI: code 55; VAE-PVAP: code 56; BSI-LCBI: code 19; SSI-DIP: code 14; and so on. For continuous variables, the mean ± standard deviation was used; for categorical variables, the frequency. Results: The study population was 5542 patients, comprising 3282 males and 2260 females. With a minimum of 15 and a maximum of 99, the mean age in 5313 patients was 58.5 ± 19.1 years. The highest proportion of nosocomial infections (77%) was associated with ages 30-80 years. The sites of 87% of nosocomial infections were: VAE-PVAP, 27.3%; VAE-IVAC, 7.7%; UTI-SUTI, 29.5%; BSI-LCBI, 12.9%; and SSI-DIP, 9.5%; other individual infections accounted for 13%. The main pathogens were Klebsiella pneumoniae, Acinetobacter baumannii, and Staphylococcus. Conclusions: An efficient surveillance system, an antibiotic-use control policy covering both monotherapy and polypharmacy, and advanced infection control programs at regional and national levels in Iran are recommended.

Keywords: infection, nosocomial, ventilator, blood stream, Isfahan, Iran

Procedia PDF Downloads 78
1275 Mobile Platform’s Attitude Determination Based on Smoothed GPS Code Data and Carrier-Phase Measurements

Authors: Mohamed Ramdani, Hassen Abdellaoui, Abdenour Boudrassen

Abstract:

Mobile platform attitude estimation approaches are mainly based on combined positioning techniques and dedicated algorithms that aim to reach a fast and accurate solution. In this work, we describe the design and implementation of an attitude determination (AD) process using only measurements from GPS sensors. The approach rests on GPS code data smoothed with a Hatch filter, together with raw carrier-phase measurements, fed into an attitude algorithm based on vector measurements using the least-squares (LSQ) estimation method. A GPS dataset from a static experiment is used to investigate the effectiveness of the presented approach and, consequently, to check the accuracy of the attitude estimation algorithm. Attitude results from a GPS multi-antenna setup over short baselines are presented and analyzed. The 3D accuracy of the attitude parameters estimated from the smoothed measurements is on the order of 0.27°.
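As a rough illustration of the smoothing step, the classic Hatch filter propagates the previous smoothed pseudorange with the carrier-phase increment and blends in the new, noisier code measurement. A minimal single-frequency sketch follows; the window length and array layout are illustrative, not taken from the paper.

```python
import numpy as np

def hatch_filter(code, carrier, window=100):
    """Carrier-smoothed code pseudoranges (single-frequency Hatch filter)."""
    smoothed = np.empty(len(code), dtype=float)
    smoothed[0] = code[0]
    for k in range(1, len(code)):
        n = min(k + 1, window)                # growing, then fixed, window
        # Propagate the previous smoothed value with the carrier delta,
        # then blend in the new (noisy) code measurement.
        predicted = smoothed[k - 1] + (carrier[k] - carrier[k - 1])
        smoothed[k] = predicted + (code[k] - predicted) / n
    return smoothed
```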

Keywords: attitude determination, GPS code data smoothing, Hatch filter, carrier-phase measurements, least-squares attitude estimation

Procedia PDF Downloads 155
1274 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering

Authors: R. Nandhini, Gaurab Mudbhari

Abstract:

Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized fields that rely on images and other visual data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the tasks through which machine learning and deep learning models have become state of the art wherever images are the key dataset. Among these tasks, classification and clustering are essential but difficult because of the intricate, high-dimensional nature of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Because of the distinctive structural attributes of images, conventional methods often fail to capture spatial patterns effectively, prompting the development of models with more advanced architectures and attention mechanisms. In image classification, we investigate both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT, the other core model, reflects a modern classification approach built on self-attention, which makes it more robust by allowing it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures in terms of accuracy, precision, recall, and F1-score across different image datasets, analyzing their suitability for various categories of images. In the clustering domain, we assess DEC, Variational Autoencoders (VAEs), and conventional techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has attracted attention for combining feature learning and clustering in a single framework whose main goal is to improve clustering quality through better feature representation. VAEs, in turn, are well known for grouping similar images through latent embeddings, requiring no prior labels thanks to their probabilistic clustering approach.
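As a concrete illustration of the simplest clustering pipeline assessed above, the sketch below runs k-means on CNN-derived embeddings; the random matrix stands in for real features, and the dimensions and cluster count are placeholders rather than values from the study.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for CNN features: in practice, each row would be the penultimate-
# layer activation of a pretrained network for one image.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))    # (n_images, feature_dim)

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)      # one cluster id per image
print(np.bincount(labels))                   # cluster sizes
```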

Keywords: machine learning, deep learning, image classification, image clustering

Procedia PDF Downloads 10
1273 Effect of Modeling of Hydraulic Form Loss Coefficient to Break on Emergency Core Coolant Bypass

Authors: Young S. Bang, Dong H. Yoon, Seung H. Yoo

Abstract:

Emergency Core Coolant Bypass (ECC Bypass) has been regarded as an important phenomenon for the peak cladding temperature in large-break loss-of-coolant accidents (LBLOCA) in nuclear power plants (NPP). A modeling scheme to address the ECC Bypass phenomena and an LBLOCA calculation using that scheme are discussed in the present paper. A hydraulic form loss coefficient (HFLC) from the reactor vessel downcomer to the broken cold leg is predicted by a computational fluid dynamics (CFD) code while varying the void fraction incoming from the downcomer. The maximum, mean, and minimum values of the HFLC are derived from the CFD results and incorporated into the LBLOCA calculation using a system thermal-hydraulic code, MARS-KS. As a relevant parameter addressing the ECC Bypass phenomena, the HFLC to the break and its range are proposed.
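For orientation, a form loss coefficient of this kind is conventionally backed out of the quadratic loss relation ΔP = K·ρv²/2. The sketch below shows that post-processing step on invented CFD numbers; all values are illustrative.

```python
def form_loss_coefficient(dp_pa, rho_kg_m3, v_m_s):
    """Back out K from the quadratic loss relation dP = K * rho * v**2 / 2."""
    return 2.0 * dp_pa / (rho_kg_m3 * v_m_s ** 2)

# Illustrative sweep over downcomer void fractions (numbers are made up).
for void, dp, rho, v in [(0.0, 52e3, 740.0, 3.2), (0.4, 31e3, 450.0, 4.1)]:
    print(f"void={void:.1f}  K={form_loss_coefficient(dp, rho, v):.1f}")
```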

Keywords: CFD analysis, ECC bypass, hydraulic form loss coefficient, system thermal-hydraulic code

Procedia PDF Downloads 230
1272 Establish a Company in Turkey for Foreigners

Authors: Mucahit Unal, Ibrahim Arslan

Abstract:

The new Turkish Commercial Code (TCC) No. 6102 was published in the Official Gazette on February 14, 2011. As stated in the new Turkish Commercial Code No. 6102 and in Law No. 6103 on the Validity and Application of the Turkish Commercial Code, the TCC came into effect on July 1, 2012. The basic purposes of the TCC are to establish corporate governance coherent with international standards; to provide transparency in company management; to align the Turkish Commercial Code with European Union legislation; and to simplify company formation for foreign investors seeking to move investments into the Turkish market. In this context, under the TCC, joint stock companies and limited liability companies can be established with a single shareholder; that single shareholder can be a foreigner; all board of director members can be foreigners; and all shareholders and board members can be non-resident foreigners. Additionally, the TCC does not require physical attendance at general shareholders' and board meetings: such meetings can be held electronically, and their resolutions may be approved via electronic signatures. Through these amendments, foreign investors no longer have to deal with red tape, and foreign companies are spared unnecessary travel expenses. In light of all these amendments, investing in the Turkish market is easy, simple, and transparent for foreign investors, who can establish a company in Turkey irrespective of nationality or place of residence. This article aims to analyze 'Establish a Company in Turkey for Foreigners' and to inform investors about investing (especially establishing a company) in the Turkish market.

Keywords: establish a company, foreigner investors, invest in Turkish market, Turkish commercial code

Procedia PDF Downloads 263
1271 Investigating the Use of English-Arabic Codeswitching in EFL Classroom Oral Discourse. Case Study: Middle School Pupils of Ain Fekroun, Wilaya of Oum El Bouaghi, Algeria

Authors: Fadila Hadjeris

Abstract:

The study aims at investigating the functions of English-Arabic code switching in English-as-a-foreign-language classroom oral discourse and the extent to which they can contribute to the flow of classroom interaction. It also seeks to understand the views, beliefs, and perceptions of teachers and learners towards this practice. We hypothesized that code switching is a communicative strategy that facilitates classroom interaction and that, due to this fact, both teachers and learners support its use. The study draws on a key body of literature in bilingualism, second language acquisition, and classroom discourse in an attempt to provide a framework for considering the research questions. It employs a combination of qualitative and quantitative research methods, including classroom observations and questionnaires. The analysis of the recordings shows that teachers' code switching to Arabic is not used only for academic and classroom-management reasons; the data display instances in which code switching is used for social reasons. The analysis of the questionnaires indicates that teachers and pupils have different attitudes towards this phenomenon. Teachers reported switching deliberately during EFL teaching, yet the majority were against the practice: according to them, the use of the mother tongue has detrimental effects on the acquisition and practice of the target language. In contrast, pupils preferred their teachers' code switching because it enhances and facilitates their understanding. These findings support the view that the shift to pupils' mother tongue is a strategy that aids and facilitates the teaching and learning of the target language. This, in turn, leads to the recommendations offered to teachers and course designers.

Keywords: bilingualism, codeswitching, classroom interaction, classroom discourse, EFL learning/teaching, SLA

Procedia PDF Downloads 478
1270 Bit Error Rate (BER) Performance of Coherent Homodyne BPSK-OCDMA Network for Multimedia Applications

Authors: Morsy Ahmed Morsy Ismail

Abstract:

In this paper, the structure of a coherent homodyne receiver for a Binary Phase Shift Keying (BPSK) Optical Code Division Multiple Access (OCDMA) network is introduced, based on the Multi-Length Weighted Modified Prime Code (ML-WMPC), for multimedia applications. The Bit Error Rate (BER) of this homodyne detection is evaluated as a function of the number of active users and the signal-to-noise ratio, for the different code lengths assigned to each multimedia application, such as audio, voice, and video. A Mach-Zehnder interferometer is used as an external phase modulator in the homodyne detection. Furthermore, the Multiple Access Interference (MAI) and the receiver noise in a shot-noise-limited regime are taken into consideration in the BER calculations.
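For reference, single-user coherent BPSK over an additive Gaussian channel has the textbook error rate BER = Q(√(2·SNR)). The sketch below evaluates that baseline; it deliberately ignores the MAI term, which in the paper depends on the ML-WMPC cross-correlations and the number of active users.

```python
import numpy as np
from scipy.special import erfc

def bpsk_ber(snr_db):
    """Textbook single-user BPSK error rate, BER = Q(sqrt(2 * SNR))."""
    snr = 10.0 ** (np.asarray(snr_db) / 10.0)
    return 0.5 * erfc(np.sqrt(snr))          # Q(x) = 0.5 * erfc(x / sqrt(2))

print(bpsk_ber([0, 5, 10]))                  # BER at 0, 5, and 10 dB
```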

Keywords: OCDMA networks, bit error rate, multiple access interference, binary phase-shift keying, multimedia

Procedia PDF Downloads 175
1269 Performance Comparison of Non-Binary RA and QC-LDPC Codes

Authors: Ni Wenli, He Jing

Abstract:

Repeat-accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are built with a linear encoding method, and the non-binary QC-LDPC codes with an algebraic construction method. The BER performance of the RA and QC-LDPC codes over GF(q) is then compared under BP decoding, by simulation over Additive White Gaussian Noise (AWGN) channels.
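To make the fast-encoder point concrete: a binary repeat-accumulate encoder is just repetition followed by a running XOR. The toy sketch below omits the interleaver a real RA code places between the two stages, as well as the non-binary GF(2^4) arithmetic used in the paper.

```python
import numpy as np

def ra_encode(bits, repeat=3):
    """Toy binary RA encoder: repeat each bit, then accumulate (running XOR)."""
    repeated = np.repeat(np.asarray(bits) % 2, repeat)
    return np.cumsum(repeated) % 2           # accumulator = mod-2 prefix sum

print(ra_encode([1, 0, 1]))                  # -> [1 0 1 1 1 1 0 1 0]
```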

Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm

Procedia PDF Downloads 376
1268 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students' first language in second language classrooms, has long been a controversial topic in language teaching and second language acquisition. While it has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, with significant implications for language teaching and learning in general and for language pedagogy at the tertiary level in Vietnam in particular. This study examines the accounts teachers gave of their code switching practices in content-based Business English classes in Vietnam. Data were collected from five teachers through stimulated recall interviews facilitated by video data, which elicited the teachers' cognitive reflections and allowed them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature suggests that when participants are provided with a large number of stimuli or cues, they re-experience the original situation in their imagination with great accuracy. The technique also provides a valuable "insider" perspective on the phenomenon under investigation, complementing the researcher's "outsider" observation, and it creates a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Participants are also empowered, as they can raise their own concerns and discuss instances they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers supported the use of code switching in their pedagogical practices. In particular, as a pedagogical resource, the teachers saw code switching to the L1 as playing a key role in facilitating the students' comprehension of both content knowledge and the target language, believing that use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role code switching plays in activating students' schematic language and content knowledge, encouraging retention of and interest in learning, and promoting a positive affective environment in the classroom. The teachers perceived that their code switching to the L1 helps them meet the students' language needs, prepares students for subsequent courses, and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 451
1267 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission

Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong

Abstract:

Medical images are an integral part of e-health care and e-diagnosis systems, and medical image watermarking is widely used to protect patients' information from malicious alteration and manipulation. The watermarked medical images are transmitted over the internet among patients and primary and referred physicians, and they are highly prone to corruption in the wireless transmission medium due to various noises, deflections, and refractions. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address this issue, this paper adds error correction using an (8, 4) Hamming code to an existing watermarking system. In addition, we implement the computationally demanding ECC on a graphics processing unit (GPU) to accelerate it and meet real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation while maintaining 100% ECC efficiency.
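For orientation, the (8, 4) code referred to here is typically the extended Hamming code, Hamming(7, 4) plus an overall parity bit, which corrects single-bit errors and detects double-bit errors. A minimal CPU sketch follows, with one common choice of generator matrix; a GPU implementation would decode many received words in parallel.

```python
import numpy as np

# One systematic generator matrix for the (8, 4) extended Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])

def encode(nibble):
    """Map four data bits to an eight-bit codeword."""
    return np.asarray(nibble) @ G % 2

# Minimum-distance decoding over all 16 codewords (fine at this tiny scale).
CODEBOOK = np.array([encode([(i >> b) & 1 for b in (3, 2, 1, 0)])
                     for i in range(16)])

def decode(received):
    """Return the four data bits of the nearest codeword."""
    best = int(np.argmin(np.sum(CODEBOOK != np.asarray(received), axis=1)))
    return [(best >> b) & 1 for b in (3, 2, 1, 0)]

word = encode([1, 0, 1, 1]); word[2] ^= 1    # flip one bit in transit
print(decode(word))                          # -> [1, 0, 1, 1]
```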

Keywords: medical image watermarking, e-health system, error correction, Hamming code, GPU

Procedia PDF Downloads 290
1266 A Qualitative Evidence of the Markedness of Code Switching during Commercial Bank Service Encounters in Ìbàdàn Metropolis

Authors: A. Robbin

Abstract:

In a multilingual setting like Nigeria, the success of service encounters is enhanced by the use of a language that meets the linguistic and persuasive demands of the interlocutors. This study examined motivations for code switching as a negotiation strategy in bank-hall desk service encounters in the Ìbàdàn metropolis, drawing on Myers-Scotton's work on markedness in language use. The data consisted of transcribed audio recordings of bank-hall service encounters and direct observation of bank interactions in two purposively sampled commercial banks in the Ìbàdàn metropolis. The data were subjected to descriptive linguistic analysis using Myers-Scotton's Markedness Model. Findings reveal that code switching is frequently employed during different stages of the service encounter (greeting, transaction, and closing) to fulfil relational, bargaining, and referential functions. Bank staff and customers code switch to make unmarked, marked, and explanatory choices: a strategy used to identify with a customer's cultural affiliation, close the status gap, and placate an aggrieved customer, or, as an explanatory choice, to ease communication with non-literate customers. Bankers select English to maintain customers' perceptions of prestige, which is retained or diverged from depending on the customers' linguistic preference or ability. Yoruba is seen as an efficient negotiation strategy by both bankers and their customers, who make choices within conversation to achieve the desired conversational and functional aims.

Keywords: banking, bilingualism, code-switching, markedness, service encounter

Procedia PDF Downloads 206
1265 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features

Authors: Kyi Pyar Zaw, Zin Mar Kyu

Abstract:

Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used. Character recognition involves three steps: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, the 8 feature values obtained by the same eight-direction chain code frequency method are summed to give one feature per block, yielding 16 more features. In addition, a number-of-holes feature is used to cluster similar characters. With these features, almost all common Myanmar characters can be recognized across various font sizes. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all the features of the input character against the already trained features.
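As a sketch of the first feature set: the eight-direction (Freeman) chain code records the direction between consecutive contour pixels, and its normalised frequency histogram yields the 8 global features. The direction convention below is one common choice, not necessarily the paper's.

```python
import numpy as np

# Freeman directions for 8-connected pixels: (row, col) step -> code.
DIRECTION = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
             (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_histogram(contour):
    """8-bin frequency histogram of chain codes along a pixel contour."""
    hist = np.zeros(8)
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        hist[DIRECTION[(r1 - r0, c1 - c0)]] += 1
    return hist / max(hist.sum(), 1.0)       # normalised frequencies

print(chain_code_histogram([(0, 0), (0, 1), (-1, 2), (-1, 3)]))
```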

Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation

Procedia PDF Downloads 320
1264 The Connection between the Schwartz Theory of Basic Values and Ethical Principles in Clinical Psychology

Authors: Matej Stritesky

Abstract:

The research deals with the connection between the Schwartz Theory of Basic Values and the ethical principles in psychology on which the meta-code of ethics of the European Federation of Psychological Associations (EFPA) is based. It focuses on ethically problematic situations in clinical psychology in the Czech Republic. Based on an analysis of papers that identified ethically problematic situations faced by clinical psychologists, a questionnaire of ethically problematic situations in clinical psychology (EPSCP) was created for the purposes of the research. The questionnaire was designed to represent situations corresponding to the four principles on which the EFPA meta-code of ethics is based. The EPSCP consists of descriptions of 32 situations that respondents evaluate on a scale from 1 (the psychologist's behaviour is ethically perfectly fine) to 10 (the psychologist's behaviour is ethically completely unacceptable). The EPSCP questionnaire, together with Schwartz's PVQ questionnaire, will be presented to 60 psychology students. The relationship between the principles in clinical psychology and the values on Schwartz's value continuum will be described using multidimensional scaling. A positive correlation is assumed between the higher-order value of openness to change and problematic ethical situations related to the principle of integrity; between the higher-order value of self-transcendence and the principle of respect and responsibility; and between the higher-order value of conservation and the principle of competence; a negative correlation is assumed between the higher-order value of self-enhancement and sensitivity to ethically problematic situations. The research also includes an experimental part: the first half of the students are presented with the code of ethics of the Czech Association of Clinical Psychologists before completing the questionnaires, and the second half after completing them. In addition to reading the code of ethics, students describe the three rules of the code they consider most important and state why they chose them. The output of the experimental part will be to determine whether presentation of the code of ethics leads to greater sensitivity to ethically problematic situations.

Keywords: clinical psychology, ethically problematic situations in clinical psychology, ethical principles in psychology, Schwartz theory of basic values

Procedia PDF Downloads 112
1263 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity

Authors: Shaan Khosla, Jon Krohn

Abstract:

In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies to a candidate, and to compile a set of similar candidates for any given candidate. While such models are useful, validating their accuracy in a recommendation context is difficult due to data sparsity. In this report, we use network graph data to generate useful representations for candidates and vacancies: candidates and vacancies become network nodes, with a bidirectional link between them whenever the candidate has interviewed for the vacancy. After running node2vec, the resulting embeddings are used to construct a rank-ordered validation dataset, which will help validate new recommender systems.
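A minimal version of this graph construction and embedding step might look like the sketch below, using the open-source node2vec package; the node names, edge list, and hyperparameters are illustrative.

```python
import networkx as nx
from node2vec import Node2Vec      # assumes `pip install node2vec`

# Bipartite interview graph: an edge means "candidate interviewed for vacancy".
G = nx.Graph()
G.add_edges_from([("cand_42", "vac_7"), ("cand_42", "vac_9"),
                  ("cand_17", "vac_7"), ("cand_63", "vac_9")])

n2v = Node2Vec(G, dimensions=64, walk_length=20, num_walks=100, workers=1)
model = n2v.fit(window=5, min_count=1)       # trains a gensim word2vec model

# Nearest neighbours in embedding space -> candidates similar to cand_42.
print(model.wv.most_similar("cand_42"))
```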

Keywords: AI, machine learning, NLP, recruiting

Procedia PDF Downloads 84
1262 Detecting Heartbeat Architectural Tactic in Source Code Using Program Analysis

Authors: Ananta Kumar Das, Sujit Kumar Chakrabarti

Abstract:

Architectural tactics such as heartbeat, ping-echo, encapsulation, and data encryption are techniques used to achieve quality attributes of a system. Detecting architectural tactics has several benefits: it can aid system comprehension (e.g., of legacy systems) and the estimation of quality attributes such as safety, security, and maintainability. Architectural tactics are typically spread across the source code and are implicit; for large codebases, manual detection is often not feasible, so automated methods of detection are needed. This paper presents a formalization of the heartbeat architectural tactic and a program-analytic approach to detecting this tactic in source code. The proposed method is evaluated on a set of Java applications, and the outcome strongly suggests that the method compares well with a manual approach in terms of sensitivity and specificity, and far supersedes a manual exercise in terms of scalability.
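To illustrate the structural signature such a detector looks for: the heartbeat tactic reduces to a component that periodically emits a liveness signal for a monitor to check. A minimal sketch follows, in Python rather than the Java the paper analyzes.

```python
import threading
import time

def start_heartbeat(send, interval_s=5.0):
    """Heartbeat tactic in miniature: periodically emit a liveness signal."""
    def beat():
        while True:
            send({"status": "alive", "ts": time.time()})  # to the monitor
            time.sleep(interval_s)
    t = threading.Thread(target=beat, daemon=True)        # dies with process
    t.start()
    return t

start_heartbeat(print, interval_s=1.0)       # demo "monitor" is just print
time.sleep(3)
```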

Keywords: software architecture, architectural tactics, detecting architectural tactics, program analysis, AST, alias analysis

Procedia PDF Downloads 160
1261 Prompt Design for Code Generation in Data Analysis Using Large Language Models

Authors: Lu Song Ma Li Zhi

Abstract:

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
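A minimal template in the spirit of this strategy might look like the sketch below; the template text, field names, and constraints are illustrative rather than the authors' exact design.

```python
# Hypothetical prompt template for natural-language-to-analysis-code requests.
TEMPLATE = """You are a data analyst. Respond with runnable Python (pandas) only.

Task: {task}
Data: CSV file at {path}; columns: {columns}
Constraints: load the file, print the result, keep comments inside the code.
"""

def build_prompt(task, path, columns):
    return TEMPLATE.format(task=task, path=path, columns=", ".join(columns))

# The built prompt would then be sent to an LLM chat-completion endpoint.
print(build_prompt("average monthly revenue", "sales.csv", ["date", "revenue"]))
```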

Keywords: large language models, prompt design, data analysis, code generation

Procedia PDF Downloads 39
1260 Simulation of Reflectometry in Alborz Tokamak

Authors: S. Kohestani, R. Amrollahi, P. Daryabor

Abstract:

Microwave diagnostics such as reflectometry are receiving growing attention in magnetic confinement fusion research. To obtain a better understanding of plasma confinement physics, more detailed measurements of the density profile and its fluctuations may be required. A 2D full-wave simulation of ordinary-mode propagation has been written in an effort to model effects seen in reflectometry experiments. The code uses the finite-difference time-domain method with a perfectly-matched-layer absorbing boundary to solve Maxwell's equations, and it has been used to simulate the reflectometer measurement in the Alborz Tokamak.
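For a feel of the method, the sketch below is a bare-bones 1D FDTD loop in normalised units; the paper's code is 2D with a perfectly matched layer, and the grid size, source position, and step count here are arbitrary.

```python
import numpy as np

nx, nt = 400, 1000
ez = np.zeros(nx)                # E field on the integer grid
hy = np.zeros(nx - 1)            # H field on the staggered half-grid

for t in range(nt):
    hy += np.diff(ez)                              # update H from curl E
    ez[1:-1] += np.diff(hy)                        # update E from curl H
    ez[50] += np.exp(-((t - 60.0) / 15.0) ** 2)    # soft Gaussian source
```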

Keywords: reflectometry, simulation, ordinary mode, tokamak

Procedia PDF Downloads 420
1259 Redefining Infrastructure as Code Orchestration Using AI

Authors: Georges Bou Ghantous

Abstract:

This research delves into the transformative impact of Artificial Intelligence (AI) on Infrastructure as Code (IaaC) practices, specifically focusing on the redefinition of infrastructure orchestration. By harnessing AI technologies such as machine learning algorithms and predictive analytics, organizations can achieve unprecedented levels of efficiency and optimization in managing their infrastructure resources. AI-driven IaaC introduces proactive decision-making through predictive insights, enabling organizations to anticipate and address potential issues before they arise. Dynamic resource scaling, facilitated by AI, ensures that infrastructure resources can seamlessly adapt to fluctuating workloads and changing business requirements. Through case studies and best practices, this paper sheds light on the tangible benefits and challenges associated with AI-driven IaaC transformation, providing valuable insights for organizations navigating the evolving landscape of digital infrastructure management.

Keywords: artificial intelligence, infrastructure as code, efficiency optimization, predictive insights, dynamic resource scaling, proactive decision-making

Procedia PDF Downloads 34
1258 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly, and these flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. C and C++ open-source code is now available for building a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits, and we developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
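One of the classifiers named above, token embeddings feeding a bidirectional LSTM with a binary head, might be sketched as follows in Keras; the vocabulary size, dimensions, and layer widths are illustrative, not the authors' configuration.

```python
import tensorflow as tf

# Input: padded sequences of integer token ids; output: P(function vulnerable).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None,)),                          # token ids
    tf.keras.layers.Embedding(input_dim=20_000, output_dim=128),   # token vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),       # BiLSTM encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),                # vulnerable?
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```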

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 89
1257 Clinch Process Simulation Using Diffuse Elements

Authors: Benzegaou Ali, Brani Benabderrahmane

Abstract:

This work describes a numerical study of the TOX clinching process using diffuse elements. A computer code named SEMA ("Static Explicit Method Analysis") was developed to simulate the clinch joining process. The FE code is based on an updated Lagrangian scheme, and the resolution method on a static explicit approach. The integration of the elasto-plastic constitutive law is carried out with the algorithm of Simo and Taylor, and the tools are represented by plane facets.

Keywords: diffuse elements, numerical simulation, clinching, contact, large deformation

Procedia PDF Downloads 363
1256 Regional Review of Outcome of Cervical Smears Reported with Cytological Features of Non Cervical Glandular Neoplasia

Authors: Uma Krishnamoorthy, Vivienne Beavers, Janet Marshall

Abstract:

Introduction: Cervical cytology showing features that raise the suspicion of non-cervical glandular neoplasia is reported as code 0 under the United Kingdom National Health Service Cervical Screening Programme (NHSCSP). As the suspicion concerns non-cervical neoplasia, the smear itself is reported as normal and the patient is informed that her cervical screening result is normal; the GP receives a copy of the results in which the indication for further referral appears only in small font within the text of the report. Background: Several incidents of delayed diagnosis of endometrial cancer in Lancashire prompted this Northwest regional review, to build an understanding of the underlying pathology outcomes of code 0 smears, to raise awareness, and to consider whether further action on the wording of smear reports was indicated to prevent such delays. Methodology: All smears reported between March 2013 and March 2014 at the Manchester cytology centre, which processes cytology for the Lancashire population, were reviewed, and the histological outcomes of women whose smears were reported as code 0 were reviewed retrospectively. Results: The cytology centre reported approximately 109,400 smears during this period, of which 49 were issued with result code 0. In three-quarters (37) of the women with a code 0 smear, underlying pathology of non-cervical origin was confirmed. Of these, 36 (73%) were due to endometrial pathology: endometrial carcinoma in 24 (49%), polyps in 6 (12%), atypical endometrial hyperplasia in 2 (4%), endometrial hyperplasia without atypia in 3 (6%), and adenomyosis in 1 (2%); 1 further case (2%) was due to ovarian adenocarcinoma. Conclusion: This review demonstrated that more than half (51%) of the women with a code 0 smear report were diagnosed with an underlying carcinoma, and 75% had confirmed underlying pathology contributing to the code 0 finding. Recommendations and Action Plan: A local rapid-access referral and management pathway for this group of women was implemented in our unit as a result. The findings and pathway were shared with the other regional units served by the cytology centre, through the Pan-Lancashire cervical screening board and through the cytology centre itself. Locally, the smear report wording was updated to include a stamp or print in bold red letters stating "URGENT REFERRAL TO GYNAECOLOGY IS INDICATED". The findings were also shared through the Pan-Lancashire board with the national cervical screening programme board, and revisions to the wording of code 0 smear reports to highlight the need for urgent referral have now been agreed at the national level and will be implemented.

Keywords: code zero smears, endometrial cancer, non cervical glandular neoplasia, ovarian cancer

Procedia PDF Downloads 297
1255 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results, by coloring items according to their class or a feature value; and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped to the same exact position, making them indistinguishable, and such a model is unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with each newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled: for a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates the identification of significant trends and changes, enabling the monitoring of high-dimensional datasets' dynamics.
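A rough approximation of the idea can be had with scikit-learn alone, since TSNE accepts a coordinate array as its init argument: seeding each new embedding with the previous coordinates keeps cluster positions comparable across snapshots. This mirrors the goal, not the paper's two-cost optimization.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_t0 = rng.normal(size=(500, 50))                   # first snapshot (toy data)
X_t1 = X_t0 + 0.05 * rng.normal(size=X_t0.shape)    # drifted second snapshot

emb_t0 = TSNE(init="pca", random_state=0).fit_transform(X_t0)
# Seed the next run with the previous coordinates so clusters stay put.
emb_t1 = TSNE(init=emb_t0, random_state=0).fit_transform(X_t1)
```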

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 144
1254 Steady State Modeling and Simulation of an Industrial Steam Boiler

Authors: Amina Lyria Deghal Cheridi, Abla Chaker, Ahcene Loubar

Abstract:

The RELAP5 system code is one of the powerful tools used in the area of design and safety evaluation. This work aims to simulate the behavior of a radiant steam boiler at steady-state conditions using the RELAP5 system code. To perform this study, a detailed RELAP5 model was built, including all parts of the steam boiler as well as its control and regulation systems. To reproduce the most important parameters and phenomena with acceptable accuracy and fidelity, a thorough qualification of the facility nodalization was undertaken, consisting of a comparison between the code results and the available plant data in steady-state operation mode. The model qualification results at steady state are in good agreement with the steam boiler experimental data: the RELAP5 model proved satisfactory and was capable of predicting the main thermal-hydraulic steady-state conditions of the steam boiler.

Keywords: industrial steam boiler, model qualification, natural circulation, relap5/mod3.2, steady state simulation

Procedia PDF Downloads 271
1253 Running the Athena Vortex Lattice Code in JAVA through the Java Native Interface

Authors: Paul Okonkwo, Howard Smith

Abstract:

This paper describes a methodology for integrating the Athena Vortex Lattice aerodynamic software for automated operation in a multivariate optimisation of the Blended Wing Body aircraft. The Athena Vortex Lattice (AVL) code, developed at the Massachusetts Institute of Technology, allows for the aerodynamic analysis of aircraft using the vortex lattice method. Ordinarily, AVL operation requires a text file containing the aircraft geometry to be loaded into the AVL solver in order to determine the aerodynamic forces and moments; automated operation, however, is required to enable integration into a multidisciplinary optimisation framework. Automated AVL operation within the JAVA design environment nonetheless requires a modification and recompilation of the AVL source code into an executable capable of running on Windows and other platforms without the X11 libraries. This paper describes the procedure for integrating the FORTRAN-written AVL software for automated operation within the multivariate design synthesis optimisation framework for the conceptual design of the BWB aircraft.
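Not the paper's JNI route, but for comparison, a common lighter-weight way to automate AVL is to pipe console commands to the executable. The sketch below assumes an avl binary on the PATH and a geometry file named bwb.avl; both names are illustrative.

```python
import subprocess

# AVL console session: load geometry, enter the OPER menu, execute, quit.
commands = "\n".join(["load bwb.avl", "oper", "x", "", "quit"])

result = subprocess.run(["avl"], input=commands, text=True,
                        capture_output=True, check=True)
print(result.stdout)     # forces and moments are parsed from this output
```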

Keywords: aerodynamics, automation, optimisation, AVL, JNI

Procedia PDF Downloads 565
1252 ROOP: Translating Sequential Code Fragments to Distributed Code Fragments Using Deep Reinforcement Learning

Authors: Arun Sanjel, Greg Speegle

Abstract:

Every second, massive amounts of data are generated, and Data Intensive Scalable Computing (DISC) frameworks have evolved into effective tools for analyzing them. Since the underlying architecture of these distributed computing platforms is often new to users, building a DISC application can be time-consuming and error-prone, so the automated conversion of a sequential program to a DISC program would significantly improve productivity. However, synthesizing a user's intended program from an input specification is complex, with several important applications such as distributed program synthesis and code refactoring. Existing works such as Tyro and Casper rely entirely on deductive synthesis techniques or similar program synthesis approaches. Our approach is to develop a data-driven synthesis technique that identifies sequential components and translates them to equivalent distributed operations, using reinforcement learning and unit testing as feedback mechanisms.
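The kind of rewrite being targeted can be shown by hand: a sequential aggregation next to its distributed PySpark equivalent, written here manually for illustration; producing the distributed version automatically is what the learned synthesizer is for.

```python
from pyspark.sql import SparkSession

data = list(range(1_000_000))

# Sequential fragment: sum of squares in plain Python.
total_seq = sum(x * x for x in data)

# Equivalent distributed fragment on a Spark RDD.
spark = SparkSession.builder.appName("roop-demo").getOrCreate()
total_dist = spark.sparkContext.parallelize(data).map(lambda x: x * x).sum()

assert total_seq == total_dist   # a unit test as the feedback signal
```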

Keywords: program synthesis, distributed computing, reinforcement learning, unit testing, DISC

Procedia PDF Downloads 106
1251 REST API Based System-Level Test Automation for Mobile Applications

Authors: Jisoo Song

Abstract:

Today's mobile applications communicate with servers more and more in order to access external services or information, and server-side code changes are more frequent than client-side changes. These frequent changes lead to an increase in testing cost. To reduce that cost, UI-based test automation, a common technique in system-level testing, can be one solution, but it can be unsuitable for mobile applications: automating tests based on UI elements brings limitations such as the overhead of script maintenance and the difficulty of finding invisible defects that UI elements cannot represent. To overcome these limitations, we present a new automation technique based on REST APIs. System-level tests are automated through scripts that call a series of REST APIs following a user's action sequence. The technique does not require testers to know internal implementation details, only the inputs and expected outputs of each REST API. Test cases can be modified easily by changing REST API input values, and problems that might not be evident at the UI level can be found by validating output values. For example, when an application receives price information from a payment server that the user cannot see at the UI level, REST-API-based scripts can check whether the price information is correct. More than ten mobile applications at our company are tested automatically with REST API scripts whenever the application source code, mostly server code, is built: the scripts run as a build job in our CI server, which starts when the application code build completes, so defects are found right away. This presentation also includes field cases from our company.
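A system-level test in this style might look like the sketch below, using the requests library; the endpoints, payloads, and expected fields are invented for illustration, not the company's actual API.

```python
import requests

BASE = "https://api.example.com"     # hypothetical backend under test

def test_price_matches_payment_server():
    session = requests.Session()
    login = session.post(f"{BASE}/login",
                         json={"user": "tester", "password": "secret"})
    assert login.status_code == 200

    # Validate a value the UI never shows directly.
    price = session.get(f"{BASE}/items/123/price")
    assert price.status_code == 200
    body = price.json()
    assert body["currency"] == "KRW"
    assert body["amount"] > 0
```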

Keywords: case studies at SK Planet, introduction of REST API based test automation, limitations of UI based test automation

Procedia PDF Downloads 448
1250 Pallet Tracking and Cost Optimization of the Flow of Goods in Logistics Operations by Serial Shipping Container Code

Authors: Dominika Crnjac Milic, Martina Martinovic, Vladimir Simovic

Abstract:

The case study method in this paper shows the implementation of information technology (IT) and the Serial Shipping Container Code (SSCC) in a Croatian company that deals with logistics operations and provides logistics services in the cold chain segment. This company is aware of the sensitivity of the goods entrusted to it by the users of its services, as well as of the importance of speed and accuracy in providing logistics services, and to that end it has implemented and used the latest IT to ensure the highest standard of logistics service for its customers. Looking for efficiency and optimization of supply chain management, while maintaining a high level of quality in the products sold, today's users of outsourced logistics services are open to the implementation of new IT products that ultimately deliver savings. By analysing the positive results and the difficulties that arise when using this technology, we aim to provide insight into its potential for logistics service providers.

Keywords: logistics operations, serial shipping container code, information technology, cost optimization

Procedia PDF Downloads 360
1249 Model Driven Architecture Methodologies: A Review

Authors: Arslan Murtaza

Abstract:

Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development in which different models are proposed and then converted into code. The main idea is to capture the task in a Platform Independent Model (PIM), transform it into a Platform Specific Model (PSM), and then convert that into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g., CIM, PIM, and PSM), and an evaluation of MDA-based methodologies.

Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies

Procedia PDF Downloads 458
1248 Integrating the Athena Vortex Lattice Code into a Multivariate Design Synthesis Optimisation Platform in JAVA

Authors: Paul Okonkwo, Howard Smith

Abstract:

This paper describes a methodology for integrating the Athena Vortex Lattice aerodynamic software for automated operation in a multivariate optimisation of the Blended Wing Body aircraft. The Athena Vortex Lattice (AVL) code, developed at the Massachusetts Institute of Technology by Mark Drela, allows for the aerodynamic analysis of aircraft using the vortex lattice method. Ordinarily, AVL operation requires a text file containing the aircraft geometry to be loaded into the AVL solver in order to determine the aerodynamic forces and moments; automated operation, however, is required to enable integration into a multidisciplinary optimisation framework. Automated AVL operation within the JAVA design environment nonetheless requires a modification and recompilation of the AVL source code into an executable capable of running on Windows and other platforms without the X11 libraries. This paper describes the procedure for integrating the FORTRAN-written AVL software for automated operation within the multivariate design synthesis optimisation framework for the conceptual design of the BWB aircraft.

Keywords: aerodynamics, automation, optimisation, AVL, JNI

Procedia PDF Downloads 582