Search results for: Testing techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3406

2566 HaskellFL: A Tool for Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

Understanding and using the functional paradigm is a challenge for many programmers. Looking for logical errors in code can consume a great deal of a developer's time once a program grows in size. To ease both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate logical errors in Haskell code. The Haskell subset supported in this work is sufficiently expressive for students of Functional Programming to get immediate help debugging their code and to answer questions about key concepts of the functional paradigm. HaskellFL was tested against Functional Programming assignments submitted by students enrolled in the Functional Programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. The EXAM score was chosen to evaluate the tool's effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
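
As an illustrative aside, the Tarantula and Ochiai suspiciousness formulas and the EXAM score mentioned above are standard in the fault localization literature; the following minimal Python sketch (not taken from HaskellFL itself) shows how they are typically computed from per-statement test coverage counts.

```python
import math

def tarantula(failed_s, passed_s, total_failed, total_passed):
    """Tarantula suspiciousness of a statement s.
    failed_s / passed_s: number of failing / passing tests that execute s."""
    fail_ratio = failed_s / total_failed if total_failed else 0.0
    pass_ratio = passed_s / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

def ochiai(failed_s, passed_s, total_failed, total_passed):
    """Ochiai suspiciousness of a statement s."""
    denom = math.sqrt(total_failed * (failed_s + passed_s))
    return failed_s / denom if denom else 0.0

def exam_score(ranked_statements, faulty):
    """EXAM score: fraction of statements inspected before the fault is reached."""
    return (ranked_statements.index(faulty) + 1) / len(ranked_statements)

# Example: a statement executed by 3 of 4 failing tests and 1 of 10 passing tests.
print(tarantula(3, 1, 4, 10))  # ~0.88
print(ochiai(3, 1, 4, 10))     # 3 / sqrt(4 * 4) = 0.75
```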

Keywords: Debug, fault localization, functional programming, Haskell.

2565 Advanced Geolocation of IP Addresses

Authors: Robert Koch, Mario Golling, Gabi Dreo Rodosek

Abstract:

Tracing and locating the geographical position of users (geolocation) is used extensively in today's Internet. Whenever we request a page from Google, for example, we are automatically forwarded to the page in the relevant language unless a specific configuration has been made, and, among other things, location-specific advertisements are presented. Geolocation has a particularly significant impact in the area of network security. Because of the way the Internet works, attacks can be launched from almost anywhere; for attribution, knowledge of the origin of an attack, and thus geolocation, is therefore mandatory in order to trace back an attacker. In addition, geolocation can also be used very successfully to increase the security of a network during operation, i.e., before an intrusion has actually taken place. Similar to greylisting for email, geolocation makes it possible (i) to correlate detected attacks with new connections and (ii), as a consequence, to classify traffic a priori as more suspicious and thus inspect it in more detail. Although numerous geolocation techniques exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods that allows improved and optimized geolocation. We therefore present our architecture for improved geolocation, designing a new algorithm that combines several geolocation techniques to increase accuracy.
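
As a purely illustrative sketch of the idea of combining several geolocation techniques, the snippet below weights each technique's latitude/longitude estimate by the inverse of its expected error; the function, the technique names in the comments, and the numbers are hypothetical and do not describe the algorithm of the paper.

```python
def combine_estimates(estimates):
    """Weighted combination of geolocation estimates.

    estimates: list of (lat, lon, expected_error_km) tuples, one per technique
    (e.g., a GeoIP database lookup, RTT-based multilateration, registry data).
    Each estimate is weighted by the inverse of its expected error, so more
    precise techniques dominate the combined fix."""
    weights = [1.0 / max(err, 1e-6) for _, _, err in estimates]
    total = sum(weights)
    lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return lat, lon

# Example: three hypothetical techniques with different confidence levels.
print(combine_estimates([(48.14, 11.57, 25.0),    # GeoIP database
                         (48.40, 11.90, 120.0),   # latency multilateration
                         (48.10, 11.60, 40.0)]))  # registry-based hint
```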

Keywords: IP geolocation, prosecution of computer fraud, attack attribution, target-analysis

2564 Importance of Public Communication Campaigns and Art Activities in Social Education

Authors: Bilgehan Gültekin, Tuba Gültekin

Abstract:

Universities play an important role in social education in many respects. In terms of creating awareness and convincing the public about social issues, universities take a leading position. The best way to secure public support for social education is to develop public communication campaigns. The aim of this study is to present a public communication model that can guide social education practices. The study, titled "Importance of Public Communication Campaigns and Art Activities in Social Education", covers the following topics: the effects of public communication campaigns on social education, public relations techniques for education, communication strategies, the steps of public relations campaigns in social education, crafting persuasive messages for public communication campaigns, and developing artistic messages and organizing art activities in social education. In addition, media planning for social education, forming a team of campaign managers, dialogue with opinion leaders in education, and preparing creative communication models for social education are taken into consideration. This study also critically examines social education case studies in Turkey. At the same time, some communicative methods and principles are presented in the light of communication campaigns within the context of this paper.

Keywords: Art activities in social education, Persuasive communication, Public communication campaigns, Public relations techniques for education

2563 Carbon-Based Electrochemical Detection of Pharmaceuticals from Water

Authors: M. Ardelean, F. Manea, A. Pop, J. Schoonman

Abstract:

The presence of pharmaceuticals in the environment, and especially in water, has gained increasing attention. They belong to an emerging class of pollutants, and for most of them legal limits have not been set, either because their impact on human health and the ecosystem has not been determined or because advanced analytical methods for their quantification are not yet available. In this context, the development of advanced analytical methods for the quantification of pharmaceuticals in water is required. Electrochemical methods are known to hold great potential for high-performance analysis, but their performance depends directly on the electrode material and the operating technique. In this study, two types of carbon-based electrode materials, boron-doped diamond (BDD) and carbon nanofiber (CNF)-epoxy composite electrodes, were investigated through voltammetric techniques for the detection of naproxen in water. The comparative electrochemical behavior of naproxen (NPX) on both BDD and CNF electrodes was studied by cyclic voltammetry, and a well-defined peak corresponding to NPX oxidation was found for each electrode. NPX oxidation occurred at a potential of about +1.4 V/SCE (saturated calomel electrode) on the BDD electrode and at about +1.2 V/SCE on the CNF electrode. The sensitivities for NPX detection were similar for both carbon-based electrodes, and thus the CNF electrode showed an advantage with respect to the detection potential. Differential-pulse voltammetry (DPV) and square-wave voltammetry (SWV) techniques were exploited to improve the electroanalytical performance for NPX detection, and the best result, a sensitivity of 9.959 µA·µM-1, was achieved using DPV. In addition, the simultaneous detection of NPX and fluoxetine, a very common antidepressant drug also present in water, was studied using the CNF electrode, and very good results were obtained. The detection potential values, which allowed a good separation of the detection signals, together with the good sensitivities were appropriate for the simultaneous detection of both tested pharmaceuticals. These results recommend the CNF electrode as a valuable tool for the individual or simultaneous detection of pharmaceuticals in water.

Keywords: Boron-doped diamond electrode, carbon nanofiber-epoxy composite electrode, emerging pollutants, pharmaceuticals.

2562 A Fast Replica Placement Methodology for Large-scale Distributed Computing Systems

Authors: Samee Ullah Khan, C. Ardil

Abstract:

Fine-grained data replication over the Internet allows frequently accessed data objects, as opposed to entire sites, to be duplicated at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism studies what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that the agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions, and genetic algorithms.
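
One standard auction-based building block for handling selfish bidders is the sealed-bid second-price (Vickrey) auction, in which truthful bidding is a dominant strategy; the sketch below is a generic illustration of that idea, not the specific mechanism proposed in the paper.

```python
def second_price_auction(bids):
    """Sealed-bid second-price auction for hosting a replica of one object.

    bids: dict mapping a site/agent id to its bid (the benefit the agent
    reports for hosting the replica, e.g. saved communication cost).
    The highest bidder wins but pays the second-highest bid, which makes
    truthful bidding a dominant strategy for selfish agents."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Example: three sites bidding for a replica of a popular object.
print(second_price_auction({"siteA": 17.0, "siteB": 11.5, "siteC": 9.0}))
# -> ('siteA', 11.5)
```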

Keywords: Data replication, auctions, static allocation, pricing.

2561 Questions Categorization in E-Learning Environment Using Data Mining Technique

Authors: Vilas P. Mahatme, K. K. Bhoyar

Abstract:

Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities are offering online courses, and for evaluation purposes e-examination systems are being widely adopted in academic environments, where multiple-choice tests are extremely popular. In moving away from traditional examinations to e-examination, Moodle is used as the Learning Management System (LMS); it logs every click that students make for answering and navigation purposes during an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics, and in recent years there has been increasing interest in its use in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance. The combination of data mining and e-learning is still in its infancy. The log data generated by students during an online examination can be used to discover knowledge with the help of data mining techniques. In web-based applications, the number of right and wrong answers in a test result is not sufficient to assess and evaluate a student's performance, so assessment techniques must be intelligent. If a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on this issue. Data mining techniques, specifically clustering, are used in this work. The method decides the difficulty level of each question and categorizes it as tough, easy, or moderate, so that questions can later be served to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their performance in the examination, which helps the instructor to guide the students more specifically. In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
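
A minimal sketch of the clustering step, assuming per-question statistics (fraction of correct answers, mean time spent) have already been mined from the Moodle logs; the feature values are hypothetical, and k-means with three clusters stands in for whatever clustering algorithm the paper actually uses.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per question: [fraction of correct answers, mean time spent in seconds].
# Hypothetical stand-ins for statistics mined from the e-examination logs.
X = np.array([[0.92, 35], [0.88, 40], [0.55, 90],
              [0.60, 80], [0.20, 150], [0.15, 170]], dtype=float)

Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

# Name the clusters by mean correctness: highest -> easy, lowest -> tough.
order = np.argsort([-X[labels == c, 0].mean() for c in range(3)])
names = {int(order[0]): "easy", int(order[1]): "moderate", int(order[2]): "tough"}
for q, c in enumerate(labels):
    print(f"question {q}: {names[int(c)]}")
```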

Keywords: Data mining, e-examination, e-learning, moodle.

2560 The Effect of Addition of Dioctyl Terephthalate and Calcite on the Tensile Properties of Organoclay/Linear Low Density Polyethylene Nanocomposites

Authors: A. Gürses, Z. Eroğlu, E. Şahin, K. Güneş, Ç. Doğar

Abstract:

In recent years, polymer/clay nanocomposites have generated great interest in the polymer industry as a new type of composite material because of their superior properties, which include high heat deflection temperature, gas barrier performance, dimensional stability, enhanced mechanical properties, optical clarity, and flame retardancy compared with the pure polymer or conventional composites. This study investigates the change in the tensile properties of organoclay/linear low density polyethylene (LLDPE) nanocomposites with the use of dioctyl terephthalate (DOTP) as plasticizer and calcite as filler. The synthesized composites and organoclay were characterized using techniques such as XRD, HRTEM, and FTIR. The spectroscopic results indicate that the organoclay platelets were well dispersed within the polymeric matrix. The tensile properties of the composites were compared by considering the stress-strain curve drawn for each composite and for the pure polymer. It was observed that the composites prepared by adding the plasticizer at different ratios and a certain amount of calcite exhibited different tensile behavior compared to the pure polymer.

Keywords: Linear low density polyethylene, nanocomposite, organoclay, plasticizer.

2559 High-Frequency Spectrum Analysis of VFTO Generated inside Gas Insulated Substations

Authors: M. A. Abd-Allah, A. Said, Ebrahim A. Badran

Abstract:

Worldwide, many electrical equipment insulation failures caused by switching operations have been reported, even though the equipment had previously passed all the standard tests and complied with all quality requirements. The problem is mostly associated with high-frequency overvoltages generated during the opening or closing of a switching device. The transients generated during switching operations in a Gas Insulated Substation (GIS) contain high-frequency components in the order of a few tens of MHz. The frequency spectrum of the VFTO generated in the 220/66 kV Wadi-Hoff GIS is analyzed using the Fast Fourier Transform technique. The main frequency range with high voltage amplitude due to the operation of the disconnector (DS5) is 5 to 10 MHz, with the highest amplitude at 9 MHz. The main frequency range with high voltage amplitude due to the operation of the circuit breaker (CB5) is 1 to 25 MHz, with the highest amplitude at 2 MHz. Mitigation techniques damped the oscillating frequencies effectively. Using a cable terminal reduced the frequency oscillation more effectively than an OHTL terminal, and using a shunt capacitance eliminated the high-frequency components. Ferrite rings reduce the high-frequency components effectively, especially in the range 2 to 7 MHz, and RC and RL filters also eliminated the high-frequency components.
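
A minimal sketch of the FFT-based spectrum analysis on a synthetic, decaying-oscillation waveform; the signal parameters are invented for illustration and are not the simulated Wadi-Hoff GIS data.

```python
import numpy as np

fs = 200e6                       # sampling rate: 200 MHz
t = np.arange(0, 20e-6, 1 / fs)  # 20 microseconds of a synthetic VFTO-like signal

# Synthetic transient: decaying oscillations at 2 MHz and 9 MHz on a 50 Hz wave.
v = (1.0 * np.sin(2 * np.pi * 50 * t)
     + 0.8 * np.exp(-t / 5e-6) * np.sin(2 * np.pi * 9e6 * t)
     + 0.5 * np.exp(-t / 8e-6) * np.sin(2 * np.pi * 2e6 * t))

spectrum = np.abs(np.fft.rfft(v)) / len(v)
freqs = np.fft.rfftfreq(len(v), d=1 / fs)

# Report the dominant components above 1 MHz.
mask = freqs > 1e6
top = np.argsort(spectrum[mask])[-3:]
for f, a in zip(freqs[mask][top], spectrum[mask][top]):
    print(f"{f / 1e6:6.2f} MHz  amplitude {a:.3f}")
```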

Keywords: GIS, VFTO, Mitigation Techniques, Frequency spectrum, FFT, EMTP/ATP.

2558 Hybrid Intelligent Intrusion Detection System

Authors: Norbik Bashah, Idris Bharanidharan Shanmugam, Abdul Manan Ahmed

Abstract:

Intrusion Detection Systems are increasingly a key part of system defense. Various approaches to intrusion detection are currently being used, but they are relatively ineffective. Artificial intelligence plays a driving role in security services. This paper proposes a dynamic, intelligent intrusion detection system model based on a specific AI approach to intrusion detection. The techniques investigated include neural networks and fuzzy logic with network profiling, which uses simple data mining techniques to process the network data. The proposed system is a hybrid that combines anomaly, misuse, and host-based detection. Simple fuzzy rules allow us to construct if-then rules that reflect common ways of describing security attacks. For host-based intrusion detection, we use neural networks along with self-organizing maps. Suspicious intrusions can be traced back to their original source path, and any traffic from that particular source can be redirected back to it in the future. Both network traffic and system audit data are used as inputs.
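
A minimal sketch of fuzzy if-then rules for scoring the suspicion of a connection, using min as fuzzy AND and max as fuzzy OR over the rule outputs; the membership thresholds and rules are hypothetical, not those used in the proposed system.

```python
def high(x, low, full):
    """Piecewise-linear membership: 0 below `low`, 1 above `full`."""
    if x <= low:
        return 0.0
    if x >= full:
        return 1.0
    return (x - low) / (full - low)

def attack_suspicion(conn_rate, failed_logins, distinct_ports):
    """Fuzzy rules such as 'IF rate is HIGH AND ports is HIGH THEN suspicious'."""
    r1 = min(high(conn_rate, 50, 200), high(distinct_ports, 10, 100))  # port scan
    r2 = min(high(failed_logins, 3, 10), high(conn_rate, 5, 50))       # brute force
    return max(r1, r2)

print(attack_suspicion(conn_rate=120, failed_logins=1, distinct_ports=60))  # ~0.47
```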

Keywords: Intrusion Detection, Network Security, Data mining, Fuzzy Logic.

2557 EEIA: Energy Efficient Indexed Aggregation in Smart Wireless Sensor Networks

Authors: Mohamed Watfa, William Daher, Hisham Al Azar

Abstract:

The main idea behind in-network aggregation is that, rather than sending individual data items from sensors to sinks, multiple data items are aggregated as they are forwarded through the sensor network. Existing sensor network data aggregation techniques assume that the nodes are preprogrammed and send data to a central sink for offline querying and analysis. This approach faces two major drawbacks. First, the system behavior is preprogrammed and cannot be modified on the fly. Second, the increased energy wastage due to the communication overhead decreases the overall system lifetime. Thus, energy conservation is of prime consideration in sensor network protocols in order to maximize the network's operational lifetime. In this paper, we give an energy-efficient approach to query processing by implementing new optimization techniques applied to in-network aggregation. We first discuss earlier approaches to sensor data management and highlight their disadvantages. We then present our approach, "Energy Efficient Indexed Aggregation" (EEIA), and evaluate it through several simulations to prove its efficiency, competence, and effectiveness.

Keywords: Sensor Networks, Data Base, Data Fusion, Aggregation, Indexing, Energy Efficiency

2556 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector

Authors: S. Jeyarajan

Abstract:

A concept of knowledge is derived from a study of successful competition among Sri Lankan non-state higher educational institutes. The concept is built from knowledge management practices collected from reputed literature, such as Emerald Insight, and from the non-state higher education sector itself. A test was conducted to reveal whether these collected practices exist in Sri Lankan non-state higher education institutes and the reasons behind their existence. The absence of such studies in the Sri Lankan context and uncertainty about the number of participants available for data collection led to the choice of a qualitative research method, which used attributes of the Delphi method to manage that uncertainty. Data were collected using a dramaturgical method, which supports efficient use of the Delphi method, and were gathered systematically through perspective and modified snowball sampling techniques. Grounded theory was selected as the data analysis technique and was conducted in intermixed discourses to manage the different perspectives in the data. The analysis reveals agreement between the resulting grounded theories and the findings of a foreign study, even though the present study was conducted as qualitative research and the foreign study as quantitative research; the present study thus widens the discovery of the foreign study. Furthermore, having uncovered the reasons behind the existence of these practices, the present results offer a concept of knowledge from the Sri Lankan non-state sector for managing higher educational institutes successfully.

Keywords: Adherence of snowball sampling into perspective sampling, Delphi method in qualitative method, grounded theory development in intermix discourses of analysis, knowledge management for success of higher educational institutes.

2555 Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM

Authors: Hadeer R. M. Tawfik, Rania A. K. Birry, Amani A. Saad

Abstract:

Eyes are considered to be among the most sensitive and important organs of the human body, and thus any eye disorder affects the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used as the feature extraction technique for a dataset of 120 eye images, followed by a classification process that assigned the images to three classes: normal, early, and advanced stage. A comparison between the two classifiers used, the support vector machine (SVM) and the artificial neural network (ANN), was carried out on the same dataset of 120 eye images. It was concluded that the SVM gave better results than the ANN: the SVM achieved 96.8% accuracy, whereas the ANN achieved 92.3% accuracy.
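
A minimal sketch of a wavelet-feature-plus-SVM pipeline, assuming grey-level eye images are available as arrays; the data here are random placeholders, the features are simple 2-D DWT sub-band energies, and the 2D Log Gabor stage of the paper is omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def wavelet_features(img):
    """Mean absolute value of each 2-D DWT sub-band as a simple feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
    return np.array([np.mean(np.abs(b)) for b in (cA, cH, cV, cD)])

# Placeholder data: random "eye images" with labels 0=normal, 1=early, 2=advanced.
rng = np.random.default_rng(0)
images = rng.random((120, 64, 64))
labels = rng.integers(0, 3, size=120)

X = np.array([wavelet_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy on placeholder data:", clf.score(X_te, y_te))
```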

Keywords: Cataract, classification, detection, feature extraction, grading, log-gabor, neural networks, support vector machines, wavelet.

2554 Functionality of Negotiation Agent on Value-based Design Decision

Authors: Arazi Idrus, Christiono Utomo

Abstract:

This paper presents the functionality of a negotiation agent for value-based design decisions. The functionality is based on the characteristics of the system and the goal specification. A Prometheus Design Tool model was used for developing the system. Group functionality is the attribute of the negotiation agents, which comprise a coordinator agent and a decision-maker agent. The results of testing the system on a building system selection in a value-based decision environment are also presented.

Keywords: Functionality, negotiation agent, value-based decision

2553 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities such as lexical, syntactic, semantic, anaphoric and referential. This study is focused mainly on solving the issue of Lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing. Lemma adds generality and POS adds properties of word into token. We have designed a method to create an affinity matrix to calculate the affinity between any pair of lemma_POS (a token where lemma and POS of word are joined by underscore) of given training set. Additionally, we have devised an algorithm to create the sense clusters of tokens using affinity matrix under hierarchy of POS of lemma. Furthermore, three different mechanisms to predict the sense of target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of target word with some value and whichever sense gets higher value becomes the sense of target word. So, contextual tokens play a key role in creating sense clusters and predicting the sense of target word, hence, the model is named Contextual SenSe Model (CSM). CSM exhibits a noteworthy simplicity and explication lucidity in contrast to contemporary deep learning models characterized by intricacy, time-intensive processes, and challenging explication. CSM is trained on SemCor training data and evaluated on SemEval test dataset. The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
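
A minimal sketch of the prediction step, assuming an affinity matrix between context lemma_POS tokens and candidate sense clusters has already been built; the token names, sense labels, and affinity values are toy placeholders, not values learned from SemCor.

```python
from collections import defaultdict

# Toy affinities between context lemma_POS tokens and sense clusters of the
# target "bank_NOUN" (hypothetical numbers, not learned from SemCor).
affinity = {
    ("river_NOUN", "bank%shore"): 0.9, ("water_NOUN", "bank%shore"): 0.7,
    ("money_NOUN", "bank%finance"): 0.8, ("loan_NOUN", "bank%finance"): 0.9,
    ("river_NOUN", "bank%finance"): 0.1, ("money_NOUN", "bank%shore"): 0.05,
}

def predict_sense(context_tokens, senses):
    """Each context token adds its affinity to every candidate sense;
    the sense with the highest accumulated score wins."""
    score = defaultdict(float)
    for tok in context_tokens:
        for s in senses:
            score[s] += affinity.get((tok, s), 0.0)
    return max(score, key=score.get), dict(score)

print(predict_sense(["river_NOUN", "water_NOUN"], ["bank%shore", "bank%finance"]))
# -> ('bank%shore', {'bank%shore': 1.6, 'bank%finance': 0.1})
```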

Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.

2552 Using Non-Linear Programming Techniques in Determination of the Most Probable Slip Surface in 3D Slopes

Authors: M. M. Toufigh, A. R. Ahangarasr, A. Ouria

Abstract:

Among the many different methods used for optimizing engineering problems, mathematical (numerical) optimization techniques are very important because they are easy to use and consistent with most engineering problems. Many studies have been carried out on the stability analysis of three-dimensional (3D) slopes, the related probable slip surfaces, and the determination of factors of safety, but in most of them the force equilibrium equations, as in simplified 2D methods, are considered in only two directions. In other words, to reduce the mathematical calculations and for simplicity, the force equilibrium equation in the third direction is omitted. This point is considered in only a few previous studies, and most of them give only a factor of safety without making enough effort to find the most probable slip surface. In this study, the shapes of the slip surfaces are modeled and safety factors are calculated considering the force equilibrium equations in all three directions; the moment equilibrium equation is also satisfied in the slip direction, and nonlinear programming techniques are used to determine the shape of the most probable slip surface. The model used in this study is a 3D model composed of three upper surfaces that can cover all defined and probable slip surfaces. The meshing process is performed such that all elements are prismatic with quadrilateral cross sections, and the safety factor is defined on the quadrilateral surface at the base of each element, which forms part of the whole slip surface. The method used to find the most probable slip surface is nonlinear programming, in which the objective function to be optimized is the factor of safety, a function of the soil properties and the coordinates of the nodes on the probable slip surface. The main reason for using the nonlinear programming method in this research is its quick convergence to the desired responses. The final results show good agreement with previously used classical and 2D methods as well as a reasonable convergence speed.
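
A minimal sketch of the nonlinear-programming step, assuming the slip surface is parameterized by node depths and that a factor-of-safety function of those depths is available; the toy objective below is fictitious and merely stands in for the real FS computed from soil properties and equilibrium equations.

```python
import numpy as np
from scipy.optimize import minimize

def factor_of_safety(z):
    """Toy stand-in for the real FS, which in the paper depends on soil
    properties and the coordinates of the nodes on the slip surface."""
    depth = np.asarray(z)
    resisting = 1.2 + 0.05 * np.sum((depth - 3.0) ** 2)  # fictitious shear resistance
    driving = 1.0 + 0.02 * np.sum(depth)                  # fictitious driving force
    return resisting / driving

z0 = np.full(5, 2.0)        # initial depths of 5 slip-surface nodes
bounds = [(0.5, 6.0)] * 5   # keep the surface inside the slope body

res = minimize(factor_of_safety, z0, method="L-BFGS-B", bounds=bounds)
print("critical FS:", round(res.fun, 3), "node depths:", np.round(res.x, 2))
```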

Keywords: Non-linear programming, numerical optimization, slope stability, 3D analysis.

2551 An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining

Authors: J. Alamelu Mangai, Satej Wagle, V. Santhosh Kumar

Abstract:

The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment, and monitoring. With the advent of data mining and machine learning techniques, it is possible to automate and/or assist physicians in clinical diagnosis. In this research, a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization, and classification. In the classification phase, the performance of the traditional kNN (k nearest neighbor) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
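
A minimal sketch of kNN with feature weighting and distance-weighted voting; the feature weights here are placeholders, whereas the paper derives them from interestingness measures used in association rule mining.

```python
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, feature_weights, k=3):
    """kNN with per-feature weights in the distance and distance-weighted voting:
    each neighbour votes with weight 1/(distance + eps) instead of one vote."""
    diffs = (X_train - x) * np.sqrt(feature_weights)
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (dists[i] + 1e-9)
    return max(votes, key=votes.get)

X_train = np.array([[0.1, 2.0], [0.2, 1.8], [0.9, 0.3], [1.0, 0.4]])
y_train = np.array(["normal", "abnormal", "abnormal", "abnormal"])
w = np.array([0.8, 0.2])  # placeholder weights; the paper derives them from
                          # association-rule interestingness measures
print(weighted_knn_predict(X_train, y_train, np.array([0.85, 0.5]), w))
```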

Keywords: Medical Image Mining, Data Mining, Feature Weighting, Association Rule Mining, k nearest neighbor classifier.

2550 Comparative Study on Swarm Intelligence Techniques for Biclustering of Microarray Gene Expression Data

Authors: R. Balamurugan, A. M. Natarajan, K. Premalatha

Abstract:

Microarray gene expression data play a vital role in studying biological processes, gene regulation, and disease mechanisms. A bicluster in gene expression data is a subset of genes showing consistent patterns under a subset of conditions, and finding a biclustering is an optimization problem. In recent years, swarm intelligence techniques have become popular because many real-world problems are increasingly large, complex, and dynamic. Given the size and complexity of such problems, it is necessary to find an optimization technique whose efficiency is measured by finding a near-optimal solution within a reasonable amount of time. In this paper, the algorithmic concepts of the Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and Cuckoo Search (CS) algorithms are analyzed on four benchmark gene expression datasets. The experimental results show that CS outperforms PSO and SFL on three datasets, while SFL gives better performance on one dataset. This work also determines the biological relevance of the biclusters with Gene Ontology in terms of function, process, and component.

Keywords: Particle swarm optimization, Shuffled frog leaping, Cuckoo search, biclustering, gene expression data.

2549 Application of Data Mining Tools to Predicate Completion Time of a Project

Authors: Seyed Hossein Iranmanesh, Zahra Mokhtari

Abstract:

Estimating the time and cost of work completion in a project, and following them up during execution, contributes to the success or failure of a project and is very important for the project management team. Delivering on time and within the budgeted cost requires managing and controlling projects well. To deal with the complex task of controlling and modifying the baseline project schedule during execution, earned value management systems have been set up and are widely used to measure and communicate the real physical progress of a project, but they often fail to predict the total duration of the project. In this paper, data mining techniques are used to predict the total project duration in terms of the Time Estimate At Completion, EAC(t). For this purpose, a project with 90 activities, updated day by day, was used. Regular indexes from the literature were used and the Earned Duration Method was applied to calculate the time estimate at completion; these were set as input data for prediction, and the major parameters among them were identified using Clem software. By using data mining, the parameters that affect EAC and the relationships between them can be extracted, which is very useful for managing a project with minimum delay risk. As stated, this can be a simple, safe, and applicable method for predicting the completion time of a project during execution.
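
As a hedged illustration of a time estimate at completion, the snippet below uses the common earned-schedule formulation EAC(t) = AT + (PD - ES)/SPI(t); the project numbers are hypothetical, and the paper's Earned Duration Method is a related but not necessarily identical calculation.

```python
def earned_schedule(pv_cum, ev, periods):
    """Earned schedule ES: the time at which the cumulative planned value
    equals the earned value achieved so far (linear interpolation)."""
    for i in range(1, len(pv_cum)):
        if pv_cum[i] >= ev:
            frac = (ev - pv_cum[i - 1]) / (pv_cum[i] - pv_cum[i - 1])
            return periods[i - 1] + frac * (periods[i] - periods[i - 1])
    return periods[-1]

# Hypothetical project: planned duration 10 days, status taken at day 6.
periods = list(range(0, 11))                           # day 0 .. day 10
pv_cum = [0, 8, 18, 30, 44, 58, 70, 82, 92, 98, 100]   # cumulative planned value (%)
ev, actual_time, planned_duration = 52, 6, 10          # earned value (%) at day 6

es = earned_schedule(pv_cum, ev, periods)      # schedule earned so far (in days)
spi_t = es / actual_time                       # schedule performance index (time)
eac_t = actual_time + (planned_duration - es) / spi_t
print(f"ES = {es:.2f} days, SPI(t) = {spi_t:.2f}, EAC(t) = {eac_t:.1f} days")
```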

Keywords: Data Mining Techniques, Earned Duration Method, Earned Value, Estimate At Completion.

2548 Exploiting Machine Learning Techniques for the Enhancement of Acceptance Sampling

Authors: Aikaterini Fountoulaki, Nikos Karacapilidis, Manolis Manatakis

Abstract:

This paper proposes an innovative methodology for Acceptance Sampling by Variables, a particular category of Statistical Quality Control dealing with the assurance of product quality. Our contribution lies in the exploitation of machine learning techniques to address the complexity and remedy the drawbacks of existing approaches. More specifically, the proposed methodology exploits Artificial Neural Networks (ANNs) to aid decision making about the acceptance or rejection of an inspected sample. For any type of inspection, ANNs are trained on data from the corresponding tables of a standard's sampling plan schemes. Once trained, ANNs can give closed-form solutions for any acceptance quality level and sample size, thus automating the reading of the sampling plan tables without any need to compromise with the values of the specific standard chosen each time. The proposed methodology provides quality control engineers with flexibility during the inspection of their samples, allowing specific needs to be considered, while also reducing the time and cost required for these inspections. Its applicability and advantages are demonstrated through two numerical examples.
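
A minimal sketch of the idea of training a neural network on rows of a sampling plan table so that it interpolates a plan (sample size n, acceptance number c) for inputs not listed in the table; the table values below are invented placeholders and are not taken from any real standard.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder (lot size, AQL%) -> (sample size n, acceptance number c) pairs.
# These numbers are illustrative only and are NOT taken from any real standard.
X = np.array([[500, 1.0], [500, 2.5], [1200, 1.0], [1200, 2.5],
              [3200, 1.0], [3200, 2.5], [10000, 1.0], [10000, 2.5]], dtype=float)
y = np.array([[50, 1], [50, 3], [80, 2], [80, 5],
              [125, 3], [125, 7], [200, 5], [200, 10]], dtype=float)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)

# Query a plan for an intermediate lot size / AQL not present in the table.
n, c = model.predict(np.array([[2000.0, 1.5]]))[0]
print(f"suggested plan: inspect about {n:.0f} items, acceptance number about {c:.1f}")
```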

Keywords: Acceptance Sampling, Neural Networks, Statistical Quality Control.

2547 Bootstrap and MLS Methods-based Individual Bioequivalence Assessment

Authors: Kongsheng Zhang, Li Ge

Abstract:

Assessing bioequivalence is a one-sided hypothesis testing process. Bootstrap and modified large-sample (MLS) methods are considered for studying individual bioequivalence (IBE); the type I error and power of the hypothesis tests are simulated and compared with the FDA (2001) procedure. The results show that the modified large-sample method is equivalent to the method of FDA (2001).
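
A minimal sketch of a percentile-bootstrap upper confidence bound for an IBE-style aggregate criterion; the data are simulated and the simplified criterion is a stand-in, not the exact FDA (2001) reference-scaled metric or the MLS procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical log-AUC data for 20 subjects, each with test (T) and reference (R).
logT = rng.normal(4.00, 0.25, size=20)
logR = rng.normal(4.05, 0.25, size=20)

def ibe_like_criterion(t, r):
    """Simplified, illustrative aggregate criterion:
    ((mean difference)^2 + variance of subject differences) scaled by var(R).
    A stand-in, not the exact FDA (2001) reference-scaled IBE metric."""
    d = t - r
    return (np.mean(d) ** 2 + np.var(d, ddof=1)) / np.var(r, ddof=1)

B, n = 2000, len(logT)
stats = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)          # resample subjects with replacement
    stats[b] = ibe_like_criterion(logT[idx], logR[idx])

upper95 = np.percentile(stats, 95)
print(f"bootstrap 95% upper bound: {upper95:.3f} (conclude IBE if below the limit)")
```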

Keywords: Individual bioequivalence, bootstrap, Bayesian bootstrap, modified large-sample.

2546 Constant Factor Approximation Algorithm for p-Median Network Design Problem with Multiple Cable Types

Authors: Chaghoub Soraya, Zhang Xiaoyan

Abstract:

This research presents the first constant-factor approximation algorithm for the p-median network design problem with multiple cable types. This problem had previously been addressed only with a single cable type, for which a bifactor approximation algorithm exists. To the best of our knowledge, the algorithm proposed in this paper is the first constant approximation algorithm for p-median network design with multiple cable types. The addressed problem is a combination of two well-studied problems: the p-median problem and the network design problem. The introduced algorithm is a random sampling approximation algorithm with a constant factor, conceived by using random sampling techniques from the literature. It is based on a redistribution lemma from the literature and uses a Steiner tree problem as a subproblem. The algorithm is simple and relies on the notions of random sampling and probability. The proposed approach gives an approximate solution with one constant ratio without violating any of the constraints, in contrast to the one proposed in the literature. This paper provides a (21 + 2)-approximation algorithm for the p-median network design problem with multiple cable types using random sampling techniques.

Keywords: Approximation algorithms, buy-at-bulk, combinatorial optimization, network design, p-median.

2545 A Design Framework for Event Recommendation in Novice Low-Literacy Communities

Authors: Yimeng Deng, Klarissa T.T. Chang

Abstract:

The proliferation of user-generated content (UGC) results in huge opportunities to explore event patterns. However, existing event recommendation systems primarily focus on advanced information technology users. Little work has been done to address novice and low-literacy users. The next billion users providing and consuming UGC are likely to include communities from developing countries who are ready to use affordable technologies for subsistence goals. Therefore, we propose a design framework for providing event recommendations to address the needs of such users. Grounded in information integration theory (IIT), our framework advocates that effective event recommendation is supported by systems capable of (1) reliable information gathering through structured user input, (2) accurate sense making through spatial-temporal analytics, and (3) intuitive information dissemination through interactive visualization techniques. A mobile pest management application is developed as an instantiation of the design framework. Our preliminary study suggests a set of design principles for novice and low-literacy users.

Keywords: Event recommendation, iconic interface, information integration, spatial-temporal clustering, user-generated content, visualization techniques

2544 Real-time Target Tracking Using a Pan and Tilt Platform

Authors: Moulay A. Akhloufi

Abstract:

In recent years, there has been increasing interest in efficient tracking systems for surveillance applications. Many of the proposed techniques are designed for static camera environments. When the camera is moving, tracking moving objects becomes more difficult, and many techniques fail to detect and track the desired targets. The problem becomes more complex when we want to track a specific object in real time using a moving pan and tilt camera system to keep the target within the image. This type of tracking is of high importance in surveillance applications: when a target is detected in a certain zone, the ability to automatically and continuously track it and keep it within the image until action is taken is very important for security personnel working at very sensitive sites. This work presents a real-time tracking system permitting the detection and continuous tracking of targets using a pan and tilt camera platform. A novel and efficient approach for dealing with occlusions is presented, and a new intelligent forget factor is introduced in order to take target shape variations into account and avoid learning undesired objects. Tests conducted in outdoor operational scenarios show the efficiency and robustness of the proposed approach.

Keywords: Tracking, surveillance, target detection, Pan and tilt.

2543 Complexity of Component-based Development of Embedded Systems

Authors: M. Zheng, V. S. Alagar

Abstract:

The paper discusses complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise due to resource constraints, timeliness, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control the complexity.

Keywords: Components, embedded systems, complexity, software development, traffic controller system.

2542 Harmonic Analysis and Performance Improvement of a Wind Energy Conversions System with Double Output Induction Generator

Authors: M. Sedighizadeh, A. Rezazadeh

Abstract:

Wind turbines with double output induction generators can operate at variable speed, permitting conversion efficiency maximization over a wide range of wind velocities. This paper presents the performance analysis of a wind-driven double output induction generator (DOIG) operating at varying shaft speed. A periodic transient-state analysis of a DOIG equipped with two converters is carried out using a hybrid induction machine model. The paper simulates the harmonic content of waveforms at various points of the drive at different speeds, based on the hybrid (dqabc) model. The sinusoidal and trapezoidal pulse-width-modulation control techniques are then used in order to improve the power factor of the machine and to attenuate the low-order harmonics injected into the supply. The techniques are evaluated on the basis of the frequency spectrum, total harmonic distortion, distortion factor, and power factor, and finally the advantages of the sinusoidal and trapezoidal pulse width modulation techniques are compared.
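
A minimal sketch of how total harmonic distortion, distortion factor, and power factor can be computed from the harmonic content of a current waveform; the spectrum values are illustrative and are not results from the DOIG simulation.

```python
import numpy as np

def thd_and_power_factor(harmonic_rms, phi1_deg):
    """harmonic_rms[0] is the fundamental RMS value, the rest are harmonics.
    Distortion factor = I1/Irms = 1/sqrt(1 + THD^2);
    true power factor = distortion factor * cos(phi1)."""
    fundamental = harmonic_rms[0]
    harmonics = np.asarray(harmonic_rms[1:])
    thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
    distortion_factor = 1.0 / np.sqrt(1.0 + thd ** 2)
    power_factor = distortion_factor * np.cos(np.radians(phi1_deg))
    return thd, distortion_factor, power_factor

# Illustrative current spectrum: fundamental plus 5th, 7th and 11th harmonics.
thd, df, pf = thd_and_power_factor(np.array([10.0, 2.0, 1.2, 0.6]), phi1_deg=20)
print(f"THD = {thd:.1%}, distortion factor = {df:.3f}, power factor = {pf:.3f}")
```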

Keywords: DOIG, Harmonic Analysis, Wind.

2541 Optimization of Process Parameters Affecting on Spring-Back in V-Bending Process for High Strength Low Alloy Steel HSLA 420 Using FEA (HyperForm) and Taguchi Technique

Authors: Navajyoti Panda, R. S. Pawar

Abstract:

In this study, process parameters such as punch angle, die opening, grain direction, and pre-bend condition of the strip are investigated for deep drawing of high strength low alloy steel HSLA 420. The finite element method (FEM), in association with the Taguchi and analysis of variance (ANOVA) techniques, is used to investigate the degree of importance of the process parameters in the V-bending process for HSLA 420 and St12 grade materials. From the results, it is observed that punch angle has a major influence on spring-back, and die opening also plays a very significant role. On the other hand, grain direction has the least impact on spring-back; however, a strip taken from flat sheet is less prone to spring-back than a strip taken from sheet metal coil. HyperForm software is used for the FEM simulation, and the experiments are designed using the Taguchi method. The percentage contribution of the parameters is obtained through the ANOVA technique.

Keywords: Bending, V-bending, FEM, spring-back, Taguchi, HyperForm, profile projector, HSLA 420 & St12 materials.

2540 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning (ML) model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered on a daily basis, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms are chosen to study model performance in the prediction of new COVID-19 cases. Using evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the models in predicting new COVID-19 cases is evaluated. RF outperformed the other two ML algorithms with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm performs more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector to take relevant control measures against the spread of the virus.
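
A minimal sketch of the described pipeline (8:2 train/test split, Random Forest regression, R-squared and MSE evaluation) on a synthetic daily-case series standing in for the WHO data; the sliding-window feature construction is an assumption, not necessarily the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic daily new-case series standing in for the WHO data used in the paper.
rng = np.random.default_rng(0)
days = np.arange(425)
cases = 2000 + 1500 * np.sin(days / 30) + rng.normal(0, 150, size=days.size)

# Use the previous 7 days as features to predict the next day's new cases.
window = 7
X = np.array([cases[i:i + window] for i in range(len(cases) - window)])
y = cases[window:]

# 8:2 split; shuffle=False keeps the temporal order of the series.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("R^2 :", round(r2_score(y_te, pred), 3))
print("MSE :", round(mean_squared_error(y_te, pred), 1))
```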

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.

2539 Adopting Artificial Intelligence and Deep Learning Techniques in Cloud Computing for Operational Efficiency

Authors: Sandesh Achar

Abstract:

Artificial intelligence (AI) is being increasingly incorporated into many applications across various sectors such as health, education, security, and agriculture. Recently, there has been rapid development in cloud computing technology, resulting in AI’s implementation into cloud computing to enhance and optimize the technology service rendered. The deployment of AI in cloud-based applications has brought about autonomous computing, whereby systems achieve stated results without human intervention. Despite the amount of research into autonomous computing, work incorporating AI/ML into cloud computing to enhance its performance and resource allocation remains a fundamental challenge. This paper highlights different manifestations, roles, trends, and challenges related to AI-based cloud computing models. This work reviews and highlights investigations and progress in the domain. Future directions are suggested for leveraging AI/ML in next-generation computing for emerging computing paradigms such as cloud environments. Adopting AI-based algorithms and techniques to increase operational efficiency, cost savings, automation, reducing energy consumption and solving complex cloud computing issues are the major findings outlined in this paper.

Keywords: Artificial intelligence, AI, cloud computing, deep learning, machine learning, ML, internet of things, IoT.

2538 Role of Feedbacks in Simulation-Based Learning

Authors: Usman Ghani

Abstract:

Feedback is a vital element for improving student learning in simulation-based training, as it guides and refines learning through scaffolding. A number of studies in the literature have shown that students' learning is enhanced when feedback is provided with personalized tutoring that offers specific guidance and adapts feedback to the learner in a one-to-one environment. Thus, emulating these adaptive aspects of human tutoring in simulation provides an effective methodology for training individuals. This paper presents the results of a study that investigated the effectiveness of automating different types of feedback techniques, such as Knowledge-of-Correct-Response (KCR) and Answer-Until-Correct (AUC), in software simulation for learning basic information technology concepts. For the purpose of comparison, simulation with no feedback (NFB) and traditional hands-on (HON) learning environments are also examined. The paper presents a summary of findings based on quantitative analyses, which reveal that the simulation-based instructional strategies are at least as effective as hands-on teaching methodologies for learning IT concepts. The paper also compares the results of the study with earlier studies and recommends strategies for using feedback mechanisms to improve students' learning in designing simulation-based IT training.

Keywords: Simulation, feedback, training, hands-on, labs.

2537 Cluster Algorithm for Genetic Diversity

Authors: Manpreet Singh, Keerat Kaur, Bhavdeep Singh

Abstract:

With hardware technology advancing, the cost of storage is decreasing, so there is an urgent need for new techniques and tools that can intelligently and automatically assist us in transforming this data into useful knowledge. Different data mining techniques have been developed that are helpful for handling these large databases [7], and data mining is also finding its role in the field of biotechnology. A pedigree is the associated ancestry of a crop variety, and genetic diversity is the variation in the genetic composition of individuals within or among species. Genetic diversity depends upon the pedigree information of the varieties: parents at lower hierarchic levels carry more weight for predicting genetic diversity than those at upper hierarchic levels, and the weight decreases as the level increases. For crossbreeding, the two varieties should be as genetically diverse as possible so as to incorporate the useful characters of both varieties into the newly developed variety. This paper discusses searching and analyzing different possible pairs of varieties selected on the basis of morphological characters, climatic conditions, and nutrients in order to obtain the most optimal pair that can produce the required crossbred variety. An algorithm was developed to determine the genetic diversity between the selected wheat varieties, and cluster analysis is used for retrieving the results.

Keywords: Genetic diversity, pedigree, nutrients.
