Search results for: hybrid rule-extraction algorithms
Paper Count: 2146

1216 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings

Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank

Abstract:

Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
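
For reference, the Q3 score quoted throughout is the three-state residue accuracy: the percentage of residues whose predicted label (H = helix, E = strand, C = coil) matches the observed one. A minimal sketch (not the authors' code):

# Illustrative sketch of the Q3 measure, not the authors' implementation.
def q3_score(predicted, observed):
    assert len(predicted) == len(observed)
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

print(q3_score("HHHECCCHH", "HHHECCCHC"))  # ~88.9 for 8 of 9 residues correct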

Keywords: data mining, protein secondary structure prediction, parallelization.

1215 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the disease emerged in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country. The purpose of this research is to build a predictive machine learning (ML) model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered daily, total deaths registered, and deaths per day due to Coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms are chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of each model is evaluated using metrics such as the r-squared value and mean squared error. RF outperformed the other two ML algorithms with a training accuracy of 99.47% and testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. The experimental analysis indicates that the RF algorithm can predict new COVID-19 cases more effectively and efficiently, which could help the health sector take relevant measures to control the spread of the virus.
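
A minimal sketch of the evaluation pipeline described above (8:2 split, random forest, r-squared and mean squared error); the file and column names are illustrative assumptions, not from the paper:

# Sketch only: CSV name and columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("uk_covid_daily.csv")                 # hypothetical file
X, y = df[["day_index"]], df["new_cases"]              # hypothetical columns
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

rf = RandomForestRegressor(n_estimators=30)            # n = 30 as in the abstract
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(r2_score(y_te, pred), mean_squared_error(y_te, pred))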

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.

1214 Optimisation of Structural Design by Integrating Genetic Algorithms in the Building Information Modelling Environment

Authors: Tofigh Hamidavi, Sepehr Abrishami, Pasquale Ponterosso, David Begg

Abstract:

Structural design and analysis is an important and time-consuming process, particularly at the conceptual design stage. Decisions made at this stage can have an enormous effect on the entire project, as it becomes ever costlier and more difficult to alter choices made early in the construction process. Hence, optimisation of the early stages of structural design can provide important efficiencies in terms of cost and time. This paper suggests a structural design optimisation (SDO) framework in which Genetic Algorithms (GAs) may be used to semi-automate the production and optimisation of early structural design alternatives. This framework has the potential to leverage conceptual structural design innovation in Architecture, Engineering and Construction (AEC) projects. Moreover, it improves collaboration between the architectural and structural stages, which it achieves by generating the structural model from data extracted from the architectural model. At present, the proposed SDO framework is being validated through an online questionnaire distributed among structural engineers in the UK.
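
A minimal sketch of the generic GA loop named in the keywords below (population, generation, selection, crossover, mutation, offspring); the fitness function here is a stand-in, not the paper's structural objective:

# Generic GA loop for illustration; not the paper's implementation.
import random

def evolve(fitness, n_genes=10, pop_size=40, generations=100, p_mut=0.05):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        parents = pop[: pop_size // 2]             # truncation selection
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):               # per-gene mutation
                if random.random() < p_mut:
                    child[i] = random.random()
            offspring.append(child)
        pop = parents + offspring
    return max(pop, key=fitness)

best = evolve(lambda g: -sum((x - 0.5) ** 2 for x in g))  # toy objective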

Keywords: Building Information Modelling, BIM, Genetic Algorithm, GA, architecture-engineering-construction, AEC, Optimisation, structure, design, population, generation, selection, mutation, crossover, offspring.

1213 Financing - Scheduling Optimization for Construction Projects by using Genetic Algorithms

Authors: Hesham Abdel-Khalek, Sherif M. Hafez, Abdel-Hamid M. el-Lakany, Yasser Abuel-Magd

Abstract:

Investment in a constructed facility represents a cost in the short term that returns benefits only over the long-term use of the facility. The costs thus occur earlier than the benefits, and the owners of facilities must obtain the capital resources to finance the costs of construction. A project cannot proceed without adequate financing, and the cost of providing it can be quite large. For these reasons, attention to project finance is an important aspect of project management. Finance is also a concern to the other organizations involved in a project, such as the general contractor and material suppliers. Unless an owner immediately and completely covers the costs incurred by each participant, these organizations face financing problems of their own. At a more general level, project finance is only one aspect of the general problem of corporate finance: if numerous projects are considered and financed together, the net cash flow requirements constitute the corporate financing problem for capital investment. Whether project finance is performed at the project or at the corporate level does not alter the basic financing problem. In this paper, we first consider facility financing from the owner's perspective, with due consideration for its interaction with the other organizations involved in a project. We then discuss the problems of construction financing, which are crucial to the profitability and solvency of construction contractors. The objective of this paper is to present the steps utilized to determine the best combination for minimum project financing. The proposed model considers financing, schedule, and maximum net area, and is called Project Financing and Schedule Integration using Genetic Algorithms (PFSIGA). The model is intended to determine additional steps (maximum net area) for any project with a subproject. An illustrative example demonstrates the features of this technique, and model verification and testing are also addressed.

Keywords: Project Management, Large-scale Construction Projects, Cash Flow, Interest, Investment, Loan, Optimization, Scheduling, Financing, Genetic Algorithms.

1212 Modelling Sudoku Puzzles as Block-world Problems

Authors: Cecilia Nugraheni, Luciana Abednego

Abstract:

Sudoku is a kind of logic puzzle. Each puzzle consists of a board of 9×9 cells, divided into nine 3×3 subblocks, and a set of numbers from 1 to 9. The aim of the puzzle is to fill in every cell of the board with a number from 1 to 9 such that every row, every column, and every subblock contains each number exactly once. Sudoku puzzles belong to the class of combinatorial (NP-complete) problems. They can be solved by a variety of techniques/algorithms such as genetic algorithms, heuristics, integer programming, and so on. In this paper, we propose a new approach for solving Sudoku: modelling the puzzles as block-world problems. In a block-world problem, there are a number of boxes on a table in a particular order or arrangement, and the objective is to change this arrangement into a target arrangement with the help of two types of robots. We present three models for Sudoku, formalized as parameterized multi-agent systems. A parameterized multi-agent system is a multi-agent system consisting of several uniform/similar agents, where the number of agents is stated as a parameter of the system. We use the Temporal Logic of Actions (TLA) to formalize our models.
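
The three constraints stated above translate directly into a validity check on a completed board; a minimal sketch (the paper's block-world/TLA models are not reproduced here):

# Direct check of the row/column/subblock constraints; illustration only.
def is_valid(board):                     # board: 9 lists of 9 ints (1..9)
    def ok(group):
        return sorted(group) == list(range(1, 10))
    rows = all(ok(row) for row in board)
    cols = all(ok([board[r][c] for r in range(9)]) for c in range(9))
    blocks = all(ok([board[r][c]
                     for r in range(br, br + 3)
                     for c in range(bc, bc + 3)])
                 for br in (0, 3, 6) for bc in (0, 3, 6))
    return rows and cols and blocks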

Keywords: Sudoku puzzle, block-world problem, parameterized multi-agent systems modelling, Temporal Logic of Actions.

1211 A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods

Authors: S. Sarumathi, N. Shanthi, M. Sharmila

Abstract:

Over the past decade a substantial amount of work has been done in data clustering research under the unsupervised learning technique in data mining. Several algorithms and methods have been proposed focusing on clustering different data types, representation of cluster models, and accuracy rates of the clusters. However, no single clustering algorithm proves to be the most efficient in providing the best results. To address this issue, a new technique called the cluster ensemble method emerged. Cluster ensembles are a good alternative approach for facing the cluster analysis problem. The main aim of a cluster ensemble is to merge different clustering solutions in such a way as to achieve accuracy and to improve the quality of individual data clusterings. The substantial and unremitting development of new methods in the sphere of data mining, together with the incessant interest in inventing new algorithms, makes a critical analysis of the existing techniques and future novelties obligatory. This paper presents a comparative study of different cluster ensemble methods along with their features, systematic working processes, and the average accuracy and error rates of each method. This comprehensive analysis should be very useful to the community of clustering practitioners and should also help in deciding which method is the most suitable for the problem at hand.
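
A minimal sketch of the co-association matrix listed in the keywords below, assuming its usual definition: entry (i, j) is the fraction of base clusterings that place objects i and j in the same cluster, from which a consensus partition can then be derived:

# Co-association matrix from several label vectors; illustration only.
import numpy as np

def co_association(labelings):
    labelings = np.asarray(labelings)   # shape: (n_clusterings, n_objects)
    m, n = labelings.shape
    co = np.zeros((n, n))
    for labels in labelings:
        co += (labels[:, None] == labels[None, :])  # same-cluster indicator
    return co / m

print(co_association([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 0, 1]]))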

Keywords: Clustering, Cluster Ensemble methods, Co-association matrix, Consensus function, Median partition.

1210 Multimedia Firearms Training System

Authors: Aleksander Nawrat, Karol Jędrasiak, Artur Ryt, Dawid Sobel

Abstract:

The goal of the article is to present a novel Multimedia Firearms Training System. The system was developed to compensate for major problems of existing shooting training systems. The designed and implemented solution has five major advantages: an algorithm for automatic geometric calibration, an algorithm for photometric recalibration, firearms hit point detection using a thermal imaging camera, an IR laser spot tracking algorithm for after-action review analysis, and implementation of ballistics equations. The combination of these advantages in a single multimedia firearms training system creates a comprehensive solution for detecting and tracking the target point, usable for shooting training systems and for improving intervention tactics of uniformed services. The introduced geometric and photometric recalibration algorithms allow economically viable, commercially available projectors to be used in systems that require long and intensive use, avoiding most of the negative impacts on color mapping seen in existing multi-projector multimedia shooting range systems. The article presents the results of the developed algorithms and their application in real training systems.

Keywords: Firearms shot detection, geometric recalibration, photometric recalibration, IR tracking algorithm, thermography, ballistics.

1209 Comparative Study of IC and Perturb and Observe Method of MPPT Algorithm for Grid Connected PV Module

Authors: Arvind Kumar, Manoj Kumar, Dattatraya H. Nagaraj, Amanpreet Singh, Jayanthi Prattapati

Abstract:

The purpose of this paper is to study and compare two maximum power point tracking (MPPT) algorithms in a photovoltaic simulation system: the perturb and observe algorithm and the incremental conductance algorithm. MPPT plays an important role in photovoltaic systems because it maximizes the power output from a PV system for a given set of conditions, thereby maximizing array efficiency and minimizing overall system cost. Since the maximum power point (MPP) varies with irradiation and cell temperature, appropriate algorithms must be utilized to track the MPP and keep the system operating at it. MATLAB/Simulink is used to establish a model of a photovoltaic system with MPPT functionality, developed by combining models of a solar PV module and a DC-DC boost converter. The system is simulated under different climate conditions. Simulation results show that the photovoltaic simulation system can track the maximum power point accurately.
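
A minimal sketch of one perturb-and-observe update of the kind compared above: keep perturbing the operating voltage in the same direction while power rises, and reverse when it falls. Variable names are illustrative, not taken from the paper's Simulink model:

# One P&O step; illustration of the standard algorithm only.
def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
    if p >= p_prev:                     # power increased: keep direction
        direction = 1 if v >= v_prev else -1
    else:                               # power decreased: reverse direction
        direction = -1 if v >= v_prev else 1
    return v + direction * step         # new voltage reference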

Keywords: Incremental conductance Algorithm, Perturb and Observe Algorithm, Photovoltaic System and Simulation Results.

1208 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for estimating biomass concentration and specific growth rate in a typical fed-batch biotechnological process, represented by a nonlinear mass-balance based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model does not require any special growth kinetic equations and can be applied for state estimation in various bioprocesses. The investigation focused on comparing the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the Particle Filter requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. The tuning procedure for the Particle Filter is also simpler than for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for monitoring industrial bioprocesses where simple implementation procedures are desirable.
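
A minimal sketch of one generic particle-filter iteration (predict, weight by measurement likelihood, resample); the models f and h and the noise levels are placeholders, not the paper's bioprocess equations:

# Generic bootstrap PF step; f and h are assumed vectorized user models.
import numpy as np

def pf_step(particles, z, f, h, q=0.01, r=0.1, rng=np.random.default_rng()):
    particles = f(particles) + rng.normal(0, q, particles.shape)  # predict
    w = np.exp(-0.5 * ((z - h(particles)) / r) ** 2)              # weight
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)    # resample
    return particles[idx]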

Keywords: Biomass concentration, Extended Kalman Filter, Particle Filter, State estimation, Specific growth rate.

1207 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia Indexing and Retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computation intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph-projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph-traversals due to a simpler processing model and a high level of parallelisation. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for Multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.

Keywords: Indexing, retrieval, multimedia, Graph Code, graph algorithm.

1206 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study

Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple

Abstract:

There is a dramatic surge in the adoption of Machine Learning (ML) techniques in many areas, including the nuclear industry (for example, fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, and malware detection, to name a few. Artificial Intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been on developing underpinning ML algorithms that can improve accuracy, while factors such as the resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and two defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt ML techniques without rigorous testing, since they may be vulnerable to adversarial attacks, especially in security-critical areas such as the nuclear industry. We observed that while the adopted defence methods can effectively defend against different attacks, none of them could protect against all five adversarial attacks entirely.
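
The abstract does not name the five attacks; as an illustration of the attack class, here is a minimal sketch of the well-known fast gradient sign method (FGSM), where grad_loss_fn is an assumed user-supplied gradient of the model's loss with respect to the input:

# FGSM sketch: nudge the input in the loss-increasing direction within
# an epsilon budget. Not necessarily one of the paper's five attacks.
import numpy as np

def fgsm(x, grad_loss_fn, epsilon=0.01):
    x_adv = x + epsilon * np.sign(grad_loss_fn(x))
    return np.clip(x_adv, 0.0, 1.0)     # keep a valid image range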

Keywords: Resilient Machine Learning, attacks, defences, nuclear industry, crack detection.

1205 The Significance of Embodied Energy in Certified Passive Houses

Authors: Robert H. Crawford, André Stephan

Abstract:

Certifications such as the Passive House Standard aim to reduce the final space heating energy demand of residential buildings. Space conditioning, notably heating, is responsible for nearly 70% of final residential energy consumption in Europe. There is therefore significant scope for the reduction of energy consumption through improvements to the energy efficiency of residential buildings. However, these certifications totally overlook the energy embodied in the building materials used to achieve this greater operational energy efficiency. The large amount of insulation and the triple-glazed high efficiency windows require a significant amount of energy to manufacture. While some previous studies have assessed the life cycle energy demand of passive houses, including their embodied energy, these rely on incomplete assessment techniques which greatly underestimate embodied energy and can lead to misleading conclusions. This paper analyses the embodied and operational energy demands of a case study passive house using a comprehensive hybrid analysis technique to quantify embodied energy. Results show that the embodied energy is much more significant than previously thought. Also, compared to a standard house with the same geometry, structure, finishes and number of people, a passive house can use more energy over 80 years, mainly due to the additional materials required. Current building energy efficiency certifications should widen their system boundaries to include embodied energy in order to reduce the life cycle energy demand of residential buildings.

Keywords: Embodied energy, Hybrid analysis, Life cycle energy analysis, Passive house.

1204 An Efficient Architecture for Interleaved Modular Multiplication

Authors: Ahmad M. Abdel Fattah, Ayman M. Bahaa El-Din, Hossam M.A. Fahmy

Abstract:

Modular multiplication is the basic operation in most public key cryptosystems, such as RSA, DSA, ECC, and DH key exchange. Unfortunately, very large operands (on the order of 1024 or 2048 bits) must be used to provide sufficient security strength. The use of such big numbers dramatically slows down the whole cipher system, especially when running on embedded processors. So far, customized hardware accelerators - developed on FPGAs or ASICs - have been the best choice for accelerating modular multiplication in embedded environments. On the other hand, many algorithms have been developed to speed up such operations; examples are the Montgomery modular multiplication and the interleaved modular multiplication algorithms. Combining customized hardware with an efficient algorithm is expected to provide a much faster cipher system. This paper introduces an enhanced architecture for computing the modular multiplication of two large numbers X and Y modulo a given modulus M. The proposed design is compared with three previous architectures that depend on carry-save adders and look-up tables, where the look-up tables must be loaded with a set of pre-computed values. Our proposed architecture uses the same carry-save addition, but replaces both the look-up tables and the pre-computations with an enhanced version of sign detection techniques. The proposed architecture supports higher frequencies than the other architectures and has a better overall absolute time for a single operation.
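
For reference, a minimal sketch of the classic interleaved modular multiplication that such architectures realize in hardware: scan X from the most significant bit, doubling and conditionally adding Y, reducing modulo M at every step so the intermediate result never grows far beyond M:

# Bit-serial interleaved modular multiplication; software illustration
# of the algorithm the hardware accelerates.
def interleaved_modmul(x, y, m, bits):
    p = 0
    for i in reversed(range(bits)):
        p = (p << 1) % m               # shift; hardware subtracts M as needed
        if (x >> i) & 1:
            p = (p + y) % m            # conditional add, then reduce
    return p

assert interleaved_modmul(123, 456, 789, 8) == (123 * 456) % 789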

Keywords: Montgomery multiplication, modular multiplication, efficient architecture, FPGA, RSA.

1203 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)

Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves

Abstract:

Modelling of the earth's surface and evaluation of the urban environment with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite images, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied in urban area analysis. In addition, photogrammetry software packages have gained new, more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that 3D models derived from Pleiades tri-stereo outperformed, both in terms of accuracy and detail, the result obtained from a GeoEye pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.

Keywords: 3D Models, Environment, Matching, Pleiades.

1202 Inner Quality Parameters of Rapeseed (Brassica napus) Populations in Different Sowing Technology Models

Authors: É. Vincze

Abstract:

Demand for plant oils has increased enormously, due on the one hand to changes in human nutrition habits and on the other to the increasing raw material demand of some industrial sectors, as well as to the growth of biofuel production. Besides the determining importance of sunflower in Hungary, the production area and, in part, the average yield of rapeseed have increased among the produced oil crops. The variety/hybrid palette has changed significantly during the past decade and has been extended to a significant extent. It is agreed that rapeseed production demands professionalism and local experience. Technological elements are successive; high yields cannot be produced without a system-based approach. The aim of the present work was to execute a complex study of one of the most critical production technology elements of rapeseed production: sowing technology. Several sowing technology elements are studied in this research project: the biological basis (the hybrid Arkaso is studied in this regard), sowing time (treatments were set so that they represent the wide period used in industrial practice: early, optimal, and late sowing times), and plant density (the reactions of sparse, optimal, and overly dense populations were modelled). The multifactorial experimental system enables the single and complex evaluation of rapeseed sowing technology elements, as well as their modelling using experimental result data. Yield quality and quantity were also determined in the present experiment, together with the interactions between these factors. The experiment was set up in four replications at the Látókép Plant Production Research Site of the University of Debrecen. Two different sowing times were sown in the first experimental year (2014), and three in the second (2015). Three different plant densities were set in both years: 200, 350 and 500 thousand plants ha-1. Uniform nutrient supply and a row spacing of 45 cm were applied. Winter wheat was used as the pre-crop. Plant physiological measurements were executed in the populations of the Arkaso rapeseed hybrid: relative chlorophyll content analysis (SPAD) and leaf area index (LAI) measurement, monitored at 7 different measurement times.

Keywords: Inner quality, plant density, rapeseed, sowing time.

1201 ECG-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is a synthetic version of the MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced using generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
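
A minimal sketch of the metrics reported above (precision, recall, F1) for a multi-class heartbeat classifier; the label values shown are placeholders, not results from the paper:

# Macro-averaged precision/recall/F1 over toy multi-class labels.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["N", "S", "V", "N", "F", "Q", "N"]
y_pred = ["N", "S", "N", "N", "F", "Q", "V"]
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(p, r, f1)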

Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.

1200 Analyzing The Effect of Variable Round Time for Clustering Approach in Wireless Sensor Networks

Authors: Vipin Pal, Girdhari Singh, R P Yadav

Abstract:

As wireless sensor networks are energy-constrained networks, the energy efficiency of sensor nodes is the main design issue. Clustering of nodes is an energy-efficient approach that prolongs the lifetime of wireless sensor networks by avoiding long-distance communication. Clustering algorithms operate in rounds, and their performance depends upon the round time: a large round time consumes more energy at the cluster heads, while a small round time causes frequent re-clustering. Existing clustering algorithms therefore apply a trade-off to the round time and calculate it from the initial parameters of the network. However, it is not appropriate to use a round time based on initial parameters throughout the network lifetime, because wireless sensor networks are dynamic in nature (nodes can be added to the network or run out of energy). In this paper a variable round time approach is proposed that calculates the round time depending upon the number of active nodes remaining in the field, making the clustering algorithm adaptive to network dynamics. For simulation, the approach is implemented with LEACH in NS-2, and the results show a 6% increase in network lifetime, a 7% increase in 50% node death time, and a 5% improvement in the data units gathered at the base station.
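
A minimal sketch of the core idea: recompute the round time from the number of nodes still alive rather than fixing it from the initial deployment. The linear scaling used here is an assumption for illustration; the abstract does not give the paper's exact formula:

# Assumed linear rescaling of round time by alive-node fraction.
def round_time(t_initial, n_initial, n_alive):
    return t_initial * (n_alive / n_initial)

print(round_time(20.0, 100, 60))   # shorter rounds as the network shrinks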

Keywords: Wireless Sensor Network, Clustering, Energy Efficiency, Round Time.

1199 Web-Based Cognitive Writing Instruction (WeCWI): A Hybrid e-Framework for Instructional Design

Authors: Boon Yih Mah

Abstract:

Web-based Cognitive Writing Instruction (WeCWI) is a hybrid e-framework for the development of web-based instruction (WBI) that contributes towards instructional design and language development. WeCWI divides its contribution to instructional design into macro and micro perspectives. In the macro perspective, being a 21st century educator by disseminating knowledge and sharing ideas with in-class and global learners is initiated. By leveraging the virtue of technology, WeCWI aims to transform an educator into an aggregator, curator, publisher, social networker and, ultimately, a web-based instructor. Since the most notable contribution of integrating technology is being a tool of teaching as well as a stimulus for learning, WeCWI focuses on the use of contemporary web tools based on the multiple roles played by the 21st century educator. The micro perspective in instructional design draws attention to pedagogical approaches focusing on three main aspects: reading, discussion, and writing. With the effective use of pedagogical approaches through free reading and enterprises, technology adds new dimensions and expands the boundaries of learning capacity. Lastly, WeCWI also imparts the fundamental theories and models for web-based instructors' awareness, such as interactionist theory, cognitive information processing (CIP) theory, computer-mediated communication (CMC), the e-learning interactional-based model, inquiry models, the sensory mind model, and the learning styles model.

Keywords: WeCWI, instructional discovery, technological discovery, pedagogical discovery, theoretical discovery.

1198 Learning to Order Terms: Supervised Interestingness Measures in Terminology Extraction

Authors: Jérôme Azé, Mathieu Roche, Yves Kodratoff, Michèle Sebag

Abstract:

Term extraction, a key data preparation step in text mining, extracts the terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated to the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function, inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (Area Under the ROC Curve). This approach uses a particular representation for the word collocations, namely the vector of values corresponding to the standard statistical interestingness measures attached to the collocation. As this representation is general (over corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing a good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
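
A minimal sketch of the criterion ROGER maximizes, assuming the standard pairwise formulation of the Area Under the ROC Curve: the fraction of (interesting, not-interesting) pairs that the learned score orders correctly. Scores and labels here are made up:

# Pairwise AUC of a scoring function over labelled collocations.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75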

Keywords: Text-mining, Terminology Extraction, Evolutionary algorithm, ROC Curve.

1197 A Pairwise-Gaussian-Merging Approach: Towards Genome Segmentation for Copy Number Analysis

Authors: Chih-Hao Chen, Hsing-Chung Lee, Qingdong Ling, Hsiao-Jung Chen, Sun-Chong Wang, Li-Ching Wu, H.C. Lee

Abstract:

Segmentation, filtering out of measurement errors, and identification of breakpoints are integral parts of any analysis of microarray data for the detection of copy number variation (CNV). Existing algorithms designed for these tasks have had some success in the past, but they tend to be O(N²) in either computation time or memory requirement, or both, and the rapid advance of microarray resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster and requires much less memory - O(N) in both computation time and memory requirement - and offers higher accuracy. The two key ingredients of SAD are the fundamental assumption in statistics that measurement errors are normally distributed and the mathematical relation that the product of two Gaussians is another Gaussian (function). We have produced a computer program for analyzing CNV based on SAD. In addition to being fast and small, it offers two important features: quantitative statistics for predictions and, with only two user-decided parameters, ease of use. Its speed shows little dependence on the genomic profile. Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in ~1 second and a 1.8 million-probe array in 9 seconds.
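
The second ingredient is a standard identity, stated but not written out in the abstract; in LaTeX notation, for Gaussian functions:

\mathcal{N}(x;\mu_1,\sigma_1^2)\,\mathcal{N}(x;\mu_2,\sigma_2^2)\;\propto\;\mathcal{N}(x;\mu,\sigma^2),
\qquad
\frac{1}{\sigma^2}=\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2},
\qquad
\mu=\sigma^2\left(\frac{\mu_1}{\sigma_1^2}+\frac{\mu_2}{\sigma_2^2}\right).

Merging two Gaussian estimates thus again yields a Gaussian, which is what allows segments to be merged pairwise in linear time.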

Keywords: Cancer, pathogenesis, chromosomal aberration, copy number variation, segmentation analysis.

1196 Development of Software Complex for Digitalization of Enterprise Activities

Authors: G. T. Balakayeva, K. K. Nurlybayeva, M. B. Zhanuzakov

Abstract:

In the proposed work, we have developed software and designed a software architecture for the implementation of enterprise business processes. The proposed software has a multi-level architecture using a domain-specific tool. The developed architecture guarantees the availability, reliability and security of the system and of the implementation of business processes, which are the basis for effective enterprise management. Automating business processes, automating the algorithmic stages of an enterprise, developing optimal algorithms for managing activities, controlling and monitoring, reducing risks and improving results help organizations achieve strategic goals quickly and efficiently. The software described in this article can connect to the corporate information system via two methods: a desktop client and a web client. The desktop client program connects to the information system through the application server on the company's work PCs over a local network. Outside the organization, the user can interact with the information system via a web browser, which acts as a web client and connects to a web server. The developed software consists of several integrated modules that share resources and interact with each other through an API. The following technology stack was used during development: Node.js, React.js, MongoDB, Nginx, cloud technologies, Python.

Keywords: Algorithms, document processing, automation, integrated modules, software architecture, software design, information system.

1195 The Effects of Weather Anomalies on the Quantitative and Qualitative Parameters of Maize Hybrids of Different Genetic Traits in Hungary

Authors: Zs. J. Becze, Á. Krivián, M. Sárvári

Abstract:

Hybrid selection and the application of hybrid-specific production technologies are important for increasing the yield and crop safety of maize. The main reason for this is climate change, since weather extremes are occurring and seem to be accelerating in Hungary too.

The biological bases, i.e. the selection of appropriate hybrids, will be of greater importance in the future, and the adaptability of hybrids will be considerably appreciated. Good agronomic traits and stress tolerance against climatic factors and agrotechnical elements (e.g. different types of herbicides) will be important. There have been examples of 3-4 consecutive droughty years in the past decades, e.g. 1992-1993-1994 or 2009-2011-2012, which made crop production results critical. Irrigation cannot solve the problem, since currently only 2% of the arable land is irrigated. Temperatures exceeding the multi-year average are characteristic mainly of July and August in Hungary; they significantly increase soil surface evaporation and thus further intensify water shortage. In terms of the yield and crop safety of maize, the weather of these two months is crucial, since extremely high temperatures in July decrease the viability of the pollen and the pistil of maize, reduce the extent of fertilization and slow grain-filling. Consequently, yield and crop safety decrease.

Keywords: Abiotic factors, drought, nutrition content, yield.

1194 Hybrid Recovery of Copper and Silver from PV Ribbon and Ag Finger of EOL Solar Panels

Authors: T. Patcharawit, C. Kansomket, N. Wongnaree, W. Kritsrikan, T. Yingnakorn, S. Khumkoa

Abstract:

Recovery of pure copper and silver from end-of-life photovoltaic (PV) panels was investigated in this paper using an effective hybrid pyro-hydrometallurgical process. In the first step of waste treatment, solar panel waste was dismantled to obtain a PV sheet, which was cut and calcined at 500 °C to separate the PV ribbon from glass cullet, ash, and volatiles, while the silicon wafer containing the silver finger was collected for recovery. In the second step of metal recovery, copper was recovered from the PV ribbon via 1-3 M HCl leaching with SnCl₂ and H₂O₂ additions in order to remove the tin-lead coating on the ribbon. The leached copper band was cleaned and subsequently melted as an anode for the next step of electrorefining. Stainless steel was set as the cathode with CuSO₄ as the electrolyte, and at a potential of 0.2 V, copper of 99.93% purity was obtained at 96.11% recovery after 24 hours. For silver recovery, the silicon wafer containing the silver finger was leached using 1-4 M HNO₃ in an ultrasonic bath. In the next step of precipitation, silver chloride was obtained and subsequently reduced by sucrose and NaOH to give silver powder prior to oxy-acetylene melting, finally yielding pure silver metal. The integrated recycling process is considered economical, providing effective recovery of high-purity metals such as copper and silver, while other materials such as aluminum, copper wire, and glass cullet can also be recovered for commercial reuse. Compounds such as PbCl₂ and SnO₂ obtained along the way can also be recovered and enter the market.

Keywords: Electrorefining, leaching, calcination, PV ribbon, silver finger, solar panel.

1193 Time-Cost-Quality Trade-off Software by using Simplified Genetic Algorithm for Typical Repetitive Construction Projects

Authors: Refaat H. Abd El Razek, Ahmed M. Diab, Sherif M. Hafez, Remon F. Aziz

Abstract:

Time-cost optimization (TCO) is one of the greatest challenges in construction project planning and control, since the optimization of either time or cost usually comes at the expense of the other. Because there is a hidden trade-off relationship between project time and cost, it can be difficult to predict whether the total cost would increase or decrease as a result of schedule compression. Recently, a third dimension, project quality, has been taken into consideration in trade-off analysis, but few of the existing algorithms have been applied to construction projects with three-dimensional time-cost-quality trade-off analysis. The objective of this paper is to present the development of a practical software system, named Automatic Multi-objective Typical Construction Resource Optimization System (AMTCROS). This system incorporates the basic concepts of Line of Balance (LOB) and the Critical Path Method (CPM) in a multi-objective Genetic Algorithms (GAs) model. The main objective of this system is to provide practical support for typical construction planners who need to optimize resource utilization in order to minimize project cost and duration while simultaneously maximizing quality. The application of these research developments in planning typical construction projects holds a strong promise to: 1) increase the efficiency of resource use in typical construction projects; 2) reduce the construction duration; 3) minimize construction cost (direct cost plus indirect cost); and 4) improve the quality of new construction projects. A general description of the proposed software for the time-cost-quality trade-off (TCQTO) is presented. The main inputs and outputs of the proposed software are outlined, the main subroutines and the inference engine of the software are detailed, and its complexity is discussed. In addition, the software is verified and tested using a real case study.

Keywords: Project management, typical (repetitive) large scale projects, line of balance, multi-objective optimization, genetic algorithms, time-cost-quality trade-offs.

1192 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on power quality and on the stabilization of modern electrical networks under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme is done using the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

1191 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing its program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably saves the slicing time and effort required to measure module parallelism and functional cohesion.
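
A minimal sketch of a single-point backward slice over a PDG, i.e. the baseline that the paper's one-traversal, all-slices algorithm improves on: collect every statement that can reach the slicing criterion along dependence edges. The toy graph is illustrative only:

# Baseline backward slice: reverse reachability over dependence edges.
from collections import deque

def backward_slice(pdg_preds, criterion):
    # pdg_preds: node -> set of nodes it depends on (data/control deps)
    seen, queue = {criterion}, deque([criterion])
    while queue:
        node = queue.popleft()
        for dep in pdg_preds.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

pdg = {3: {1, 2}, 2: {1}, 4: {3}}        # toy dependence edges
print(backward_slice(pdg, 4))            # {1, 2, 3, 4}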

Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.

1190 Comparison of Router Intelligent and Cooperative Host Intelligent Algorithms in a Continuous Model of Fixed Telecommunication Networks

Authors: Dávid Csercsik, Sándor Imre

Abstract:

The performance of state-of-the-art worldwide telecommunication networks strongly depends on the efficiency of the applied routing mechanism, and game theoretical approaches to this problem offer new solutions. In this paper a new continuous network routing model is defined to describe data transfer in fixed telecommunication networks with multiple hosts. The nodes of the network correspond to routers, whose latency is assumed to be traffic dependent. We propose that the whole traffic of the network can be decomposed into a finite number of tasks, which belong to various hosts. To describe their different latency-sensitivities, utility functions are defined for each task. The model is used to compare router-intelligent and host-intelligent types of routing methods, corresponding to various data transfer protocols. We analyze host-intelligent routing as a transferable utility cooperative game with externalities. The main aim of the paper is to provide a framework in which the efficiency of various routing algorithms can be compared and the transferable utility game arising in the cooperative case can be analyzed.

Keywords: Routing, Telecommunication networks, Performance evaluation, Cooperative game theory, Partition function form games.

1189 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow

Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary

Abstract:

An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex OPF problem, which has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function; the latter model presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The variables to be optimized are the generator real power outputs and voltage magnitudes, discrete transformer tap settings, and discrete reactive power injections due to capacitor banks. The equality constraints taken into account are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate its consistency in detecting an optimal or near-optimal solution for each objective. Results are compared to solutions obtained using sequential quadratic programming and genetic algorithms.
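
A minimal sketch of the core PSO update such an approach builds on: each particle moves under inertia plus pulls toward its own best and the swarm's best position. The OPF specifics (power-flow constraints, the embedded Newton-Raphson solve, discrete variables) are omitted:

# Standard PSO velocity/position update; illustration only.
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=np.random.default_rng()):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v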

Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.

1188 Assessment of Drought Tolerance Maize Hybrids at Grain Growth Stage in Mediterranean Area

Authors: Ayman El Sabagh, Celaleddin Barutçular, Hirofumi Saneoka

Abstract:

Drought is one of the most serious problems posing a grave threat to cereal production, including maize. Improving drought-stress tolerance in maize poses a great challenge as the global need for food and bio-energy increases. Thus, the current study was planned to explore the variation in, and determine the performance of, target traits of maize hybrids at the grain growth stage under drought conditions during 2014 in Adana, under the Mediterranean climate conditions of Turkey. Maize hybrids (Sancia, Indaco, 71May69, Aaccel, Calgary, 70May82, 72May80) were evaluated under two regimes (irrigated and water stress). Results revealed that grain yield and yield traits were negatively affected by water stress conditions compared with normal irrigation, and the maximum biological yield and harvest index were recorded under normal irrigation. Significant differences were observed among hybrids with respect to yield and yield traits. Based on the results, grain weight had more effect on grain yield than grain number during the grain filling stage under water stress conditions. In this regard, according to its low drought susceptibility index (smaller grain yield losses), the hybrid Indaco was more stable in grain number and grain weight. Consequently, it may be concluded that this hybrid would be recommended for use in future breeding programs for the production of drought-tolerant hybrids.
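
The drought susceptibility index referred to above is presumably the commonly used Fischer-Maurer form (an assumption; the abstract does not give the formula): DSI = (1 - Yd/Yp) / (1 - mean_Yd/mean_Yp), where Yd and Yp are a hybrid's yields under drought and full irrigation. A minimal sketch:

# Fischer-Maurer DSI; values below are illustrative, not from the paper.
def dsi(yd, yp, mean_yd, mean_yp):
    stress_intensity = 1 - mean_yd / mean_yp
    return (1 - yd / yp) / stress_intensity

print(dsi(6.5, 9.8, 6.5, 10.0))   # ~0.96; below 1 means relatively tolerant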

Keywords: Drought susceptibility index, grain filling, grain yield, maize, water stress.

1187 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. The important issues concerning data aggregation are time efficiency and energy consumption, owing to the limited energy of the nodes, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models have been adopted: the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR), with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, using latency as the performance measure.

Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
