Search results for: cloud computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 789

489 Personalizing Human Physical Life Routines Recognition over Cloud-Based Sensor Data Via Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS (micro-electro-mechanical systems) sensor-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns, but they still need improvement to anticipate the dynamics of human living patterns. This study presents state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmarked datasets. First, the raw data were processed with z-normalization and denoising methods. Then, statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features were extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while the HARTH and KU-HAR datasets achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.

Keywords: Artificial intelligence, machine learning, gait analysis, local binary pattern, statistical features, micro-electro-mechanical systems, maximum relevance and minimum redundancy.
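
To make the mRMR feature-selection step concrete, the sketch below greedily ranks features by mutual-information relevance minus average redundancy. The synthetic feature matrix, labels, and the use of scikit-learn's mutual-information estimators are illustrative assumptions, not the authors' pipeline.

    # Greedy mRMR sketch on synthetic data (illustrative, not the paper's code).
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))                  # stand-in multi-sensor features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # stand-in activity labels

    relevance = mutual_info_classif(X, y, random_state=0)   # MI(feature; label)
    selected = [int(np.argmax(relevance))]
    remaining = set(range(X.shape[1])) - set(selected)

    k = 5  # number of features to keep
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in remaining:
            # redundancy = mean MI between candidate j and already selected features
            redundancy = np.mean([
                mutual_info_regression(X[:, [s]], X[:, j], random_state=0)[0]
                for s in selected
            ])
            score = relevance[j] - redundancy       # mRMR difference criterion
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.discard(best_j)

    print("selected feature indices:", selected)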

488 Teaching Computer Programming to Diverse Students: A Comparative, Mixed-Methods, Classroom Research Study

Authors: Almudena Konrad, Tomás Galguera

Abstract:

Lack of motivation and interest is a serious obstacle to students’ learning computing skills. A need exists for a knowledge base on effective pedagogy and curricula to teach computer programming. This paper presents results from research evaluating a six-year project designed to teach complex concepts in computer programming collaboratively, while supporting students to continue developing their computational thinking and related coding skills individually. Utilizing a quasi-experimental, mixed-methods design, the pedagogical approaches and methods were assessed in two contrasting groups of students with different socioeconomic status, gender, and age composition. Analyses of quantitative data from Likert-scale surveys and an evaluation rubric, combined with qualitative data from reflective writing exercises and semi-structured interviews, yielded convincing evidence of the project’s success at both teaching and inspiring students.

Keywords: Computational thinking, computing education, computer programming curriculum, logic, teaching methods.

487 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis

Authors: Mert Bal, Hayri Sever, Oya Kalıpsız

Abstract:

Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can perform inference despite lack of information and uncertainty. In such systems, the uncertainty is modeled using various soft computing methods such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming, and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are presented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to attribute data sets in order to reduce attributes and decrease the complexity of computation.

Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
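
As a toy illustration of the discernibility-based attribute reduction mentioned in the abstract, the sketch below drops symptom attributes as long as objects with different diseases stay discernible. The symptom/disease table is invented for illustration only.

    # Toy rough-set style attribute reduction (invented table, illustrative only).
    from itertools import combinations

    # rows: (symptom values ...), decision/disease label
    table = [
        ((1, 0, 1), "flu"),
        ((1, 1, 0), "cold"),
        ((0, 0, 1), "flu"),
        ((0, 1, 0), "cold"),
    ]

    def discerns(attrs, table):
        """True if objects with different decisions differ on some attribute in attrs."""
        for (a, da), (b, db) in combinations(table, 2):
            if da != db and all(a[i] == b[i] for i in attrs):
                return False
        return True

    attrs = list(range(len(table[0][0])))
    reduct = list(attrs)
    for i in attrs:                      # greedy elimination
        candidate = [a for a in reduct if a != i]
        if discerns(candidate, table):
            reduct = candidate

    print("attribute reduct (symptom indices):", reduct)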

486 Tipover Stability Enhancement of Wheeled Mobile Manipulators Using an Adaptive Neuro-Fuzzy Inference Controller System

Authors: A. Ghaffari, A. Meghdari, D. Naderi, S. Eslami

Abstract:

In this paper, an algorithm based on an adaptive neuro-fuzzy controller is provided to enhance the tipover stability of mobile manipulators when they are subjected to predefined trajectories for the end-effector and the vehicle. The controller creates proper configurations for the manipulator to prevent the robot from being overturned. The optimal configuration, and thus the most favorable control, is obtained through soft computing approaches including a combination of genetic algorithm, neural networks, and fuzzy logic. In the proposed algorithm, a look-up table is designed from the values obtained by the genetic algorithm so as to minimize the performance index; using this database, rule bases are designed for the ANFIS controller and applied to the actuators to enhance the tipover stability of the mobile manipulator. A numerical example is presented to demonstrate the effectiveness of the proposed algorithm.

Keywords: Mobile Manipulator, Tipover Stability Enhancement, Adaptive Neuro-Fuzzy Inference Controller System, Soft Computing.
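
The following is a deliberately simplified sketch of the look-up-table idea: a state bin maps to a configuration command that, in the paper's setting, would come from the genetic-algorithm optimization. The bins, stored offsets, and speed-based state are placeholder assumptions, not the authors' data or controller.

    # Look-up-table controller sketch (placeholder bins and values, illustrative only).
    import bisect

    speed_bins = [0.2, 0.5, 1.0]          # m/s breakpoints (illustrative)
    # table[i] = manipulator joint offset (rad) chosen offline to minimize a
    # hypothetical tipover performance index for speed bin i
    lookup_table = [0.00, 0.05, 0.12, 0.20]

    def configuration_command(vehicle_speed):
        """Return the joint offset stored for the bin containing vehicle_speed."""
        return lookup_table[bisect.bisect_right(speed_bins, vehicle_speed)]

    for v in (0.1, 0.4, 0.8, 1.5):
        print(f"speed {v:.1f} m/s -> joint offset {configuration_command(v):.2f} rad")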

485 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer allows the optimal network parameters that fit the mapping data to be found. Moreover, it decreases the training time during the computation process, which avoids the need for computers with high memory usage.

Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
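
A minimal sketch of fitting a one-dimensional input-output mapping with a small neural network is given below; the target function, network size, and training settings are illustrative and do not reproduce the CANN configuration of the paper.

    # One-dimensional input-output mapping with a small MLP (illustrative settings).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
    y = np.sin(3.0 * x).ravel()                     # continuous function to model

    net = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                       max_iter=5000, random_state=1)
    net.fit(x, y)

    x_test = np.array([[-0.5], [0.0], [0.5]])
    print("predictions:", net.predict(x_test))
    print("targets:    ", np.sin(3.0 * x_test).ravel())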

484 Employee Motivation Factors That Affect Job Performance of Suan Sunandha Rajabhat University Employee

Authors: Orawan Boriban, Phatthanan Chaiyabut

Abstract:

The purpose of this research is to study motivation factors and their relation to job performance, to compare motivation factors across personal factors such as gender, age, income, educational level, marital status, and working duration, and to study the relationship between motivation factors and job performance with respect to job satisfaction. The sample group utilized in this research consisted of 400 Suan Sunandha Rajabhat University employees. This is quantitative research using questionnaires as the research instrument. The statistics applied for data analysis included percentage, mean, and standard deviation. In addition, difference analysis was conducted by computing t values, one-way analysis of variance, and Pearson’s correlation coefficients. The findings showed that the aspects of job promotion and salary were at moderate levels. Additionally, the findings showed that the motivations that affected the revenue branch chiefs’ job performance were job security, job accomplishment, policy and management, job promotion, and interpersonal relations.

Keywords: Motivation Factors, Job Performance, Suan Sunandha Rajabhat University Employee.
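
The statistics named in the abstract (t value, one-way ANOVA, Pearson correlation) can be sketched as follows with SciPy; the survey-style scores are invented placeholders, not the study's data.

    # t-test, one-way ANOVA, and Pearson correlation on invented survey-style scores.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    motivation_male = rng.normal(3.6, 0.5, 60)      # Likert-style motivation scores
    motivation_female = rng.normal(3.8, 0.5, 60)
    age_groups = [rng.normal(m, 0.5, 40) for m in (3.5, 3.7, 3.9)]
    job_performance = 0.6 * motivation_male + rng.normal(0, 0.3, 60)

    t_stat, t_p = stats.ttest_ind(motivation_male, motivation_female)
    f_stat, f_p = stats.f_oneway(*age_groups)
    r, r_p = stats.pearsonr(motivation_male, job_performance)

    print(f"t-test:  t={t_stat:.2f}, p={t_p:.3f}")
    print(f"ANOVA:   F={f_stat:.2f}, p={f_p:.3f}")
    print(f"Pearson: r={r:.2f}, p={r_p:.3f}")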

483 Memory Leak Detection in Distributed System

Authors: Roohi Shabrin S., Devi Prasad B., Prabu D., Pallavi R. S., Revathi P.

Abstract:

Due to memory leaks, often valuable system memory gets wasted and is denied to other processes, thereby affecting computational performance. If an application's memory usage exceeds the virtual memory size, it can lead to a system crash. Current memory leak detection techniques for clusters are reactive and display the memory leak information after the execution of the process (they detect a memory leak only after it occurs). This paper presents a Dynamic Memory Monitoring Agent (DMMA) technique. The DMMA framework is a dynamic memory leak detector that identifies memory leaks while an application is in its execution phase. When a memory leak in any process in the cluster is identified, DMMA informs the end users so that they can take corrective actions, and it also submits the affected process to a healthy node in the system, thus providing reliable service to the user. DMMA maintains information about the memory consumption of executing processes, and based on this information and critical states, DMMA can improve the reliability and efficaciousness of cluster computing.

Keywords: Dynamic Memory Monitoring Agent (DMMA), Cluster Computing, Memory Leak, Fault Tolerant Framework, Dynamic Memory Leak Detection (DMLD).
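
A bare-bones sketch of the monitoring idea is shown below: poll per-process memory use with psutil and flag processes whose resident memory keeps growing across samples. The thresholds and polling scheme are assumptions, and the notification and process-migration parts of DMMA are not modeled.

    # Poll per-process memory and flag monotonically growing processes (illustrative).
    import time
    import psutil

    GROWTH_SAMPLES = 3          # consecutive growing samples before flagging
    history = {}                # pid -> list of recent RSS readings (bytes)

    def sample_once():
        for proc in psutil.process_iter(["pid", "name", "memory_info"]):
            info = proc.info
            if info["memory_info"] is None:
                continue
            rss = info["memory_info"].rss
            readings = history.setdefault(info["pid"], [])
            readings.append(rss)
            del readings[:-GROWTH_SAMPLES - 1]        # keep a short window
            if len(readings) > GROWTH_SAMPLES and all(
                a < b for a, b in zip(readings, readings[1:])
            ):
                print(f"possible leak: pid={info['pid']} name={info['name']} "
                      f"rss={rss / 1e6:.1f} MB")

    for _ in range(5):          # a few polling rounds for demonstration
        sample_once()
        time.sleep(2)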

482 An Ant Colony Optimization for Dynamic Job Scheduling in Grid Environment

Authors: Siriluck Lorpunmanee, Mohd Noor Sap, Abdul Hanan Abdullah, Chai Chompoo-inwai

Abstract:

Grid computing is growing rapidly in distributed heterogeneous systems for utilizing and sharing large-scale resources to solve complex scientific problems. Scheduling is the most recent topic used to achieve high performance in grid environments. It aims to find a suitable allocation of resources for each job. A typical problem which arises during this task is the scheduling decision: the effective utilization of processors to minimize the tardiness time of a job when it is being scheduled. This paper, therefore, addresses the problem by developing a general framework of grid scheduling using dynamic information and an ant colony optimization algorithm to improve the scheduling decision. The performance of various dispatching rules such as First Come First Served (FCFS), Earliest Due Date (EDD), Earliest Release Date (ERD), and an Ant Colony Optimization (ACO) is compared. Moreover, the benefit of using an Ant Colony Optimization for performance improvement of grid scheduling is also discussed. It is found that the scheduling system using an Ant Colony Optimization algorithm can efficiently and effectively allocate jobs to proper resources.

Keywords: Grid computing, Distributed heterogeneous system, Ant colony optimization algorithm, Grid scheduling, Dispatching rules.
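
A toy comparison of two of the dispatching rules mentioned above, FCFS and EDD, on a single resource is sketched below; the job set is invented, and the grid-specific dynamic information and the ACO algorithm itself are not modeled.

    # FCFS vs. EDD total tardiness on an invented job set (single resource).
    jobs = [  # (name, release_order, processing_time, due_date)
        ("j1", 0, 4, 5),
        ("j2", 1, 2, 4),
        ("j3", 2, 6, 14),
        ("j4", 3, 3, 7),
    ]

    def total_tardiness(sequence):
        t, tardiness = 0, 0
        for _, _, p, d in sequence:
            t += p
            tardiness += max(0, t - d)
        return tardiness

    fcfs = sorted(jobs, key=lambda j: j[1])      # First Come First Served
    edd = sorted(jobs, key=lambda j: j[3])       # Earliest Due Date

    print("FCFS tardiness:", total_tardiness(fcfs))
    print("EDD  tardiness:", total_tardiness(edd))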

481 Power System Security Constrained Economic Dispatch Using Real Coded Quantum Inspired Evolution Algorithm

Authors: A. K. Al-Othman, F. S. Al-Fares, K. M. EL-Nagger

Abstract:

This paper presents a new optimization technique based on quantum computing principles to solve the security constrained power system economic dispatch problem (SCED). The proposed technique is a population-based algorithm which uses some quantum computing elements in coding and evolving groups of potential solutions to reach the optimum following a partially directed random approach. The SCED problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The Real Coded Quantum-Inspired Evolution Algorithm (RQIEA) is then applied to solve the constrained optimization formulation. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that RQIEA is well suited to solving the security constrained power system economic dispatch problem (SCED).

Keywords: State Estimation, Fuzzy Linear Regression, Fuzzy Linear State Estimator (FLSE), Measurements Uncertainty.

480 Software Effort Estimation Models Using Radial Basis Function Network

Authors: E. Praynlin, P. Latha

Abstract:

Software effort estimation is the process of estimating the effort required to develop software. By estimating the effort, the cost and schedule required to develop the software can be determined. An accurate estimate helps the developer allocate resources accordingly in order to avoid cost and schedule overruns. Several methods are available to estimate the effort, among which soft computing based methods play a prominent role. Software cost estimation involves a great deal of uncertainty, and among soft computing methods the neural network is good at handling uncertainty. In this paper, the Radial Basis Function Network is compared with the back propagation network; the results are validated using six data sets, and it is found that RBFN is better suited to estimating the effort. The results are validated using two tests: the error test and the statistical test.

Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).
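
A compact radial basis function network sketch for effort estimation is shown below: centers from k-means, Gaussian activations, output weights by least squares, evaluated with MMRE. The effort data are synthetic placeholders rather than one of the paper's six datasets.

    # RBF network for effort estimation on synthetic data, scored with MMRE.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    size_kloc = rng.uniform(5, 100, 80).reshape(-1, 1)      # project size feature
    effort = 2.5 * size_kloc.ravel() ** 1.05 + rng.normal(0, 10, 80)

    centers = KMeans(n_clusters=8, n_init=10, random_state=7).fit(size_kloc).cluster_centers_
    width = np.mean(np.abs(size_kloc - size_kloc.T)) + 1e-9  # shared RBF width

    def activations(x):
        return np.exp(-((x - centers.T) ** 2) / (2 * width ** 2))

    Phi = activations(size_kloc)                    # (n_samples, n_centers)
    weights, *_ = np.linalg.lstsq(Phi, effort, rcond=None)
    pred = Phi @ weights

    mmre = np.mean(np.abs(effort - pred) / np.abs(effort))   # Mean Magnitude of Relative Error
    print(f"MMRE on training data: {mmre:.3f}")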

479 Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network

Authors: Insung Jung, Gi-Nam Wang

Abstract:

The objective of this paper is to design a pattern classification model based on the back-propagation (BP) algorithm for decision support systems. The standard BP model uses full connections between the nodes of successive layers from input to output. Therefore, it takes a great deal of computing time and many iterations to reach good performance and an acceptable error rate when generating patterns or training the network. In contrast, the proposed model uses exclusive connections between hidden layer nodes and output nodes. The advantages of this model are a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification data cases and different settings of network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). During our simulations, we found that in most cases the BP model with the exclusive connection network outperformed the standard BP model. We expect that this algorithm can be applied to identification of user faces, analysis of data, and mapping between environmental data and information.

Keywords: Neural network, Back-propagation, classification.

478 An Enhanced Distributed System to Improve the Time Complexity of Binary Indexed Trees

Authors: Ahmed M. Elhabashy, A. Baes Mohamed, Abou El Nasr Mohamad

Abstract:

Distributed computing systems are usually considered the most suitable model for practical solutions of many parallel algorithms. In this paper, an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system has shown that it reduces the time complexity of the read query to O(Log(Log(N))) and the update query to constant complexity, while the naive solution has a time complexity of O(Log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation has shown that the overhead resulting from the wiring and communication between the system fragments can be fairly neglected, which makes it applicable to practically reach the maximum speed-up offered by the proposed model.

Keywords: Binary Index Tree (BIT), Least Significant Bit (LSB), Parallel Adder (PA), Very High Speed Integrated Circuits Hardware Description Language (VHDL), Distributed Parallel Computing System (DPCS).
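
For reference, a single-processor Binary Indexed Tree with the O(Log(N)) point update and prefix-sum query that the distributed design aims to improve can be sketched as follows; the distributed memory layout itself is not shown.

    # Classic Fenwick tree: O(log N) update and prefix-sum query.
    class BinaryIndexedTree:
        def __init__(self, n):
            self.n = n
            self.tree = [0] * (n + 1)          # 1-based internal array

        def update(self, i, delta):
            """Add delta to element i (1-based) in O(log N)."""
            while i <= self.n:
                self.tree[i] += delta
                i += i & (-i)                  # move to next responsible node

        def prefix_sum(self, i):
            """Sum of elements 1..i in O(log N)."""
            s = 0
            while i > 0:
                s += self.tree[i]
                i -= i & (-i)                  # strip the least significant bit
            return s

    bit = BinaryIndexedTree(8)
    for idx, val in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
        bit.update(idx, val)
    print(bit.prefix_sum(4))   # 3 + 1 + 4 + 1 = 9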

477 Developing a Sustainable Educational Portal for the D-Grid Community

Authors: Viktor Achter, Sebastian Breuers, Marc Seifert, Ulrich Lang, Joachim Götze, Bernd Reuther, Paul Müller

Abstract:

Within the last years, several technologies have been developed to help build e-learning portals. Most of them follow approaches that deliver a vast amount of functionality suitable for class-like learning. The SuGI project, as part of the D-Grid (funded by the BMBF), aims at delivering a highly scalable and sustainable learning solution to provide materials (e.g. learning modules, training systems, webcasts, tutorials, etc.) containing knowledge about Grid computing to the D-Grid community. In this article, the process of the development of an e-learning portal focused on the requirements of this special user group is described. Furthermore, it deals with the conceptual and technical design of an e-learning portal, addressing the special needs of heterogeneous target groups. The main focus lies on the quality management of the software development process, Web templates for uploading new contents, and the rich search and filter functionalities, which are described from a conceptual as well as a technical point of view. Specifically, it points out best practices as well as concepts to provide a sustainable solution to a relatively unknown and highly heterogeneous community.

Keywords: D-Grid, e-learning, e-science, Grid computing, SuGI.

476 A Framework for Scalable Autonomous P2P Resource Discovery for the Grid Implementation

Authors: Hesham A. Ali, Mofreh M. Salem, Ahmed A. Hamza

Abstract:

Recently, there have been considerable efforts towards the convergence of P2P and Grid computing in order to reach a solution that takes the best of both worlds by exploiting the advantages that each offers. Augmenting the peer-to-peer model with the services of the Grid promises to eliminate bottlenecks and ensure greater scalability, availability, and fault tolerance. The Grid Information Service (GIS) directly influences quality of service for grid platforms. Most of the proposed solutions for decentralizing the GIS are based on completely flat overlays. The main contributions of this paper are the investigation of a novel resource discovery framework for Grid implementations based on a hierarchy of structured peer-to-peer overlay networks, and the introduction of a discovery algorithm utilizing the proposed framework. Validation of the framework's performance is done via simulation. Experimental results show that the proposed organization has the advantage of being scalable while providing fault isolation, effective bandwidth utilization, and hierarchical access control. In addition, it leads to a reliable, guaranteed sub-linear search which returns results within a bounded interval of time and with a smaller amount of generated traffic within each domain.

Keywords: Grid computing, grid information service, P2P, resource discovery.

475 WhatsApp as Part of a Blended Learning Model to Help Programming Novices

Authors: Tlou J. Ramabu

Abstract:

Programming is one of the challenging subjects in the field of computing. In the higher education sphere, some programming novices’ performance, retention rate, and success rate are not improving. Most of the time, the problem is caused by the slow pace of learning, difficulty in grasping the syntax of the programming language, and poor logical skills. More importantly, programming forms part of major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss the WhatsApp group as part of a blended learning model incorporated for a group of programming novices. We discuss possible administrative activities for productive utilisation of the WhatsApp group within the blended learning model. The aim is to take advantage of the popularity of WhatsApp and the time students spend on it for their educational purposes. We believe that blended learning featuring a WhatsApp group may ease novices’ cognitive load and strengthen their foundational programming knowledge and skills. This is a work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.

Keywords: Blended learning, higher education, WhatsApp, programming, novices, lecturers.

474 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data

Authors: Saeid Gharechelou, Ryutaro Tateishi

Abstract:

Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities occur, is the major cause of casualties from earthquakes. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was shaken hard by the 2015 Nepal earthquake. Though many existing studies have conducted optical-data-based estimation or suggested the combined use of optical and SAR data for improved accuracy, finding cloud-free optical images when they are urgently needed is not assured. Therefore, this research specializes in developing a SAR-based technique with the target of rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, it offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. The pre-seismic, co-seismic, and post-seismic InSAR coherence was used to detect the change in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for detection of the damaged area. The ground truth data collected in the field were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR data. Since the availability of cloud-free images when urgently needed after an earthquake event is not assured, further research on improving SAR-based damage detection is suggested. It is expected that the quick reporting of the post-disaster damage situation, quantified by the rapid earthquake assessment, will assist in channelling the rescue and emergency operations and in informing the public about the scale of damage.

Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid monitoring, 2015-Nepal earthquake.
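
A schematic sketch of the coherence-change idea follows: pixels whose interferometric coherence drops sharply from the pre-seismic to the co-seismic pair are flagged as potentially damaged. The coherence rasters and threshold are synthetic assumptions, not processed Sentinel-1A data.

    # Flag pixels with a large pre- to co-seismic coherence drop (synthetic rasters).
    import numpy as np

    rng = np.random.default_rng(3)
    pre_coherence = np.clip(rng.normal(0.7, 0.1, (100, 100)), 0, 1)
    co_coherence = pre_coherence.copy()
    co_coherence[40:60, 40:60] -= 0.4            # simulated damaged block
    co_coherence = np.clip(co_coherence, 0, 1)

    coherence_drop = pre_coherence - co_coherence
    damage_mask = coherence_drop > 0.25          # illustrative threshold

    print("flagged pixels:", int(damage_mask.sum()), "of", damage_mask.size)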

473 Using Suffix Tree Document Representation in Hierarchical Agglomerative Clustering

Authors: Daniel I. Morariu, Radu G. Cretulescu, Lucian N. Vintan

Abstract:

In the text categorization problem, the most used method for document representation is based on word frequency vectors, called VSM (Vector Space Model). This representation is based only on the words from the documents and therefore loses any “word context” information found in the document. In this article we make a comparison between the classical method of document representation and a method called Suffix Tree Document Model (STDM) that is based on representing documents in the suffix tree format. For the STDM model we propose a new approach for document representation and a new formula for computing the similarity between two documents. Thus we propose to build the suffix tree only for any two documents at a time. This approach is faster, has lower memory consumption, and uses the entire document representation without requiring methods for disposing of nodes. A formula for computing the similarity between documents is also proposed for this method, which substantially improves the clustering quality. This representation method was validated using HAC - Hierarchical Agglomerative Clustering. In this context we also experiment with the influence of stemming in the document preprocessing step and highlight the difference between similarity and dissimilarity measures for finding “closer” documents.

Keywords: Text Clustering, Suffix tree document representation, Hierarchical Agglomerative Clustering.
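
For context, the classical VSM baseline that the suffix-tree model is compared against can be sketched with TF-IDF vectors and cosine similarity; the two example documents are invented, and the STDM similarity formula itself is not reproduced here.

    # VSM baseline: TF-IDF vectors and cosine similarity between two toy documents.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "suffix tree document representation for text clustering",
        "vector space model representation for document clustering",
    ]
    tfidf = TfidfVectorizer().fit_transform(docs)
    sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    print(f"VSM cosine similarity: {sim:.3f}")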

472 RUPSec: An Extension on RUP for Developing Secure Systems - Requirements Discipline

Authors: Mohammad Reza Ayatollahzadeh Shirazi, Pooya Jaferian, Golnaz Elahi, Hamid Baghi, Babak Sadeghian

Abstract:

The world is moving rapidly toward the deployment of information and communication systems. Nowadays, computing systems with their fast growth are found everywhere, and one of the main challenges for these systems is the increasing number of attacks and security threats against them. Thus, capturing, analyzing and verifying security requirements becomes a very important activity in the development process of computing systems, especially in developing systems such as banking, military and e-business systems. For developing every system, a process model which includes a process, methods and tools is chosen. The Rational Unified Process (RUP) is one of the most popular and complete process models which has been used by developers in recent years. This process model should be extended to be used in developing secure software systems. In this paper, the Requirements Discipline of RUP is extended to improve RUP for developing secure software systems. These proposed extensions add and integrate a number of Activities, Roles, and Artifacts into RUP in order to capture, document and model threats and security requirements of the system. These extensions introduce a group of clear and stepwise activities to developers. By following these activities, developers ensure that security requirements are captured and modeled. These models are used in design, implementation and test activities.

Keywords:

471 A Medical Images Based Retrieval System using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large, varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever increasing quantities and used for diagnostics and therapy. In several articles, content-based access to medical images for supporting clinical decision making has been proposed, which would ease the management of clinical data, and scenarios for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS) have been created. This paper gives an overview of soft computing techniques, and new research directions are being defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text based retrieval methods as they exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

470 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank

Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain

Abstract:

This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks including a storage tank. To increase the operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI) fed unit. The main unit comprises a smart star/delta starter, regenerative fluid clutch, switched VAR compensator, and hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no load, and a logic algorithm is developed for best energy cost reduction. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide a smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit works at low flow rates and improves the system efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed one ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference. This is critically important in the case of water networks. Theoretical analysis, computer simulations, a cost study, as well as efficiency evaluation using timely cascaded energy-conservative systems, are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.

Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.

469 Improved Multi–Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation

Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta

Abstract:

Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results conclude that the proposed optimization algorithm has superior performance compared to the existing conventional computing and nature-inspired optimization algorithms for finding OGRs in terms of ruler length, total optical channel bandwidth, and computation time.

Keywords: Channel allocation, conventional computing, four–wave mixing, nature–inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi–objective Firefly algorithms, Pareto optimal.
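
A small helper illustrating the defining property of a Golomb ruler (all pairwise mark differences distinct), its length, and the spacings it induces is sketched below; the example ruler is a known small ruler, and the firefly optimization loop itself is not sketched.

    # Check the Golomb ruler property and report length and adjacent spacings.
    from itertools import combinations

    def is_golomb_ruler(marks):
        diffs = [b - a for a, b in combinations(sorted(marks), 2)]
        return len(diffs) == len(set(diffs))

    ruler = [0, 1, 4, 9, 11]            # order-5 ruler, length 11
    print("valid Golomb ruler:", is_golomb_ruler(ruler))
    print("ruler length:", max(ruler) - min(ruler))
    print("adjacent spacings:", [b - a for a, b in zip(sorted(ruler), sorted(ruler)[1:])])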

468 Production of Biodiesel from Different Edible Oils

Authors: Amir Shafeeq, Ayyaz Muhammad, Noman Hassan, Rofice Dickson

Abstract:

Different vegetable oil based biodiesels (FAMEs) were prepared by alkaline transesterification using refined oils as well as waste frying oil (WFO). Methanol and sodium hydroxide were used as the catalyst under similar reaction conditions. To ensure the quality of the biodiesel produced, a series of different ASTM standard tests were carried out. In this context, various tests were done, including viscosity, carbon residue, specific gravity, corrosion test, flash point, cloud point and pour point. Results revealed that the characteristics of biodiesel depend on the feedstock and that it is far better than petroleum diesel.

Keywords: Biodiesel, Edible oils, Separation.

467 An Ontology for Spatial Relevant Objects in a Location-aware System: Case Study: A Tourist Guide System

Authors: N. Neysani Samany, M.R. Delavar, N. Chrisman, M.R. Malek

Abstract:

Location-aware computing is a type of pervasive computing that utilizes the user's location as a dominant factor for providing urban services and application-related usages. One of the important urban services is navigation instruction for wayfinders in a city, especially when the user is a tourist. The services which are presented to tourists should provide adapted location-aware instructions. In order to achieve this goal, the main challenge is to find spatially relevant objects and location-dependent information. The aim of this paper is the development of a reusable location-aware model to handle spatial relevancy parameters in urban location-aware systems. To this end, we utilized an ontology as an approach that could manage spatial relevancy by defining a generic model. Our contribution is the introduction of an ontological model based on directed interval algebra principles. Indeed, it is assumed that the basic elements of our ontology are the spatial intervals for the user and his/her related contexts. The relationships between them model the spatial relevancy parameters. The implementation language for the model is OWL, a web ontology language. The achieved results show that our proposed location-aware model and the application adaptation strategies provide appropriate services for the user.

Keywords: Spatial relevancy, Context-aware, Ontology, Modeling

466 A Multi-Level Web-Based Parallel Processing System: A Hierarchical Volunteer Computing Approach

Authors: Abdelrahman Ahmed Mohamed Osman

Abstract:

Over the past few years, a number of efforts have been exerted to build parallel processing systems that utilize the idle power of LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expenses of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, and accordingly the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components. These components are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may also delegate some of the load to other lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time get a high degree of trust. Nodes that fail to satisfy their contracts get a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated. Communications and messages between nodes are very secure. The system has been implemented using C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.

Keywords: Volunteer computing, Parallel Processing, XML Web Services, .NET Remoting, Tuplespace.
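
A toy version of the behavior-based scheduling idea is sketched below: each volunteer node keeps a trust score that rises when it returns work within the expected time and falls otherwise, and new work goes to the most trusted, least loaded node. The field names and update constants are illustrative assumptions.

    # Trust-based delegation sketch (field names and constants are invented).
    nodes = {
        "volunteer-a": {"trust": 0.8, "load": 2},
        "volunteer-b": {"trust": 0.5, "load": 0},
        "volunteer-c": {"trust": 0.9, "load": 5},
    }

    def record_result(name, on_time, step=0.1):
        t = nodes[name]["trust"]
        nodes[name]["trust"] = min(1.0, t + step) if on_time else max(0.0, t - step)

    def pick_node():
        # highest trust first, then lowest current load
        return max(nodes, key=lambda n: (nodes[n]["trust"], -nodes[n]["load"]))

    record_result("volunteer-b", on_time=True)
    record_result("volunteer-c", on_time=False)
    chosen = pick_node()
    nodes[chosen]["load"] += 1
    print("work delegated to:", chosen, nodes[chosen])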

465 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of time execution and number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably saves the slicing time and effort required to measure module parallelism and functional cohesion.

Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.
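
A minimal backward-slice sketch follows: with a program dependence graph stored as "statement depends on statements" edges, the backward slice of a criterion is everything reachable through those edges. The tiny PDG is invented, and the single-traversal all-slices algorithm of the paper is not reproduced.

    # Backward slice as reverse reachability over an invented PDG.
    from collections import deque

    # PDG: node -> set of nodes it (data/control) depends on
    pdg = {
        "s1: x = input()": set(),
        "s2: y = x + 1":   {"s1: x = input()"},
        "s3: z = 2":       set(),
        "s4: print(y)":    {"s2: y = x + 1"},
        "s5: print(z)":    {"s3: z = 2"},
    }

    def backward_slice(criterion):
        seen, queue = {criterion}, deque([criterion])
        while queue:
            node = queue.popleft()
            for dep in pdg[node]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

    print(sorted(backward_slice("s4: print(y)")))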

464 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides, computing the number of waves ridden in a surfing session, the starting time of each wave and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, start time and duration. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their duration can be accurately determined using the smartphone GPS and IMU.

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.
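
A simplified version of the first (GPS-only) approach is sketched below: a wave ride starts when the speed exceeds a start threshold and ends when it falls below a stop threshold. The sampled speeds and thresholds are made-up numbers, and the IMU fusion of the second approach is not included.

    # Detect wave rides from GPS-derived speed using two thresholds (made-up data).
    import numpy as np

    speed = np.array([0.5, 0.7, 3.5, 4.2, 4.0, 3.8, 0.9, 0.6, 3.9, 4.1, 0.7])  # m/s, 1 Hz
    START_THRESHOLD, STOP_THRESHOLD = 3.0, 1.5

    rides, start = [], None
    for t, v in enumerate(speed):
        if start is None and v >= START_THRESHOLD:
            start = t
        elif start is not None and v < STOP_THRESHOLD:
            rides.append((start, t - 1, t - start))   # (start s, end s, duration s)
            start = None

    print("number of waves:", len(rides))
    for s, e, d in rides:
        print(f"wave from t={s}s to t={e}s, duration {d}s")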

463 Performance Assessment of Computational Grid on Weather Indices from HOAPS Data

Authors: Madhuri Bhavsar, Anupam K Singh, Shrikant Pradhan

Abstract:

Long term rainfall analysis and prediction is a challenging task, especially in the modern world where the impact of global warming is creating complications in environmental issues. These factors, which are data intensive, require high performance computational modeling for accurate prediction. This research paper describes a prototype which is designed and developed on a grid environment using a number of coupled software infrastructural building blocks. This grid enabled system provides the demanding computational power, efficiency, resources, user-friendly interface, secured job submission and high throughput. The results obtained using sequential execution and grid enabled execution show that computational performance was enhanced by between 36% and 75% for a decade of climate parameters. The large variation in performance can be attributed to the varying degree of computational resources available for job execution. Grid computing enables the dynamic runtime selection, sharing and aggregation of distributed and autonomous resources, which plays an important role not only in business, but also in scientific implications and social surroundings. This research paper attempts to explore the grid enabled computing capabilities on weather indices from HOAPS data for climate impact modeling and change detection.

Keywords: Climate model, Computational Grid, Grid Application, Heterogeneous Grid.

462 Determination of Skills Gap between School-Based Learning and Laboratory-Based Learning in Omar Al-Mukhtar University

Authors: Aisha Othman, Crinela Pislaru, Ahmed Impes

Abstract:

This paper provides an identification of the existing practical skills gap between school-based learning (SBL) and laboratory based learning (LBL) in the Computing Department within the Faculty of Science at Omar Al-Mukhtar University in Libya. A survey has been conducted and the first author has elicited the responses of two groups of stakeholders, namely the academic teachers and students.

The primary goal is to review the main strands of evidence available and argue that there is a gap between laboratory and school-based learning in terms of opportunities for experiment and application of skills. In addition, the nature of experimental work within the laboratory at Omar Al-Mukhtar University needs to be reconsidered. Another goal of our study was to identify the reasons for students’ poor performance in the laboratory and to determine how this poor performance can be eliminated by the modification of teaching methods. Bloom’s taxonomy of learning outcomes has been applied in order to classify questions and problems into categories, and the survey was formulated with reference to third year Computing Department students. Furthermore, to discover students’ opinions with respect to all the issues, an exercise was conducted. The survey provided questions related to what the students had learnt and how well they had learnt. We were also interested in feedback on how to improve the course and the final question provided an opportunity for such feedback.

Keywords: Bloom’s taxonomy, e-learning, Omar Al-Mukhtar University.

461 Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering

Authors: Mohamed A. Mahfouz, M. A. Ismail

Abstract:

This paper introduces new algorithms (a fuzzy relative of the CLARANS algorithm, FCLARANS, and fuzzy c-medoids based on randomized search, FCMRANS) for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given the current memberships, FCLARANS minimizes the same objective function minimized by FCMdd by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise because outliers may join the computation of medoids, while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and, therefore, is less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of the Reuters-21578 and Newsgroups (20NG) collections and generated datasets with noise show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient, and their outputs are almost the same as that of RFCMdd in terms of classification rate.

Keywords: Data Mining, Fuzzy Clustering, Relational Clustering, Medoid-Based Clustering, Cluster Analysis, Unsupervised Learning.
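
The fuzzy-membership step shared by FCMdd-style algorithms can be sketched as follows: given current medoids and a relational dissimilarity matrix, memberships follow the standard inverse-distance formula and the objective sums weighted dissimilarities. The dissimilarity matrix and fuzzifier are illustrative, and the randomized medoid search of FCLARANS/FCMRANS is not shown.

    # Fuzzy memberships and objective for fixed medoids (illustrative data).
    import numpy as np

    rng = np.random.default_rng(5)
    points = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
    dissim = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)

    medoids = [0, 10]        # current medoid indices (one per cluster)
    m = 2.0                  # fuzzifier

    d = dissim[medoids, :] + 1e-12                 # (clusters, objects)
    u = d ** (-1.0 / (m - 1.0))
    u /= u.sum(axis=0, keepdims=True)              # memberships sum to 1 per object

    objective = np.sum((u ** m) * dissim[medoids, :])
    print("membership of object 5 in each cluster:", np.round(u[:, 5], 3))
    print("objective value:", round(float(objective), 3))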

460 Design of an Ensemble Learning Behavior Anomaly Detection Framework

Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia

Abstract:

Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to companies' information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threats of malicious user activities effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present the results of tests of some detection methods on a representative access control dataset. The use of some of the explored classifiers gives results of up to 99% accuracy.

Keywords: Cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing.
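
A compact ensemble-learning sketch in the spirit of the abstract is given below: several classifiers combined by soft voting on a synthetic, imbalanced "user behavior" dataset. The features, dataset, and resulting accuracy are illustrative stand-ins for the access-control data used in the paper.

    # Soft-voting ensemble on a synthetic, imbalanced classification problem.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=1000, n_features=12, weights=[0.9, 0.1],
                               random_state=0)      # imbalanced: few "malicious" users
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("knn", KNeighborsClassifier()),
        ],
        voting="soft",
    )
    ensemble.fit(X_train, y_train)
    print("ensemble accuracy:", round(ensemble.score(X_test, y_test), 3))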
