Search results for: Feature Subset Selection

1130 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy, called Enhanced Latest Access Largest Weight (ELALW), is proposed. This strategy is an enhanced version of the Latest Access Largest Weight strategy. However, replication should be used wisely because the storage capacity of each Grid site is limited; it is therefore important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when several sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on response time, which is determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results obtained with OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.
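
The response-time criterion described above can be illustrated with a small sketch. This is not the paper's code: the field names, the additive model, and the weights are assumptions made only to show how such a replica ranking could be scored.

```python
# Illustrative sketch only: rank candidate replicas by an estimated response
# time built from the factors listed in the abstract (all values are made up).
from dataclasses import dataclass

@dataclass
class Replica:
    site: str
    transfer_time: float    # estimated data transfer time (s)
    storage_latency: float  # storage access latency (s)
    queued_requests: int    # replica requests waiting in the storage queue
    distance: float         # network distance (hops) to the requesting node

def estimated_response_time(r: Replica, per_request: float = 0.5, per_hop: float = 0.1) -> float:
    # Simple additive model combining the four factors named in the abstract.
    return r.transfer_time + r.storage_latency + r.queued_requests * per_request + r.distance * per_hop

def select_best_replica(replicas) -> Replica:
    return min(replicas, key=estimated_response_time)

candidates = [Replica("siteA", 3.0, 0.2, 4, 2.0),
              Replica("siteB", 2.5, 0.4, 0, 5.0),
              Replica("siteC", 4.0, 0.1, 1, 1.0)]
print(select_best_replica(candidates).site)
```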

Keywords: Data grid, data replication, simulation, replica selection, replica placement.

1129 Identifying Factors Contributing to the Spread of Lyme Disease: A Regression Analysis of Virginia’s Data

Authors: Fatemeh Valizadeh Gamchi, Edward L. Boone

Abstract:

This research focuses on Lyme disease, a widespread infectious condition in the United States caused by the bacterium Borrelia burgdorferi sensu stricto. It is critical to identify the environmental and economic factors that are contributing to the spread of the disease. This study examined data from Virginia to identify a subset of explanatory variables significant for Lyme disease case numbers. To identify relevant variables and avoid overfitting, linear Poisson regression and regularized regression methods with ridge, lasso, and elastic net penalties were employed. Cross-validation was performed to acquire the tuning parameters. The proposed methods can automatically identify relevant disease-count covariates. The efficacy of the techniques was assessed using four criteria on three simulated datasets. Finally, using the Virginia Department of Health’s Lyme disease dataset, the study successfully identified key factors, and the results were consistent with previous studies.
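
For concreteness, the objective behind these penalized Poisson fits can be written as a short sketch. This is an illustration only, not the study's code; the optimizer and the toy data are assumptions, and in practice the penalty strength and mixing parameter would be chosen by cross-validation as the abstract describes.

```python
# A log-linear Poisson model with an elastic-net penalty (lasso and ridge are
# the special cases l1_ratio = 1 and l1_ratio = 0). A quasi-Newton solver is
# used here for brevity; the non-smooth L1 term is normally handled by a
# dedicated coordinate-descent solver.
import numpy as np
from scipy.optimize import minimize

def penalized_poisson_nll(beta, X, y, alpha=0.1, l1_ratio=0.5):
    eta = X @ beta                          # linear predictor
    nll = np.mean(np.exp(eta) - y * eta)    # Poisson negative log-likelihood (up to a constant)
    penalty = alpha * (l1_ratio * np.abs(beta[1:]).sum()
                       + 0.5 * (1.0 - l1_ratio) * np.sum(beta[1:] ** 2))
    return nll + penalty                    # the intercept beta[0] is left unpenalized

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 5))])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.8, 0.0, -0.4, 0.0, 0.0])))
fit = minimize(penalized_poisson_nll, np.zeros(X.shape[1]), args=(X, y))
print(np.round(fit.x, 2))                   # coefficients shrunk toward zero by the penalty
```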

Keywords: Lyme disease, Poisson generalized linear model, ridge regression, lasso regression, elastic net regression.

1128 Choosing the Right Mutation Rate to Better Evolve Combinational Logic Circuits

Authors: Emanuele Stomeo, Tatiana Kalganova, Cyrille Lambert

Abstract:

Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area, and several different EAs have been introduced to tackle numerous problems, such as scalability and evolvability. However, every time a specific EA is chosen for solving a particular task, all of its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, must be selected in order to achieve the best results. Over the last three decades, the selection of the right parameters for the EA components for solving different test problems has been investigated. In this paper, the behaviour of the mutation rate for designing logic circuits, which has not been studied before, is deeply analyzed. The mutation rate for an EHW system modifies the number of inputs of each logic gate, the functionality (for example from AND to NOR), and the connectivity between logic gates. The behaviour of the mutation is analyzed with respect to the number of generations, genotype redundancy, and number of logic gates of the evolved circuits. The experimental results describe the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and they suggest the best mutation rate to be used for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate over the last 40 years is also summarized.
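
To make the mutation operator concrete, here is a hypothetical sketch of how a mutation rate could act on a gate-array genotype. The encoding and the three mutation moves below are assumptions chosen for illustration, not the paper's implementation.

```python
# Hypothetical gate-array mutation: with a given mutation rate, each gene may
# change its function (e.g. AND -> NOR), rewire one input connection, or change
# its number of inputs. The genotype encoding is an assumption.
import random

FUNCTIONS = ["AND", "OR", "NAND", "NOR", "XOR"]

def mutate(genotype, mutation_rate, n_primary_inputs):
    """genotype: list of gates, each a dict {'func': str, 'inputs': [source indices]}."""
    mutated = []
    for pos, gate in enumerate(genotype):
        gate = {"func": gate["func"], "inputs": list(gate["inputs"])}
        if random.random() < mutation_rate:              # change gate functionality
            gate["func"] = random.choice(FUNCTIONS)
        if random.random() < mutation_rate:               # rewire one input connection
            sources = n_primary_inputs + pos               # primary inputs and earlier gates
            gate["inputs"][random.randrange(len(gate["inputs"]))] = random.randrange(sources)
        if random.random() < mutation_rate and len(gate["inputs"]) > 1:
            gate["inputs"].pop()                           # shrink the number of inputs
        mutated.append(gate)
    return mutated

circuit = [{"func": "AND", "inputs": [0, 1]}, {"func": "OR", "inputs": [1, 2]}]
print(mutate(circuit, mutation_rate=0.3, n_primary_inputs=3))
```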

Keywords: Design of logic circuit, evolutionary computation, evolvable hardware, mutation rate.

1127 Emergency Generator Sizing and Motor Starting Analysis

Authors: Mukesh Kumar Kirar, Ganga Agnihotri

Abstract:

This paper investigates the preliminary sizing of a generator set for designing the electrical system at the early phase of a project, as well as the dynamic behavior of the generator unit and of induction motors during start-up of induction motor drives fed from an emergency generator unit. The information in this paper simplifies generator set selection and eliminates common errors in selection. It covers load estimation, the step-loading capacity test, and transient analysis for the emergency generator set. The dynamic behavior of the generator unit (power, power factor, and voltage) during direct-on-line start-up of induction motor drives fed from a stand-alone gen-set is also discussed. To ensure that plant generators operate safely and consistently, power system studies are required at the planning and conceptual design stage of the project. The most widely recognized and studied effect of motor starting is the voltage dip experienced throughout an industrial power system as the direct result of starting large motors online. Generator step-loading capability and the transient voltage dip during starting of the largest motor are verified with the help of the Electrical Transient Analyzer Program (ETAP).

Keywords: Sizing, induction motor starting, load estimation, Electrical Transient Analyzer Program (ETAP).

1126 Measurement of Real Time Drive Cycle for Indian Roads and Estimation of Component Sizing for HEV Using LabVIEW

Authors: Varsha Shah, Patel Pritesh, Patel Sagar, Prasanta Kundu, Ranjan Maheshwari

Abstract:

The performance of a vehicle depends on driving patterns and the vehicle drivetrain configuration. Driving patterns depend on traffic conditions, road conditions, and driver behavior. HEV design is carried out under certain constraints, such as vehicle operating range, acceleration, deceleration, maximum speed, and road grades, which are directly related to the driving patterns. Therefore, a detailed study of HEV performance over different drive cycles is required for the selection and sizing of HEV components. A simple hardware setup was designed to measure the velocity-versus-time profile of the vehicle by operating it on Indian roads under real traffic conditions. To size the HEV components, a detailed dynamic model of the vehicle was developed, considering the effect of inertia of rotating components such as wheels, drive chain, engine, and electric motor. Using the vehicle model and data from different Indian drive cycles, the total tractive power demanded by the vehicle and the power supplied by individual components were calculated. Using this information, the selection and sizing of HEV components were carried out so that the HEV performs efficiently under hostile driving conditions. The complete analysis was carried out in LabVIEW.

Keywords: BLDC motor, driving cycle, LabVIEW, ultracapacitors, vehicle dynamics.

1125 Oman’s Position in U.S. Tourists’ Mind: The Use of Importance-Performance Analysis on Destination Attributes

Authors: Mohammed Gamil Montasser, Angelo Battaglia

Abstract:

Tourism is making its presence felt across the Sultanate of Oman. Its sustained, solid growth is one of the most recognized phenomena and is considered a remarkable outcome for any destination. The competitive situation and challenges within the tourism industry worldwide entail a better understanding of the destination's position and image if Oman is to retain its international reputation as one of the most desirable destinations in the Middle East. To assess general perceptions of Oman's attributes, their importance, and their influence among U.S. tourists, an online survey was conducted with 522 American travelers who have traveled internationally, including non-visitors, virtual visitors, and visitors to Oman. The survey covered a total of 36 attributes. Participants were asked to rate, on a 5-point Likert scale, how well each attribute represented Oman and how important each attribute was for selecting destinations. They also indicated whether each attribute had a positive, neutral, or negative influence on their destination selection. Descriptive statistics and importance-performance analysis (IPA) were conducted. IPA illustrated U.S. tourists' perceptions of Oman's destination attributes and their importance in destination selection on a matrix with four quadrants, divided by the actual mean values of importance (M=3.51) and performance (M=3.57). Oman tourism organizations and destination managers may use these research findings for future marketing and management efforts toward the U.S. travel market.
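
As a sketch of the quadrant step, the snippet below assigns attributes to the four IPA quadrants using the grand means as grid lines. The attribute names and scores are invented; only the mechanics follow the description above.

```python
# Importance-performance analysis (IPA): place each attribute in one of four
# quadrants split by the grand means of importance and performance.
def ipa_quadrants(attributes):
    """attributes: dict name -> (importance_mean, performance_mean) on a 1-5 scale."""
    imp_mean = sum(i for i, _ in attributes.values()) / len(attributes)
    perf_mean = sum(p for _, p in attributes.values()) / len(attributes)
    labels = {(True, True): "Keep up the good work",
              (True, False): "Concentrate here",
              (False, True): "Possible overkill",
              (False, False): "Low priority"}
    return {name: labels[(imp >= imp_mean, perf >= perf_mean)]
            for name, (imp, perf) in attributes.items()}

sample = {"Safety": (4.6, 4.1), "Nightlife": (2.8, 3.9),
          "Visa ease": (4.2, 2.9), "Shopping": (3.0, 3.1)}
print(ipa_quadrants(sample))
```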

Keywords: Analysis of importance and performance, destination attributes, Oman’s position, U.S. tourists.

1124 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish the disease diagnosis model has become an important research content of biomedical informatics. Deep learning can automatically extract features from the massive data, which brings about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which leads to impracticability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that LSV-SDG-CNN model outperforms baseline models on four kinds of Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that CNN has been effectively guided and optimized by lexical-semantic knowledge, and LSV-SDG-CNN model improves the disease classification accuracy with a clear margin.

Keywords: Lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record.

1123 The Applications of Quantum Mechanics Simulation for Solvent Selection in Chemicals Separation

Authors: Attapong T., Hong-Ming Ku, Nakarin M., Narin L., Alisa L, Jirut W.

Abstract:

Quantum mechanics simulation was applied to calculate the interaction force between two molecules at the atomic level. A simple extractive distillation system is a ternary system consisting of two close-boiling components (A, the lower-boiling component, and B, the higher-boiling component) and a solvent (S). The quantum mechanics simulation was used to calculate the intermolecular (interaction) forces between the close-boiling components and the solvents, i.e., the A-S and B-S interactions. The requirement for a promising extractive distillation solvent is that the solvent (S) forms a stronger intermolecular force with only one of the components (A or B) than with the other. In this study, aromatic-aromatic, aromatic-cycloparaffin, and paraffin-diolefin systems were selected to demonstrate solvent selection. This study defines a new term used for screening solvents, called the relative interaction force, which is calculated from the quantum mechanics simulation. The results showed that the relative interaction force gave good agreement with literature data (relative volatilities from experiments). The reasons are discussed. Finally, this study suggests that quantum mechanics results can improve relative volatility estimation for screening solvents, thereby reducing the time and cost consumed.

Keywords: Extractive distillation, interaction force, quantum mechanics, relative volatility, solvent extraction.

1122 LSGENSYS - An Integrated System for Pattern Recognition and Summarisation

Authors: Hema Nair

Abstract:

This paper presents a new system developed in Java® for pattern recognition and pattern summarisation in multi-band (RGB) satellite images. The system design is described in some detail. Results of testing the system to analyse and summarise patterns in SPOT MS images and LANDSAT images are also discussed.

Keywords: Pattern recognition, image analysis, feature extraction, blackboard component, linguistic summary.

1121 A Study about the Distribution of the Spanning Ratios of Yao Graphs

Authors: Maryam Hsaini, Mostafa Nouri-Baygi

Abstract:

A critical problem in wireless sensor networks is the limited battery and memory of nodes. Therefore, each node in the network can maintain connections to only a subset of its neighbors. This increases battery usage in the network because each packet must take more hops to reach its destination. In order to tackle these problems, spanner graphs are defined. Since each node has a small degree in a spanner graph and the distance in the graph is not much greater than the actual geographical distance, spanner graphs are suitable candidates for the topology of a wireless sensor network. In this paper, we study Yao graphs and their behavior for randomly selected sets of points. We generate several random point sets and compare the properties of their Yao graphs with those of the complete graph. Based on our data sets, we obtain several charts demonstrating how Yao graphs behave for randomly chosen point sets. As the results show, the stretch factor of a Yao graph follows a normal distribution. Furthermore, the stretch factor is on average far less than the worst-case stretch factor proved for Yao graphs in previous results. We also use the Yao graph for a realistic point set and study its stretch factor in a real-world setting.
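
The construction and the measured quantity can be sketched in a few lines: build the Yao graph of a random point set (nearest neighbour in each of k cones around every node) and report the largest ratio of graph distance to Euclidean distance. The cone count and point count below are arbitrary choices, not the paper's settings.

```python
# Yao graph of a random point set and its stretch factor (a minimal sketch).
import math, random

def yao_graph(points, k=6):
    n = len(points)
    adj = [[math.inf] * n for _ in range(n)]
    for i, (xi, yi) in enumerate(points):
        best = {}                                   # nearest neighbour per cone
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            d = math.hypot(xj - xi, yj - yi)
            cone = int((math.atan2(yj - yi, xj - xi) % (2 * math.pi)) / (2 * math.pi / k))
            if cone not in best or d < best[cone][0]:
                best[cone] = (d, j)
        for d, j in best.values():                  # keep one edge per non-empty cone
            adj[i][j] = adj[j][i] = min(adj[i][j], d)
    return adj

def stretch_factor(points, adj):
    n = len(points)
    dist = [row[:] for row in adj]
    for i in range(n):
        dist[i][i] = 0.0
    for m in range(n):                              # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if dist[i][m] + dist[m][j] < dist[i][j]:
                    dist[i][j] = dist[i][m] + dist[m][j]
    return max(dist[i][j] / math.hypot(points[i][0] - points[j][0],
                                       points[i][1] - points[j][1])
               for i in range(n) for j in range(n) if i != j)

pts = [(random.random(), random.random()) for _ in range(40)]
print(round(stretch_factor(pts, yao_graph(pts)), 3))
```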

Keywords: Wireless sensor network, spanner graph, Yao Graph.

1120 Low-Cost Mechatronic Design of an Omnidirectional Mobile Robot

Authors: S. Cobos-Guzman

Abstract:

This paper presents the results of a mechatronic design based on a 4-wheel omnidirectional mobile robot that can be used in indoor logistic applications. The low-level control is implemented on two open-source hardware platforms (Raspberry Pi 3 Model B+ and Arduino Mega 2560) that control four industrial motors, four ultrasound sensors, four optical encoders, a vision system of two cameras, and a Hokuyo URG-04LX-UG01 laser scanner. Moreover, the system is powered by a lithium battery that can supply 24 V DC with a maximum capacity of 20 Ah. The Robot Operating System (ROS) has been implemented on the Raspberry Pi, and the performance is evaluated with the selected sensors and hardware. The mechatronic system is evaluated, and safe modes of power distribution for controlling all the electronic devices are proposed based on different tests. Based on the performance results, recommendations are given for using the Raspberry Pi and Arduino in terms of power, communication, and distribution of control across the different devices. According to these recommendations, the sensors are distributed between the two real-time controllers (Arduino and Raspberry Pi). On the other hand, the drivers of the cameras have been implemented in Linux, and a Python program has been written to access the cameras. These cameras will be used to implement a deep learning algorithm to recognize people and objects. In this way, the level of intelligence can be increased in combination with the maps obtained from the laser scanner.

Keywords: Autonomous, indoor robot, mechatronic, omnidirectional robot.

1119 Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

Authors: Lily Ingsrisawang, Supawadee Ingsriswang, Saisuda Somchit, Prasert Aungsuratana, Warawut Khantiyanan

Abstract:

This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision trees, Artificial Neural Networks (ANN), and Support Vector Machines (SVM) were applied to develop classification and prediction models for rainfall forecasts. The goals of this presentation are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts in this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status as either rain or no-rain; the overall accuracy of the classification tree reached 94.41% with five-fold cross-validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), and the overall accuracy of the classification tree reached 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. It was found that the ANN yields a lower RMSE, at 0.171, for daily rainfall estimates when compared to next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the same three classes, no-rain, few-rain, and moderate-rain. The results achieved 68.15% and 69.10% overall accuracy for same-day prediction with the ANN and SVM models, respectively. The obtained results illustrate the comparative predictive power of the different methods for rainfall estimation.
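
The first classification step can be sketched with a few lines of scikit-learn. This is not the authors' pipeline: the data below are synthetic stand-ins for the merged 179-record, 57-feature dataset, and an entropy-based tree only approximates C4.5.

```python
# Rain / no-rain classification with a decision tree and 5-fold cross-validation.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic placeholder for the merged weather dataset (179 records, 57 features).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(179, 57)),
                 columns=[f"feature_{i}" for i in range(57)])
y = np.where(X["feature_0"] + rng.normal(scale=0.5, size=179) > 0, "rain", "no-rain")

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)  # C4.5-like entropy splits
print(f"5-fold accuracy: {cross_val_score(tree, X, y, cv=5).mean():.2%}")
```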

Keywords: Machine learning, decision tree, artificial neural network, support vector machine, root mean square error.

1118 HaskellFL: A Tool for Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

Understanding and using the functional paradigm is a challenge for many programmers. Looking for logical errors in code may take a lot of a developer’s time when a program grows in size. In order to facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate a logical error in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying functional programming to get immediate help debugging their code and to answer questions about key concepts associated with the functional paradigm. HaskellFL was tested against functional programming assignments submitted by students enrolled in the Functional Programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. The EXAM score was chosen to evaluate the tool’s effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
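
For reference, the two suspiciousness formulas compared in the paper are short enough to write out. The sketch below uses Python rather than Haskell, and the variable names are illustrative; inputs are per-statement coverage counts over passing and failing test runs.

```python
# Spectrum-based fault localization scores: Tarantula and Ochiai.
import math

def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    f = failed_cov / total_failed if total_failed else 0.0
    p = passed_cov / total_passed if total_passed else 0.0
    return 0.0 if f + p == 0 else f / (f + p)

def ochiai(failed_cov, passed_cov, total_failed):
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return 0.0 if denom == 0 else failed_cov / denom

# A statement covered by 3 of 4 failing tests and 1 of 10 passing tests:
print(tarantula(3, 1, 4, 10), ochiai(3, 1, 4))
```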

Keywords: Debug, fault localization, functional programming, Haskell.

1117 Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip

Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner

Abstract:

This paper presents the design trade-offs and performance impact of the number of pipeline stages of the control path signals in a wormhole-switched network-on-chip (NoC). The number of pipeline stages in the control path varies between one and two cycles. The control paths consist of the routing request paths for output selection and the arbitration paths for input selection. Data communications between on-chip routers are implemented synchronously, and for quality of service, the inter-router data transports are controlled by a link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of the two proposed NoC router microarchitectures is presented in this paper. The performance evaluation uses a traffic scenario with different numbers of workloads on a 2D mesh NoC topology with a static routing algorithm. Using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a high router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control path pipeline stages has a more significant impact on the NoC performance than on the logic area of the NoC router.

Keywords: Network-on-Chip, Synchronous Parallel Pipeline, Router Architecture, Wormhole Switching

1116 Tariff as a Determining Factor in Choosing Mobile Operators: A Case Study from Higher Learning Institution in Dodoma Municipality in Tanzania

Authors: Justinian Anatory, Ekael Stephen Manase

Abstract:

In recent years, the adoption of mobile phones has been exceptionally rapid in many parts of the world, and Tanzania is no exception. We are witnessing a number of new mobile network operators being licensed from time to time by the Tanzania Communications Regulatory Authority (TCRA). This makes competition in the telecommunications market very stiff. All mobile phone companies are struggling to attract new customers to their networks, and various measures are being taken by different companies, including lowering tariffs, introducing free short messages within and outside their networks, and offering free calls during off-peak periods. This paper investigates the influence of tariffs on student mobile customers' selection of their mobile network operators. About seventy-seven students from higher learning institutions in Dodoma Municipality, Tanzania, participated by responding to the prepared questionnaires. The information sought was aimed at determining whether tariffs influenced students' selection of their current mobile operators. The results indicate that tariffs were the major driving factor in the selection of mobile operators. However, female mobile customers were found to be more easily attracted to subscribing to a mobile operator by low tariffs, a larger number of free short messages, or discounted call charges than their male counterparts.

Keywords: Consumer Buying, mobile operators, tariff.

1115 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, up to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. First, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid pixels as noise points. In order to cover feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of average directional gradient density values is calculated and used as the texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions. The low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database, and the attained recognition accuracy reached up to 99.92%.
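
A rough sketch of this feature-and-matching pipeline is given below. The block size, step, and toy data are assumptions; the paper's exact gradient norms and overlap scheme are not reproduced here.

```python
# Overlapping-block directional gradient features from a normalized 360x60 iris
# strip, matched by Euclidean distance against enrolled templates.
import numpy as np

def block_gradient_features(strip, block=(20, 20), step=(10, 10)):
    feats = []
    h, w = strip.shape
    for r in range(0, h - block[0] + 1, step[0]):
        for c in range(0, w - block[1] + 1, step[1]):
            b = strip[r:r + block[0], c:c + block[1]].astype(float)
            gx = np.abs(np.diff(b, axis=1)).mean()        # horizontal gradient density
            gy = np.abs(np.diff(b, axis=0)).mean()        # vertical gradient density
            gd = np.abs(b[1:, 1:] - b[:-1, :-1]).mean()   # diagonal gradient density
            feats.extend([gx, gy, gd])
    return np.array(feats)

def match(template_vectors, probe_vector):
    dists = {k: np.linalg.norm(v - probe_vector) for k, v in template_vectors.items()}
    return min(dists, key=dists.get)                      # closest enrolled identity

strip = np.random.randint(0, 256, (60, 360))
db = {"subject_1": block_gradient_features(np.random.randint(0, 256, (60, 360))),
      "subject_2": block_gradient_features(strip)}
print(match(db, block_gradient_features(strip)))
```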

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

1114 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series

Authors: Shang-En Yu

Abstract:

Human society faces many uncertainties, such as forecasting the economic growth rate during a financial crisis. Since Song and Chissom introduced the concept of fuzzy time series in 1993, many scholars have proposed different models to deal with such problems. Previous studies, however, usually do not consider the selection of relevant variables, and the fuzzification is based solely on subjective opinions about the discretization of the fuzzy semantics, so it cannot objectively reflect the characteristics of the data set; in addition, when carrying out forecasts, the fuzzy rules are often treated as equally important, failing to consider the importance of each fuzzy rule. For these reasons, this study performs variable (factor) selection through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy back-propagation neural network (Fuzzy-BPN), with the prediction weighted by the ordered weighted averaging (OWA) operator. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) was used as the forecasting target, with the appropriate variables filtered in the experiment. Finally, in comparison with models from other recent studies, the results show that the predictive ability of the proposed approach is further improved.

Keywords: Heterogeneity, residential mortgage loans, foreclosure.

1113 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPM) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
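
The frequency-domain trick mentioned above can be illustrated with a short NumPy sketch: a filter's sliding-window response over a feature map is computed as a pointwise product of FFTs instead of a direct spatial correlation. The shapes and data are placeholders, and real DPM filters operate on multi-channel HOG features rather than a single-channel array.

```python
# Cross-correlation of a template with a feature map via the FFT.
import numpy as np

def correlate_fft(feature_map, template):
    H, W = feature_map.shape
    h, w = template.shape
    F = np.fft.rfft2(feature_map)
    T = np.fft.rfft2(template, s=(H, W))          # zero-pad the template to the map size
    scores = np.fft.irfft2(np.conj(T) * F, s=(H, W))
    return scores[:H - h + 1, :W - w + 1]         # keep the valid part of the correlation

feature_map = np.random.rand(120, 160)
template = np.random.rand(12, 6)
print(correlate_fft(feature_map, template).shape)
```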

Keywords: Autonomous vehicles, deformable part model, DPM, pedestrian recognition.

1112 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects is a continuing problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project’s attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The variables selected through correlation analysis are then used as input neurons for neural network models. For constructing the neural network models, the FANN Tool application is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the amount of deck concrete.

Keywords: Actual cost and duration, attribute selection, bridge projects, neural networks, predicting models, FANN TOOL, WEKA.

1111 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study

Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah

Abstract:

Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for them to thrive competitively in the same market environment as their large-enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis that does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model addresses this and goes a step further to select the most affordable among a selected few of the CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. In conclusion, the validity and reliability of the instrument were tested through experts and typical respondents and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.

Keywords: Cloud service provider, enterprise applications, quality of service, selection criteria, small and medium enterprise.

1110 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
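
As an illustration of the posterior update described above, the sketch below reweights a prior over candidate targets each time the suspect is detected at a camera node. The target names, likelihood values, and two-detection scenario are invented; the actual system derives its likelihoods from the road graph and travel times.

```python
# Bayesian update of the posterior over candidate targets after each detection.
def update_posterior(prior, likelihoods):
    """prior: dict target -> probability; likelihoods: dict target -> P(detection | target)."""
    unnorm = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

posterior = {"stadium": 1 / 3, "airport": 1 / 3, "port": 1 / 3}   # uniform prior
detections = [  # P(observed camera node | intended target), one dict per detection event
    {"stadium": 0.6, "airport": 0.3, "port": 0.1},
    {"stadium": 0.7, "airport": 0.2, "port": 0.1},
]
for lik in detections:
    posterior = update_posterior(posterior, lik)
print(max(posterior, key=posterior.get), posterior)
```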

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

1109 Design and Implementation of Client Server Network Management System for Ethernet LAN

Authors: May Paing Paing Zaw, Su Myat Marlar Soe

Abstract:

Network management systems play an important role in information systems, and management is essential in any field. There are many management areas, such as configuration management, fault management, performance management, security management, and accounting management. Among them, configuration, fault, and security management are the most important, because they are essential and useful in any field. Configuration management monitors and maintains the whole system or LAN, fault management detects and troubleshoots the system, and security management controls the whole system. This paper intends to extend network management functionality, covering configuration management, fault management, and security management. In the configuration management system, this work can detect USB ports and devices, read device configurations, and detect hardware and software ports. In the security management system, it provides security features for user account settings, user management, and a proxy server. All security history, such as user account and proxy server history, is kept in a Java standard serializable file, so the user can view the security and proxy server history at any time. Using this system, the user can ping the clients on the network and view the resulting messages in the fault management system. The system also provides a check of the network card and can show the NIC settings. The system uses RMI (Remote Method Invocation) and JNI (Java Native Interface) technology. This paper implements the client/server network management system using Java 2 Standard Edition (J2SE), and the system can support more than 10 clients. The paper also shows the data and message structure of the client/server and how they communicate using the TCP/IP protocol.

Keywords: TCP/IP-based client server application

1108 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs

Authors: K. N. Dinesh Babu, P. K. Gargava

Abstract:

Energization of a transformer results in a sudden flow of current, which is an effect of core magnetization. This current is dominated by the presence of the second harmonic, which in turn is used to segregate fault current from inrush current, thus guaranteeing proper operation of the relay. This additional security in the relay sometimes obstructs or delays differential protection in a specific scenario, when second harmonic content is present during a genuine fault. Such a scenario can result in isolation of the transformer by Buchholz and pressure release valve (PRV) protection, which act only once the fault has created more damage in the transformer. Such delays have a huge impact on insulation failure, and the chances of repairing or rectifying the fault at site become very dismal. Sometimes this delay can cause a fire in the transformer, and this situation becomes havoc for a substation. Such occurrences have been observed in the field, where differential relay operation was delayed by 10-15 ms by second harmonic blocking under some specific conditions. These incidents have led to the need for an alternative solution to eradicate such unwarranted delays in operation in the future. The modern numerical relay, called an intelligent electronic device (IED), is embedded with advanced protection features which permit higher flexibility and better provisions for tuning the protection logic and settings. Such flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second harmonic feature for blocking the differential protection with additional security. The analysis and precautionary measures carried out for this case have been simulated and are discussed in this paper, to ensure that similar solutions can be adopted to inhibit analogous issues in the future.

Keywords: Differential protection, intelligent electronic device (IED), 2nd harmonic, inrush inhibit.

1107 A Monte Carlo Method for Data Stream Analysis

Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham

Abstract:

Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow revisiting each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
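
The sketch below is not the authors' EMR sampling method, but it illustrates the general idea of the hybrid approach: keep a fixed-size random sample of an unbounded stream in one pass, then answer summary queries by Monte Carlo estimation on that sample.

```python
# One-pass reservoir sampling of a stream plus a Monte Carlo summary estimate.
import random

def reservoir_sample(stream, k):
    sample = []
    for n, x in enumerate(stream, start=1):
        if len(sample) < k:
            sample.append(x)
        elif random.random() < k / n:            # keep each later item with probability k/n
            sample[random.randrange(k)] = x
    return sample

def monte_carlo_fraction(sample, predicate, trials=10_000):
    hits = sum(predicate(random.choice(sample)) for _ in range(trials))
    return hits / trials                         # estimate of P(predicate) over the stream

stream = (random.gauss(0, 1) for _ in range(1_000_000))   # stands in for the data stream
sample = reservoir_sample(stream, k=1_000)
print(monte_carlo_fraction(sample, lambda v: v > 1.0))     # roughly 0.16 for a standard normal
```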

Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.

1106 Hybridization and Evaluation of Jatropha (Jatropha curcas L.) to Improve High Yield Varieties in Indonesia

Authors: Rully D. Purwati, Tantri D. A. Anggraeni, Bambang Heliyanto, M. Machfud, Joko Hartono

Abstract:

Jatropha curcas L. is one of the crops producing non-edible oil with potential for bio-energy. The Jatropha cultivation and development program in Indonesia is facing several problems, especially low seed yield, resulting in inefficient crop cultivation cost. To cope with the problem, the development of high-yielding varieties is necessary. Development of varieties to improve seed yield was conducted by hybridization and selection and resulted in 14 potential genotypes. The yield potential of the 14 genotypes was evaluated and compared with two check varieties. The objective of the evaluation was to find Jatropha hybrids with productivity higher than the check varieties, oil content > 40%, and harvesting age ≤ 110 days. Hybridization and individual plant selection were carried out from 2010 to 2014. Evaluation of yield was conducted at the Asembagus experimental station, Situbondo, East Java, over three years (2015-2017). The experimental design was a randomized complete block design with three replications and a plot size of 10 m x 8 m. The characters observed were the number of capsules per plant, dry seed yield (kg/ha), and seed oil content (%). The results of this experiment indicated that all the hybrids evaluated have higher productivity than the check variety IP-3A. Two superior hybrids, HS-49xSP-65/32 and HS-49xSP-19/28, had the highest seed yield per hectare and number of capsules per plant during the three years.

Keywords: Jatropha, biodiesel, hybrid, high seed yield.

1105 Behavioral Analysis of Team Members in Virtual Organization based on Trust Dimension and Learning

Authors: Indiramma M., K. R. Anandakumar

Abstract:

Trust management and reputation models are becoming an integral part of Internet-based applications such as CSCW, e-commerce, and grid computing. The trust dimension is also a significant social structure and key to social relations within a collaborative community. Collaborative decision making (CDM) is a difficult task in the context of a distributed environment (information across different geographical locations) where multidisciplinary decisions are involved, such as in a Virtual Organization (VO). To aid team decision making in a VO, decision support systems and social network analysis approaches are integrated. In such situations, social learning helps an organization in terms of relationships, team formation, partner selection, etc. In this paper we focus on trust learning. Trust learning is an important activity in terms of information exchange, negotiation, collaboration, and trust assessment for cooperation among virtual team members. We propose a reinforcement learning approach that enhances the trust decision making capability of interacting agents during collaboration in a problem-solving activity. The trust computational model with learning that we present is adapted for selecting the best alternative for a new project in the organization. We verify our model in a multi-agent simulation where the agents in the community learn to identify trustworthy members, inconsistent behavior, and conflicting behavior of agents.

Keywords: Collaborative Decision making, Trust, Multi Agent System (MAS), Bayesian Network, Reinforcement Learning.

1104 The Determination of Aflatoxins in Paddy and Milled Fractions of Rice in Guyana: Preliminary Results

Authors: Donna M. Morrison, Lambert Chester, Coretta A. N. Samuels, David R. Ledoux

Abstract:

A survey was conducted in the five rice-growing regions of Guyana to determine the presence of aflatoxins in multiple fractions of rice in the June/October 2015 growing season. The fractions were paddy, steamed paddy, cargo rice, white rice, and parboiled rice. Samples were analyzed by high-performance liquid chromatography. A subset of the samples was further analyzed by enzyme-linked immunosorbent assay (ELISA) for concurrence. All analyses were conducted at the University of Missouri, USA. Of the 186 samples tested, 16 had aflatoxin concentrations greater than 20 ppb, the recommended limit for aflatoxins in food according to the United States Food and Drug Administration. An additional three samples had aflatoxin B1 concentrations greater than the European Union Commission maximum levels for aflatoxin B1 in rice (5 µg/kg) and total aflatoxins (B1, B2, G1 and G2; 10 µg/kg). The survey indicates that there is no widespread aflatoxin problem in rice in Guyana; the incidence of aflatoxins appears to be localized.

Keywords: Aflatoxins, enzyme-linked immunosorbent assay, high-performance liquid chromatography, rice fractions.

1103 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.

Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.

1102 E-Learning Management Systems General Framework

Authors: Hamed Fawareh

Abstract:

Recent developments in learning technologies have led to the emergence of many learning management systems (LMSs). In this study, we concentrate on the specifications and characteristics of LMSs. Furthermore, this paper emphasizes the features of e-learning management systems. These features take into account the main indicators used to assess and evaluate the quality of e-learning systems. The proposed indicators are based on ten dimensions.

Keywords: E-Learning, System Requirement, Social Requirement, Learning Management System.

1101 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Authors: Seema Biday, Udhav Bhosle

Abstract:

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles, and variations in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and normalize the images radiometrically. Cloud detection is achieved by using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e., the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
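
The normalization step can be sketched as follows: approximately non-changing pixels anchor a per-band linear fit that maps the subject image onto the reference, and R2 and MSE report the fit quality. The residual-based selection of stable pixels below is a simplified stand-in for the paper's frequency-based correlation technique, and the toy data are illustrative only.

```python
# Relative radiometric normalization of one band against a reference band.
import numpy as np

def relative_normalization(reference_band, subject_band, keep_fraction=0.5):
    ref = reference_band.astype(float).ravel()
    sub = subject_band.astype(float).ravel()
    gain, offset = np.polyfit(sub, ref, deg=1)                   # initial fit on all pixels
    residual = np.abs(ref - (gain * sub + offset))
    stable = residual <= np.quantile(residual, keep_fraction)    # approximately non-changing pixels
    gain, offset = np.polyfit(sub[stable], ref[stable], deg=1)   # refit on the stable subset
    normalized = gain * sub + offset
    mse = np.mean((ref - normalized) ** 2)
    r2 = np.corrcoef(ref, normalized)[0, 1] ** 2
    return normalized.reshape(subject_band.shape), gain, offset, r2, mse

ref = np.random.randint(0, 255, (100, 100))
sub = (0.8 * ref + 10 + np.random.normal(0, 2, ref.shape)).clip(0, 255)
_, gain, offset, r2, mse = relative_normalization(ref, sub)
print(round(gain, 2), round(offset, 2), round(r2, 3), round(mse, 2))
```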

Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction
