Search results for: Distributed networks.

420 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

Authors: Chad Goldsworthy, B. Rajeswari Matam

Abstract:

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies that surpass manual human classification. The high class imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species, because they contribute little to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy Loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy of the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, while also improving the overall accuracy by 2.96%.
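
To make the comparison concrete, the following is a minimal sketch of the focal loss idea, using the standard formulation FL(p_t) = -(1 - p_t)^γ log(p_t); the γ value and the PyTorch framing are illustrative assumptions, since the abstract does not state the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # log p_t for the true class of each sample
    log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # gamma = 0 recovers ordinary cross entropy; larger gamma down-weights
    # well-classified (majority-species) examples, focusing training on rare ones
    return ((1.0 - pt) ** gamma * -log_pt).mean()

logits = torch.randn(8, 255)            # 8 samples, 255 bird classes
targets = torch.randint(0, 255, (8,))
print(focal_loss(logits, targets))      # equals cross entropy when gamma = 0
```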

Keywords: Convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation.

419 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures

Authors: Silvina Caíno-Lores, Jesús Carretero

Abstract:

Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. Several techniques attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. The survey intends to assist the scientific computing community in understanding the various technical aspects and strategies reported in the recent literature regarding data locality. We present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, we include a discussion of future research lines and synergies among these techniques.

Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.

418 JEWEL: A Cosmological Model Due to the Geometrical Displacement of Galactic Objects Like Black, White and Worm Holes

Authors: Francesco Pia

Abstract:

Stellar objects such as black, white and worm holes can be the subject of speculative reasoning if they are represented in a simplified, geometric form that allows them to be moved; the cosmological model is one of the most important outcomes of such speculation, since it can open the way to aspects that are practical rather than strictly speculative in the Universe as we represent it. In this work, starting from the hypothesis of a very large number of black, white and worm holes in our Universe, we imagine that they can be moved: they are first aligned on a plane and redistributed, and the boundaries of this plane are then ideally joined, giving rise to a sphere with the stellar objects distributed radially. Through geometrical displacements that do not make any of these objects lose their functionality in the region where they are located, the speculative process ends by highlighting a spherical layer that permits a flow from the outside to the inside of the spherical shell, relating it to other external and internal spherical layers; this seems useful for describing the universe we live in, for example as the interior of one of the spherical shells just described. The name "Jewel" was chosen because, at the end of the steps of the speculative process presented here, the cosmological model tends to be "luminous". For each internal part of a generic layer, this cosmological model includes different and numerous moments of our universe, thanks to an eternal inward flow. Many aspects remain to be explored, one of which is the connection between the outermost and innermost spherical layers.

Keywords: Black hole, cosmological model, cosmology, white hole.

417 Construction 4.0: The Future of the Construction Industry in South Africa

Authors: Temidayo O. Osunsanmi, Clinton Aigbavboa, Ayodeji Oke

Abstract:

The construction industry is a renowned latecomer to the efficiency offered by information technology, whereas the banking, manufacturing and retailing industries have embraced digitization and information technology as a new approach for securing competitive gain and efficiency. The construction industry has yet to realize similar benefits because its adoption of ICT is still at an infancy stage, concentrated mainly on the use of software. This study therefore evaluates the awareness and readiness of construction professionals to embrace full digitalization of the construction industry through Construction 4.0. The term 'Construction 4.0' is derived from the Industry 4.0 concept, the fourth industrial revolution, which originated in Germany. Data were sourced with a questionnaire distributed to practicing construction professionals through a convenience sampling method. Using SPSS v24, the hypotheses posed were tested with the Mann-Whitney test. The results revealed no differences between consulting and contracting organizations in their readiness to adopt Construction 4.0 concepts. Using factor analysis, the study found that adopting Construction 4.0 would improve the performance of the construction industry in terms of cost and time savings and would also create sustainable buildings. In conclusion, the study determined that construction professionals have a low awareness of Construction 4.0 concepts. The study recommends raising awareness of Construction 4.0 through seminars, workshops and training, so that construction professionals can take hold of the benefits of adopting it. The study contributes to the roadmap for implementing Construction 4.0 concepts in the South African construction industry.
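
A hedged illustration of the hypothesis test described: the sketch below runs a Mann-Whitney U comparison in SciPy on hypothetical Likert-scale readiness scores, since the actual survey responses are not given in the abstract.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 readiness ratings from the two groups of respondents
consulting = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
contracting = [3, 4, 4, 5, 3, 4, 2, 4, 5, 3]

u_stat, p_value = mannwhitneyu(consulting, contracting, alternative="two-sided")
# A p-value above 0.05 would support the reported finding of no difference
# between consulting and contracting organizations
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```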

Keywords: Building information technology, Construction 4.0, Industry 4.0, Smart Site.

416 Decode and Forward Cooperative Protocol Enhancement Using Interference Cancellation

Authors: Siddeeq Y. Ameen, Mohammed K. Yousif

Abstract:

Cooperative communication systems are considered a promising technology to improve system capacity, reliability and performance over fading wireless channels. A cooperative relaying system with a single antenna per node can attain the advantages of multiple-antenna communication systems, and is ideally suited to distributed communication systems, where the relays can cooperate and form virtual MIMO systems. This paper therefore investigates possible enhancement of a cooperative system using the decode and forward protocol. In decode and forward, we attempt to cancel, or at least reduce, the interference instead of increasing the SNR values. This is achieved by using groups of relays selected according to the channel status from source to relay and from relay to destination, respectively.

In the proposed system, the transmission time is divided into two phases for the decode and forward protocol. The first phase is allocated for the source to transmit its data while the relays and the destination node are in receiving mode; the second phase is allocated for the first and second groups of relay nodes to relay the data to the destination node. Simulation results show an improvement in performance over the conventional decode and forward protocol in terms of BER and transmission rate.
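
A minimal simulation sketch of this two-phase, threshold-based relaying idea, assuming Rayleigh fading and illustrative SNR values (the paper's parameters are not stated in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_snr(mean_snr, n):
    # |h|^2 is exponentially distributed under Rayleigh fading
    return mean_snr * rng.exponential(1.0, n)

# Phase 1: the source broadcasts; relays whose source-relay SNR clears a
# decoding threshold form the forwarding group. Phase 2: that group relays
# to the destination, which combines the received copies.
n_relays, threshold = 4, 3.0
src_relay = rayleigh_snr(5.0, n_relays)
decoding_set = src_relay > threshold

relay_dst = rayleigh_snr(5.0, n_relays)
combined_snr = relay_dst[decoding_set].sum()   # MRC-style combining at destination
print(f"{decoding_set.sum()} relays forward, combined SNR = {combined_snr:.2f}")
```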

Keywords: Cooperative systems, decode and forward, interference cancellation, virtual MIMO.

415 Community Detection-Based Analysis of the Human Interactome Network

Authors: Razvan Bocu, Sabin Tabirca

Abstract:

The study of proteomics has reached unexpected levels of interest as a direct consequence of its discovered influence over complex biological phenomena, such as problematic diseases like cancer. This paper presents a new technique that allows for an accurate analysis of the human interactome network. It is essentially a two-step analysis process that first detects each protein's absolute importance through the betweenness centrality computation, and then determines the functionally-related communities of proteins. For the second step, we use a community detection technique based on the edge betweenness calculation. The new technique was thoroughly tested on real biological data and the results prove some interesting properties of the proteins involved in the carcinogenesis process. Apart from its experimental usefulness, the novel technique is also computationally effective in terms of execution times. Based on the analysis results, some topological features of cancer-mutated proteins are presented and a possible optimization solution for cancer drug design is suggested.
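
The two steps map directly onto standard graph routines; the sketch below uses NetworkX on a toy graph standing in for the interactome (the paper's data and exact implementation may differ).

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Toy interaction graph; real protein-protein interaction data would be
# loaded from a PPI database instead
G = nx.karate_club_graph()

# Step 1: rank nodes (proteins) by betweenness centrality
centrality = nx.betweenness_centrality(G)
top_nodes = sorted(centrality, key=centrality.get, reverse=True)[:5]

# Step 2: split into communities by repeatedly removing the edge with the
# highest edge betweenness (Girvan-Newman)
communities = next(girvan_newman(G))
print(top_nodes, [sorted(c) for c in communities])
```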

Keywords: Betweenness centrality, interactome networks, protein-protein interactions, protein communities, cancer.

414 Effect of TCSR on Measured Impedance by Distance Protection in the Presence of a Single Phase to Earth Fault

Authors: Mohamed Zellagui, Abdelaziz Chaghi

Abstract:

This paper presents a study of the impact of the apparent reactance injected by a series Flexible AC Transmission System (FACTS) device, the Thyristor Controlled Series Reactor (TCSR), on the impedance measured on a 400 kV single electrical transmission line in the presence of a phase to earth fault with fault resistance. The study deals with an electrical transmission line of the Eastern Algerian transmission network operated by Group Sonelgaz (Algerian Company of Electricity and Gas), compensated by a TCSR connected at the midpoint of the line. This compensator, used to inject active and reactive power, is controlled by three TCSRs. The simulation results investigate the impact of the TCSR on the parameters of the short circuit calculation and on the parameters of the impedance measured by the distance relay in the presence of an earth fault, for three case studies.
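
For context, the sketch below computes the apparent impedance a ground distance relay measures during a single phase to earth fault, using the standard residual-compensation formula; the parameter values are illustrative, not the paper's 400 kV line data.

```python
# Apparent impedance seen by a ground distance relay:
#   Z_seen = V_a / (I_a + k0 * 3*I0),  with  k0 = (Z0 - Z1) / (3 * Z1)
Z1 = complex(0.03, 0.33)    # positive-sequence line impedance (illustrative)
Z0 = complex(0.25, 1.10)    # zero-sequence line impedance (illustrative)
k0 = (Z0 - Z1) / (3 * Z1)

Va = complex(210e3, 0)      # faulted-phase voltage, V
Ia = complex(900, -450)     # faulted-phase current, A
I0 = complex(250, -120)     # zero-sequence current, A

Z_seen = Va / (Ia + k0 * 3 * I0)
# Series compensation such as a TCSR shifts this measured value, which is
# why the relay's reach settings must account for the injected reactance
print(f"measured impedance: {Z_seen:.2f} ohm")
```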

Keywords: TCSR, Transmission line, Apparent reactance, Earth fault, Symmetrical components, Distance protection, Measured impedance.

413 Burstiness Reduction of a Doubly Stochastic AR-Modeled Uniform Activity VBR Video

Authors: J. P. Dubois

Abstract:

Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform activity level video sources in ATM using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics, such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the traffic measurements histogram, and find the first of these to be the most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse the statistical multiplexing of several constant-scene video sources. This proved, as expected, to be advantageous for reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminishes rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The novel model used in this paper for characterizing uniform activity video was thus found to be accurate.
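
A small sketch of the peak-to-average ratio metric and the multiplexing effect, using a plain AR(1) trace as a stand-in for the paper's doubly stochastic source model (which additionally modulates the mean level with a point process):

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_trace(n=1000, phi=0.9, level=5.0):
    # synthetic AR(1) bit-rate trace for one VBR video source
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return level + x - x.min()

def par(rates):
    # peak-to-average ratio, the burstiness metric favoured in the paper
    return rates.max() / rates.mean()

single = ar1_trace()
multiplexed = sum(ar1_trace() for _ in range(5))
print(f"single source PAR = {par(single):.2f}")
print(f"5 multiplexed sources PAR = {par(multiplexed):.2f}")   # noticeably lower
```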

Keywords: AR, ATM, burstiness, doubly stochastic, statistical multiplexing.

412 An Approximate Lateral-Torsional Buckling Mode Function for Cantilever I-Beams

Authors: H. Ozbasaran

Abstract:

Lateral torsional buckling is a global buckling mode which should be considered in the design of slender structural members under flexure about their strong axis. It is possible to compute the load which causes lateral torsional buckling of a beam by finite element analysis; however, closed form equations, which can be obtained by the energy method, are needed in engineering practice for ease of calculation. In lateral torsional buckling applications of the energy method, a proper function should be chosen for the critical lateral torsional buckling mode, which can be thought of as the variation of the twisting angle along the buckled beam. Accuracy of the results depends on how close the chosen function is to the exact mode. Since the critical lateral torsional buckling mode of cantilever I-beams varies with material properties, section properties and loading case, the hardest step in applying the energy method is determining a proper mode function. This paper presents an approximate function for the critical lateral torsional buckling mode of doubly symmetric cantilever I-beams. Coefficient matrices are calculated for the cases of a concentrated load at the free end, a uniformly distributed load, and a constant moment along the beam. Critical lateral torsional buckling modes obtained by the presented function are compared with exact solutions, and the modes obtained by the presented function are found to coincide with the differential equation solutions for the considered loading cases.
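
As a hedged illustration of the energy method referred to here (the paper's own mode function and coefficient matrices are not reproduced in the abstract), a trial twisting-angle function for a cantilever must satisfy the clamped-end conditions, e.g.

$$\varphi(z) = 1 - \cos\frac{\pi z}{2L}, \qquad \varphi(0) = \varphi'(0) = 0,$$

which is substituted into the total potential, whose strain-energy part for lateral-torsional buckling of an I-beam is

$$U = \frac{1}{2}\int_0^L \left[ E I_z (u'')^2 + E I_w (\varphi'')^2 + G J (\varphi')^2 \right] \mathrm{d}z.$$

The critical load follows from stationarity of the total potential, and its accuracy depends on how close the trial function is to the exact mode.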

Keywords: Buckling mode, cantilever, lateral-torsional buckling, I-beam.

411 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation, but it results in the lowest bit error rate (BER) of all combining techniques. However, the BER starts to deteriorate as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to MRC under perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general and we hereinafter introduce it to the scientific community as a new "supra-optimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to IP-based 4th generation networks.
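
A baseline MRC sketch to fix ideas: each branch is weighted by the conjugate of its (possibly imperfect) channel estimate. GMRC's polynomial kernel is not specified in the abstract, so only the MRC reference scheme is shown, with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

L, snr = 4, 2.0                 # diversity branches, mean branch SNR
symbol = 1.0                    # BPSK symbol (+1)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
noise = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * snr)
r = h * symbol + noise          # received copies on the L branches

# Imperfect channel estimation: this is where MRC starts to degrade and
# where GMRC is claimed to retain an advantage
h_est = h + 0.1 * (rng.normal(size=L) + 1j * rng.normal(size=L))

decision_var = np.sum(np.conj(h_est) * r)   # MRC combiner output
bit = 0 if decision_var.real > 0 else 1
print(bit, abs(decision_var))
```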

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.

410 Embedded Throughput Improvement of Low-Rate EDR Packets for Lower Latency

Authors: M. A. M. El-Bendary, A. E. Abu El-Azm, N. A. El-Fishawy, F. Shawky, F. E. El-Samie

Abstract:

With the increasing utilization of wireless devices in different fields, such as medical devices and industrial applications, this paper presents a method for simplifying Bluetooth packets while enhancing throughput. The paper studies a vital issue in wireless communications: the throughput of data over wireless networks. Bluetooth and ZigBee are both Wireless Personal Area Networks (WPANs), and taking the competition between these two systems into consideration, the paper proposes different schemes to improve the throughput of a Bluetooth network over a reliable channel. The proposal depends on the Channel Quality Driven Data Rate (CQDDR) rules, which select the suitable packet for the transmission process according to the channel conditions. The proposed packets are studied over additive white Gaussian noise (AWGN) and fading channels. The experimental results reveal that the payload length can be extended by 8, 16 and 24 bytes for classic and EDR packets, respectively. The proposed method is also suitable for low-throughput Bluetooth networks.
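
A hedged sketch of CQDDR-style packet selection: choose the longest packet type whose channel-quality requirement is met. The payload sizes are standard Bluetooth values; the BER thresholds are illustrative, not the paper's.

```python
PACKETS = [                 # (name, max payload bytes, max tolerable BER)
    ("DH5", 339, 1e-5),
    ("DH3", 183, 1e-4),
    ("DH1", 27, 1e-3),
]

def select_packet(estimated_ber):
    # pick the largest packet the current channel quality can sustain
    for name, payload, max_ber in PACKETS:
        if estimated_ber <= max_ber:
            return name, payload
    return "DM1", 17        # most robust (FEC-protected) fallback

print(select_packet(5e-5))  # -> ('DH3', 183)
```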

Keywords: Bluetooth, throughput, adaptive packets, EDR packets, CQDDR, low latency, channel condition.

409 A P2P File Sharing Technique by Indexed-Priority Metric

Authors: Toshinori Takabatake, Yoshikazu Komano

Abstract:

Recent improvements in the processing performance of computers and in high-speed optical fiber communication have greatly increased the amount of data processed by computers and flowing over networks. In a client-server system, however, since the server receives and processes data from the clients through the network, the load on the server keeps increasing, requiring a server with high processing ability and a line with high bandwidth. In this paper, concerning P2P networks intended to resolve the load on a specific server, a criterion called the Indexed-Priority Metric is proposed and its performance is evaluated. The proposed metric allocates files to each node so that the load on a specific server can be distributed equally among the nodes. A P2P file sharing system using the proposed metric is implemented. Simulation results show that the proposed metric can distribute the files held by the specific server across the nodes.

Keywords: Peer-to-peer, file-sharing system, load-balancing, dependability.

408 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition

Authors: Yalong Jiang, Zheru Chi

Abstract:

In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model to best match a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to better fit it to a task. Secondly, the number of independent functional units in a capsule network is adjusted to fit the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variances in the current dataset and thereby increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.
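
One concrete reading of "adjusting the number of independent functional units" is varying the number of feature maps per layer; the hedged PyTorch sketch below shows how model capacity (parameter count) scales with that width. The architecture is illustrative, not the paper's model.

```python
import torch.nn as nn

def make_cnn(width, n_classes=10):
    # toy CNN whose capacity is controlled by the number of feature maps
    return nn.Sequential(
        nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(width, n_classes),
    )

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(make_cnn(8)), n_params(make_cnn(64)))   # capacity grows with width
```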

Keywords: CNN, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation.

407 Prediction of Rubberised Concrete Strength by Using Artificial Neural Networks

Authors: A. M. N. El-Khoja, A. F. Ashour, J. Abdalhmid, X. Dai, A. Khan

Abstract:

In recent years, the waste tyre problem has been considered one of the most crucial environmental pollution problems facing the world. Reusing waste rubber crumb from recycled tyres to develop highly damping concrete is thus technically feasible and a viable alternative to landfill or incineration. The utilization of waste rubber in concrete generally enhances ductility, toughness, thermal insulation, and impact resistance; however, the mechanical properties decrease with the amount of rubber used in the concrete. The aim of this paper is to develop artificial neural network (ANN) models to predict the compressive strength of rubberised concrete (RuC). A trained and tested ANN was developed using a comprehensive database collected from different sources in the literature. The developed ANN model used 5 input parameters: coarse aggregate (CA), fine aggregate (FA), w/c ratio, fine rubber (Fr), and coarse rubber (Cr), whereas the ANN outputs were the corresponding compressive strengths. A parametric study was also conducted to study the trend of various RuC constituents on the compressive strength of RuC.
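
A minimal sketch of the 5-input ANN setup described, here with scikit-learn and synthetic placeholder rows (the paper's literature database is not reproduced); the hidden-layer size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Inputs as in the paper: CA, FA, w/c ratio, fine rubber, coarse rubber.
# Rows and strengths below are synthetic placeholders.
X = np.array([[1100, 700, 0.45,  0,  0],
              [1100, 650, 0.45, 50,  0],
              [1050, 700, 0.50,  0, 60],
              [1000, 650, 0.50, 50, 60]], dtype=float)
y = np.array([42.0, 35.5, 33.0, 27.0])   # 28-day compressive strength, MPa

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[1080, 680, 0.47, 25, 30]]))
```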

Keywords: Rubberized concrete, compressive strength, artificial neural network, prediction.

406 Key Performance Indicators and the Model for Achieving Digital Inclusion for Smart Cities

Authors: Khalid Obaed Mahmod, Mesut Cevik

Abstract:

The term smart city has appeared recently, accompanied by many definitions and concepts; as a simplified and clear definition, a smart city is a geographical location that has gained efficiency and flexibility in providing public services to citizens through its use of information and communication technologies, and this is what distinguishes it from other cities. Smart cities connect the various components of the city through main and sub-networks, in addition to a set of applications, and are thus able to collect the data that is the basis for providing technological solutions to manage resources and provide services. The work of the smart city is based on artificial intelligence (AI) and Internet of Things (IoT) technology. This work presents the concept of smart cities; the pillars, standards and evaluation indicators on which smart cities depend; and the reasons that prompted the world to move towards their establishment. It also provides a simplified hypothetical way to measure an ideal smart city model by defining some indicators and key pillars, simulating them with logic circuits and testing them to determine whether the city can be considered an ideal smart city or not.
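
A hedged sketch of the logic-circuit idea: each pillar becomes a boolean indicator, and the "ideal smart city" output is their conjunction (an AND gate). The pillar names are illustrative; the paper's exact indicator set may differ.

```python
def ideal_smart_city(**pillars: bool) -> bool:
    # AND gate over all pillar indicators
    return all(pillars.values())

print(ideal_smart_city(governance=True, mobility=True,
                       environment=True, digital_inclusion=False))  # False
```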

Keywords: Evaluation indicators, logic gates, performance factors, pillars, smart city.

405 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents

Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei

Abstract:

With the recent advances in deep neural networks, we observe new applications of NLP (natural language processing) and CV (computer vision) powered by deep neural networks for processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by the various tasks, and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build systems for diverse business scenarios, such as contract monitoring and reviewing.
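
As a hedged sketch of what such a unified representation model could look like (names are illustrative, not DocPro's actual schema), each document element can carry layout information alongside the semantics produced by the various NLP and CV tasks:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    kind: str                 # "paragraph", "table", "figure", ...
    bbox: tuple               # layout: (x0, y0, x1, y1) on the page
    text: str = ""            # textual content, if any
    semantics: dict = field(default_factory=dict)   # task outputs accumulate here

doc = [Element("paragraph", (50, 700, 550, 760), "This Agreement ..."),
       Element("table", (50, 400, 550, 680))]
doc[0].semantics["clause_type"] = "termination"     # e.g. an NLP task's output
```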

Keywords: Document processing, framework, formal definition, machine learning.

404 Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length

Authors: Mecheri Zeid Belmecheri, Maamar Ahfir, Izzet Kale

Abstract:

Automatic cardiac auscultation is still a subject of research aimed at establishing an objective diagnosis. Recorded heart sounds, as Phonocardiogram (PCG) signals, can be automatically segmented into components that have clinical meaning: the first sound, S1, the second sound, S2, and the systolic and diastolic components, respectively. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. The method calculates an intermediate sawtooth-shaped signal from the length variation of the recorded PCG signal in the time domain and uses its positive derivative function, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of PCGs recorded simultaneously with Electrocardiograms (ECGs) from different patients in clinical settings, including normal and abnormal subjects, show an average segmentation testing performance of 76% sensitivity and 94% specificity.
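
A small sketch of the curve-length transform and the binary signal derived from its positive derivative, applied to a synthetic trace standing in for a PCG recording; the window size is an illustrative assumption.

```python
import numpy as np

def curve_length(x, win=400):
    # sliding-window curve length: sum of segment lengths sqrt(1 + diff^2)
    seg = np.sqrt(1.0 + np.diff(x) ** 2)
    return np.convolve(seg, np.ones(win), mode="same")

rng = np.random.default_rng(3)
# noise amplitude-modulated to mimic periodic heart-sound bursts
pcg = rng.normal(size=4000) * (1 + np.sin(np.linspace(0, 20, 4000)) ** 8)

cl = curve_length(pcg)                   # intermediate curve-length signal
binary = (np.diff(cl) > 0).astype(int)   # the input the paper feeds to the RNN
```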

Keywords: Heart sounds, PCG segmentation, event detection, Recurrent Neural Networks, PCG curve length.

403 Prediction of Compressive Strength of SCC Containing Bottom Ash Using Artificial Neural Networks

Authors: Yogesh Aggarwal, Paratibha Aggarwal

Abstract:

The paper presents a comparative performance of models developed to predict 28-day compressive strengths using neural network techniques, for data taken from the literature (ANN-I) and data developed experimentally for SCC containing bottom ash as partial replacement of fine aggregates (ANN-II). The data used in the models are arranged in formats of six and eight input parameters, covering the contents of cement, sand, coarse aggregate, fly ash as partial replacement of cement, bottom ash as partial replacement of sand, water and water/powder ratio, and superplasticizer dosage, with output parameters of 28-day compressive strength for ANN-I and compressive strengths at 7, 28, 90 and 365 days for ANN-II. The importance of the different input parameters for predicting the strengths at various ages using the neural network is also given. The model developed from the literature data could be easily extended to the experimental data, with bottom ash as partial replacement of sand, with some modifications.

Keywords: Self-compacting concrete, bottom ash, strength, prediction, neural network, importance factor.

402 Optimal Construction Using Multi-Criteria Decision-Making Methods

Authors: Masood Karamoozian, Zhang Hong

Abstract:

The necessity and complexity of the decision-making process, with its many interfering factors that must all be considered, is very obvious nowadays; hence, researchers show great interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making methods have been used to solve the problem of selecting optimal construction systems. The systems evaluated include Light Steel Frames (LSF), Insulating Concrete Form (ICF), the Ordinary Construction System (OCS), and the Precast Concrete System (PRCS), drawing on case study designs from the Zhang Hong studio at Southeast University in Nanjing. Data were crowdsourced using a questionnaire at the sample level (200 people), distributed among experts in university centers and at conferences. According to the results, the different decision-making methods led to essentially the same outcome: with all three multi-criteria decision-making methods mentioned above, the PRCS ranked first and the LSF system ranked second. The PRCS also ranked first in terms of performance and economic criteria, while the LSF system ranked first in terms of environmental criteria.
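
Of the three methods, TOPSIS is the most compact to sketch; the implementation below is the standard algorithm, applied to illustrative scores rather than the survey's actual data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if higher is better
    M = matrix / np.linalg.norm(matrix, axis=0)    # vector normalisation
    V = M * weights
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness: higher is better

systems = ["LSF", "ICF", "OCS", "PRCS"]
scores = np.array([[7, 6, 8],      # illustrative ratings on three criteria
                   [6, 7, 6],      # (performance, economics, environment)
                   [5, 8, 5],
                   [9, 8, 7]], dtype=float)
closeness = topsis(scores, np.array([0.4, 0.35, 0.25]),
                   np.array([True, True, True]))
print(sorted(zip(systems, closeness), key=lambda t: -t[1]))
```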

Keywords: Multi-criteria decision making, AHP, SAW, TOPSIS.

401 The Influence of Beta Shape Parameters in Project Planning

Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou

Abstract:

Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project, followed by the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and to measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
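
A minimal sketch of the Monte Carlo approach on a toy serial project, with Beta shape parameters chosen to exhibit skewed activity durations (illustrative values, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def beta_duration(a, b, low, high, size):
    # activity duration as a Beta(a, b) variable rescaled to [low, high];
    # the shape parameters a and b control the skewness
    return low + (high - low) * rng.beta(a, b, size)

n = 100_000
# three activities in series; a real network would take the maximum over
# incoming paths at each merge node
total = (beta_duration(2, 5, 4, 10, n)     # right-skewed activity
         + beta_duration(5, 2, 3, 8, n)    # left-skewed activity
         + beta_duration(2, 2, 2, 6, n))   # symmetric activity
print(f"mean = {total.mean():.2f}, 95th pct = {np.percentile(total, 95):.2f}")
```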

Keywords: Beta distribution, PERT, Monte Carlo Simulation, skewness, project completion time distribution.

400 Evaluation of the ANN Based Nonlinear System Models in the MSE and CRLB Senses

Authors: M. V. Rajesh, Archana R., A. Unnikrishnan, R. Gopikakumari, Jeevamma Jacob

Abstract:

The system identification problem looks for a suitably parameterized model representing a given process; the parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process output. The linear system identification field is well established with many classical approaches, whereas most of those methods cannot be applied to nonlinear systems. The problem becomes tougher if the system is completely unknown and only the output time series is available. It has been reported that the capability of Artificial Neural Networks to approximate all linear and nonlinear input-output maps makes them predominantly suitable for the identification of nonlinear systems where only the output time series is available [1], [2], [4], [5]. The work reported here is an attempt to implement a few of the well-known algorithms in the context of modeling nonlinear systems, and to make a performance comparison to establish their relative merits and demerits.

Keywords: Multilayer neural networks, Radial Basis Functions, clustering algorithm, Back Propagation training, Extended Kalman filtering, Mean Square Error, Nonlinear Modeling, Cramer-Rao Lower Bound.

399 Prioritizing Service Quality Dimensions: A Neural Network Approach

Authors: A. Golmohammadi, B. Jahandideh

Abstract:

One of the determinants of a firm's prosperity is customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, there may be differences in the relative importance of these dimensions in affecting customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important in that it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such an insight will consequently lead to more effective resource allocation, which will finally end in higher levels of customer satisfaction. This issue, despite its criticality, has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, appear to have the potential to model such a complicated process. Proposing a neural network able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions affecting customers' overall satisfaction, by means of a sensitivity analysis of the neural network, is the second important finding of this paper.

Keywords: Service quality, customer satisfaction, relative importance, artificial neural network.

398 Critical Factors Affecting the Implementation of Total Quality Management in the Construction Industry in U.A.E

Authors: Firas Mohamad Al-Sabek

Abstract:

The purpose of this paper is to examine the most critical factors affecting the implementation of Total Quality Management (TQM) in the construction industry in the United Arab Emirates, as well as the project outcome most affected by implementing TQM. A framework was also proposed based on the literature. The method used is a quantitative study: a survey of 15 questions, with a sample of 60 respondents, was created and distributed in a construction company in Abu Dhabi to examine the most critical factor affecting the implementation of TQM and the project outcome most affected by it. The survey showed that management commitment is the most important factor in implementing TQM in a construction company, and that project cost is the outcome most affected by the implementation of TQM. Management commitment is very important for implementing TQM in any company: if management loses interest in quality, then everyone in the organization will do so, and the success of TQM depends mostly on the top of the pyramid. Cost is also reduced and money saved when the project team implements TQM, while if no quality measures are present within the team, the project will suffer a commercial failure. Based on the literature, more factors could be examined and added to the model; more construction companies could be surveyed in order to obtain more accurate results; and this study could be conducted outside the United Arab Emirates for further enhancement.

Keywords: Construction project, total quality management, management commitment, cost, theoretical framework.

397 A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing

Authors: P. S. Prakash, S. Selvan

Abstract:

QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while efficiently using network resources; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding least-cost routing is known to be NP-hard, and some algorithms have been proposed to find a near-optimal solution. But these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit the general applicability of the heuristic, or are too costly in terms of execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm presents an approach for delay-constrained routing that captures the trade-off between cost minimization and the risk level regarding the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
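
For reference, the exact problem these heuristics approximate can be solved on small graphs by a label-setting search over (cost, delay) pairs; the sketch below is this generic baseline, not CDCR or ODCR themselves.

```python
import heapq

def least_cost_delay_constrained(graph, src, dst, max_delay):
    # graph[u] = [(v, cost, delay), ...]; labels popped in increasing cost,
    # so the first label reaching dst is the least-cost feasible path
    heap = [(0, 0, src)]
    best_delay = {}                      # dominance: best delay seen per node
    while heap:
        cost, delay, u = heapq.heappop(heap)
        if u == dst:
            return cost, delay
        if best_delay.get(u, float("inf")) <= delay:
            continue                     # dominated by a cheaper, faster label
        best_delay[u] = delay
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay:   # enforce the delay constraint
                heapq.heappush(heap, (cost + c, delay + d, v))
    return None

g = {"s": [("a", 1, 5), ("b", 4, 1)], "a": [("t", 1, 5)], "b": [("t", 4, 1)]}
print(least_cost_delay_constrained(g, "s", "t", 8))   # (8, 2): forced via b
```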

Keywords: QoS, delay, routing, optimization.

396 Toward an Efficient Framework for Designing, Developing, and Using Secure Mobile Applications

Authors: Mohamed Adel Serhani, Abdelghani Benharref, Rachida Dssouli, Rabeb Mizouni

Abstract:

Nowadays, people are going more and more mobile, both in terms of devices and associated applications, and the services these devices offer are getting wider and much more complex. Even though today's handheld devices have considerable computing power, their contexts of utilization are different: they are affected by the availability of connection, the high latency of wireless networks, battery life, the size of the screen, on-screen or hard keyboards, etc. Consequently, the development of mobile applications and their associated mobile Web services, if any, should follow a concise methodology so that they provide a high Quality of Service. The aim of this paper is to highlight and discuss the main issues to consider when developing mobile applications and mobile Web services, and then to propose a framework that leads developers through different steps and modules toward the development of efficient and secure mobile applications. First, the different challenges in developing such applications are elicited and deeply discussed. Second, a development framework is presented with different modules addressing each of these challenges. Third, the paper presents an example of a mobile application, Eivom Cinema Guide, which benefits from following our development framework.

Keywords: Mobile applications, development of mobile applications, efficient mobile application, secure mobile application.

395 Trust and Reputation Mechanism with Path Optimization in Multipath Routing

Authors: Ramya Dorai, M. Rajaram

Abstract:

A Mobile Ad hoc Network (MANET) is a collection of mobile nodes that communicate with each other over wireless links and without pre-existing communication infrastructure. Routing is an important issue which impacts network performance. As MANETs lack central administration and prior organization, their security concerns differ from those of conventional networks, and their wireless links make them susceptible to attacks. This study proposes a new trust mechanism to mitigate wormhole attacks in MANETs, in which different optimization techniques find the available optimal path from source to destination. The study extends trust and reputation to an improved Ad hoc On-demand Multipath Distance Vector (AOMDV) protocol based on link quality and channel utilization, with Differential Evolution (DE) used for optimization.

Keywords: Mobile Ad hoc Network (MANET), Ad hoc On-demand Multipath Distance Vector (AOMDV), trust and reputation, Differential Evolution (DE), link quality, channel utilization.

394 A Superior Delay Estimation Model for VLSI Interconnect in Current Mode Signaling

Authors: Sunil Jadav, Rajeevan Chandel, Munish Vashishath

Abstract:

Today's VLSI networks demand high speed, and in this work a compact mathematical model for current mode signalling in VLSI interconnects is presented. The RLC interconnect line is modelled using the characteristic impedance of the transmission line and the inductive effect; the on-chip inductance effect, which is dominant at lower technology nodes, is emulated as an equivalent resistance. A first-order transfer function is designed using a finite difference equation, the Laplace transform, and the boundary conditions at the source and load terminations. It is observed that the dominant pole determines the system response and delay in the proposed model. The novel current mode model shows superior performance compared to voltage mode signalling: analysis shows that current mode signalling in VLSI interconnects provides 2.8 times better delay performance than voltage mode. Secondly, the damping factor of a lumped RLC circuit is shown to be a useful figure of merit.
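
On the last point, the damping factor of a lumped series RLC line is the standard ζ = (R/2)·√(C/L); a tiny sketch with illustrative per-line totals:

```python
import math

R, L_ind, C = 50.0, 2e-9, 0.5e-12    # ohm, henry, farad (illustrative)
zeta = (R / 2.0) * math.sqrt(C / L_ind)
regime = "overdamped" if zeta > 1 else "underdamped (ringing possible)"
print(f"zeta = {zeta:.3f} -> {regime}")
```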

Keywords: Current Mode, Voltage Mode, VLSI Interconnect.

393 Low-Latency and Low-Overhead Path Planning for In-band Network-Wide Telemetry

Authors: Penghui Zhang, Hua Zhang, Jun-Bo Wang, Cheng Zeng, Zijian Cao

Abstract:

With the development of software-defined networks and programmable data planes, in-band network telemetry (INT) has become an emerging technology in communications because it can obtain accurate and real-time network information. However, due to the expansion of network scale, existing telemetry systems, to the best of the authors' knowledge, have difficulty meeting the common requirements of low overhead, low latency and full coverage for traffic measurement. This paper proposes a network-wide telemetry system with low-latency, low-overhead path planning (INT-LLPP). The paper builds a mathematical model to analyze the telemetry overhead and latency of INT systems, and then adopts a greedy-based path planning algorithm to reduce the overhead and latency of network telemetry while maintaining full network coverage. Simulation results show that network-wide telemetry is achieved and that the telemetry overhead is reduced significantly compared with existing INT systems, while INT-LLPP controls the system latency to deliver real-time network information.
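
A hedged sketch of the coverage-greedy core of such path planning (INT-LLPP also weighs latency and per-packet overhead, omitted here; the candidate paths are hypothetical): repeatedly pick the probe path that covers the most still-unmonitored links.

```python
def greedy_cover(candidate_paths):
    # each path is a tuple of links; classic greedy set cover over links
    uncovered = {link for path in candidate_paths for link in path}
    plan = []
    while uncovered:
        best = max(candidate_paths, key=lambda p: len(uncovered & set(p)))
        if not uncovered & set(best):
            break                        # remaining links are unreachable
        plan.append(best)
        uncovered -= set(best)
    return plan

paths = [(("s1", "s2"), ("s2", "s3")),
         (("s1", "s2"), ("s2", "s4")),
         (("s3", "s4"),)]
print(greedy_cover(paths))   # a small set of probe paths covering every link
```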

Keywords: Network telemetry, network monitoring, path planning, low latency.

392 ECA-SCTP: Enhanced Cooperative ACK for SCTP Path Recovery in Concurrent Multiple Transfer

Authors: GangHeok Kim, SungHoon Seo, JooSeok Song

Abstract:

The Stream Control Transmission Protocol (SCTP) has been proposed to provide reliable transport for real-time communications. Due to its attractive features, such as multi-streaming and multi-homing, SCTP is often expected to be an alternative protocol to TCP and UDP. In the original SCTP standard, the secondary path is mainly regarded as a redundancy. Recently, most research has focused on extending SCTP to enable a host to send its packets to a destination over multiple paths simultaneously. In order to transfer packets concurrently over multiple paths, SCTP should be designed to avoid unnecessary fast retransmission and mis-estimation of the congestion window size through the paths. Therefore, we propose an Enhanced Cooperative ACK SCTP (ECA-SCTP) to improve the path recovery efficiency of a multi-homed host in concurrent multiple transfer mode. We evaluated the performance of the proposed scheme using ns-2 simulation in terms of cwnd variation, path recovery time, and goodput. Our scheme provides better performance in lossy and path-asymmetric networks.

Keywords: SCTP, concurrent multiple transfer, cooperative SACK, dynamic ACK policy.

391 Attack Classification in Adaptive Intrusion Detection Using Decision Tree

Authors: Dewan Md. Farid, Nouria Harbi, Emna Bahri, Mohammad Zahidur Rahman, Chowdhury Mofizur Rahman

Abstract:

Recently, information security has become a key issue in information technology, as computers and networks are exposed to an increasing number of security threats. A variety of intrusion detection systems (IDS), ranging from traditional statistical methods to new data mining approaches, have been employed in recent decades to protect computers and networks from malicious network-based or host-based attacks. However, today's commercially available intrusion detection systems are signature-based and not capable of detecting unknown attacks. In this paper, we present a new learning algorithm for an anomaly-based network intrusion detection system using a decision tree algorithm that distinguishes attacks from normal behaviors and identifies different types of intrusions. Experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that the proposed learning algorithm achieves a 98% detection rate (DR), in comparison with other existing methods.
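
A minimal sketch of a decision-tree detection pipeline in scikit-learn, with synthetic records standing in for KDD99 (the real benchmark has 41 features and labeled attack categories); the detection rate is recall on the attack class.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 8))                 # synthetic connection features
y = (X[:, 0] + X[:, 3] > 1).astype(int)        # synthetic rule: 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"DR = {recall_score(y_te, clf.predict(X_te)):.2%}")   # detection rate
```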

Keywords: Detection rate, decision tree, intrusion detection system, network security.
