Search results for: conventional neural network
7184 Environmental Performance Measurement for Network-Level Pavement Management
Authors: Jessica Achebe, Susan Tighe
Abstract:
The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure, which intensifies the challenges municipalities face in maintaining adequate infrastructure performance thresholds and meeting users' required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation, and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can deliver added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still a far-fetched practice in road network management, and an ostensive agency-wide environmental sustainability or sustainable maintenance specification is missing. To address this challenge, this research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. To achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief overview of pavement managers' motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.
Keywords: pavement management, sustainability, network-level evaluation, environment measures
Procedia PDF Downloads 211
7183 Reducing Energy Consumption and GHG Emission by Integration of Flare Gas with Fuel Gas Network in Refinery
Authors: N. Tahouni, M. Gholami, M. H. Panjeshahi
Abstract:
Gas flaring is one of the largest GHG-emitting sources in the oil and gas industries. It is also a major waste of energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach for reducing GHG emissions and conserving energy in flaring systems. Integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient tool. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers, and turbines. In this article, we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on emission pollution by gas flaring to reduce GHG emissions as far as possible. Results for a refinery case study showed that integrating the flare gas stream with waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
Keywords: flaring, fuel gas network, GHG emissions, stream
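A much-simplified illustration of the blending idea behind a fuel gas network is to choose how much flare gas and natural gas to route to a single sink at minimum cost, subject to a demand and a heating-value constraint, which can be posed as a linear program. The costs, heating values, and demand below are hypothetical placeholders, not values from the refinery case study.
```python
from scipy.optimize import linprog

# Decision variables: x = [flare_gas, natural_gas] flow routed to one fuel sink.
cost = [0.0, 5.0]             # hypothetical cost per unit: flare gas is "free", natural gas is purchased
heating_value = [35.0, 50.0]  # hypothetical heating value of each stream
demand = 100.0                # hypothetical total flow required by the sink
min_avg_hv = 42.0             # hypothetical minimum average heating value at the sink

# Equality: total flow meets demand.
A_eq = [[1.0, 1.0]]
b_eq = [demand]

# Inequality: average heating value >= minimum, written as -(hv1*x1 + hv2*x2) <= -min_avg_hv*demand.
A_ub = [[-heating_value[0], -heating_value[1]]]
b_ub = [-min_avg_hv * demand]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])
print(res.x)  # optimal split between flare gas and natural gas
```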
Procedia PDF Downloads 344
7182 Avoiding Packet Drop for Improved Throughput in Multi-Hop Wireless Networks
Authors: Manish Kumar Rajak, Sanjay Gupta
Abstract:
Mobile ad hoc networks (MANETs) are infrastructure-less and intercommunicate using single-hop and multi-hop paths. Network-based congestion avoidance, which involves managing the queues in the network devices, is an integral part of any network. Quality of service (QoS) is a set of service requirements that are met by the network while transferring a packet stream from a source to a destination. In MANETs especially, packet loss results in increased overheads. This paper presents a new algorithm that avoids congestion by using one or more queues on each node, with a corresponding flow rate decided in advance for each node. When a node's queue reaches a preset initial value, it sends this status to its downstream nodes, which in turn apply the pre-decided flow rate for packet transfer to their upstream nodes. The flow rate on each node is adjusted according to the status received from its upstream nodes. The proposed algorithm uses the existing infrastructure to inform other nodes about the current queue status.
Keywords: mesh networks, MANET, packet count, threshold, throughput
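A minimal, generic sketch of the threshold-notification idea described above (not the authors' exact algorithm): each node tracks its queue, and once the queue crosses a threshold, the sending neighbour falls back to a pre-agreed reduced rate. The node roles, threshold, and rates are illustrative assumptions.
```python
# Minimal discrete-time sketch of threshold-triggered rate control on a two-node hop.
# All parameters (queue threshold, rates, service rate) are illustrative assumptions.

QUEUE_THRESHOLD = 8      # queue length at which a node advertises congestion
DEFAULT_RATE = 5         # packets per step a sender forwards normally
REDUCED_RATE = 2         # pre-decided fallback rate once congestion is advertised
SERVICE_RATE = 3         # packets per step the receiving node can drain

def simulate(steps: int = 20) -> None:
    queue = 0            # receiver's queue occupancy
    sender_rate = DEFAULT_RATE
    for t in range(steps):
        queue += sender_rate                  # packets arriving from the upstream sender
        queue = max(0, queue - SERVICE_RATE)  # packets the receiver forwards onward
        # Receiver advertises its queue status; sender reacts with the pre-decided rate.
        sender_rate = REDUCED_RATE if queue >= QUEUE_THRESHOLD else DEFAULT_RATE
        print(f"step {t:2d}: queue={queue:2d}, next sender rate={sender_rate}")

if __name__ == "__main__":
    simulate()
```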
Procedia PDF Downloads 474
7181 An Approach to Maximize the Influence Spread in the Social Networks
Authors: Gaye Ibrahima, Mendy Gervais, Seck Diaraf, Ouya Samuel
Abstract:
In this paper, we consider influence maximization in social networks. Here we give importance to the initial diffusers, called the seeds. The goal is to efficiently find a subset of k elements in the social network that will begin and maximize the information diffusion process. A new approach, which treats the social network before determining the seeds, is proposed. This treatment eliminates information feedback toward an element considered as a seed by extracting an acyclic spanning social network. First, we propose two algorithm versions called SCG-algorithm (v1 and v2) (Spanning Connected Graph algorithm). This algorithm takes as input a connected social network, directed or not. Finally, a generalization of the SCG-algorithm is proposed, called SG-algorithm (Spanning Graph algorithm), which takes any graph as input. These two algorithms are effective, and each has polynomial complexity. To show the pertinence of our approach, two seed sets are determined, and the one given by our approach yields better results. The performance of this approach is clearly visible in the simulations carried out with the R software and the igraph package.
Keywords: acyclic spanning graph, centrality measures, information feedback, influence maximization, social network
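A generic way to obtain an acyclic spanning structure from a directed social network is to take a BFS tree rooted at a candidate seed; the sketch below illustrates that idea with networkx and is not the authors' SCG-algorithm. The toy graph and the choice of centrality measure are assumptions.
```python
# Generic illustration: extract an acyclic spanning subgraph (a BFS tree) of a
# directed social network, so information cannot feed back toward the seed.
import networkx as nx

G = nx.gnp_random_graph(50, 0.08, directed=True, seed=42)  # toy social network
root = 0                                                   # candidate seed node

spanning_dag = nx.bfs_tree(G, root)        # acyclic by construction
assert nx.is_directed_acyclic_graph(spanning_dag)

# Rank remaining nodes by out-degree centrality on the spanning structure
# to shortlist further seeds (one of many possible centrality measures).
centrality = nx.out_degree_centrality(spanning_dag)
seeds = sorted(centrality, key=centrality.get, reverse=True)[:5]
print("candidate seeds:", seeds)
```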
Procedia PDF Downloads 248
7180 Urban Road Network Connectivity and Accessibility Analysis Using RS and GIS: A Case Study of Chandannagar City
Authors: Joy Ghosh, Debasmita Biswas
Abstract:
The road network of any area is one of the most important indicators for regional planning. For proper utilization of urban road networks, structural parameters such as connectivity and accessibility should be analyzed and evaluated. This paper aims to explain the application of GIS to urban road network connectivity and accessibility analysis through a case study of Chandannagar City. Road network connectivity is analyzed through various connectivity measurements such as the total number of nodes and links, the cyclomatic number, the alpha, beta, gamma, eta, pi, and theta indices, the aggregated transport score, and road density, based on the existing road network in Chandannagar city, India. Accessibility is measured through the shortest-path matrix, the associated number, and the Shimbel index. Various urban services, such as schools, banks, hospitals, petrol pumps, ATMs, police stations, theatres, and parks, are considered in the accessibility analysis for each ward. This paper also highlights the relationship between urban land use/land cover (LULC), the urban road network, and population density using various spatial and statistical measurements. The datasets were collected through a field survey of the 33 wards of the Chandannagar Municipal Corporation area, and the secondary data were collected through OpenStreetMap and LANDSAT 8 OLI & TIRS satellite imagery from USGS. Chandannagar was once a French colony, and various kinds of planning were applied at that time, but the city now continues to grow haphazardly and faces several problems; the knowledge gained from this paper helps to create a more efficient and accessible road network. It is therefore suggested that some wards need to improve their connectivity and accessibility for the future growth and development of Chandannagar.
Keywords: accessibility, connectivity, transport, road network
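The graph-theoretic indices named above have standard definitions; a small sketch computing a few of them for an illustrative planar road graph is given below (the example edges are placeholders, not Chandannagar data).
```python
# Standard connectivity indices for a planar road network graph (illustrative edges only).
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4), (4, 5)])  # toy road network
v, e = G.number_of_nodes(), G.number_of_edges()
p = nx.number_connected_components(G)

cyclomatic = e - v + p                  # number of independent circuits
beta = e / v                            # links per node
gamma = e / (3 * (v - 2))               # observed links / maximum possible links (planar)
alpha = (e - v + p) / (2 * v - 5)       # observed circuits / maximum possible circuits (planar)

# Shimbel index (accessibility): total shortest-path distance from each node to all others.
shimbel = {n: sum(nx.single_source_shortest_path_length(G, n).values()) for n in G}

print(cyclomatic, round(beta, 2), round(gamma, 2), round(alpha, 2))
print("Shimbel index per node:", shimbel)
```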
Procedia PDF Downloads 73
7179 Conscious Intention-Based Processes Impact the Neural Activities Prior to Voluntary Action on Reinforcement Learning Schedules
Authors: Xiaosheng Chen, Jingjing Chen, Phil Reed, Dan Zhang
Abstract:
Conscious intention can be a promising entry point for grasping consciousness and orienting voluntary action. The current study adopted a random ratio (RR), yoked random interval (RI) reinforcement learning schedule, instead of the previous highly repeatable and single-decision-point paradigms, to induce voluntary action with a conscious intention that evolves from the interaction between short-range intention and long-range intention. Readiness potential (RP)-like EEG amplitude and inter-trial EEG variability decreased significantly prior to voluntary action compared to cued action; for inter-trial EEG variability, this was mainly featured during the earlier stage of neural activities. Notably, RP-like EEG amplitudes decreased significantly prior to responses at higher RI reward rates, in which participants formed a higher plane of conscious intention. The present study suggests a possible contribution of conscious intention-based processes to the neural activities in the earlier stage prior to voluntary action on reinforcement learning schedules.
Keywords: reinforcement learning schedule, voluntary action, EEG, conscious intention, readiness potential
Procedia PDF Downloads 78
7178 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since a design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximation-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value is larger than the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the counted number of failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% when a 10% deviation in the output is acceptable. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
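The ACMVF described above reduces to a simple ratio; a minimal sketch of that calculation over logged fault-injection outputs is shown below, where the arrays of observed and expected outputs are hypothetical placeholders.
```python
# ACMVF as described: a fault-injection run counts as a failure only when the relative
# deviation of the observed output from the expected output exceeds a chosen margin.
# The sample outputs below are placeholders, not data from the Zynq-7000 test bench.

def acmvf(observed, expected, margin: float) -> float:
    """Ratio of injections whose relative output deviation exceeds `margin`."""
    failures = 0
    for got, want in zip(observed, expected):
        deviation = abs(got - want) / abs(want) if want != 0 else abs(got)
        if deviation > margin:
            failures += 1
    return failures / len(observed)

expected_outputs = [1000] * 6                          # golden outputs of the 32-bit adder
observed_outputs = [1000, 1003, 1090, 400, 998, 2500]  # outputs under injected SEUs

print(acmvf(observed_outputs, expected_outputs, margin=0.00))  # conventional: any change fails
print(acmvf(observed_outputs, expected_outputs, margin=0.10))  # approximate: 10% tolerance
```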
Procedia PDF Downloads 198
7177 Deep Learning Approach to Trademark Design Code Identification
Authors: Girish J. Showkatramani, Arthi M. Krishna, Sashi Nareddi, Naresh Nula, Aaron Pepe, Glen Brown, Greg Gabel, Chris Doninger
Abstract:
Trademark examination and approval is a complex process that involves analysis and review of the design components of marks, such as their visual representation, as well as the textual data associated with marks, such as the marks' descriptions. Currently, the process of identifying marks with similar visual representations is done manually in the United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts determining the trademark design codes used to catalog the visual design elements in the mark. In this study, we explore several methods to automate trademark design code classification. Based on the recent successes of convolutional neural networks in image classification, we have used several different convolutional neural networks, such as Google's Inception v3, Inception-ResNet-v2, and Xception. The study also looks into other techniques to augment the results from the CNNs, such as using the Open Source Computer Vision Library (OpenCV) to pre-process the images. This paper reports the results of the various models trained on a year of annotated trademark images.
Keywords: trademark design code, convolutional neural networks, trademark image classification, trademark image search, Inception-ResNet-v2
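A common way to apply one of the named architectures to a new label set is transfer learning. The sketch below fine-tunes a pre-trained Inception v3 backbone for a hypothetical set of design-code classes; the class count, dataset pipeline, and training settings are assumptions, not details from the USPTO study.
```python
# Transfer learning with a pre-trained Inception v3 backbone (illustrative settings only).
import tensorflow as tf

NUM_DESIGN_CODES = 100  # hypothetical number of trademark design-code classes

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone, train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_DESIGN_CODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, design_code_label) pairs,
# e.g. built with tf.keras.utils.image_dataset_from_directory(...).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```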
Procedia PDF Downloads 232
7176 Comparison of the Isolation Rates and Characteristics of Salmonella Isolated from Antibiotic-Free and Conventional Chicken Meat Samples
Authors: Jin-Hyeong Park, Hong-Seok Kim, Jin-Hyeok Yim, Young-Ji Kim, Dong-Hyeon Kim, Jung-Whan Chon, Kun-Ho Seo
Abstract:
Salmonella contamination in chicken samples can cause major health problems in humans. However, neither the effects of antibiotic treatment during growth nor the impacts of the poultry slaughter line on the prevalence of Salmonella in the final chicken meat sold to consumers are known. In this study, we compared the isolation rates and antimicrobial resistance of Salmonella between antibiotic-free, conventional, and conventional Korean native retail chicken meat samples, as well as the clonal divergence of Salmonella isolates by multilocus sequence typing. In addition, the distribution of extended-spectrum β-lactamase (ESBL) genes in ESBL-producing Salmonella isolates was analyzed. A total of 72 retail chicken meat samples (n = 24 antibiotic-free broiler [AFB] chickens, n = 24 conventional broiler [CB] chickens, and n = 24 conventional Korean native [CK] chickens) were collected from local retail markets in Seoul, South Korea. The isolation rates of Salmonella were 66.6% in AFB chickens, 45.8% in CB chickens, and 25% in CK chickens. By analyzing the minimum inhibitory concentrations of β-lactam antibiotics with the disc-diffusion test, we found that 81.2% of Salmonella isolates from AFB chickens, 63.6% of isolates from CB chickens, and 50% of isolates from CK chickens were ESBL producers; all ESBL-positive isolates had the CTX-M-15 genotype. Interestingly, all ESBL-producing Salmonella were revealed to be ST16 by multilocus sequence typing. In addition, all CTX-M-15-positive isolates had the genetic platform of the blaCTX-M gene (IS26-ISEcp1-blaCTX-M-15-IS903); to the best of our knowledge, this is the first such report in Salmonella worldwide. The Salmonella ST33 strain (S. Hadar) isolated in this study has never been reported in South Korea. In conclusion, our findings showed that antibiotic-free retail chicken meat products were also largely contaminated with ESBL-producing Salmonella and that their ESBL genes and genetic platforms were the same as those isolated from conventional retail chicken meat products.
Keywords: antibiotic-free poultry, conventional poultry, multilocus sequence typing, extended-spectrum β-lactamase, antimicrobial resistance
Procedia PDF Downloads 277
7175 Visualization of Malaysia Universities Websites Based On Social Network Analysis
Authors: N. A. Ismail, Abdul Arif, Sharul Hafiz, Lu S. J., Tham W. S., Wong S. K.
Abstract:
This paper investigates the visualization of Malaysian university websites. Twenty (20) public university websites in Malaysia were chosen as samples to explore and visualize the link relationships between their academic websites using social network analysis methods such as inlinks, degree, weight, betweenness, and modularity class. All of these connections and relations demonstrate the power to influence, the comprehensive strength, and also the variety of subject types present in the universities. The experimental results also show that University Malaysia Sabah (UMS) is the largest provider of backlinks.
Keywords: academic websites, link analysis, social network analysis, experimental result
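A minimal sketch of this kind of link analysis, building a directed inter-site link graph and computing inlinks, betweenness, and modularity classes, is shown below; the edge list is a made-up placeholder, not the surveyed Malaysian data.
```python
# Directed link graph between websites, with the metrics named in the abstract.
# The edges below are illustrative placeholders only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [("ums.edu", "um.edu"), ("um.edu", "usm.edu"), ("ums.edu", "usm.edu"),
         ("utm.edu", "um.edu"), ("usm.edu", "ums.edu"), ("ukm.edu", "ums.edu")]
G = nx.DiGraph(edges)

in_degree = dict(G.in_degree())                                  # inlinks received by each site
betweenness = nx.betweenness_centrality(G)                       # brokerage position in the link network
communities = greedy_modularity_communities(G.to_undirected())   # modularity classes

print("inlinks:", in_degree)
print("betweenness:", betweenness)
print("modularity classes:", [sorted(c) for c in communities])
```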
Procedia PDF Downloads 471
7174 The Effect of Feature Selection on Pattern Classification
Authors: Chih-Fong Tsai, Ya-Han Hu
Abstract:
The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) so that the classifier performs better than one without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies examine the effect of performing different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets, respectively.
Keywords: data mining, feature selection, pattern classification, dimensionality reduction
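Information gain corresponds to the mutual information between a feature and the class label. A brief sketch of IG-based selection followed by the three classifier families named above, using scikit-learn on a toy dataset, is given below; the dataset and the number of selected features are illustrative choices, not the paper's 14 benchmarks.
```python
# Information-gain (mutual information) feature selection followed by DT, MLP, and SVM.
# The toy dataset and k=10 selected features are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(mutual_info_classif, k=10),  # IG-style selection
                         clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"IG + {name}: mean accuracy = {scores.mean():.3f}")
```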
Procedia PDF Downloads 669
7173 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity
Authors: Shaan Khosla, Jon Krohn
Abstract:
In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and to compile a set of similar candidates for any given candidate. While it is useful to create these models, validating their accuracy in a recommendation context is difficult due to the sparsity of data. In this report, we use network graph data to generate useful representations for candidates and vacancies. We use candidates and vacancies as network nodes and designate a bi-directional link between them when the candidate interviewed for the vacancy. After applying node2vec, the embeddings are used to construct a validation dataset with a ranked order, which helps validate new recommender systems.
Keywords: AI, machine learning, NLP, recruiting
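As a rough illustration of the embedding step, the sketch below builds a small bipartite candidate-vacancy graph, generates plain random walks, and feeds them to Word2Vec. The real node2vec procedure uses biased walks controlled by return and in-out parameters, and all node names here are hypothetical.
```python
# Simplified node2vec-style embeddings on a candidate-vacancy interview graph.
# Walks here are unbiased (plain random walks); real node2vec biases them with p/q.
import random
import networkx as nx
from gensim.models import Word2Vec

edges = [("cand_1", "vac_A"), ("cand_2", "vac_A"), ("cand_2", "vac_B"),
         ("cand_3", "vac_B"), ("cand_3", "vac_C"), ("cand_4", "vac_C")]
G = nx.Graph(edges)  # bi-directional candidate <-> vacancy interview links

def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        walk.append(random.choice(neighbors))
    return walk

walks = [random_walk(G, node) for node in G.nodes for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, epochs=10)

# Most similar candidates/vacancies to a given candidate, by embedding similarity.
print(model.wv.most_similar("cand_2", topn=3))
```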
Procedia PDF Downloads 84
7172 Novel Spoke-Type BLDC Motor Design for Cost Effective and High Power Density
Authors: Suyong Kim
Abstract:
Recently, because of the rise in the price of rare-earth magnets, interest in non-rare-earth or less-rare-earth motors has been growing. In particular, to achieve high power density, Spoke-Type BLDC (Brushless Permanent Magnet) Motors with ferrite permanent magnets are in the spotlight. However, the Spoke-Type Ferrite BLDC Motor suffers from considerable magnetic flux leakage in the direction of the rotor shaft. There are two conventional ways to solve this problem, but they either increase the product cost or decrease the power density. Therefore, this paper proposes a new Spoke-Type BLDC rotor shape that combines the advantages of both conventional methods. The new shape consists of a one-piece core in which the inside and the outside of the rotor are open alternately, so it achieves both reduced production cost and high power density.
Keywords: motor, BLDC, spoke, ferrite
Procedia PDF Downloads 573
7171 Social Network Analysis, Social Power in Water Co-Management (Case Study: Iran, Shemiranat, Jirood Village)
Authors: Fariba Ebrahimi, Mehdi Ghorbani, Ali Salajegheh
Abstract:
Comprehensive water management considers economic, environmental, technical, and social factors, as well as the sustainability of water resources for future generations. Such management implies a cooperative approach, involves all stakeholders, and introduces issues to managers and decision and policy makers. Solving these issues needs an integrated, systems approach, and recognition of the actors or key persons is necessary to apply cooperative management of water. Therefore, stakeholder analysis and social network analysis can be used to identify the most effective actors for environmental decisions. In this research, social power is specified according to a social network approach at the level of water users in the Jirood catchment of the Latian basin. Water users were identified through field visits, and trust and collaboration matrices were then produced using questionnaires. In the next step, the degree centrality index was examined. Finally, the geometric position of each actor was illustrated in the network. The results, based on the centrality index, play a key role in identifying cooperative management of water in Jirood and will also help water managers and planners to recognize social power in order to organize and implement sustainable water management.
Keywords: social network analysis, water co-management, social power, centrality index, local stakeholders network, Jirood catchment
Procedia PDF Downloads 372
7170 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL
Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson
Abstract:
The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous-flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach that accounts for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that the DNA efficiency can be improved by ~2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken throughout thermal cycling in such devices.
Keywords: PCR, optimisation, microfluidics, COMSOL
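The surrogate-enabled workflow described above (evaluate a few DoE points, fit a response surface, then search it with a global optimiser) can be sketched generically as below; the objective function, design bounds, and spline kernel are illustrative stand-ins for the CFD-derived quantities, not the COMSOL model itself.
```python
# Generic surrogate-based optimisation: DoE sampling -> polyharmonic (thin-plate) spline
# response surface -> global search. "cfd_objective" is a cheap stand-in for an expensive
# CFD evaluation of, e.g., amplification efficiency versus pressure drop.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def cfd_objective(x):
    width, height = x  # design variables (placeholder physics, not the COMSOL model)
    return (width - 400.0) ** 2 / 1e4 + (height - 50.0) ** 2 / 1e2

# 1. Design of Experiments: a small set of sampled designs and their "CFD" responses.
doe_points = rng.uniform([100.0, 20.0], [600.0, 100.0], size=(30, 2))
doe_values = np.array([cfd_objective(p) for p in doe_points])

# 2. Response surface: thin-plate spline, a member of the polyharmonic spline family.
surface = RBFInterpolator(doe_points, doe_values, kernel="thin_plate_spline")

# 3. Search the cheap surrogate instead of the expensive simulation.
result = differential_evolution(lambda x: float(surface(x.reshape(1, -1))[0]),
                                bounds=[(100.0, 600.0), (20.0, 100.0)], seed=0)
print("surrogate optimum (width, height):", result.x)
```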
Procedia PDF Downloads 161
7169 Proposed Framework based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks
Authors: Shidrokh Goudarzi, Wan Haslina Hassan
Abstract:
Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodologies and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard and efficient method to satisfy both user and network quality-of-service requirements at different levels, including the architectural, decision-making, and protocol levels. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimum solution for achieving seamless service continuity and facilitating seamless connectivity.
Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms
Procedia PDF Downloads 393
7168 Developing a Model – an Application of Fuzzy Analytic Network Process Techniques for Hostels
Authors: Pin-Ju Juan, Peng-Yu Juan, Yi-Shan Chen
Abstract:
The main purpose of this paper is to present a fuzzy Analytic Network Process (ANP) model for hostel organizational performance selection. We created 39 criteria for evaluating hostel organizational performance, drawn from a literature review and practical investigations using expert methods, and the fuzzy analytic network process is used to consolidate decision-makers' assessments of the criteria weightings. Finally, we selected the organizational performance of a hostel in Taiwan to determine the effectiveness of the proposed evaluation model.
Keywords: fuzzy ANP, hostel, organizational performance, strategy management
Procedia PDF Downloads 200
7167 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been developed to compare colored images against classical approaches; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into grey scale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
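A minimal sketch of the hybrid pattern described, encoding classical pixel features into qubit rotations on a simulator, applying trainable rotations, and reading out expectation values as features for a classical classifier, is shown below using PennyLane's default simulator. The qubit count, layer structure, and feature size are illustrative assumptions, not the authors' QCNN architecture.
```python
# Hybrid quantum feature extraction on PennyLane's simulator (illustrative circuit only).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_features(pixels, weights):
    # Encode a small block of (rescaled) pixel values as rotation angles.
    qml.AngleEmbedding(pixels, wires=range(n_qubits))
    # One shallow layer of trainable single-qubit rotations plus entangling CNOTs.
    for w in range(n_qubits):
        qml.RY(weights[w], wires=w)
    for w in range(n_qubits - 1):
        qml.CNOT(wires=[w, w + 1])
    # Expectation values become classical features for a downstream classifier.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

pixels = np.array([0.1, 0.5, 0.9, 0.3])                      # hypothetical pre-processed pixel block
weights = np.array([0.2, 0.4, 0.6, 0.8], requires_grad=True)  # trainable rotation angles
print(quantum_features(pixels, weights))
```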
Procedia PDF Downloads 129
7166 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of empirical studies on stock market decision-support methodology indicates that they are at the threshold of validating the accuracy of traditional models against fuzzy, artificial neural network, and decision tree models. Many researchers have been attempting to compare these models using various datasets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive-sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, which affect the price of the stocks, using OLS analysis and decision tree classifiers.
Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
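One simple way to combine the two techniques named above is to regress a stock's return on its lagged returns with OLS to find significant lags, and then feed those lags to a decision tree that classifies next-day direction. The sketch below uses synthetic prices as a stand-in for the NSE automotive series.
```python
# OLS on lagged returns to spot significant predictors, then a decision tree on the same lags.
# Prices are synthetic placeholders, not NSE automotive data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), name="stock")
returns = prices.pct_change()

# Build lagged features (lags 1..3) and the current return.
data = pd.DataFrame({f"lag_{k}": returns.shift(k) for k in (1, 2, 3)})
data["ret"] = returns
data = data.dropna()

ols = sm.OLS(data["ret"], sm.add_constant(data[["lag_1", "lag_2", "lag_3"]])).fit()
print(ols.pvalues)  # which lags are statistically significant

direction = (data["ret"] > 0).astype(int)  # 1 = price went up
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data[["lag_1", "lag_2", "lag_3"]], direction)
print("in-sample accuracy:", tree.score(data[["lag_1", "lag_2", "lag_3"]], direction))
```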
Procedia PDF Downloads 485
7165 A Performance Comparison between Conventional and Flexible Box Erecting Machines Using Dispatching Rules
Authors: Min Kyu Kim, Eun Young Lee, Dong Woo Son, Yoon Seok Chang
Abstract:
In this paper, we introduce a flexible box erecting machine (BEM) that swiftly and automatically transforms cardboard into a three-dimensional box. Recently, the parcel service and home-shopping industries have grown rapidly, and there is an increasing need for various box types to ship various products. However, workers cannot fold thousands of boxes manually in a day. As such, automatic BEMs are garnering greater attention. This study takes equipment operation into consideration as well as mechanical improvements in order to design a BEM that is able to outperform its conventional counterparts. We analyzed six dispatching rules – First In First Out (FIFO), Shortest Processing Time (SPT), Earliest Due Date (EDD), Setup Avoidance, EDD + SPT, and EDD + Setup Avoidance – to determine which one was most suitable for BEM operation. Consequently, SPT and Setup Avoidance were found to be the most critical rules, followed by EDD + Setup Avoidance, EDD + SPT, EDD, and FIFO. This hierarchy was valid for both our conventional BEM and our new flexible BEM from the viewpoint of processing time. We believe that this research can contribute to flexible BEM management, which has the potential to increase productivity and convenience.
Keywords: automation, box erecting machine, dispatching rule, setup time
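The dispatching rules compared above are simple priority orderings over the job queue; a short sketch showing how FIFO, SPT, EDD, and a setup-avoidance rule would sequence the same hypothetical set of box jobs is given below.
```python
# Sequencing one hypothetical queue of box jobs under four of the compared dispatching rules.
jobs = [  # (job id, processing time, due date, box type)
    ("J1", 4, 10, "A"),
    ("J2", 2, 6,  "B"),
    ("J3", 5, 8,  "A"),
    ("J4", 1, 12, "B"),
]

fifo = [j[0] for j in jobs]                              # First In First Out
spt = [j[0] for j in sorted(jobs, key=lambda j: j[1])]   # Shortest Processing Time
edd = [j[0] for j in sorted(jobs, key=lambda j: j[2])]   # Earliest Due Date

def setup_avoidance(queue, current_type="A"):
    """Greedy rule: keep picking jobs of the current box type to avoid setup changes."""
    remaining, order = list(queue), []
    while remaining:
        same = [j for j in remaining if j[3] == current_type]
        nxt = same[0] if same else remaining[0]  # change over only when forced
        order.append(nxt[0])
        current_type = nxt[3]
        remaining.remove(nxt)
    return order

print("FIFO:", fifo, "SPT:", spt, "EDD:", edd, "Setup Avoidance:", setup_avoidance(jobs))
```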
Procedia PDF Downloads 363
7164 Packet Fragmentation Caused by Encryption and Using It as a Security Method
Authors: Said Rabah Azzam, Andrew Graham
Abstract:
Fragmentation of packets is caused by encryption applied at the network layer of the OSI model in Internet Protocol version 4 (IPv4) networks. This paper also examines the possibility of using fragmentation and Access Control Lists (ACLs) as a method of restricting network access to certain hosts or areas of a network. Using default settings, fragmentation is expected to occur, and each fragment should be reassembled at the other end. If this does not occur, then a high number of ICMP messages should be generated back towards the source host, indicating that the packet is too large and needs to be made smaller. This result is also expected when the MTU is changed for certain links between devices. When using ACLs and packet fragments to restrict access to hosts or network segments, it is possible that ACLs cannot be set up in this way. If ACLs cannot be set up to allow only fragments, then this is a limitation of the hardware's firmware holding back this particular method. If the ACL on the restricted switch can be set up in such a way as to allow only fragments, then a connection that forces packets to fragment should be allowed to pass through the ACL. This should then establish a network connection to the destination machine, allowing data to be sent to and from it. ICMP messages from the restricted-access switch and host should also be blocked from being sent back across the link, which will be shown in an SSH session into the switch.
Keywords: fragmentation, encryption, security, switch
Procedia PDF Downloads 336
7163 Analysis on the Copyright Protection Dilemma of Webcast in 'Internet Plus' Era
Authors: Yi Yang
Abstract:
In the era of 'Internet plus', the rapid development of webcasting has posed new challenges to intellectual property law. Meanwhile, traditional copyright protection has also exposed an existing theoretical imbalance with respect to webcasting. Through an analysis of the outstanding problems in the copyright protection of network live broadcasts, this paper points out that the main causes of these problems are that the nature of the copyright in a network live broadcast is unclear, that a copyright protection system for game live broadcasts has not yet been constructed, and that copyright infringement is widespread in pan-entertainment live broadcasts. Based on current practice, this paper puts forward specific thinking on a protection path for online live broadcast copyright. First, to provide a reasonable judicial solution for the large number of online live broadcast copyright cases, the scope of rights and the regulated conduct under the broadcasting right and the information network communication right need to be integrated. Second, in order to protect the rights of network anchors, webcasts should be regarded as works. Third, in order to protect webcast copyright and prevent infringement, the webcast platform should be used as an intermediary to provide solutions for resolving the judicial dilemma. In the era of 'Internet plus', exploring the protection and methods of copyright protection for webcasts is a theoretical attempt that has positive guiding significance for judicial practice.
Keywords: 'Internet Plus' era, webcast, copyright, protection dilemma
Procedia PDF Downloads 113
7162 Learning Dynamic Representations of Nodes in Temporally Variant Graphs
Authors: Sandra Mitrovic, Gaurav Singh
Abstract:
In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information. Instead, ad hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in the observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic and time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad hoc feature-selection-based approaches and static node2vec.
Keywords: churn prediction, dynamic networks, node2vec, auto-encoders
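The concatenate-then-compress step can be sketched independently of how the per-timestamp embeddings are obtained. Below, randomly generated stand-ins for node2vec vectors at three timestamps are concatenated per node and compressed with a small dense autoencoder, after which the bottleneck codes would feed a churn classifier; all dimensions and the network layout are illustrative assumptions.
```python
# Concatenate per-timestamp node embeddings and compress them with a dense autoencoder.
# The "embeddings" are random stand-ins for node2vec outputs at three snapshots.
import numpy as np
import tensorflow as tf

n_nodes, emb_dim, n_snapshots, code_dim = 1000, 64, 3, 32
rng = np.random.default_rng(0)
snapshots = [rng.normal(size=(n_nodes, emb_dim)) for _ in range(n_snapshots)]
concatenated = np.concatenate(snapshots, axis=1)          # shape: (n_nodes, 192)

inputs = tf.keras.Input(shape=(emb_dim * n_snapshots,))
code = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)  # bottleneck layer
outputs = tf.keras.layers.Dense(emb_dim * n_snapshots)(code)       # reconstruction layer
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(concatenated, concatenated, epochs=5, batch_size=64, verbose=0)

encoder = tf.keras.Model(inputs, code)
node_features = encoder.predict(concatenated, verbose=0)  # compressed dynamic representations
print(node_features.shape)                                # (1000, 32) -> input to SVM / LogReg
```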
Procedia PDF Downloads 314
7161 Decellularized Brain-Chitosan Scaffold for Neural Tissue Engineering
Authors: Yun-An Chen, Hung-Jun Lin, Tai-Horng Young, Der-Zen Liu
Abstract:
Decellularized brain extracellular matrix has been shown to influence cell proliferation, differentiation, and the associated cell phenotype. However, this scaffold is thought to have poor mechanical properties and rapid degradation, which makes recellularization difficult. In this study, we combined decellularized brain extracellular matrix with chitosan, a naturally occurring, non-cytotoxic polysaccharide polymer, to form a 3-D scaffold for neural stem/precursor cell (NSPC) regeneration. HE staining and DAPI fluorescence staining confirmed that the decellularization process could effectively remove the cellular components from the brain. Good preservation of GAGs and of collagen I and collagen IV was shown by Alcian staining and immunofluorescence staining, respectively. The decellularized brain extracellular matrix was well mixed into chitosan to form a 3-D scaffold (DB-C scaffold). The pore size was approximately 50±10 μm, as examined by SEM images. Alamar blue results demonstrated that NSPCs had a good proliferation ability in the DB-C scaffold. NSPCs cultured in this composite scaffold differentiated into neurons and astrocytes, as revealed by their expression of microtubule-associated protein 2 (MAP2) and glial fibrillary acidic protein (GFAP). In conclusion, the DB-C scaffold may provide bioinformatic cues for NSPC regeneration and aid functional recovery after CNS injury.
Keywords: brain, decellularization, chitosan, scaffold, neural stem/precursor cells
Procedia PDF Downloads 320
7160 Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network
Authors: Nouha Khediri, Mohammed Ben Ammar, Monji Kherallah
Abstract:
Recently, facial emotion recognition (FER) has become increasingly essential for understanding the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, which benefits from deep learning, especially CNNs and VGG16. First, the data are pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolution layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. On this basis, the paper also reviews work on facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work.
Keywords: CNN, deep-learning, facial emotion recognition, machine learning
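A minimal Keras sketch matching the described layout, with five convolution layers, five pooling layers, and a softmax output over the seven FER2013 emotion classes, is given below; the filter counts, kernel sizes, and 48×48 greyscale input are illustrative choices, not the exact CV-FER configuration.
```python
# Five convolution + five pooling layers with a softmax head, sized for 48x48 FER2013 images.
# Filter counts and kernel sizes are illustrative, not the paper's exact CV-FER settings.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(7, activation="softmax"),  # FER2013 has seven emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```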
Procedia PDF Downloads 95
7159 Sustainable Design of Coastal Bridge Networks in the Presence of Multiple Flood and Earthquake Risks
Authors: Riyadh Alsultani, Ali Majdi
Abstract:
To evaluate the flood and earthquake risks of coastal bridge networks, it is necessary to develop a design methodology that includes the possibility of seismic events occurring in a region, the vulnerability of the civil hydraulic structure, and the effects of the hazard's occurrence on society, the environment, and the economy. This paper presents a design approach for assessing the risk and sustainability of coastal bridge networks under time-variant flood-earthquake conditions. The social, environmental, and economic indicators of the network are used to measure its sustainability; these consist of anticipated loss, downtime, energy waste, and carbon dioxide emissions. The design process takes into account the possible occurrence of a set of flood and earthquake scenarios that represent the local seismic activity. Network links are assessed based on the performance of each bridge as determined by fragility assessments. The network's connectivity and the bridges' damage states after an earthquake scenario determine the network's sustainability and risk. The temporal volatility of the sustainability measures and the risk of structural degradation are both highlighted. The method is demonstrated using a transportation network in Baghdad, Iraq.
Keywords: sustainability, coastal bridge networks, flood-earthquake risk, structural design
Procedia PDF Downloads 94
7158 A Comparative and Critical Analysis of Some Routing Protocols in Wireless Sensor Networks
Authors: Ishtiaq Wahid, Masood Ahmad, Nighat Ayub, Sajad Ali
Abstract:
The lifetime of a wireless sensor network (WSN) is directly determined by the energy consumption of its constituent nodes. Routing in a wireless sensor network is very challenging due to its inherent characteristics. In hierarchical routing, the sensor field is divided into clusters. Cluster heads are selected from each cluster, which forms a hierarchy of nodes. The cluster heads are used to transmit the data to the base station, while the other nodes perform the sensing task. In this way, the lifetime of the network is increased. In this paper, a comparative study of hierarchical routing protocols is conducted. The simulation is done in NS-2 for validation.
Keywords: WSN, cluster, routing, sensor networks
Procedia PDF Downloads 479
7157 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition
Authors: Ali Nadi, Ali Edrissi
Abstract:
Relief demand and the availability of transportation links are essential information for every natural disaster operation, yet this information is not at hand when a disaster strikes. In related works, relief demand and network condition have been evaluated using prediction methods. Nevertheless, predictions tend to over- or under-estimate due to uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to optimize the total relief assessment time. The proposed model is tested on a real-size network problem. The simulation results indicate that the proposed model performs well in the case of collecting real-time information.
Keywords: disaster management, real-time demand, reinforcement learning, relief demand
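As a generic illustration of how reinforcement learning can shorten a relief assessment sequence (not the authors' stochastic programming formulation), the sketch below applies tabular Q-learning to a toy problem of choosing which damaged zone to assess next, with the negative assessment time as the reward; zone names, times, and hyperparameters are all assumptions.
```python
# Tabular Q-learning on a toy "which zone to assess next" problem (illustrative only).
# States are sets of zones already assessed; the reward is the negative assessment time,
# so with discounting the agent learns to visit the quickest zones first.
import random

ASSESS_TIME = {"zone_A": 5, "zone_B": 2, "zone_C": 8}   # hypothetical times per zone
ZONES = tuple(ASSESS_TIME)
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 3000

Q = {}  # (frozenset of assessed zones, next zone) -> estimated value

def pick(state, remaining):
    if random.random() < EPSILON:
        return random.choice(remaining)
    return max(remaining, key=lambda a: Q.get((state, a), 0.0))

for _ in range(EPISODES):
    assessed = frozenset()
    while len(assessed) < len(ZONES):
        remaining = [z for z in ZONES if z not in assessed]
        action = pick(assessed, remaining)
        reward = -ASSESS_TIME[action]
        nxt_state = assessed | {action}
        nxt_remaining = [z for z in ZONES if z not in nxt_state]
        best_next = max((Q.get((nxt_state, a), 0.0) for a in nxt_remaining), default=0.0)
        old = Q.get((assessed, action), 0.0)
        Q[(assessed, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
        assessed = nxt_state

# Extract the greedy assessment order learned by the agent.
state, order = frozenset(), []
while len(state) < len(ZONES):
    remaining = [z for z in ZONES if z not in state]
    best = max(remaining, key=lambda a: Q.get((state, a), 0.0))
    order.append(best)
    state = state | {best}
print("learned assessment order:", order)
```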
Procedia PDF Downloads 316
7156 An Entropy Based Novel Algorithm for Internal Attack Detection in Wireless Sensor Network
Authors: Muhammad R. Ahmed, Mohammed Aseeri
Abstract:
A Wireless Sensor Network (WSN) consists of low-cost, multi-functional, resource-constrained nodes that communicate over short distances through wireless links. It is an open medium underpinned by application-driven technology for information gathering and processing, and it can be used for many different applications, ranging from military deployment on the battlefield to environmental monitoring, the health sector, and emergency-response surveillance. Given its nature and application scenarios, the security of WSNs has drawn great attention. They are known to be vulnerable to a variety of attacks because of the construction of the nodes and the distributed network infrastructure. In order to ensure their functionality, especially in malicious environments, security mechanisms are essential. Malicious or internal attackers have gained prominence and pose the most challenging attacks to WSNs. Much work has been done to secure WSNs from internal attacks, but most of it relies on either training datasets or predefined thresholds. Detecting internal attacks without a fixed security infrastructure is a challenge for a WSN. In this paper, we present an internal attack detection method based on a maximum entropy model. The final experimental work showed that the proposed algorithm works well at the designed level.
Keywords: internal attack, wireless sensor network, network security, entropy
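As a generic illustration of entropy-based anomaly detection (not the authors' maximum entropy model), the sketch below computes the Shannon entropy of each node's packet-forwarding distribution and flags nodes whose entropy deviates strongly from the network-wide mean; the traffic counts and the threshold are illustrative assumptions.
```python
# Entropy-based flagging of anomalous sensor nodes (illustrative, not the paper's model).
# Each node's behaviour is summarised as a distribution over destinations it forwards to.
import math

def shannon_entropy(counts):
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical forwarding counts per node: [to_neighbour_1, to_neighbour_2, to_sink]
traffic = {
    "node_1": [30, 28, 32],   # balanced forwarding
    "node_2": [25, 35, 30],
    "node_3": [95, 2, 3],     # suspicious: almost everything to one neighbour
    "node_4": [33, 30, 29],
}

entropies = {n: shannon_entropy(c) for n, c in traffic.items()}
mean_h = sum(entropies.values()) / len(entropies)
THRESHOLD = 0.5  # illustrative allowed deviation (bits) from the network mean

for node, h in entropies.items():
    status = "SUSPECT" if abs(h - mean_h) > THRESHOLD else "ok"
    print(f"{node}: entropy={h:.2f} bits -> {status}")
```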
Procedia PDF Downloads 455
7155 Pegylated Interferon in HCV Genotype 3 Relapsers to Conventional Interferon in Pakistani Population
Authors: Saad Khalid Niaz, Arif Mahmood Siddiqui, Afzal Haqi
Abstract:
Background: The estimated prevalence of hepatitis C in Pakistan is 5%, of which 78% is genotype 3, in which the response to conventional interferon is reported to be 70%. Objective: To determine the efficacy of pegylated interferon 20 kDa (Unipeg) plus ribavirin (Ribazole) in HCV genotype 3 patients who relapsed after conventional interferon. Methods: This is an ongoing study of 20 enrolled patients. Pegylated interferon alfa-2a 20 kDa, 180 mcg weekly, with ribavirin was administered for a period of 24 weeks. Virological responses were measured by qualitative HCV RNA at weeks 4, 12, 24, and 48 to determine the Rapid Virological Response (RVR), Early Virological Response (EVR), End of Treatment response (ETR), and Sustained Virological Response (SVR), respectively. EVR was assessed for those who did not achieve RVR. Results: Twelve patients (60%) were male, and the mean age was 38.5 ± 7.62 years. Of the 20 recruited patients, all completed 4 weeks of therapy; RVR was achieved in 8 (40%) patients. One patient was lost to follow-up and one had yet to reach the 12-week visit. Of 10 patients, 8 (80%) achieved EVR. Of the intent-to-treat patients, 15 completed 24 weeks of therapy; ETR was achieved in 14 (93%) patients, and 9 patients completed post-therapy follow-up, of whom 8 (89%) achieved SVR. Conclusion: Our interim data demonstrate that pegylated interferon alfa-2a 20 kDa 180 mcg (Unipeg) in combination with ribavirin (Ribazole) has shown promising results in treating HCV genotype 3 patients who relapsed after conventional interferon. We recommend the use of pegylated interferon in relapsers with genotype 3 when financial constraints limit the use of oral antivirals.
Keywords: pegylated interferon (Unipeg), hepatitis C, relapsers, Pakistan
Procedia PDF Downloads 309