Search results for: network information criterion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14744

13394 The Anti-Cyber and Information Technology Crimes Law on Information Access and Dissemination by Egyptian Journalists

Authors: Miral Sabry AlAshry

Abstract:

The main objective of this study is to investigate how the Anti-Cyber and Information Technology Crimes Law affects Egyptian journalists, its implications for journalistic practice, and its consequences for press freedom in Egypt. Questionnaires were administered to 192 journalists representing four official newspapers, and in-depth interviews were held with 15 journalists. The study used authoritarian theory as its theoretical framework. The study revealed that the government placed restrictions on journalists by using the law to oppress them.

Keywords: anti-cyber and information technology crimes law, media legislation, personal information, Egyptian constitution

Procedia PDF Downloads 358
13393 Supply Chain Network Design for Perishable Products in Developing Countries

Authors: Abhishek Jain, Kavish Kejriwal, V. Balaji Rao, Abhigna Chavda

Abstract:

Increasing environmental and social concerns are forcing companies to take a fresh view of the impact of supply chain operations on the environment and society when designing a supply chain. A challenging task in today’s food industry is the distribution of high-quality food items throughout the food supply chain. Improper storage and unwanted transportation are the major hurdles in the food supply chain and can be tackled by making dynamic storage facility location decisions along with the distribution network. Since the food supply chain in India is one of the biggest in the world, companies should also consider the environmental impact caused by the supply chain. This project proposes a multi-objective optimization model that integrates sustainability into decision-making on distribution in a food supply chain network (SCN). A Multi-Objective Mixed-Integer Linear Programming (MOMILP) model trading off overall cost against the environmental impact caused by the SCN is formulated for the problem. The goal of the MOMILP is to determine the Pareto solutions for overall cost and environmental impact caused by the supply chain. It is solved using GAMS with CPLEX as a third-party solver. The outcomes of the project are the Pareto solutions for overall cost and environmental impact, the facilities to be operated, and the amount to be transferred to each warehouse during the time horizon.
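A minimal weighted-sum sketch of the kind of two-objective (cost vs. emissions) facility and flow model described above, written in Python with PuLP. This is illustrative only: the paper's actual model is formulated in GAMS and solved with CPLEX, and all data, variable names and the emission factors below are hypothetical.

```python
# Weighted-sum MOMILP sketch for a tiny cost-vs-emissions distribution problem.
# All data are invented; sweep the weight w to trace an approximate Pareto front.
import pulp

warehouses = ["W1", "W2"]
stores = ["S1", "S2", "S3"]
cost = {("W1", "S1"): 4, ("W1", "S2"): 6, ("W1", "S3"): 9,
        ("W2", "S1"): 7, ("W2", "S2"): 3, ("W2", "S3"): 5}   # transport cost per unit
co2 = {k: 0.2 * v for k, v in cost.items()}                  # assumed emission per unit shipped
open_cost = {"W1": 100, "W2": 120}
demand = {"S1": 30, "S2": 40, "S3": 20}
capacity = {"W1": 60, "W2": 70}
w = 0.5  # trade-off weight between cost and environmental impact

prob = pulp.LpProblem("perishable_scn", pulp.LpMinimize)
x = pulp.LpVariable.dicts("flow", cost, lowBound=0)            # units shipped on each lane
y = pulp.LpVariable.dicts("open", warehouses, cat="Binary")    # warehouse operated or not

total_cost = pulp.lpSum(cost[k] * x[k] for k in cost) + pulp.lpSum(open_cost[i] * y[i] for i in warehouses)
total_co2 = pulp.lpSum(co2[k] * x[k] for k in cost)
prob += w * total_cost + (1 - w) * total_co2                   # weighted-sum objective

for j in stores:
    prob += pulp.lpSum(x[(i, j)] for i in warehouses) >= demand[j]        # meet store demand
for i in warehouses:
    prob += pulp.lpSum(x[(i, j)] for j in stores) <= capacity[i] * y[i]   # capacity only if open

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(total_cost), pulp.value(total_co2))
```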

Keywords: multi-objective mixed linear programming, food supply chain network, GAMS, multi-product, multi-period, environment

Procedia PDF Downloads 304
13392 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics

Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy

Abstract:

Every industry is constantly looking to enhance its day-to-day production and productivity. This is possible only by maintaining the men and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take more time to solve complex problems such as performance estimation compared with software-based approaches. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm has been used. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software, and these computed values were validated against the predicted output responses of the ANN models. Further, recommendations are given to the industry, based on the analysis performed, for improvement of equipment performance.
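A small sketch of the regression step, assuming a scikit-learn setup. Note that scikit-learn does not implement Levenberg-Marquardt training, so this sketch substitutes the 'lbfgs' solver; the input features and targets are placeholders, not the study's Isograph-derived data.

```python
# ANN regression sketch for predicting an LHD performance characteristic
# (e.g. availability) from machine utilisation features. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))          # e.g. operating hours, breakdown count, repair time (hypothetical)
y = rng.uniform(0.6, 0.99, size=200)    # e.g. availability values computed in a reliability tool

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```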

Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance

Procedia PDF Downloads 128
13391 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that capture information which could explain the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year. Gaining insight into why a song becomes popular would result in tremendous benefits for the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of how clearly or ambiguously their typical sound is delineated. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models and choose the best-performing model at the end. Predicting song popularity is particularly important for the music industry as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
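An illustrative evaluation harness for one of the three classifiers named above (the SVM with RBF kernel), assuming scikit-learn. The feature columns and the popularity labels are placeholders, not the Spotify-derived dataset used in the paper.

```python
# Cross-validated SVM (RBF) baseline for track popularity classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))            # e.g. tempo, energy, danceability, MFCC summaries (assumed)
y = (rng.random(500) > 0.5).astype(int)   # 1 = popular, 0 = not popular (placeholder labels)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(svm, X, y, cv=5, scoring="accuracy")
print("SVM (RBF) cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

The deep and recurrent network branches would be trained on the same feature matrix (or on sequential audio frames for the RNN) and compared on the same cross-validation splits.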

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 637
13390 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools

Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson

Abstract:

Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when using different software for modeling and simulations. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices have been the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation have been identified as key aspects for the success of information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.

Keywords: BIM, data exchange, interoperability issues, lighting simulations

Procedia PDF Downloads 218
13389 The Application of Artificial Neural Networks for the Performance Prediction of Evacuated Tube Solar Air Collector with Phase Change Material

Authors: Sukhbir Singh

Abstract:

This paper describes the modeling of a novel solar air collector (NSAC) system using an artificial neural network (ANN) model. The objective of the study is to demonstrate the application of the ANN model to predict the performance of the NSAC with acetamide as a phase change material (PCM) for storage. The input data set consists of time, solar intensity and ambient temperature, whereas the outlet air temperature of the NSAC was considered as the output. Experiments were conducted between 9.00 and 24.00 h in June and July 2014 under the prevailing atmospheric conditions of Kurukshetra (a city in India). The experimental results were then used to train a back-propagation neural network (BPNN) to predict the outlet air temperature of the NSAC. The results of the proposed algorithm show that the BPNN is an effective tool for the prediction of responses. The BPNN predicted results are in 99% agreement with the experimental results.

Keywords: evacuated tube solar air collector, artificial neural network, phase change material, solar air collector

Procedia PDF Downloads 107
13388 Machine Learning Based Gender Identification of Authors of Entry Programs

Authors: Go Woon Kwak, Siyoung Jun, Soyun Maeng, Haeyoung Lee

Abstract:

Entry is an education platform used in South Korea, created to help students learn to program, in which they can learn to code while playing. Using the online version of Entry, teachers can easily assign programming homework to students, and the students can make programs simply by linking programming blocks. However, the programs may be made by others, so the authors of the programs should be identified. In this paper, as the first step toward author identification of Entry programs, we present an artificial neural network based classification approach to identify the gender of the author of a program written in Entry. A neural network has been trained from labeled training data that we have collected. Our results in progress, although preliminary, show that the proposed approach could feasibly be applied to the online version of Entry for gender identification of authors. As future work, we will use a machine learning technique for age identification of Entry programs, which would be the second step toward author identification.

Keywords: artificial intelligence, author identification, deep neural network, gender identification, machine learning

Procedia PDF Downloads 303
13387 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing continuously, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of patients with a specific disease. Health interventions or lifestyle changes can therefore be guided by these models to improve the health of individuals at risk.
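A small sketch of the evaluation step described above: deriving sensitivity, specificity, and positive and negative predictive values from a binary confusion matrix. The labels and predictions here are placeholders, not the study's Clementine output.

```python
# Classification metrics from a binary confusion matrix (toy data).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = myocardial infarction, 0 = no MI
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]   # e.g. neural network predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall for the disease class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # positive predictive value (precision)
npv = tn / (tn + fn)           # negative predictive value
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, ppv, npv, accuracy)
```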

Keywords: decision trees, neural network, myocardial infarction, Data Mining

Procedia PDF Downloads 412
13386 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier

Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu

Abstract:

Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improving diagnostic accuracy to decrease misdiagnosis using artificial intelligence (AI) continues to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tones, have increasingly been recognized as a problem, resulting in noted limitations of the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithm, based on a data augmentation method that generates new training samples from the original ones, would optimize the diagnostic accuracy of a CNN classifier and reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. The result is a CNN model provided with modified input data, making for a model that could learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
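A sketch of the geometric augmentation pipeline listed above (rotation, translation, scaling, flipping, shearing), assuming a TensorFlow/Keras environment. The parameter values and the placeholder image batch are illustrative, not the authors' tuned settings.

```python
# Geometric data augmentation for a dermoscopy image batch (toy example).
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,         # random rotations
    width_shift_range=0.1,     # horizontal translation
    height_shift_range=0.1,    # vertical translation
    zoom_range=0.2,            # scale change
    shear_range=0.15,          # shearing
    horizontal_flip=True,      # flipping
)

images = np.random.rand(8, 224, 224, 3)   # placeholder dermoscopy batch
labels = np.random.randint(0, 2, size=8)  # 1 = melanoma, 0 = benign (placeholder)
batches = augmenter.flow(images, labels, batch_size=8)
x_aug, y_aug = next(batches)              # augmented samples fed to the CNN
print(x_aug.shape, y_aug.shape)
```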

Keywords: bias, augmentation, melanoma, convolutional neural network

Procedia PDF Downloads 196
13385 Personalize E-Learning System Based on Clustering and Sequence Pattern Mining Approach

Authors: H. S. Saini, K. Vijayalakshmi, Rishi Sayal

Abstract:

Network-based education has been growing rapidly in size and quality. Knowledge clustering becomes more important in personalized information retrieval for web-based learning. A personalized learning service is provided after the learners’ knowledge has been classified through clustering. Through automatic analysis of learners’ behaviors, their partition into groups with similar knowledge levels and interests may be discovered, so as to provide learners with content that best matches their educational needs for collaborative learning. We present a specific mining tool and a recommender engine that we have integrated into the online learning environment in order to help the teacher carry out the whole e-learning process. We propose to use sequential pattern mining algorithms to discover the paths most used by students and, from this information, recommend links to new students automatically while they browse the course. We have developed a specific authoring tool to help the teacher apply the whole data mining process. We report on several experiments with real data in order to indicate the quality of using both clustering and sequential pattern mining algorithms together for building personalized e-learning systems.
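A minimal sketch of the recommendation idea: mine the most frequent successor of each page from past students' navigation sequences and suggest it to a new student browsing that page. Full sequential pattern mining (e.g. PrefixSpan or GSP) is richer than this; the session data below are invented.

```python
# Next-link recommendation from past navigation sequences (toy data).
from collections import Counter, defaultdict

sessions = [
    ["intro", "lesson1", "quiz1", "lesson2"],
    ["intro", "lesson1", "lesson2", "quiz1"],
    ["intro", "lesson1", "quiz1", "lesson3"],
]

successors = defaultdict(Counter)
for seq in sessions:
    for current_page, next_page in zip(seq, seq[1:]):
        successors[current_page][next_page] += 1

def recommend(page):
    """Return the page most frequently visited after `page`, or None if unseen."""
    counts = successors.get(page)
    return counts.most_common(1)[0][0] if counts else None

print(recommend("lesson1"))   # -> 'quiz1' in this toy data
```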

Keywords: e-learning, cluster, personalization, sequence, pattern

Procedia PDF Downloads 410
13384 Design and Implementation of a Nano-Power Wireless Sensor Device for Smart Home Security

Authors: Chia-Chi Chang

Abstract:

Most battery-driven wireless sensor devices enter sleep mode as soon as possible to extend the overall lifetime of the sensor network. It is necessary to turn off unnecessary radio and peripheral functions, especially since the radio unit always consumes more energy than other components during wireless communication. The microcontroller is the most important part of a wireless sensor device; it is responsible for the manipulation of sensing data and communication protocols. The microcontroller always has different sleep modes, each with a different level of energy usage: the deeper the sleep, the lower the energy consumption. Most wireless sensor devices can only enter an ordinary sleep mode, in which the external low-frequency oscillator keeps running to wake up the sleeping microcontroller when the sleep timer expires. In this paper, our sensor device can enter an extended sleep mode in which none of the oscillators are running, and the wireless sensor device has nanoampere-level consumption and self-awaking ability. Finally, these wireless sensor devices were deployed in a smart home security network.

Keywords: wireless sensor network, battery-driven, sleep mode, home security

Procedia PDF Downloads 289
13383 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives

Authors: Chao Wang

Abstract:

Under the rapid innovation of information technology, the media plays a very important role in the dissemination of information, and different generations face totally different analogies of it. The involvement of narrative images, however, provides more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter and photosensitive development, recorded and stamped on paper or displayed on a computer screen, and thus concretely saved. They exist in different forms as files, data, or evidence of the ultimate look of events. Through the interface of media and network platforms and the special visual field of the viewer, a class of body space exists and extends out as thin as a sebum layer, extremely soft and delicate yet full of real tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus. As at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes a "topical-similarity" immersion, leading contemporary social practice communities, groups and network users into a kind of illusion without presence, i.e., a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time produced by digital movie editing (for example, slices, rupture, set, and reset) are analyzed. The interactive eBook has a unique interaction of "waiting-greeting" and "expectation-response" that makes the operation of the image narrative structure more functionally interpretable. Works of digital editing and interactive technology are combined, and the concepts and results are further analyzed. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media handling cannot cut this relationship, as discussed and analyzed through movies, interactive art, and practical cases. The audience needs more rational thinking about the authenticity of the text carried by images.

Keywords: sebum layer, topical and similarity, interactive technology, image narrative

Procedia PDF Downloads 376
13382 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizure is the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
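A sketch of the sliding-window feature extraction and MLP classification steps, assuming a scikit-learn setup. The paper uses its own Training Builder tool; the window length, the simple features and the synthetic signal below are assumptions.

```python
# Sliding-window features from a 1-D EEG channel followed by an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, fs, win_sec=2.0, step_sec=1.0):
    """Slide a window over the signal and compute simple per-window features."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        feats.append([seg.mean(), seg.std(), np.abs(np.diff(seg)).mean()])
    return np.array(feats)

fs = 256                                   # sampling rate in Hz (assumed)
eeg = np.random.randn(60 * fs)             # one minute of placeholder EEG
X = window_features(eeg, fs)
y = np.random.randint(0, 2, size=len(X))   # 1 = seizure window (placeholder labels)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```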

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 170
13381 Information Technology and Communications in Management of the Imperial Citadel of Thang Long-A World Heritage Site

Authors: Ngo The Bach

Abstract:

Information technology and communications are growing strongly and have penetrated almost the entire Vietnamese economy and society. The article presents an overview of information technology and communications applications in the management of the Central Sector of the Imperial Citadel of Thang Long (Hanoi, Vietnam), a World Heritage Site. The author also points out the opportunities and challenges of information technology and communications in the culture and heritage sectors, and the use of information technology as an effective tool to develop mass and interactive communications. The article emphasizes the advantages of information technology and communications in effectively supporting management reform with respect to the Imperial Citadel of Thang Long in particular and the management of world heritage sites in Vietnam in general.

Keywords: information technology, communications, management, culture, heritage

Procedia PDF Downloads 314
13380 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean

Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama

Abstract:

Understanding environmental systems is an important task. In recent years, the conservation of coral environments has received attention because of biodiversity issues. Damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts has not been clearly understood. On the other hand, the structure and diversity of marine bacterial communities may be relatively robust under a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between marine bacterial composition in coral reefs and environmental factors. In this study, a Time Scale Network Analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and the south of Sesoko, on Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter and depth were measured at the coral reef area. The metagenome and metatranscriptome in the seawater of the coral reef were analyzed as the biological factors. The metagenome data were used to clarify the marine bacterial community composition, and the functional gene composition was estimated from the metatranscriptome. To infer the relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Although cross-correlation coefficients include time-precedence information, they also capture indirect interactions between the variables. To elucidate the direct regulations between factors, partial correlation coefficients were combined with cross-correlation. This analysis was performed on all parameters, including the bacterial composition, the functional gene composition and the physicochemical factors. As a result, the time scale network analysis revealed the direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the value of chlorophyll. Such reasonable regulatory relationships between environmental factors indicate that the approach captures part of the mechanisms at work in the coral reef area.
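A small sketch of the lagged cross-correlation step used in the time-precedence analysis: correlate one environmental factor against another at several time lags to suggest which variable precedes the other. The series below are synthetic; in the study, partial correlation would then be combined with this to screen out indirect links.

```python
# Lagged cross-correlation between two environmental time series (toy data).
import numpy as np

rng = np.random.default_rng(1)
par = rng.normal(size=100)                           # photosynthetically active radiation (placeholder)
temp = np.roll(par, 2) + 0.3 * rng.normal(size=100)  # temperature lagging PAR by 2 steps (toy example)

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

for lag in range(-3, 4):
    print(lag, round(lagged_corr(par, temp, lag), 3))
# The peak near lag = +2 is what a time-precedence analysis would flag.
```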

Keywords: coral environment, marine microbiology, network analysis, omics data analysis

Procedia PDF Downloads 240
13379 Handshake Algorithm for Minimum Spanning Tree Construction

Authors: Nassiri Khalid, El Hibaoui Abdelaaziz, Hajar Moha

Abstract:

In this paper, we introduce and analyse a probabilistic distributed algorithm for the construction of a minimum spanning tree on a network. This algorithm is based on the handshake concept. Initially, each network node is considered a sub-spanning tree, and at each round of the execution of our algorithm, sub-spanning trees are merged. The execution continues until all sub-spanning trees are merged into one. We analyze this algorithm using a stochastic process.

Keywords: spanning tree, distributed algorithm, handshake algorithm, matching, probabilistic analysis

Procedia PDF Downloads 643
13378 Vulnerable Paths Assessment for Distributed Denial of Service Attacks in a Cloud Computing Environment

Authors: Manas Tripathi, Arunabha Mukhopadhyay

Abstract:

In a cloud computing environment, cloud servers may crash after receiving a huge number of requests, and cloud services may stop, which can create a huge loss for users of those cloud services. This situation is called a Denial of Service (DoS) attack. In a Distributed Denial of Service (DDoS) attack, an attacker targets multiple network paths by compromising various vulnerable systems (zombies) and floods the victim with a huge number of requests through these zombies. There are many solutions to mitigate this challenge, but most methods allow the attack traffic to arrive at the Cloud Service Provider (CSP) and only then take mitigation actions. In this paper we focus instead on a preventive mechanism to deal with these attacks. We analyze the network topology and find the most vulnerable paths beforehand, without waiting for the traffic to arrive at the CSP, using Dijkstra's and Yen's algorithms. Finally, the risk of these paths can be assessed by multiplying the probabilities of attack along these paths by the potential loss.
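A sketch of the path-ranking idea: enumerate candidate attack paths with Yen's k-shortest-paths algorithm (available in networkx as shortest_simple_paths) and score each path's risk as attack probability times potential loss. The topology, per-node probabilities and the loss figure are all hypothetical.

```python
# Rank candidate attack paths toward the CSP by probability-weighted loss.
from itertools import islice
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("attacker", "r1", 1), ("attacker", "r2", 2),
    ("r1", "r3", 1), ("r2", "r3", 1), ("r3", "csp", 1), ("r2", "csp", 4),
])
attack_prob = {"r1": 0.3, "r2": 0.2, "r3": 0.4}   # per-node compromise probability (assumed)
potential_loss = 100000                            # loss if the CSP is flooded (assumed)

k_paths = list(islice(nx.shortest_simple_paths(G, "attacker", "csp", weight="weight"), 3))
for path in k_paths:
    p = 1.0
    for node in path[1:-1]:          # intermediate (zombie) nodes only
        p *= attack_prob.get(node, 1.0)
    print(path, "risk =", p * potential_loss)
```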

Keywords: cloud computing, DDoS, Dijkstra, Yen’s k-shortest path, network security

Procedia PDF Downloads 267
13377 Fault-Detection and Self-Stabilization Protocol for Wireless Sensor Networks

Authors: Ather Saeed, Arif Khan, Jeffrey Gosper

Abstract:

Sensor devices are prone to errors and sudden node failures, which are difficult to detect in a timely manner when deployed in real-time, hazardous, large-scale harsh environments and in medical emergencies. The loss of data can therefore be life-threatening when the sensed phenomenon is not disseminated due to sudden node failure, battery depletion or temporary malfunctioning. We introduce a set of partial differential equations for localizing faults, similar to Green's and Maxwell's equations used in electrostatics and electromagnetism, together with a node organization and clustering scheme for self-stabilizing sensor networks. Green's theorem is applied to regions where the curve is closed and continuously differentiable to ensure network connectivity. Experimental results show that the proposed GTFD (Green's Theorem fault-detection and self-stabilization) protocol not only detects faulty nodes but also accurately generates network stability graphs where urgent intervention is required for dynamically self-stabilizing the network.

Keywords: Green’s Theorem, self-stabilization, fault-localization, RSSI, WSN, clustering

Procedia PDF Downloads 56
13376 The Investigation of Relationship between Accounting Information and the Value of Companies

Authors: Golamhassan Ghahramani Aghdam, Pedram Bavili Tabrizi

Abstract:

The aim of this research is to investigate the relationship between accounting information and the value of companies listed on the Tehran Stock Exchange. The dependent variable is the value of the company, measured by the price coefficient, and the independent variables are balance sheet information, profit and loss information, cash flow statement information, and profit quality characteristics. The profit quality characteristics considered are relevance and timeliness. This is applied research, and the research population includes all companies active on the Tehran exchange market. A sample of 194 companies was selected by systematic sampling for the period 2018-2019. A multivariable linear regression model was used to test the hypotheses. The results show that there is no relationship between accounting information and company value (stock value), which may be due to the inefficiency of the investment market and the inability of market participants to use accounting information.

Keywords: accounting information, company value, profit quality characteristics, price coefficient

Procedia PDF Downloads 119
13375 A Coordinate-Based Heuristic Route Search Algorithm for Delivery Truck Routing Problem

Authors: Ahmed Tarek, Ahmed Alveed

Abstract:

The vehicle routing problem is a well-known research avenue in computing. Modern vehicle routing is more focused on GPS-based coordinate systems, as state-of-the-art vehicle and trucking systems are equipped with digital navigation. In this paper, a new two-dimensional coordinate-based algorithm for addressing the vehicle routing problem in a supply chain network is proposed and explored, and the algorithm is compared with other available and recently devised heuristics. For the algorithms discussed, which include the proposed coordinate-based search heuristic, the associated advantages and disadvantages are explored. The proposed algorithm is studied from the standpoint of a small supermarket chain delivery network that supplies its stores in four different states around the East Coast area and is trying to optimize its trucking delivery cost. Minimizing the delivery cost for the supply network of a supermarket chain is important to ensure its business success.
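To illustrate the general class of coordinate-based routing heuristics the paper compares, here is a minimal nearest-neighbour tour over 2-D store coordinates. The coordinates are invented, and this is not the authors' proposed algorithm.

```python
# Nearest-neighbour delivery tour over 2-D coordinates (illustrative only).
import math

depot = (0.0, 0.0)
stores = {"A": (2.0, 3.0), "B": (5.0, 1.0), "C": (1.0, 7.0), "D": (6.0, 6.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

route, pos, remaining, total = ["depot"], depot, dict(stores), 0.0
while remaining:
    name, coord = min(remaining.items(), key=lambda kv: dist(pos, kv[1]))
    total += dist(pos, coord)
    route.append(name)
    pos = coord
    del remaining[name]
total += dist(pos, depot)       # return leg to the depot
route.append("depot")
print(" -> ".join(route), "| total distance:", round(total, 2))
```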

Keywords: coordinate-based optimal routing, Hamiltonian Circuit, heuristic algorithm, traveling salesman problem, vehicle routing problem

Procedia PDF Downloads 132
13374 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNN). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
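A toy sketch of the Hebbian training step that links two "framework" blocks (for example, the text-reading block and the speech-production block): weights grow when units in both blocks are co-active. The block sizes, learning rate and activity patterns are illustrative, not the paper's CTRNN model.

```python
# Hebbian strengthening of text-reading -> speech-production connections (toy).
import numpy as np

rng = np.random.default_rng(0)
n_text, n_speech = 6, 6
W = np.zeros((n_speech, n_text))      # connections text -> speech, initially zero
lr, decay = 0.1, 0.01

for _ in range(1000):
    word = np.zeros(n_text)
    word[rng.integers(n_text)] = 1.0                # one written colour word active
    speech = word.copy()                            # the practised, "correct" spoken response
    W += lr * np.outer(speech, word) - decay * W    # Hebbian growth with mild decay

print(np.round(W, 2))   # a strong diagonal reflects the learned word-to-name association
```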

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 252
13373 A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks

Authors: Rejab Hajlaoui, Tarek Moulahi, Hervé Guyennet

Abstract:

In a highway scenario, vehicle speed can exceed 120 km/h. Therefore, any vehicle can enter or leave the network within a very short time. This mobility adversely affects network connectivity and decreases the lifetime of all established links. To ensure effective stability in vehicular ad hoc networks with a minimum broadcasting storm, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). Indeed, the number of clusters and the initial cluster heads are not selected randomly as usual, but by considering the available transmission range and the environment size. Then, to ensure optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of any node is computed according to additional metrics including direction, relative speed and proximity. Empirical results prove that, in addition to the convergence speed that characterizes the k-medoids algorithm, our proposed model outperforms both the AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in terms of end-to-end delay, packet delivery ratio, and throughput.

Keywords: communication, clustering algorithm, k-medoids, sensor, vehicular ad hoc network

Procedia PDF Downloads 221
13372 Alignment of Information System Strategy and Green Information System Strategy: Comprehension and a Review of the Literature

Authors: Wartika Memed Purawinata, Kridanto Surendro, Husni Sastramiharja, Iping Supriana S.

Abstract:

Information systems are among the contributors to environmental degradation and pollution: the use of IT equipment and its energy consumption are increasing, the life cycles of IT equipment are getting shorter, IT equipment waste must be disposed of, and so on. Therefore, information systems should play a role in addressing related environmental issues. Organizations need to develop green capabilities to minimize negative impacts on the environment. Although the green information system is an important topic, many organizations fail to manage the environment adequately because they ignore the strategic aspect. Strategic alignment is very important to ensure that all activities of the organization head in the same direction. Alignment helps the organization determine what is most important and then build a road map to achieve its goals. Therefore, this paper reviews alignment, information system strategy, and green IS strategy. Through this discussion, an understanding is expected of the alignment of information system strategy and green IS strategy, and of its relationship with the achievement of business goals committed to reducing the negative impact of information systems on the environment.

Keywords: alignment, strategy, information system, green

Procedia PDF Downloads 435
13371 A Motion Dictionary to Real-Time Recognition of Sign Language Alphabet Using Dynamic Time Warping and Artificial Neural Network

Authors: Marcio Leal, Marta Villamil

Abstract:

Computational recognition of sign languages aims to allow greater social and digital inclusion of deaf people through interpretation of their language by computer. This article presents a model for the recognition of two of the global parameters of sign languages: hand configurations and hand movements. Hand motion is captured through infrared technology, and its joints are reconstructed in a virtual three-dimensional space. A Multilayer Perceptron Neural Network (MLP) was used to classify hand configurations, and Dynamic Time Warping (DTW) recognizes hand motion. Beyond the recognition method, we provide a dataset of hand configurations and motion captures built with the help of professionals fluent in sign languages. Although this technology can be used to translate signs from any sign dictionary, Brazilian Sign Language (Libras) was used as a case study. Finally, the model presented in this paper achieved a recognition rate of 80.4%.
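A minimal dynamic time warping sketch for comparing two hand-motion trajectories of different lengths; each frame here is a single coordinate triple, and the sequences are synthetic rather than data from the paper's infrared capture.

```python
# Classic DTW distance between two motion trajectories (toy data).
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping with Euclidean frame distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 30)
gesture = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t], axis=1)
template = gesture[::2] + 0.01                       # same motion sampled more coarsely
print("DTW distance:", round(float(dtw_distance(gesture, template)), 3))
# In a recognition pipeline, the template with the smallest DTW distance gives
# the predicted sign; an MLP handles the static hand configuration separately.
```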

Keywords: artificial neural network, computer vision, dynamic time warping, infrared, sign language recognition

Procedia PDF Downloads 200
13370 Effects of Compensation on Distribution System Technical Losses

Authors: B. Kekezoglu, C. Kocatepe, O. Arikan, Y. Hacialiefendioglu, G. Ucar

Abstract:

One of the significant problems of energy systems is supplying economic and efficient energy to consumers. Therefore, studies have continued on reducing technical losses in the network. In this paper, the technical losses are analyzed for a portion of the European side of the Istanbul MV distribution network under different compensation scenarios, considering real system and load data, and the results are presented. The investigated system is modeled with CYME Power Engineering Software, and optimal capacitor placement has been proposed to minimize losses.

Keywords: distribution system, optimal capacitor placement, reactive power compensation, technical losses

Procedia PDF Downloads 656
13369 Using Technology to Enhance the Student Assessment Experience

Authors: Asim Qayyum, David Smith

Abstract:

The use of information tools is a common activity for students at any educational stage when they encounter online learning activities. Finding the relevant information for particular learning tasks is the topic of this paper, which investigates the use of information tools by a group of student participants. The paper describes and discusses the results with particular implications for use in higher education, and the findings suggest that improvements in assessment design and subsequent student learning may be achieved by structuring the purposeful use of information tools and the online reading behaviors of university students.

Keywords: information tools, assessment, online learning, student assessment experience

Procedia PDF Downloads 540
13368 Imputation of Urban Movement Patterns Using Big Data

Authors: Eusebio Odiari, Mark Birkin, Susan Grant-Muller, Nicolas Malleson

Abstract:

Big data typically refers to consumer datasets revealing detailed heterogeneity in human behavior, which, if harnessed appropriately, could potentially revolutionize our understanding of the collective phenomena of the physical world. Inadvertent missing values skew these datasets and compromise the validity of any inference drawn from them. Here we discuss a conceptually consistent strategy for identifying other relevant datasets to combine with available big data, to plug the gaps and to create the rich, comprehensive dataset required for subsequent analysis. Specifically, the emphasis is on how these methodologies can for the first time enable the construction of more detailed pictures of passenger demand and drivers of mobility on the railways. These methodologies can predict the influence of changes within the network (like a change in timetable or the impact of a new station), explain local phenomena outside the network (like rail-heading) and other impacts of urban morphology. Our analysis also reveals that our new imputation data model provides for more equitable revenue sharing amongst network operators who manage different parts of the integrated UK railways.

Keywords: big data, micro-simulation, mobility, ticketing data, commuters, transport, synthetic population

Procedia PDF Downloads 212
13367 Civic Participation in Context of Political Transformation: Case of Argentina

Authors: Kirill Neverov

Abstract:

This paper considers issues of civic participation in the context of the changing political landscape of Argentina. In the last two years, this South American country faced a drastic change of political course: the pro-Peronist, left-oriented administration of Cristina Fernández de Kirchner was replaced by the right-of-center administration of Mauricio Macri. The study is focused on inclusive policy under conditions of political transformation. We use network analysis to determine which actors are involved in participation and to describe the connections between them. As a result, we plan to produce a map of the transactions which form inclusive policy in Argentina.

Keywords: civic participation, Argentina, political transformation, network analysis

Procedia PDF Downloads 192
13366 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is a more suitable test to perform, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressively failure-censored hybrid data with random removals. The life data of the units under test are considered to follow an exponential life distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 306
13365 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretations from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout a deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost in the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which the well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
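A sketch of the core idea in PyTorch: convolution kernels of several sizes are drawn at random and frozen, and only one scalar gain per filter bank is learned. The layer sizes, kernel sizes and the use of a single gain per kernel size (rather than per filter) are simplifications for illustration, not the paper's exact parameterization.

```python
# Fixed random multi-scale convolution kernels with learnable scalar gains.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList()
        for k in kernel_sizes:
            conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2, bias=False)
            conv.weight.requires_grad_(False)        # random projection: kernels stay fixed
            self.convs.append(conv)
        # one trainable scalar per kernel size: the only unknowns of this block
        self.gains = nn.Parameter(torch.ones(len(kernel_sizes)))

    def forward(self, x):
        out = 0
        for gain, conv in zip(self.gains, self.convs):
            out = out + gain * conv(x)               # weighted sum of multi-scale responses
        return F.relu(out)

block = RandomKernelBlock(3, 16)
x = torch.randn(2, 3, 32, 32)
y = block(x)
trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
print(y.shape, "trainable parameters:", trainable)   # only the scalar gains train
```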

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 269