Search results for: MIMO relay networks
136 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source
Authors: Baghdasaryan Marinka, Ulikyan Azatuhi
Abstract:
The efficient usage of the compensation abilities of the electrical-drive synchronous motors used in production processes can essentially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through the compensation of reactive power makes it possible to significantly reduce the load power losses in the electrical networks. As a result of analyzing the scientific works devoted to the issues of regulating the excitation of synchronous motors, the need for a comprehensive investigation and estimation of the excitation mode has been substantiated. By means of the obtained transfer functions, the transient processes of the excitation mode have been studied in the Simulink environment of the MATLAB software package. As a result of obtaining and estimating the Nyquist plot and the transient response, the necessity of developing a Proportional-Integral-Derivative (PID) regulator has been justified. The transient processes of the system with the PID regulator have been investigated, and the amplitude-phase characteristics of the system have been estimated. The analysis of the obtained results has shown that the regulation indices of the developed system are improved. The developed system can be successfully applied for regulating the excitation voltage of synchronous motors of different power ratings, operating with a changing load, ensuring a power factor close to 1.
Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.
135 Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain
Authors: Hazem M. El-Bakry
Abstract:
In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea was applied successfully for faster detection of specific data (face, object, pattern, and code) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the computation steps required by the painting operation. The principle of the divide-and-conquer strategy is applied through background decomposition. Each background is divided into small sub-backgrounds, and then each sub-background is processed separately by a single faster painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of faster painting algorithms. In contrast to using only the faster painting algorithm, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.
Keywords: Fast Painting, Cross Correlation, Frequency Domain, Parallel Processing
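A minimal NumPy sketch of the frequency-domain cross-correlation that underlies the fast painting idea; the background and mask sizes and their random contents are placeholders, not the authors' colour masks, and the spatial-domain loop is included only to confirm that both routes give the same values:

```python
import numpy as np

def cross_correlate_fft(background, mask):
    """Cross-correlate a mask with a background via the frequency domain:
    IFFT(FFT(background) * conj(FFT(mask))), with the mask zero-padded."""
    h, w = background.shape
    padded_mask = np.zeros((h, w))
    padded_mask[:mask.shape[0], :mask.shape[1]] = mask
    spectrum = np.fft.fft2(background) * np.conj(np.fft.fft2(padded_mask))
    return np.real(np.fft.ifft2(spectrum))

def cross_correlate_spatial(background, mask):
    """Direct (slow) spatial-domain cross-correlation, for comparison."""
    h, w = background.shape
    mh, mw = mask.shape
    out = np.zeros((h - mh + 1, w - mw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(background[i:i + mh, j:j + mw] * mask)
    return out

# Illustrative sizes only: a 256x256 background and a 16x16 colour mask.
rng = np.random.default_rng(0)
background = rng.random((256, 256))
mask = rng.random((16, 16))
freq_result = cross_correlate_fft(background, mask)
spatial_result = cross_correlate_spatial(background, mask)
# The valid (non-wrapped) region of the FFT result matches the direct sums.
assert np.allclose(freq_result[:spatial_result.shape[0], :spatial_result.shape[1]],
                   spatial_result)
```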
134 Implementing a Visual Servoing System for Robot Controlling
Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari
Abstract:
Nowadays, with the emergence of new applications such as robot control in image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling the robot. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing that aims to control robots. In this algorithm, after applying the necessary pre-processing to video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back propagation neural networks. We tested the proposed algorithm with the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm with noisy images, and it showed a correct recognition rate of 97.92 percent.
Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing
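As a hedged illustration of the classification stage, the sketch below trains a k-nearest-neighbour classifier on synthetic 32-dimensional feature vectors standing in for the features extracted from the spatio-temporal volumes; the class count, dimensionality and data are assumptions for demonstration only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: 240 gesture samples, each reduced to a 32-dimensional
# feature vector extracted from its spatio-temporal volume (4 gesture classes).
rng = np.random.default_rng(1)
n_classes, n_per_class, n_features = 4, 60, 32
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# k-nearest-neighbour matching of held-out feature vectors against the training set.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
accuracy = knn.score(X_test, y_test)
print(f"Gesture recognition accuracy on held-out samples: {accuracy:.2%}")
```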
133 Storage Method for Parts from End of Life Vehicles' Dismantling Process According to Sustainable Development Requirements: Polish Case Study
Authors: M. Kosacka, I. Kudelska
Abstract:
The vehicle is one of the most influential and complex products worldwide; it affects people's lives, the state of the environment and the condition of the economy (all aspects of the sustainable development concept) during each stage of its lifecycle. With the increasing number of vehicles, there is growing potential for the management of End of Life Vehicles (ELVs), which are hazardous waste. From one point of view, the ELV should be managed to ensure risk elimination, but from another, it should be treated as a source of valuable materials and spare parts. In order to recover materials and spare parts, recycling networks have been established, which are an example of sustainable policy realization at the national level. The basic object in the Polish recycling network is the dismantling facility. The output material streams in dismantling stations include waste, which very often generates costs, and spare parts, which have the biggest potential for revenue creation. Both outputs are stored in warehouses, in accordance with the law. Owing to this revenue creation and sustainability potential, a strong emphasis has been placed on the storage process. We present the concept of a storage method which takes into account the specifics of the dismantling facility in order to support the decision-making process with regard to the principles of sustainable development. The method was developed on the basis of a case study of one of the largest dismantling facilities in Poland.
Keywords: Dismantling, end of life vehicle, sustainability, storage.
132 Neural Network Implementation Using FPGA: Issues and Application
Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan
Abstract:
Hardware realization of a Neural Network (NN) depends to a large extent on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL coding developed is tested using a Xilinx XCV50-HQ240 chip. To improve the speed of operation a lookup table method is used. The problems involved in using a lookup table (LUT) for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates an approximate estimate of the total resource requirement and the achievable speed for a given multilayer neural network. This helps the designer to choose the FPGA capacity for a given application. Using the proposed implementation method, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
Keywords: FPGA implementation, multi-input neuron, neural network, nn based space vector modulator.
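For illustration, a software sketch of the lookup-table activation discussed above, using an assumed signed Q4.12 fixed-point format and a 256-entry sigmoid LUT; the word length, LUT size and input range are illustrative choices, not the VHDL design itself:

```python
import numpy as np

# Fixed-point format assumed for illustration: Q4.12 (12 fractional bits).
FRAC_BITS = 12

def to_fixed(x):
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int32)

def from_fixed(x):
    return np.asarray(x, dtype=np.float64) / (1 << FRAC_BITS)

# Pre-computed sigmoid lookup table over the input range [-8, 8).
LUT_SIZE = 256
lut_inputs = np.linspace(-8.0, 8.0, LUT_SIZE, endpoint=False)
SIGMOID_LUT = to_fixed(1.0 / (1.0 + np.exp(-lut_inputs)))

def lut_sigmoid(acc_fixed):
    """Map a fixed-point accumulator value to a LUT address and read out."""
    acc = np.clip(from_fixed(acc_fixed), -8.0, 8.0 - 1e-9)
    address = int((acc + 8.0) / 16.0 * LUT_SIZE)
    return SIGMOID_LUT[address]

def neuron(inputs, weights, bias):
    """Multiply-accumulate in fixed point, then a LUT-based activation."""
    acc = np.sum(to_fixed(inputs).astype(np.int64) *
                 to_fixed(weights).astype(np.int64)) >> FRAC_BITS
    acc += to_fixed(bias)
    return from_fixed(lut_sigmoid(acc))

x = [0.5, -1.25, 0.75]
w = [0.8, 0.4, -0.6]
print("LUT neuron output:", neuron(x, w, bias=0.1))
print("Float reference:  ", 1.0 / (1.0 + np.exp(-(np.dot(x, w) + 0.1))))
```

The gap between the two printed values illustrates the quantization error that the LUT size and fixed-point word length trade against resource usage.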
131 Broadcasting Mechanism with Less Flooding Packets by Optimally Constructing Forwarding and Non-Forwarding Nodes in Mobile Ad Hoc Networks
Authors: R. Reka, R. S. D. Wahidabanu
Abstract:
Conventional routing protocols designed for MANETs fail to handle the dynamic movement and self-starting behavior of nodes effectively. Every node in a MANET is considered both a forwarding and a receiving node, and all of them participate in routing packets from the source to the destination. Because the interconnection topology is highly dynamic, the performance of most routing protocols is not encouraging. In this paper, a reliable broadcast approach for MANETs is proposed for improving the transmission rate. The MANET is considered with asymmetric characteristics, and the properties of the source and destination nodes are different. The non-forwarding node list is generated from downstream nodes, and these nodes do not participate in routing. When the forwarding and non-forwarding nodes are constructed in the conventional way, the number of nodes in the non-forwarding list is larger and increases the load. In this work, we construct the forwarding and non-forwarding nodes optimally so that flooding and broadcasting are reduced to a certain extent. The forwarded packets are treated as implicit acknowledgements, and the non-forwarding nodes explicitly send acknowledgements to the source. The performance of the proposed approach is evaluated in the NS2 environment. Since the proposed approach reduces flooding, we have examined its functionality together with AODV variants. The effect of network density on the overhead and collision rate is considered for performance evaluation. The performance is compared with the AODV variants, and it is found that the proposed approach outperforms all of them.
Keywords: Flooding, Forwarded Nodes, MANET, Non-forwarding nodes, Routing protocols.
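The abstract does not spell out the construction, so the sketch below shows one standard greedy heuristic (multipoint-relay style) for splitting a source's neighbours into forwarding and non-forwarding sets; the topology and the heuristic itself are assumptions for illustration, not the authors' exact algorithm:

```python
def select_forwarding_nodes(adjacency, source):
    """Greedy selection of forwarding nodes among the source's neighbours.

    A neighbour joins the forwarding set only if it covers 2-hop neighbours
    that no already-selected forwarder reaches; the remaining neighbours are
    non-forwarding nodes that only return acknowledgements to the source.
    """
    one_hop = set(adjacency[source])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adjacency[n])
    two_hop -= one_hop | {source}

    forwarding, covered = [], set()
    candidates = set(one_hop)
    while covered != two_hop and candidates:
        # Pick the neighbour that covers the most still-uncovered 2-hop nodes.
        best = max(candidates,
                   key=lambda n: len((set(adjacency[n]) & two_hop) - covered))
        gain = (set(adjacency[best]) & two_hop) - covered
        if not gain:
            break  # remaining 2-hop nodes are not reachable via neighbours
        forwarding.append(best)
        candidates.discard(best)
        covered |= gain
    non_forwarding = sorted(one_hop - set(forwarding))
    return forwarding, non_forwarding

# Small illustrative topology (node: list of neighbours).
adjacency = {
    0: [1, 2, 3],
    1: [0, 4],
    2: [0, 4, 5],
    3: [0, 5],
    4: [1, 2, 6],
    5: [2, 3],
    6: [4],
}
fwd, non_fwd = select_forwarding_nodes(adjacency, source=0)
print("Forwarding nodes:", fwd)          # a small cover of the 2-hop neighbourhood
print("Non-forwarding nodes:", non_fwd)  # only send acknowledgements
```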
130 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
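A hedged sketch of the sliding-window feature extraction and neural classification pipeline on a synthetic signal; the sampling rate, window length, the three statistical features and the MLP size are placeholder choices, not the feature set produced by Training Builder:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FS = 256            # assumed sampling rate (Hz)
WINDOW = 2 * FS     # 2-second sliding window
STEP = FS           # 50% overlap

def window_features(signal):
    """Per-window statistics used as illustrative features (mean power,
    variance, line length); the paper's tool extracts many more."""
    feats = []
    for start in range(0, len(signal) - WINDOW + 1, STEP):
        w = signal[start:start + WINDOW]
        feats.append([np.mean(w ** 2), np.var(w), np.sum(np.abs(np.diff(w)))])
    return np.array(feats)

# Synthetic stand-in for EEG: background activity vs. higher-amplitude
# rhythmic segments imitating ictal (seizure) windows.
rng = np.random.default_rng(2)
t = np.arange(0, 60 * FS) / FS
background = rng.normal(0, 1.0, t.size)
seizure = 3.0 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1.0, t.size)

X = np.vstack([window_features(background), window_features(seizure)])
y = np.concatenate([np.zeros(len(X) // 2), np.ones(len(X) // 2)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                           stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"Window-level detection accuracy: {clf.score(X_te, y_te):.2%}")
```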
129 Customer Need Type Classification Model using Data Mining Techniques for Recommender Systems
Authors: Kyoung-jae Kim
Abstract:
Recommender systems are usually regarded as an important marketing tool in e-commerce. They use important information about users to facilitate accurate recommendation. This information includes user context, such as location, time and interest, for the personalization of mobile users. We can easily collect information about location and time because mobile devices communicate with the base station of the service provider. However, information about user interest cannot be easily collected because user interest cannot be captured automatically without the user's approval process. User interest is usually represented as a need. In this study, we classify needs into two types according to prior research. This study investigates the usefulness of data mining techniques for classifying user need type for recommender systems. We employ several data mining techniques, including artificial neural networks, decision trees, case-based reasoning, and multivariate discriminant analysis. Experimental results show that the CHAID algorithm outperforms the other models in classifying user need type. This study performs the McNemar test to examine the statistical significance of the differences between the classification results. The results of the McNemar test also show that CHAID performs better than the other models with statistical significance.
Keywords: Customer need type, Data mining techniques, Recommender system, Personalization, Mobile user.
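The McNemar test mentioned above can be reproduced directly from paired predictions; a small sketch with made-up labels follows, using the common continuity-corrected chi-square form:

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar test on the disagreements between two classifiers.

    b = cases only model A gets right, c = cases only model B gets right;
    the continuity-corrected statistic (|b - c| - 1)^2 / (b + c) follows a
    chi-square distribution with one degree of freedom under H0.
    """
    a_correct = np.asarray(pred_a) == np.asarray(y_true)
    b_correct = np.asarray(pred_b) == np.asarray(y_true)
    b = np.sum(a_correct & ~b_correct)
    c = np.sum(~a_correct & b_correct)
    statistic = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
    p_value = 1.0 - chi2.cdf(statistic, df=1)
    return statistic, p_value

# Illustrative predictions from two models (e.g. CHAID vs. a neural network).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1])
chaid  = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1])
ann    = np.array([0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1])
stat, p = mcnemar_test(y_true, chaid, ann)
print(f"McNemar statistic = {stat:.3f}, p-value = {p:.3f}")
```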
128 Internet Governance based on Multiple-Stakeholders: Opportunities, Issues and Developments
Authors: Martin Hans Knahl
Abstract:
The Internet is the global data communications infrastructure based on the interconnection of both public and private networks, using protocols that implement internetworking on a global scale. Hence the control of protocol and infrastructure development, resource allocation and network operation are crucial and interlinked aspects. Internet Governance is the hotly debated and contentious subject that refers to the global control and operation of key Internet infrastructure, such as domain name servers, and resources, such as domain names. It is impossible to separate technical and political positions, as they are interlinked. Furthermore, the existence of a global market, transparency and competition impact upon Internet Governance and related topics such as network neutrality and security. Current trends and developments regarding Internet Governance, with a focus on the policy-making process, security and control, have been observed to evaluate current and future implications for the Internet. The multi-stakeholder approach to Internet Governance discussed in this paper presents a number of opportunities, issues and developments that will affect the future direction of the Internet. Internet operation, maintenance and advisory organisations such as the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Governance Forum (IGF) are currently in the process of formulating policies for future Internet Governance. Given the controversial nature of the issues at stake and the current lack of agreement, it is predicted that institutional as well as market governance will remain present for network access and content.
Keywords: Internet Governance, ICANN, Democracy, Security
127 A Software-Supported Methodology for Designing General-Purpose Interconnection Networks for Reconfigurable Architectures
Authors: Kostas Siozios, Dimitrios Soudris, Antonios Thanailakis
Abstract:
Modern applications realized on FPGAs exhibit high connectivity demands. Throughout this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and switch box (SB) patterns, appropriately selected and assigned across the device. The goal of our proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by high performance and by low energy requirements. Consequently, the design criterion that guides our architectural selections is the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparison study in terms of several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.
Keywords: Design Methodology, FPGA, Interconnection, Low-Energy, High-Performance, CAD tool.
126 Flow Discharge Determination in Straight Compound Channels Using ANNs
Authors: A. Zahiri, A. A. Dehghani
Abstract:
Although many researchers have studied the flow hydraulics in compound channels, there are still many complicated problems in the determination of their flow rating curves. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, using nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, flow discharge in compound channels was estimated by means of artificial neural networks (ANNs). Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, and one output variable (flow discharge) have been used in the ANNs. Comparison of the ANN model and the traditional method (the divided channel method, DCM) shows the high accuracy of the ANN model results. The results of the sensitivity analysis showed that relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter, main channel side slopes and flood plain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.
Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve
125 Estimation of the Bit Side Force by Using Artificial Neural Network
Authors: Mohammad Heidari
Abstract:
Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is a difficult task, if not impossible, to identify the real reason why a horizontal well is not producing what was forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion, or snaky well patterns created during drilling. Although drillers aim to drill a constant-inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors which play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant-inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force at the bit nil. This approach assumes that there exists no formation force at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques. In this study, neural networks have been employed as a general approximation tool for the estimation of the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameters, and the results are compared with exact analysis. A Back Propagation Neural network (BPN) is used for the approximation of the bit side forces. The resulting low relative error value of the test indicates the usability of the BPN in this area.
Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.
124 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks
Authors: Naghmeh Moradpoor Sheykhkanloo
Abstract:
Thousands of organisations store important and confidential information related to them, their customers, and their business partners in databases all across the world. The stored data ranges from less sensitive (e.g. first name, last name, date of birth) to more sensitive data (e.g. password, PIN code, and credit card information). Losing data, disclosing confidential information or even changing the value of data are the severe damages that a Structured Query Language injection (SQLi) attack can cause on a given database. It is a code injection technique where malicious SQL statements are inserted into a given SQL database by simply using a web browser. In this paper, we propose an effective pattern recognition neural network model for the detection and classification of SQLi attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator, in order to generate thousands of malicious and benign URLs; a URL classifier, in order to 1) classify each generated URL as either benign or malicious and 2) classify the malicious URLs into different SQLi attack categories; and a NN model, in order to 1) detect whether a given URL is malicious or benign and 2) identify the type of SQLi attack for each malicious URL. The model is first trained and then evaluated by employing thousands of benign and malicious URLs. The results of the experiments are presented in order to demonstrate the effectiveness of the proposed approach.
Keywords: Neural Networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection.
123 Quantifying the Second-Level Digital Divide on Sub-National Level
Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer
Abstract:
The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of "digital haves and have-nots". While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the issues of the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding the strategies of national and regional digital development.
Keywords: Digital transformation, second-level digital divide, composite index, digital policy.
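A minimal sketch of how normalised sub-indices can be aggregated into a composite score per region; the three indicators, four regions and equal weights are placeholders for the fourteen sub-indices and seven dimensions of the actual Digital Life Index:

```python
import numpy as np

def min_max_normalise(values):
    """Scale a raw indicator to [0, 1] across regions."""
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())

def composite_index(subindices, weights=None):
    """Aggregate normalised sub-indices into a composite score per region."""
    matrix = np.column_stack([min_max_normalise(v) for v in subindices.values()])
    if weights is None:
        weights = np.full(matrix.shape[1], 1.0 / matrix.shape[1])  # equal weights
    return matrix @ np.asarray(weights)

# Illustrative raw indicators for four regions.
subindices = {
    "e_government_supply": [55, 78, 62, 40],
    "e_commerce_demand":   [30, 85, 50, 25],
    "connectivity":        [70, 90, 65, 45],
}
scores = composite_index(subindices)
for region, score in zip(["Region A", "Region B", "Region C", "Region D"], scores):
    print(f"{region}: digital life score = {score:.2f}")
```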
122 The Analysis of Internet and Social Media Behaviors of the Students in the Higher School of Vocational and Technical Sciences
Authors: Mehmet Balci, Sakir Tasdemir, Mustafa Altin, Ozlem Bozok
Abstract:
Our globalizing world has become almost a small village, and everyone can access any information at any time. Everyone lets each other know who does what and where. We can learn which social events occur in which part of the world. From the perspective of education, the course notes that a lecturer uses at a university in any state of America can be examined by a student studying in a city in Africa or the Far East. This dizzying communication has happened thanks to fast developments in computer and internet technologies. While these developments have occurred around the world, Turkey, which has a very large young population and whose electronic infrastructure is rapidly improving, has also been affected by them. Nowadays, mobile devices have become common, and this increases data traffic in social networks. This study was carried out on students in different age groups in the Department of Computer Technology of the Selcuk University Vocational School of Technical Sciences. Students' opinions about the use of the internet and social media were obtained. Features such as internet and social media usage skills, purposes, frequency of use, access facilities and tools, social life, and the effects on vocational education were explored. The positive and negative effects of both internet and social media use on the students in this department were evaluated from different perspectives and results were obtained. In addition, relations and differences were identified statistically.
Keywords: Computer technologies, internet use, social network, higher vocational school.
121 Managing City Pipe Leaks through Community Participation Using a Web and Mobile Application in South Africa
Authors: Mpai Mokoena, Nsenda Lukumwena
Abstract:
South Africa is one of the driest countries in the world and is facing a water crisis. In addition to inadequate infrastructure and poor planning, the country is experiencing high rates of water wastage due to pipe leaks. This study outlines the level of water wastage and develops a smart solution to efficiently manage and reduce the effects of pipe leaks, while monitoring the situation before and after fixing the pipe leaks. To understand the issue in depth, a literature review of journal papers and government reports was conducted. A questionnaire was designed and distributed to the general public. Additionally, the municipality office was contacted from a managerial perspective. The analysis from the study indicated that the majority of the citizens are aware of the water crisis and are willing to participate positively to decrease the level of water wasted. Furthermore, the response from the municipality acknowledged that more practical solutions are needed to reduce water wastage, and resources to attend to pipe leaks swiftly. Therefore, this paper proposes a specific solution for municipalities, local plumbers and citizens to minimize the effects of pipe leaks. The solution provides web and mobile application platforms to report and manage leaks swiftly. The solution is beneficial to the country in achieving water security and would promote a culture of responsibility toward water usage.
Keywords: Urban Distribution Networks, leak management, mobile application, responsible citizens, water crisis, water security.
120 A BERT-Based Model for Financial Social Media Sentiment Analysis
Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe
Abstract:
The purpose of sentiment analysis is to determine the sentiment strength (e.g., positive, negative, neutral) of a textual source for good decision-making. Natural Language Processing (NLP) in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models, such as BERT, have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services, and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and the large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work is focused on sentiment analysis using social media text on platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis. Extensive experimental results demonstrate the effectiveness and robustness of SkyBERT.
Keywords: BERT, financial markets, Twitter, sentiment analysis.
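SkyBERT itself is not described in implementation detail here, so the sketch below fine-tunes a generic pre-trained BERT checkpoint for three-class financial sentiment with the Hugging Face transformers API; the checkpoint name, the tiny in-line dataset and the label scheme are stand-ins for the domain-specific model and the StockTwits/Twitter corpora described in the abstract:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

texts = ["Great earnings beat, going long $ACME",
         "Guidance cut again, this stock is dead money",
         "Shares trading flat ahead of the Fed minutes"]
labels = torch.tensor([2, 0, 1])   # 0 = negative, 1 = neutral, 2 = positive

# Generic checkpoint used as a placeholder for a financial-domain model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                       # a few illustrative optimisation steps
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    logits = model(**batch).logits
print("Predicted classes:", logits.argmax(dim=-1).tolist())
```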
119 Privacy Protection Principles of Omnichannel Approach
Authors: Renata Mekovec, Dijana Peras, Ruben Picek
Abstract:
The advent of the Internet, mobile devices and social media is revolutionizing the experience of retail customers by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all the distribution information online and offline while shopping. Therefore, today data are an asset more critical than ever for all organizations. Nonetheless, because of their heterogeneity across platforms, developers are currently facing difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center called a hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate channels of communication for the hyper center. Consequently, a selection of widely used communication channels has been identified and analyzed with regard to the requirements for optimizing the user experience. The evaluation criteria are divided into three groups: general, user profile and channel options. For each criterion, the weight of importance for omnichannel communication was defined. The most important consideration was how the hyper center can perform user identification while respecting the privacy protection requirements. The study carried out also shows what the customer experience across digital networks would look like, based on an omnichannel approach that follows privacy protection principles.
Keywords: Personal data, privacy protection, omnichannel communication, retail.
118 Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings
Authors: Tobias Teich, Danny Szendrei, Markus Schrader, Franziska Jahn, Susan Franke
Abstract:
The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption. This trend also applies to the residential building sector, where large amounts of energy are consumed by house and facility heating. Many of the operated hot-water heating systems lack hydraulically balanced working conditions for heat distribution and transmission and therefore heat inefficiently. Through hydraulic balancing of heating systems, significant savings of primary and secondary energy can be achieved. This paper addresses the use of KNX technology (smart buildings) in residential buildings to ensure a dynamic adaptation of the hydraulic system's performance, in order to increase the heating system's efficiency. The procedure of heating system segmentation into hydraulically independent units (meshes) is presented. Within these meshes, the heating valves are addressed and controlled by a central facility server. Feasibility criteria for such valve drivers are named. The dynamic hydraulic balance is achieved by positioning these valves according to the heating loads that are derived from the temperature settings in the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.
Keywords: building automation, dynamic hydraulic balance, energy savings, VPN-networks.
117 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying building blocks, such as the gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle-specific and operational environmental information, such as road slopes and driver profiles, for around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data, and 4.2% on operational data.
Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.
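A hedged sketch of the nested cross-validation comparison of LR, KNN and an ANN regressor using scikit-learn; the synthetic regression data, hyper-parameter grids and error metric are assumptions in place of the proprietary vehicle data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the vehicle data: samples with features such as
# weight, engine size, road slope and mileage would replace these columns.
X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)

models = {
    "LR":  (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(),
                          MLPRegressor(max_iter=3000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=1)
for name, (pipeline, grid) in models.items():
    # Inner loop tunes hyper-parameters; outer loop estimates prediction error.
    inner = GridSearchCV(pipeline, grid, cv=3,
                         scoring="neg_mean_absolute_error")
    scores = cross_val_score(inner, X, y, cv=outer,
                             scoring="neg_mean_absolute_error")
    print(f"{name}: mean absolute error = {-scores.mean():.2f}")
```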
116 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid
Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni
Abstract:
In Zambia, recent initiatives by various power operators, such as ZESCO and CEC, and consumers, such as the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies to enable the integration of renewable energy sources, local and bulk generation, and demand response. Thus, for the reliable operation of smart grids, the information infrastructure must be secure and reliable in the face of both failures and cyberattacks. Due to the nature of these systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks exist internationally, such as the NIST framework; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption, and therefore there is concern about securing the ICS controlling key infrastructure critical to the Zambian economy, as there are few known facts about the true security posture. In this paper, we present a Stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics such as system availability, maintainability, and reliability.
Keywords: Anomaly detection, SmartGrid, edge, maintainability, reliability, stochastic process.
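As an illustration of the steady-state probabilities underlying such metrics, the sketch below solves a two-state continuous-time Markov model of an edge node and reads off availability; the failure and repair rates are assumed values, not figures from the Zambian grid:

```python
import numpy as np

# Illustrative two-state availability model for a SCADA edge node:
# state 0 = "up", state 1 = "down/compromised".
failure_rate = 0.002   # up -> down (per hour, assumed)
repair_rate = 0.08     # down -> up (per hour, assumed)

# Generator (transition-rate) matrix Q of the continuous-time Markov chain.
Q = np.array([[-failure_rate, failure_rate],
              [repair_rate, -repair_rate]])

# Steady-state probabilities satisfy pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.append(np.zeros(Q.shape[0]), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]
mttr_hours = 1.0 / repair_rate
print(f"Steady-state availability: {availability:.4f}")
print(f"Unavailability: {pi[1]:.4f}, mean time to repair: {mttr_hours:.1f} h")
```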
115 Research Action Fields at the Nexus of Digital Transformation and Supply Chain Management: Findings from Practitioner Focus Group Workshops
Authors: Brandtner Patrick, Staberhofer Franz
Abstract:
Logistics and Supply Chain Management are of crucial importance for organisational success. In the era of digitalization, several implications and improvement potentials for these domains arise, which at the same time could lead to decreased competitiveness and could endanger long-term company success if ignored or neglected. However, empirical research on the issue of digitalization and the benefits practitioners attribute to it is scarce and mainly focused on single technologies or on separate, isolated supply chain blocks, e.g. distribution logistics or procurement only. The current paper applies a holistic focus group approach to elaborate practitioner use cases at the nexus of the concepts of Supply Chain Management (SCM) and digitalization. In the course of three focus group workshops with over 45 participants from more than 20 organisations, a comprehensive set of benefit entitlements and areas for improvement in terms of applying digitalization to SCM is developed. The main results of the paper indicate the relevance of digitalization being realized in practice. In the form of seventeen concrete research action fields, the benefit entitlements are aggregated and transformed into potential starting points for future research projects in this area. The main contribution of this paper is an empirically grounded basis for future research projects and an overview of actual research action fields from the practitioners' point of view.
Keywords: Digital transformation, supply chain management, digital supply chain, value networks.
114 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects is a continuing problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are used as input neurons for neural network models, which are constructed with the application FANN Tool. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: Actual cost and duration, attribute selection, bridge projects, neural networks, predicting models, FANN TOOL, WEKA.
113 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers
Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen
Abstract:
In this study, we attempted to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial neural network based on artificial immune system (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-NN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with regard to ANN and AIS. For this purpose, the normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF) data for each of the RR intervals were obtained. These data were then combined into pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each of the two groups in a pair, and two different data sets with 9 and 27 features were obtained from each of them after data reduction. Afterwards, the data were first randomly mixed within themselves, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and the training times are compared with each other.
As a result, the performance of the hybrid classification systems, AIS-ANN and PSO-ANN, was seen to be close to the performance of the ANN system. Also, the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems. In terms of training time, ANN was followed by PSO-ANN, AIS-ANN and AIS, respectively. Also, the features extracted from the data affected the classification results significantly.
Keywords: AIS, ANN, ECG, hybrid classifiers, PSO.
112 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network
Authors: M. Saravanan, M. Madheswaran
Abstract:
The Wireless Sensor Network (WSN) clustering architecture enables features such as network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data are transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces the number of transmitting nodes and so saves energy. It also allows scalability to many nodes, reduces communication overhead, and allows efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. The problem of determining the optimal number of Cluster Heads (CHs) is NP-hard. In this paper, we combine cluster-based routing features for cluster formation and CH selection and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA). In this work, normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.
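A sketch of the intra-cluster tree construction with a combined edge cost built from normalised mobility, delay and remaining energy; the 0.4/0.3/0.3 weighting and the random link metrics are assumptions, and the simulated-annealing search that the paper applies on top of the MST is omitted here:

```python
import heapq
import numpy as np

def combined_cost(mobility, delay, energy):
    """Edge cost from normalised mobility, delay and remaining energy
    (higher remaining energy should lower the cost, hence 1 - energy)."""
    return 0.4 * mobility + 0.3 * delay + 0.3 * (1.0 - energy)

def prim_mst(cost):
    """Prim's algorithm on a dense cost matrix; returns the tree edges."""
    n = len(cost)
    visited, edges = {0}, []
    heap = [(cost[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        edges.append((u, v, w))
        for j in range(n):
            if j not in visited:
                heapq.heappush(heap, (cost[v][j], v, j))
    return edges

# Illustrative 5-node cluster: per-link normalised metrics in [0, 1].
rng = np.random.default_rng(3)
n = 5
mobility = rng.random((n, n))
delay = rng.random((n, n))
energy = rng.random((n, n))
cost = combined_cost((mobility + mobility.T) / 2,
                     (delay + delay.T) / 2,
                     (energy + energy.T) / 2)
np.fill_diagonal(cost, np.inf)

for u, v, w in prim_mst(cost):
    print(f"tree edge {u}-{v}, cost {w:.3f}")
```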
111 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz
Abstract:
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The metric-set selection has a vital role in software cost estimation studies; its importance has been ignored, especially in neural network based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that use traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
Keywords: Software Metrics, Software Cost Estimation, Neural Network.
110 Performance Comparison of Resource Allocation without Feedback in Wireless Body Area Networks by Various Pseudo Orthogonal Sequences
Authors: Ojin Kwon, Yong-Jin Yoon, Liu Xin, Zhang Hongbao
Abstract:
Wireless Body Area Network (WBAN) is a short-range wireless communication technology used around the human body for various applications such as wearable devices, entertainment, military, and especially medical devices. WBAN attracts attention for continuous health monitoring systems, including diagnostic procedures, early detection of abnormal conditions, and prevention of emergency situations. Compared to cellular networks, it is more difficult to control inter- and intra-cell interference in WBAN systems due to the limited power, limited computation capability, mobility of patients, and non-cooperation among WBANs. In this paper, we compare the performance of resource allocation schemes based on several Pseudo Orthogonal Codewords (POCs) to mitigate inter-WBAN interference. The POCs have previously been widely exploited as protocol sequences and optical orthogonal codes. Each POC has different auto- and cross-correlation properties and spectral efficiency according to its construction. To identify different WBANs, several different pseudo orthogonal patterns based on POCs are exploited for resource allocation of WBANs. By simulating these pseudo orthogonal resource allocations of WBANs in MATLAB, we obtain the performance of WBANs for different POCs and can analyze and evaluate the suitability of POCs for resource allocation in the WBAN system.
Keywords: Wireless body area network, body sensor network, resource allocation without feedback, interference mitigation, pseudo orthogonal pattern.
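A small sketch of the auto- and cross-correlation properties that distinguish pseudo orthogonal codewords; the two length-7, weight-3 codewords are illustrative examples rather than the code constructions evaluated in the paper:

```python
import numpy as np

def periodic_correlation(a, b):
    """Periodic (cyclic) correlation of two binary {0,1} codewords."""
    a, b = np.asarray(a), np.asarray(b)
    return np.array([np.sum(a * np.roll(b, shift)) for shift in range(len(a))])

# Two illustrative length-7 codewords, e.g. assigning transmission slots
# of two co-located WBANs without feedback coordination.
c1 = np.array([1, 1, 0, 1, 0, 0, 0])
c2 = np.array([1, 0, 1, 0, 0, 1, 0])

auto = periodic_correlation(c1, c1)
cross = periodic_correlation(c1, c2)
print("auto-correlation of c1 (peak at shift 0):", auto)
print("max off-peak auto-correlation:", auto[1:].max())
print("max cross-correlation between c1 and c2:", cross.max())
```

Low off-peak auto-correlation and low cross-correlation are what keep simultaneous, uncoordinated WBAN transmissions from colliding too often.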
109 Distributed Automation System Based Remote Monitoring of Power Quality Disturbance on LV Network
Authors: Emmanuel D. Buedi, K. O. Boateng, Griffith S. Klogo
Abstract:
Electrical distribution networks are prone to power quality disturbances originating from the complexity of the distribution network, the mode of distribution (overhead or underground) and the types of loads used by customers. Data on the types of disturbances present and their frequency of occurrence are needed for economic evaluation and hence for finding a solution to the problem. Utility companies have resorted to using secondary power quality devices, such as smart meters, to help gather the required data. Even though this approach is easier to adopt, data gathered from these devices may not serve the required purpose, since the installation of these devices in the electrical network usually does not conform to available power quality monitor (PQM) placement methods. This paper presents a design of a PQM that is capable of integrating into an existing DAS infrastructure to take advantage of available placement methodologies. The monitoring component of the design is implemented and installed to monitor an existing LV network. Data from the monitor are analyzed and presented. A portion of the LV network of the Electricity Company of Ghana is modeled in MATLAB-Simulink and analyzed under various earth fault conditions. The results presented show the ability of the PQM to detect and analyze PQ disturbances such as voltage sags and overvoltages. By adopting a placement methodology and installing these nodes, utilities are assured of accurate and reliable information with respect to the quality of power delivered to consumers.
Keywords: Power quality, remote monitoring, distributed automation system, economic evaluation, LV network.
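For illustration, a per-cycle RMS check of the kind a power quality monitor applies to flag sags and overvoltages; the sampling rate, 230 V nominal level, 0.9/1.1 p.u. thresholds and the synthetic waveform are assumptions, not the deployed design:

```python
import numpy as np

FS = 3200                    # samples per second (64 samples per 50 Hz cycle)
F_NOMINAL = 50.0
V_NOMINAL = 230.0            # nominal phase-to-neutral RMS voltage

def cycle_rms(samples, samples_per_cycle):
    """One RMS value per cycle of the monitored waveform."""
    n_cycles = len(samples) // samples_per_cycle
    cycles = samples[:n_cycles * samples_per_cycle].reshape(n_cycles, -1)
    return np.sqrt(np.mean(cycles ** 2, axis=1))

def classify(rms, nominal=V_NOMINAL):
    """Per-cycle classification with the usual 0.9/1.1 per-unit thresholds."""
    pu = rms / nominal
    return np.where(pu < 0.9, "sag", np.where(pu > 1.1, "overvoltage", "normal"))

# Synthetic one-second waveform: normal, then a 40% sag, then a 15% swell.
t = np.arange(0, 1.0, 1.0 / FS)
envelope = np.ones_like(t)
envelope[(t >= 0.3) & (t < 0.5)] = 0.6     # sag
envelope[(t >= 0.7) & (t < 0.9)] = 1.15    # overvoltage
v = np.sqrt(2) * V_NOMINAL * envelope * np.sin(2 * np.pi * F_NOMINAL * t)

rms = cycle_rms(v, samples_per_cycle=int(FS / F_NOMINAL))
labels = classify(rms)
for event in ("sag", "overvoltage"):
    print(f"{event}: {np.count_nonzero(labels == event)} of {len(labels)} cycles")
```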
108 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network
Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm
Abstract:
In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, when it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model in order to overcome the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink. The scheme is used to detect and isolate the faults on-line. The simulation results show the effectiveness of the scheme even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise. The simulation results are presented to illustrate the effectiveness and robustness of the proposed method.
Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.
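A hedged sketch of the residual generation step: an RBF model predicts the outputs and the weighted sum-squared prediction error is thresholded to flag a fault; the model parameters, injected fault and threshold rule are illustrative stand-ins for the ROLS-trained model of the Chylla-Haase reactor:

```python
import numpy as np

def rbf_predict(X, centres, widths, weights):
    """Output of a radial basis function network for a batch of inputs."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    hidden = np.exp(-d2 / (2.0 * widths ** 2))
    return hidden @ weights

def residual(y_measured, y_model, output_weights):
    """Weighted sum-squared prediction error used as the fault residual."""
    err = y_measured - y_model
    return np.sum(output_weights * err ** 2, axis=1)

# Illustrative, pre-identified model of two process outputs driven by two
# inputs; centres/widths/weights stand in for an ROLS-trained RBF model.
rng = np.random.default_rng(4)
centres = rng.uniform(-1, 1, size=(6, 2))
widths = np.full(6, 0.8)
weights = rng.normal(0, 0.5, size=(6, 2))

U = rng.uniform(-1, 1, size=(200, 2))          # recent input measurements
Y_nominal = rbf_predict(U, centres, widths, weights)
Y_measured = Y_nominal + rng.normal(0, 0.02, Y_nominal.shape)
Y_measured[120:] += np.array([0.4, 0.0])       # injected sensor fault on output 1

r = residual(Y_measured, rbf_predict(U, centres, widths, weights),
             output_weights=np.array([1.0, 1.0]))
threshold = 2 * r[:100].max()                  # threshold set from fault-free data
print("fault flagged at sample:", int(np.argmax(r > threshold)))
```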
107 Reliability Assessment for Tie Line Capacity Assistance of Power Systems Based On Multi-Agent System
Authors: Nadheer A. Shalash, Abu Zaharin Bin Ahmad
Abstract:
Technological developments in industrial innovation are currently related to interconnected system assistance and distribution networks. This is important in order to enable an electrical load to continue to receive power in the event of disconnection of the load from the main power grid. This paper presents a method for the reliability assessment of interconnected power systems based on a multi-agent system. The multi-agent system consists of four agents. The first is the generator agent, which connects the generator to the grid depending on the state of the reserve margin and the load demand. The second is a load agent located at the load. The third is the so-called reserve margin agent, which limits the reserve margin to between 0 and 25% depending on the load and the generator unit size. Finally, the reliability calculation agent computes the expected energy not supplied (EENS), the loss of load expectation (LOLE) and the effect of tie line capacity in order to determine the risk levels. The Roy Billinton Test System (RBTS) is used to evaluate the reliability indices by means of the developed JADE package. The estimated reliability results for the interconnected power systems are presented in this paper, showing that the overall reliability of the power system can be improved. Thus, the market becomes more concentrated as demand increases, and the generation units operate in relation to the reliability indices.
Keywords: Reliability indices, Load expectation, Reserve margin, Daily load, Probability, Multi-agent system.
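A minimal sketch of how LOLE and EENS can be computed by enumerating generator availability states with tie line assistance added to the available capacity; the unit sizes, outage rates, tie line capacity and load profile are assumed values, not RBTS data:

```python
import numpy as np
from itertools import product

# Illustrative two-unit system with tie line assistance.
units = [(40.0, 0.05), (30.0, 0.08)]      # (capacity MW, forced outage rate)
tie_line_capacity = 15.0                  # MW of assistance from the neighbour
load = np.array([45.0, 55.0, 62.0, 58.0, 50.0, 47.0])   # simplified daily load (MW)

lole_hours, eens_mwh = 0.0, 0.0
for hourly_load in load:
    for states in product([0, 1], repeat=len(units)):    # 1 = unit available
        prob = np.prod([(1 - forced_outage) if s else forced_outage
                        for s, (_, forced_outage) in zip(states, units)])
        capacity = sum(c for s, (c, _) in zip(states, units) if s)
        capacity += tie_line_capacity                     # tie line assistance
        shortfall = max(hourly_load - capacity, 0.0)
        if shortfall > 0:
            lole_hours += prob                            # loss-of-load expectation
            eens_mwh += prob * shortfall                  # expected energy not supplied

print(f"LOLE over the period: {lole_hours:.4f} h")
print(f"EENS over the period: {eens_mwh:.4f} MWh")
```

Repeating the calculation with a different tie line capacity shows directly how interconnection assistance changes the risk indices.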