Search results for: passive optical networks (PON)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2626

136 A Software-Supported Methodology for Designing General-Purpose Interconnection Networks for Reconfigurable Architectures

Authors: Kostas Siozios, Dimitrios Soudris, Antonios Thanailakis

Abstract:

Modern applications realized on FPGAs exhibit high connectivity demands. In this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and SB patterns, appropriately selected and assigned across the device. The goal of the proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by its high performance and by its low energy requirements; accordingly, the design criterion that guides our architecture selections is the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparative study over several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average EDP reduction of 63%, a performance increase of 26%, a 21% reduction in leakage power and an 11% reduction in total energy consumption, at the expense of a 20% increase in channel width.

Keywords: Design Methodology, FPGA, Interconnection, Low-Energy, High-Performance, CAD tool.

135 Flow Discharge Determination in Straight Compound Channels Using ANNs

Authors: A. Zahiri, A. A. Dehghani

Abstract:

Although many researchers have studied the flow hydraulics of compound channels, determining their flow rating curves still poses complicated problems. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, flood plain side slopes and berm inclination, and one output variable (flow discharge) were used in the ANNs. Comparison of the ANN model with the traditional divided channel method (DCM) shows the high accuracy of the ANN model results. Sensitivity analysis showed that relative depth, with a 47.6% contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3% and 12.2% importance, respectively. On the other hand, the shape parameter, main channel side slopes and flood plain side slopes, with 2.1%, 3.8% and 3.8% contributions, have the least importance.
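As a rough illustration of the kind of ANN regression described above, the sketch below trains a small multilayer perceptron on synthetic stand-ins for the 13 dimensionless inputs; the data, network size and variable names are assumptions for illustration, not the authors' dataset or architecture.

```python
# Minimal sketch of an ANN rating-curve model; synthetic data stands in
# for the ~400 laboratory/field records described in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((400, 13))            # e.g. relative depth, relative roughness, ...
y = 10 * X[:, 0] + 4 * X[:, 1] + rng.normal(0, 0.1, 400)  # stand-in discharge

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```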

Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve

134 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology

Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi

Abstract:

High-purity rare earth elements (REEs) have been widely used in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of the neodymium–iron–boron (Nd–Fe–B) permanent magnet, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to organic solubility in aqueous solutions, volatilization of diluents, etc. One promising liquid membrane process is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction. In this work, a study of Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent and nitric acid as internal phase. The effects of the important operating variables, namely surfactant concentration, MWCNT concentration and treatment ratio, were investigated. Results were optimized using a central composite design (CCD), and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three variables and their interactions on extraction efficiency was determined. The results indicated that introducing MWCNTs into the ELM process increased Nd extraction, owing to higher membrane stability and enhanced mass transfer. The optimum conditions were found to be a MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% (v/v) and a treatment ratio of 10. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
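For readers unfamiliar with the RSM workflow mentioned above, the sketch below builds a three-factor central composite design and fits a quadratic response surface by least squares; the factor names, design choice (face-centred, four centre runs) and simulated response are illustrative assumptions, not the paper's data.

```python
# Sketch of a central composite design and quadratic RSM fit.
import numpy as np

rng = np.random.default_rng(1)
# Coded levels for three factors: Span-85 conc., MWCNT conc., treatment ratio
factorial = np.array([[i, j, k] for i in (-1, 1)
                      for j in (-1, 1) for k in (-1, 1)], float)
axial = np.vstack([np.eye(3), -np.eye(3)])       # face-centred points (alpha = 1)
center = np.zeros((4, 3))
X = np.vstack([factorial, axial, center])        # 18-run central composite design

# Stand-in response: extraction % with curvature, plus noise
y = 90 + 3*X[:, 0] + 2*X[:, 1] - 4*X[:, 0]**2 + rng.normal(0, 0.3, len(X))

# Quadratic model matrix: intercept, linear, interaction and squared terms
cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i + 1, 3)]
cols += [X[:, i] ** 2 for i in range(3)]
beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
print(np.round(beta, 2))                         # fitted RSM coefficients
```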

Keywords: Emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method.

133 Estimation of the Bit Side Force by Using Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast the well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is difficult, if not impossible, to identify the real reason why a horizontal well is not producing as forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion, or snaky well patterns created during drilling. Although drillers aim to drill a constant-inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors that play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant-inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force at the bit nil. This approach assumes that no formation force exists at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques for this purpose. In this study, neural networks are employed as a general approximation tool for estimating the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameters, and the results are compared with exact analysis. A back-propagation neural network (BPN) is used to approximate the bit side forces, and the low relative error obtained in testing indicates the usability of the BPN in this area.
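To make the BPN idea concrete, the sketch below implements a minimal back-propagation network in NumPy; the three inputs and the linear target are invented stand-ins for BHA parameters and bit side force, not the study's actual variables.

```python
# Minimal backpropagation network (one tanh hidden layer) in NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))       # stand-ins: e.g. stabilizer spacing, WOB, tilt
y = (X @ np.array([1.5, -0.7, 0.3]))[:, None]   # stand-in side-force response

W1, b1 = rng.normal(0, .5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, .5, (8, 1)), np.zeros(1)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backward pass: gradients of the mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
print("final MSE:", float((err**2).mean()))
```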

Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.

132 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks

Authors: Naghmeh Moradpoor Sheykhkanloo

Abstract:

Thousands of organisations store important and confidential information related to them, their customers and their business partners in databases all across the world. The stored data range from less sensitive (e.g. first name, last name, date of birth) to more sensitive (e.g. password, PIN code and credit card information). Losing data, disclosing confidential information or even changing the value of data are the severe damages that a Structured Query Language injection (SQLi) attack can cause on a given database. It is a code injection technique in which malicious SQL statements are inserted into a given SQL database by simply using a web browser. In this paper, we propose an effective pattern recognition neural network model for the detection and classification of SQLi attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator, to generate thousands of malicious and benign URLs; a URL classifier, to 1) classify each generated URL as either benign or malicious and 2) classify the malicious URLs into different SQLi attack categories; and a neural network (NN) model, to 1) determine whether a given URL is malicious or benign and 2) identify the type of SQLi attack for each malicious URL. The model is first trained and then evaluated by employing thousands of benign and malicious URLs. The results of the experiments are presented to demonstrate the effectiveness of the proposed approach.
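A minimal sketch of the URL-classification stage, assuming character n-gram features feeding a small neural classifier; the toy URLs, labels and feature choice are illustrative, not the paper's generator or taxonomy.

```python
# Character n-gram features from URLs -> small MLP classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

urls = ["http://site/item?id=7",
        "http://site/item?id=7' OR '1'='1",
        "http://site/login?user=bob",
        "http://site/login?user=a' UNION SELECT password FROM users--"]
labels = [0, 1, 0, 1]                      # 0 = benign, 1 = SQLi

clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(urls, labels)
print(clf.predict(["http://site/item?id=9;DROP TABLE users"]))
```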

Keywords: Neural Networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection.

131 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to digital technologies and the socio-economic opportunities they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of digital haves and have-nots. While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on measuring the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding national and regional digital development strategies.
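The mechanics of such a composite index can be sketched as below: min-max-normalise each subindex per region and aggregate with weights. The region names, dimensions and weights are invented for illustration; the actual Digital Life Index construction may differ.

```python
# Toy composite index: normalise subindices per column, then weight-average.
import numpy as np

regions = ["Region A", "Region B", "Region C"]
# rows = regions; columns = hypothetical subindices (supply, demand, ...)
raw = np.array([[0.62, 120.0, 3.1],
                [0.48, 310.0, 4.0],
                [0.71,  95.0, 2.2]])
norm = (raw - raw.min(0)) / (raw.max(0) - raw.min(0))   # min-max per column
weights = np.array([0.4, 0.3, 0.3])                      # assumed weights
index = norm @ weights
for r, v in zip(regions, index):
    print(f"{r}: {v:.2f}")
```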

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

130 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels

Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva

Abstract:

In this paper, a hybrid FDMA-TDMA access technique in a cooperative distributed fashion is analyzed, introducing and implementing a modified version of the protocol presented in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, since the system works with a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to perfectly detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
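The MMSE nulling step at the relay can be sketched in a few lines: the filter W = (H^H H + σ²I)⁻¹ H^H separates the jointly received BPSK signals. The 2×2 channel and noise level below are a toy example, not the paper's full protocol simulation.

```python
# MMSE detection of two joint BPSK streams, as used in V-BLAST nulling.
import numpy as np

rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
s = np.array([1, -1])                       # BPSK symbols from the two users
sigma2 = 0.1
n = np.sqrt(sigma2 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
y = H @ s + n                               # relay's received vector

W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(2)) @ H.conj().T
s_hat = np.sign((W @ y).real)               # BPSK slicing after MMSE filter
print(s_hat)
```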

Keywords: Cooperation Diversity, Best Channel Selection scheme, MIMO relay networks, V-BLAST, QR decomposition, and MMSE.

129 The Analysis of Internet and Social Media Behaviors of the Students in the Higher School of Vocational and Technical Sciences

Authors: Mehmet Balci, Sakir Tasdemir, Mustafa Altin, Ozlem Bozok

Abstract:

Our globalizing world has become almost a small village, and everyone can access any information at any time. People let each other know who is doing what, where; we can learn which social events are occurring anywhere in the world. From the perspective of education, the course notes that a lecturer uses at a university in any state of America can be examined by a student studying in a city in Africa or the Far East. This dizzying level of communication has become possible thanks to rapid developments in computer and internet technologies. While these developments have occurred worldwide, Turkey, which has a very large young population and a rapidly improving electronic infrastructure, has also been affected by them. Nowadays, mobile devices have become common, which increases data traffic on social networks. This study was carried out on students of different age groups in the Department of Computer Technology at Selcuk University Vocational School of Technical Sciences. Students' opinions about the use of the internet and social media were obtained, exploring features such as internet and social media skills, purposes of use, frequency of use, access facilities and tools, social life, and effects on vocational education. The positive and negative effects of internet and social media use on the students in this department are evaluated from different perspectives, and relations and differences are identified statistically.

Keywords: Computer technologies, internet use, social network, higher vocational school.

128 Managing City Pipe Leaks through Community Participation Using a Web and Mobile Application in South Africa

Authors: Mpai Mokoena, Nsenda Lukumwena

Abstract:

South Africa is one of the driest countries in the world and is facing a water crisis. In addition to inadequate infrastructure and poor planning, the country is experiencing high rates of water wastage due to pipe leaks. This study outlines the level of water wastage and develops a smart solution to efficiently manage and reduce the effects of pipe leaks, while monitoring the situation before and after leaks are fixed. To understand the issue in depth, a literature review of journal papers and government reports was conducted. A questionnaire was designed and distributed to the general public, and the municipality office was consulted for a managerial perspective. The analysis indicated that the majority of citizens are aware of the water crisis and are willing to participate positively in decreasing the amount of water wasted. Furthermore, the municipality's response acknowledged that more practical solutions, and resources to attend to pipe leaks swiftly, are needed to reduce water wastage. Therefore, this paper proposes a specific solution for municipalities, local plumbers and citizens to minimize the effects of pipe leaks. The solution provides web and mobile application platforms to report and manage leaks swiftly. The solution is beneficial to the country in achieving water security and would promote a culture of responsibility toward water usage.
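A minimal sketch of what a leak-reporting endpoint of the kind described might look like; the route, fields and in-memory store are invented assumptions, not the paper's implementation.

```python
# Toy leak-reporting API: citizens POST reports, municipalities GET the list.
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []  # in-memory stand-in for a real database

@app.route("/leaks", methods=["POST"])
def report_leak():
    data = request.get_json()
    report = {"id": len(reports) + 1,
              "location": data["location"],        # e.g. GPS coordinates
              "severity": data.get("severity", "unknown"),
              "status": "reported"}
    reports.append(report)
    return jsonify(report), 201

@app.route("/leaks", methods=["GET"])
def list_leaks():
    return jsonify(reports)

if __name__ == "__main__":
    app.run()
```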

Keywords: Urban Distribution Networks, leak management, mobile application, responsible citizens, water crisis, water security.

127 A BERT-Based Model for Financial Social Media Sentiment Analysis

Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe

Abstract:

The purpose of sentiment analysis is to determine the sentiment strength (e.g. positive, negative, neutral) of a textual source to support good decision-making. Natural Language Processing (NLP) in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models such as BERT have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and the large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work focuses on sentiment analysis of social media text from platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis, and extensive experimental results demonstrate its effectiveness and robustness.
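A minimal sketch of fine-tuning a BERT sentiment classifier of the kind described. SkyBERT itself is not a public checkpoint, so the generic bert-base-uncased model stands in, and the two example posts, label scheme and single gradient step are illustrative only.

```python
# One fine-tuning step of a BERT sequence classifier on financial posts.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)        # e.g. bullish / bearish / neutral

texts = ["$XYZ crushed earnings, loading up more",
         "selling everything, this tape is ugly"]
labels = torch.tensor([0, 1])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
out = model(**batch, labels=labels)           # forward pass with loss
out.loss.backward()                           # one gradient step
optim.step()
print(float(out.loss))
```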

Keywords: BERT, financial markets, Twitter, sentiment analysis.

126 Game Theory Based Diligent Energy Utilization Algorithm for Routing in Wireless Sensor Network

Authors: X. Mercilin Raajini, R. Raja Kumar, P. Indumathi, V. Praveen

Abstract:

Many cluster-based routing protocols have been proposed in the field of wireless sensor networks, in which groups of nodes are formed into clusters. A cluster head is selected from among those nodes based on residual energy, coverage area and number of hops; the cluster head gathers data from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet, along with its own data packet, to the base station. Here, a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, cluster head selection is done with the help of game theory, a decision-making process, selecting a cluster head based on three parameters: residual energy (RE), Received Signal Strength Index (RSSI) and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum use of the available energy improves the network lifetime, and this is achieved by the proposed approach. In GTDEA, packets are forwarded to the base station using an inter-cluster routing technique. Simulation results reveal that GTDEA improves network performance in terms of throughput, lifetime and power consumption.
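A minimal sketch of a cluster-head decision over the three parameters named above; the weighted-utility form, the weights and the node measurements are assumptions for illustration, not GTDEA's actual game-theoretic payoff.

```python
# Elect the node with the highest utility over RE, RSSI and PRR.
nodes = {
    "n1": {"RE": 0.80, "RSSI": -62, "PRR": 0.97},
    "n2": {"RE": 0.55, "RSSI": -58, "PRR": 0.99},
    "n3": {"RE": 0.90, "RSSI": -75, "PRR": 0.91},
}

def utility(m, w_re=0.5, w_rssi=0.2, w_prr=0.3):
    rssi_norm = (m["RSSI"] + 100) / 100      # map ~[-100, 0] dBm to [0, 1]
    return w_re * m["RE"] + w_rssi * rssi_norm + w_prr * m["PRR"]

cluster_head = max(nodes, key=lambda n: utility(nodes[n]))
print("elected cluster head:", cluster_head)
```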

Keywords: Cluster head, Energy utilization, Game Theory, LEACH, Sensor network.

125 Effects of the Coagulation Bath and Reduction Process on SO2 Adsorption Capacity of Graphene Oxide Fiber

Authors: Özge Alptoğa, Nuray Uçar, Nilgün Karatepe Yavuz, Ayşen Önen

Abstract:

Sulfur dioxide (SO2) is a very toxic air-pollutant gas that contributes to the greenhouse effect, photochemical smog and acid rain, which threaten human health severely. Thus, the capture of SO2 gas is very important for the environment. Graphene, a two-dimensional material, has excellent mechanical, chemical and thermal properties, and many application areas such as energy storage devices, gas adsorption, sensing devices and optical electronics. Further, graphene oxide (GO) is considered a good adsorbent because of important features such as the functional groups (epoxy, carboxyl and hydroxyl) on its surface and its layered structure. SO2 adsorption on fibers has usually been investigated for carbon fibers; in this study, the potential adsorption capacity of GO fibers was researched. A GO dispersion was first obtained from graphite by Hummers' method, and GO fibers were then obtained via a wet spinning process. These fibers were converted into disc shape, dried, and then subjected to an SO2 gas adsorption test. The SO2 gas adsorption capacity of the GO fiber discs was investigated with respect to the use of different coagulation baths and to reduction by hydrazine hydrate. As coagulation baths, single and triple baths were used. In the single bath, only ethanol and CaCl2 (calcium chloride) salt were added; in the triple bath, each bath had a different concentration of water/ethanol and CaCl2 salt, and the disc obtained from the triple bath is referred to as the reference disc. The fibers produced with the single bath were flexible and rough, and the analyses show that they had higher SO2 adsorption capacity than the triple-bath fibers (the reference disc). However, the reduction process did not increase the adsorption capacity, because SEM images showed that the layers and the uniform structure of the fiber form were damaged, and reduction decreased the functional groups to which SO2 attaches. Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed on the fibers and discs, and their effects on the results were interpreted. In future work, it is aimed to examine subjects such as pH and additives.

Keywords: Coagulation bath, graphene oxide fiber, reduction, SO2 gas adsorption.

124 Privacy Protection Principles of Omnichannel Approach

Authors: Renata Mekovec, Dijana Peras, Ruben Picek

Abstract:

The advent of the Internet, mobile devices and social media is revolutionizing the experience of retail customers by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all the distribution information, online and offline, while shopping. Today, therefore, data are an asset more critical than ever for all organizations; nonetheless, because of their heterogeneity across platforms, developers currently face difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center called a hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate communication channels for the hyper center. Consequently, a selection of widely used communication channels has been identified and analyzed with regard to the requirements for optimizing user experience. The evaluation criteria are divided into three groups: general, user profile and channel options, and for each criterion the weight of its importance for omnichannel communication was defined. The most important consideration was how the hyper center can perform user identification while respecting privacy protection requirements. The study also shows what the customer experience across digital networks would look like under an omnichannel approach that observes privacy protection principles.

Keywords: Personal data, privacy protection, omnichannel communication, retail.

123 Feasibility of Integrating Heating Valve Drivers with KNX-standard for Performing Dynamic Hydraulic Balance in Domestic Buildings

Authors: Tobias Teich, Danny Szendrei, Markus Schrader, Franziska Jahn, Susan Franke

Abstract:

The increasing demand for sufficient and clean energy forces industrial and service companies to align their strategies towards efficient consumption, and this trend also applies to the residential building sector, where a large share of energy consumption is caused by house and facility heating. Many operating hot-water heating systems lack hydraulically balanced working conditions for heat distribution and transmission, and therefore heat inefficiently. Through hydraulic balancing of heating systems, significant savings in primary and secondary energy can be achieved. This paper addresses the use of KNX technology (Smart Buildings) in residential buildings to ensure dynamic adaptation of the hydraulic system's performance, in order to increase the heating system's efficiency. The procedure of segmenting a heating system into hydraulically independent units (meshes) is presented. Within these meshes, the heating valves are addressed and controlled by a central facility server, and feasibility criteria for such valve drivers are identified. The dynamic hydraulic balance is achieved by positioning these valves according to the heating loads generated from the temperature settings of the corresponding rooms. The energetic advantages of single-room heating control procedures, based on the application FacilityManager, are presented.

Keywords: building automation, dynamic hydraulic balance, energy savings, VPN-networks.

122 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses information on around 40,000 vehicles, covering vehicle specifications and operational environmental conditions such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbors (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
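A compact sketch of nested cross-validation over the three model families named above; the synthetic data, hyper-parameter grids and fold counts are assumptions for illustration, not the study's protocol.

```python
# Nested CV: inner loop tunes hyper-parameters, outer loop scores error.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold

X, y = make_regression(n_samples=300, n_features=8, noise=5, random_state=0)
inner = KFold(5, shuffle=True, random_state=1)
outer = KFold(5, shuffle=True, random_state=2)

candidates = {
    "LR": GridSearchCV(LinearRegression(), {}, cv=inner),
    "KNN": GridSearchCV(KNeighborsRegressor(),
                        {"n_neighbors": [3, 5, 9]}, cv=inner),
    "ANN": GridSearchCV(MLPRegressor(max_iter=3000, random_state=0),
                        {"hidden_layer_sizes": [(10,), (30,)]}, cv=inner),
}
for name, est in candidates.items():
    scores = cross_val_score(est, X, y, cv=outer,
                             scoring="neg_mean_absolute_error")
    print(name, round(-scores.mean(), 2))
```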

Keywords: Artificial neural networks, fuel consumption, machine learning, regression, statistical tests.

121 Stochastic Edge Based Anomaly Detection for Supervisory Control and Data Acquisitions Systems: Considering the Zambian Power Grid

Authors: Lukumba Phiri, Simon Tembo, Kumbuso Joshua Nyoni

Abstract:

In Zambia, recent initiatives by various power operators like ZESCO and CEC, and by consumers like the mines, to upgrade power systems into smart grids target an even tighter integration with information technologies, enabling the integration of renewable energy sources, local and bulk generation, and demand response. For the reliable operation of smart grids, their information infrastructure must therefore be secure and reliable in the face of both failures and cyberattacks. Due to the nature of these systems, ICS/SCADA cybersecurity and governance face additional challenges compared to corporate networks, and critical systems may be left exposed. Control frameworks such as the NIST framework exist internationally; however, they are generic and do not meet the domain-specific needs of SCADA systems. Zambia is also lagging in cybersecurity awareness and adoption, so there is concern about securing the ICS controlling infrastructure critical to the Zambian economy, as few facts are known about the true security posture. In this paper, we present a stochastic Edge-based Anomaly Detection for SCADA systems (SEADS) framework for threat modeling and risk assessment. SEADS enables the calculation of steady-state probabilities that are further applied to establish metrics like system availability, maintainability and reliability.
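The steady-state computation behind such metrics can be sketched on a two-state (up/down) Markov chain: solve πP = π and read availability off the "up" probability. The transition matrix below is illustrative, not a SEADS parameterization.

```python
# Steady-state probabilities of a small Markov chain -> availability.
import numpy as np

# discrete-time transition matrix over states [up, down]
P = np.array([[0.99, 0.01],
              [0.20, 0.80]])
# steady state: left eigenvector of P for eigenvalue 1, normalised
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(abs(w - 1))])
pi /= pi.sum()
print("availability (P[up]):", round(pi[0], 4))
```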

Keywords: Anomaly detection, SmartGrid, edge, maintainability, reliability, stochastic process.

120 Research Action Fields at the Nexus of Digital Transformation and Supply Chain Management: Findings from Practitioner Focus Group Workshops

Authors: Brandtner Patrick, Staberhofer Franz

Abstract:

Logistics and Supply Chain Management are of crucial importance for organisational success. In the era of Digitalization, several implications and improvement potentials arise for these domains, which, if ignored or neglected, could lead to decreased competitiveness and endanger long-term company success. However, empirical research on Digitalization and the benefits practitioners attribute to it is scarce, and mainly focused on single technologies or on separate, isolated supply chain blocks, e.g. distribution logistics or procurement only. The current paper applies a holistic focus group approach to elaborate practitioner use cases at the nexus of the concepts of Supply Chain Management (SCM) and Digitalization. In the course of three focus group workshops with over 45 participants from more than 20 organisations, a comprehensive set of benefit entitlements and areas for improvement in applying digitalization to SCM is developed. The main results indicate the relevance of Digitalization as realized in practice. In the form of seventeen concrete research action fields, the benefit entitlements are aggregated and transformed into potential starting points for future research projects in this area. The main contribution of this paper is an empirically grounded basis for future research projects and an overview of actual research action fields from the practitioners' point of view.

Keywords: Digital transformation, supply chain management, digital supply chain, value networks.

119 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects remains a persistent problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. Thirty-nine bridge projects constructed in Greece, with similar types of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the WEKA application, through its attribute selection function. The selected variables are used as input neurons for the neural network models, which are constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
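The attribute-selection step can be approximated as below: rank candidate project variables by their correlation with actual cost and keep the strongest as network inputs. The data frame, variable names and coefficients are invented stand-ins, and a plain correlation ranking stands in for WEKA's attribute selection function.

```python
# Rank candidate variables by |correlation| with actual cost.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "budgeted_cost": rng.uniform(1, 10, 39),      # hypothetical attributes
    "deck_concrete_m3": rng.uniform(100, 900, 39),
    "deck_area_m2": rng.uniform(200, 1500, 39),
})
df["actual_cost"] = 1.15 * df.budgeted_cost + 0.002 * df.deck_concrete_m3

corr = (df.corr()["actual_cost"].drop("actual_cost")
          .abs().sort_values(ascending=False))
print(corr)                        # top-ranked attributes -> input neurons
selected = corr.index[:2].tolist()
```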

Keywords: Actual cost and duration, attribute selection, bridge projects, neural networks, predicting models, FANN TOOL, WEKA.

118 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, we attempted to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial neural network based on artificial immune system (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers relative to ANN and AIS. For this purpose, the normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF) data for each of the RR intervals were found. These data were then formed into pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each of the two groups in a pair, and, after data reduction, two different data sets with 9 and 27 features were obtained. Afterwards, the data were first shuffled within themselves, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and the training times were compared.

As a result, the performance of the hybrid classification systems AIS-ANN and PSO-ANN was seen to be close to that of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-ANN, AIS-ANN and AIS, respectively. The features extracted from the data also affected the classification results significantly.
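A minimal sketch of the feature pipeline described: wavelet decomposition of RR-interval sequences, summary statistics as features, then 4-fold cross-validation. Synthetic RR data stands in for the MIT-BIH records, and the statistics and wavelet choice are assumptions.

```python
# DWT features from RR intervals + 4-fold cross-validated classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(rr):
    coeffs = pywt.wavedec(rr, "db4", level=3)   # 4 coefficient arrays
    return np.hstack([(c.mean(), c.std(), np.abs(c).max()) for c in coeffs])

normal = [rng.normal(0.8, 0.02, 64) for _ in range(40)]   # NSR-like RR
arrhyt = [rng.normal(0.8, 0.15, 64) for _ in range(40)]   # irregular RR
X = np.array([features(r) for r in normal + arrhyt])
y = np.array([0] * 40 + [1] * 40)

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
print(cross_val_score(clf, X, y, cv=4).mean())
```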

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO.

117 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network

Authors: M. Saravanan, M. Madheswaran

Abstract:

Wireless Sensor Network (WSN) clustering architecture enables features like network scalability, communication overhead reduction and fault tolerance. After clustering, aggregated data are transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces the number of transmitting nodes and so saves energy; it also allows scalability to many nodes, reduces communication overhead, and allows efficient use of WSN resources. Clustering-based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. Determining the optimal number of cluster heads (CH) is an NP-hard problem. In this paper, we combine cluster-based routing features for cluster formation and CH selection, and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA); normalized values of mobility, delay and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
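The intra-cluster tree construction can be sketched as below: edge weights combine the normalized metrics, and a minimum spanning tree is extracted. The 4-node cluster and weights are illustrative, and the SA refinement described in the abstract is omitted here.

```python
# MST over a small cluster graph with combined edge costs.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# combined (normalized mobility/delay/energy) edge cost; 0 = no direct link
cost = np.array([[0.0, 0.4, 0.9, 0.0],
                 [0.4, 0.0, 0.3, 0.7],
                 [0.9, 0.3, 0.0, 0.2],
                 [0.0, 0.7, 0.2, 0.0]])
mst = minimum_spanning_tree(cost)
for i, j in zip(*mst.nonzero()):
    print(f"edge {i}-{j}, cost {mst[i, j]:.1f}")
```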

Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.

116 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz

Abstract:

Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, but its importance has been ignored, especially in neural network based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics, and compare the results obtained with previous studies that use traditional metrics. To be able to make comparisons, two types of data have been used: the first part is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). A further difficulty associated with cost estimation studies is the fact that data collection requires time and care; to make more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.

Keywords: Software Metrics, Software Cost Estimation, Neural Network.

115 Distributed Automation System Based Remote Monitoring of Power Quality Disturbance on LV Network

Authors: Emmanuel D. Buedi, K. O. Boateng, Griffith S. Klogo

Abstract:

Electrical distribution networks are prone to power quality disturbances originating from the complexity of the distribution network, the mode of distribution (overhead or underground) and the types of loads used by customers. Data on the types of disturbances present and their frequency of occurrence are needed for economic evaluation and hence for finding a solution to the problem. Utility companies have resorted to using secondary power quality devices such as smart meters to help gather the required data. Even though this approach is easier to adopt, data gathered from these devices may not serve the required purpose, since the installation of these devices in the electrical network usually does not conform to available power quality monitor (PQM) placement methods. This paper presents a design of a PQM that is capable of integrating into an existing distributed automation system (DAS) infrastructure to take advantage of available placement methodologies. The monitoring component of the design is implemented and installed to monitor an existing LV network, and data from the monitor are analyzed and presented. A portion of the LV network of the Electricity Company of Ghana is modeled in MATLAB-Simulink and analyzed under various earth fault conditions. The results presented show the ability of the PQM to detect and analyze PQ disturbances such as voltage sag and overvoltage. By adopting a placement methodology and installing these nodes, utilities are assured of accurate and reliable information with respect to the quality of power delivered to consumers.
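A minimal sketch of the kind of disturbance check such a monitor performs: compute cycle-by-cycle RMS of a voltage waveform and flag a sag against a 0.9 pu threshold. The synthetic waveform, sampling rate and thresholds are assumptions, not the paper's device parameters.

```python
# Cycle-RMS voltage sag detection on a synthetic 50 Hz waveform.
import numpy as np

f, fs, vnom = 50, 3200, 230.0                  # assumed grid and sampling setup
t = np.arange(0, 1.0, 1 / fs)
v = np.sqrt(2) * vnom * np.sin(2 * np.pi * f * t)
v[int(0.4 * fs):int(0.6 * fs)] *= 0.7          # inject a 70% sag

samples_per_cycle = fs // f
for k in range(0, len(v) - samples_per_cycle, samples_per_cycle):
    rms = np.sqrt(np.mean(v[k:k + samples_per_cycle] ** 2)) / vnom
    if rms < 0.9:                              # sag threshold in per-unit
        print(f"sag at t={k/fs:.2f}s ({rms:.2f} pu)")
        break
```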

Keywords: Power quality, remote monitoring, distributed automation system, economic evaluation, LV network.

114 Dynamic Fault Diagnosis for Semi-Batch Reactor under Closed-Loop Control via Independent Radial Basis Function Neural Network

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, while it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The Recursive Orthogonal Least Squares (ROLS) algorithm is employed to train the model, in order to overcome the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the proposed method, even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature and measurement noise.
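The residual-based detection idea can be sketched as below: fit a model with RBF basis functions to normal operation and flag samples whose squared prediction error exceeds a threshold. Kernel ridge regression with an RBF kernel stands in for the paper's independent RBFNN, and the toy signal, fault and threshold are assumptions.

```python
# Residual-based fault detection with an RBF-kernel model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 400)[:, None]
y_normal = np.sin(t).ravel() + rng.normal(0, 0.02, 400)   # healthy process

model = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-3)  # RBF expansion
model.fit(t, y_normal)

y_faulty = y_normal.copy()
y_faulty[300:] += 0.5                      # simulated sensor bias fault
residual = (y_faulty - model.predict(t)) ** 2
threshold = 5 * residual[:200].mean()      # scaled from fault-free region
print("fault detected at sample:", int(np.argmax(residual > threshold)))
```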

Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control.

113 Reliability Assessment for Tie Line Capacity Assistance of Power Systems Based On Multi-Agent System

Authors: Nadheer A. Shalash, Abu Zaharin Bin Ahmad

Abstract:

Technological developments in industrial innovation are currently related to interconnected system assistance and distribution networks. This is important in order to enable an electrical load to continue receiving power in the event of its disconnection from the main power grid. This paper presents a method for reliability assessment of interconnected power systems based on a multi-agent system. The multi-agent system consists of four agents. The first is the generator agent, which connects the generator to the grid depending on the state of the reserve margin and the load demand. The second is the load agent, located at the load. The third is the so-called reserve margin agent, which limits the reserve margin to 0-25% depending on the load and the generator unit size. Finally, the reliability calculation agent computes the expected energy not supplied (EENS) and the loss of load expectation (LOLE), and assesses the effect of tie-line capacity, to determine risk levels. The Roy Billinton Test System (RBTS) is used to evaluate the reliability indices by means of the developed JADE package. The estimated reliability results for the interconnected power systems are presented in this paper. The overall reliability of the power system can thereby be improved, the market becomes more concentrated against increasing demand, and the generation units operate in accordance with the reliability indices.
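The two indices named above can be computed from a capacity-outage enumeration, as in the sketch below; the unit sizes, forced outage rates and constant load are illustrative, not RBTS data.

```python
# LOLE and EENS from an enumerated capacity-outage table.
import itertools

units = [(40, 0.03), (40, 0.03), (30, 0.05)]   # (capacity MW, forced outage rate)
load, hours = 70, 8760                          # constant load, one year

lole = eens = 0.0
for state in itertools.product([0, 1], repeat=len(units)):  # 1 = unit on outage
    prob, cap = 1.0, 0.0
    for (c, q), out in zip(units, state):
        prob *= q if out else (1 - q)
        cap += 0 if out else c
    if cap < load:
        lole += prob * hours                    # expected hours/year of load loss
        eens += prob * (load - cap) * hours     # expected MWh/year not supplied
print(f"LOLE = {lole:.2f} h/yr, EENS = {eens:.1f} MWh/yr")
```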

Keywords: Reliability indices, Load expectation, Reserve margin, Daily load, Probability, Multi-agent system.

112 Low Resolution Face Recognition Using Mixture of Experts

Authors: Fatemeh Behjati Ardakani, Fatemeh Khademian, Abbas Nowzari Dalini, Reza Ebrahimpour

Abstract:

Human activity is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in the past years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where, for many reasons, often only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low-resolution face recognition system based on mixture of experts neural networks. To produce the low-resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using nearest neighbor interpolation; applying bicubic interpolation afterwards yields enhanced images, which are given to the Principal Component Analysis feature extractor. Comparison with some of the most closely related methods indicates that the proposed novel model yields an excellent recognition rate in low-resolution face recognition, namely 100% on the training set and 96.5% on the test set.
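The image-preparation pipeline described can be sketched as below: nearest-neighbour down-sampling to 12 × 12, bicubic up-sampling back to 48 × 48, then PCA projection. Random arrays stand in for the ORL faces, and the component count is an assumption.

```python
# Downsample (nearest) -> enhance (bicubic) -> PCA features.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (20, 48, 48)).astype(np.uint8)  # ORL stand-ins

def degrade_enhance(img):
    low = Image.fromarray(img).resize((12, 12), Image.NEAREST)
    return np.asarray(low.resize((48, 48), Image.BICUBIC), float).ravel()

X = np.array([degrade_enhance(f) for f in faces])
feats = PCA(n_components=10).fit_transform(X)   # inputs for the expert networks
print(feats.shape)
```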

Keywords: Low resolution face recognition, Multilayered neural network, Mixture of experts neural network, Principal component analysis, Bicubic interpolation, Nearest neighbor interpolation.

111 Cross Signal Identification for PSG Applications

Authors: Carmen Grigoraş, Victor Grigoraş, Daniela Boişteanu

Abstract:

The standard investigational method for obstructive sleep apnea syndrome (OSAS) diagnosis is polysomnography (PSG), which consists of a simultaneous, usually overnight recording of multiple electro-physiological signals related to sleep and wakefulness. This is an expensive, encumbering and not readily repeated protocol; therefore, there is a need for simpler, easily implemented screening and detection techniques. Identification of apnea/hypopnea events in the screening recordings is the key factor for the diagnosis of OSAS. The analysis of a single-lead electrocardiographic (ECG) signal alone for OSAS diagnosis, which may be done with portable devices at the patient's home, has been the challenge of recent years. A novel artificial neural network (ANN) based approach for feature extraction and automatic identification of respiratory events in ECG signals is presented in this paper. A nonlinear principal component analysis (NLPCA) method was considered for feature extraction, and a support vector machine for classification/recognition. An alternative representation of the respiratory events by means of a Kohonen-type neural network is discussed. Our prospective study was based on male and female OSAS patients of the Clinical Hospital of Pneumology in Iaşi, Romania, as well as on investigated non-OSAS subjects. Our computational analysis includes a learning phase based on cross-signal PSG annotation.

Keywords: Artificial neural networks, feature extraction, obstructive sleep apnea syndrome, pattern recognition, signal processing.

110 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou

Abstract:

Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmospheric parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients G and B are the in-flight calibration coefficients. The high point (LH) and low point (LL) of the dynamic range can then be written as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n being the quantization bit depth of the payload), and the sensor's response linearity (δ) is given by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; and the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance equals 11.0 W·sr−1m−2µm−1, and the radiometric resolution is calculated to be about 0.1845 W·sr−1m−2µm−1. Moreover, to validate the results, the measured radiance is compared with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively. The relative error for the calibration is within 6.6%.
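The calibration fit itself is a one-line least-squares regression, as sketched below; the DN/radiance pairs and the 12-bit depth are invented numbers, not the study's measurements.

```python
# Fit L = G*DN + B and derive the dynamic range and linearity.
import numpy as np

DN = np.array([410, 1270, 2980])                  # three-gray-scale targets (toy)
L = np.array([0.0, 7.1, 21.3])                    # simulated at-sensor radiances
G, B = np.polyfit(DN, L, 1)                       # slope, intercept

n = 12                                            # assumed quantization bit depth
L_low, L_high = B, G * (2**n - 1) + B             # LL = B, LH = G*(2^n - 1) + B
r = np.corrcoef(DN, L)[0, 1]                      # response linearity
print(f"G={G:.4f}, B={B:.2f}, range=[{L_low:.2f}, {L_high:.2f}], r={r:.4f}")
```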

Keywords: Calibration, dynamic range, radiometric resolution, SNR.

109 The Potential of Hybrid Microgrids for Mitigating Power Outage in Lebanon

Authors: R. Chedid, R. Ghajar

Abstract:

Lebanon's electricity crisis continues to escalate. Rationing hours still apply across the country, but at different rates: the capital, Beirut, is subjected to 3-hour cuts, while other cities, towns and villages may endure 9 to 14 hours of power shortage. To mitigate this situation, private diesel generators, distributed illegally all over the country, are being used to bridge the gap in power supply. Almost every building in large cities has its own generator, and individual villages may have more than one generator supplying their loads. These generators, together with their private networks, form incomplete, ill-designed and poorly managed microgrids (MGs), but they can be further developed into renewable-energy-based MGs operating in islanded or grid-connected modes. This paper analyzes the potential of introducing MGs to help resolve the energy crisis in Lebanon. It investigates the usefulness of developing MGs given the prevailing situation of existing private power supply providers, and in light of the national energy policy that supports renewable energy development. A case study of a distribution feeder in a rural area is analyzed using the HOMER software to demonstrate the usefulness of introducing photovoltaic (PV) arrays alongside the existing diesel generators for all stakeholders, namely the developers, the customers, the utility and the community at large. Policy recommendations regarding MG development in Lebanon are presented on the basis of the accumulated experience in private generation and of the privatization and public-private partnership laws.

Keywords: Decentralized systems, microgrids, distributed generation, renewable energy.

108 Evaluation of State of the Art IDS Message Exchange Protocols

Authors: Robert Koch, Mario Golling, Gabi Dreo

Abstract:

During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g. smart phones), wireless sensor networks and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyber space. At the same time, the complexity of IT applications, e.g. because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on the Illinois water system demonstrate impressively. Control systems that were once isolated are nowadays often publicly reachable - a fact that was never intended by their developers. Threats to IT systems do not respect areas of responsibility: especially with regard to Cyber Warfare, IT threats are no longer limited by company or industry boundaries, administrative jurisdictions or state borders. One of the important countermeasures is increased cooperation among the participants, especially in the field of Cyber Defence. Besides political and legal challenges, there are technical ones as well: a better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter the attacker in a coordinated way. Therefore, this publication performs an evaluation of state-of-the-art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.

Keywords: Cyber Defence, Cyber Warfare, Intrusion Detection Information Exchange, Early Warning Systems, Joint Intrusion Detection, Cyber Conflict

107 EZW Coding System with Artificial Neural Networks

Authors: Saudagar Abdul Khader Jilani, Syed Abdul Sattar

Abstract:

Image compression plays a vital role in today's communication. The limitation in allocated bandwidth leads to slower communication; to improve the rate of transmission within the limited bandwidth, the image data must be compressed before transmission. Basically, there are two types of compression: 1) lossy compression and 2) lossless compression. Although lossy compression gives more compression than lossless compression, its accuracy of retrieval is lower. The JPEG and JPEG2000 image compression systems follow Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of the other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet coders. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis over different images is presented for an EZW coding system combined with the error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.
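A heavily simplified sketch of the zerotree idea underlying EZW: one wavelet decomposition and a single dominant pass that marks coefficients significant against the initial threshold. A full EZW coder also tracks zerotree roots and isolated zeros and runs refinement passes; the toy image and "haar" wavelet are assumptions.

```python
# One EZW-style dominant pass over a wavelet-decomposed toy image.
import numpy as np
import pywt

img = np.kron(np.eye(4), np.ones((8, 8))) * 255      # toy 32x32 image
coeffs = pywt.wavedec2(img, "haar", level=3)
flat, _ = pywt.coeffs_to_array(coeffs)               # all coefficients, one array

T = 2 ** int(np.floor(np.log2(np.abs(flat).max())))  # initial EZW threshold
symbols = np.where(np.abs(flat) >= T,
                   np.where(flat >= 0, "P", "N"),    # significant pos/neg
                   "Z")                              # insignificant this pass
print("threshold:", T, "significant:", int((symbols != "Z").sum()))
```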

Keywords: Accuracy, Compression, EZW, JPEG2000, Performance.
