Search results for: minimum mean square error (MMSE)
301 Many-Sided Self Risk Analysis Model for Information Asset to Secure Stability of the Information and Communication Service
Authors: Jin-Tae Lee, Jung-Hoon Suh, Sang-Soo Jang, Jae-Il Lee
Abstract:
Information and communication service providers (ICSP) that are significant in size and provide Internet-based services take administrative, technical, and physical protection measures via the information security check service (ISCS). These protection measures are the minimum actions necessary to secure the stability and continuity of the information and communication services (ICS) that they provide. Information assets are essential to providing ICS, so deciding the relative importance of target assets for protection is a critical procedure. The risk analysis model described in this study, designed to decide the relative importance of information assets, evaluates information assets from many angles in order to choose which ones should be given priority for protection. Many-sided risk analysis (MSRS) grades the importance of information assets based on evaluation of major security check items, evaluation of the dependency on the information and communication facility (ICF) and the influence on potential incidents, and evaluation of major items according to their service classification, in order to identify the ISCS target. MSRS could be an efficient risk analysis model to help ICSPs identify their core information assets and take information protection measures first, so that the stability of the ICS can be ensured.
Keywords: Information Asset, Information Communication Facility, ISCS (Information Security Check Service), Evaluation, Grade.
300 Application of Neural Network in User Authentication for Smart Home System
Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat
Abstract:
Security has been an important issue and concern in smart home systems. Smart home networks consist of a wide range of wired or wireless devices, so there is a possibility that illegal access to some restricted data or devices may happen. Password-based authentication is widely used to identify authorized users because this method is cheap, easy, and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that occur in some authentication systems. The conventional way of training the network using Backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded into the MLP neural network to accelerate the training process. For the data part, 200 sets of UserIDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. Mean Square Error (MSE), training time, and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network has been developed successfully.
Keywords: Neural Network, User Authentication, Smart Home, Security
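As a hedged illustration of the RPROP training speed-up described above, the sketch below implements the per-weight update rule of the simplified RPROP- variant (no weight backtracking) in NumPy; the growth/shrink factors and step limits are the commonly quoted defaults, not values taken from the paper.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_max=50.0, step_min=1e-6):
    """One RPROP weight update: adapt each weight's own step size from gradient signs."""
    sign_change = grad * prev_grad
    # Same gradient sign: grow the step; sign flip: shrink it; zero: keep it.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against the sign of its gradient by its own step size.
    return w - np.sign(grad) * step, step
```

Because only the sign of the gradient is used, the step sizes adapt quickly and the network typically needs far fewer epochs than plain backpropagation, which is the effect exploited for the password-mapping task above.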
299 General Regression Neural Network and Back Propagation Neural Network Modeling for Predicting Radial Overcut in EDM: A Comparative Study
Authors: Raja Das, M. K. Pradhan
Abstract:
This paper presents a comparative study of two neural network models, the General Regression Neural Network (GRNN) and the Back Propagation Neural Network (BPNN), used to estimate the radial overcut produced during Electrical Discharge Machining (EDM). Four input parameters have been employed: discharge current (Ip), pulse on time (Ton), duty fraction (Tau), and discharge voltage (V). Artificial intelligence techniques have recently emerged as effective tools that can replace time-consuming procedures in various scientific and engineering applications, particularly in the prediction and estimation of complex and nonlinear processes. Both networks are trained, and the prediction results are tested with an unseen validation set from the experiment and analysed. The performance of both networks is found to be in good agreement, with an average percentage error of less than 11%, and the correlation coefficient obtained for the validation data set is more than 91% for both GRNN and BPNN. However, GRNN is much faster to train than BPNN and is often more accurate. GRNN requires more memory space to store the model, but it features fast learning that does not require an iterative procedure and a highly parallel structure. GRNN networks are slower than multilayer perceptron networks at classifying new cases.
Keywords: Electrical-discharge machining, General Regression Neural Network, Back-propagation Neural Network, Radial Overcut.
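A GRNN is essentially Gaussian-kernel (Nadaraya-Watson) regression, which is why it needs no iterative training. Below is a minimal sketch; the EDM samples and the smoothing parameter sigma are illustrative assumptions, not the paper's data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: Gaussian-kernel weighted average of the training targets."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to all patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer activations
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

# Hypothetical samples: columns Ip, Ton, Tau, V; target = radial overcut (mm)
X = np.array([[8, 100, 0.5, 40], [12, 200, 0.6, 45], [16, 300, 0.7, 50]], float)
y = np.array([0.11, 0.17, 0.24])
mu, sd = X.mean(axis=0), X.std(axis=0)               # standardise before kernel distances
Xq = (np.array([[10.0, 150.0, 0.55, 42.0]]) - mu) / sd
print(grnn_predict((X - mu) / sd, y, Xq))
```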
298 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions
Authors: Abdelgawad, Salah El-Tahawy
Abstract:
This paper aimed to establish econometric model equations for the Nile Delta region in Egypt, which will provide a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathway in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces representing the Nile Delta region from January 2015 to December 2015. The obtained results indicated that there was a significant association between the degree of LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during June, July, and August and gradually decreased to the lowest rate in January, February, and December. The model obtained showed that increases in these climate factors were associated with a clear increase in LSD outbreaks in the Nile Delta of Egypt. Model validation was done with the root mean square error (RMSE) and mean bias (MB), which compared the number of LSD outbreaks expected with the number of observed outbreaks and estimated the confidence level of the model. The RMSE was 1.38% and the MB was 99.50%, confirming that the established model describes the current association between LSD outbreaks and changes in climate factors and can also be used as a basis for predicting LSD outbreaks under future climate change.
Keywords: LSD, climate factors, econometric models, Nile Delta.
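For reference, the two validation statistics can be computed as below; the RMSE is standard, while the mean bias is expressed here as the ratio of mean expected to mean observed counts in percent, which is an assumption about how the 99.50% figure is defined. The monthly counts are hypothetical.

```python
import numpy as np

def rmse(observed, expected):
    observed, expected = np.asarray(observed, float), np.asarray(expected, float)
    return np.sqrt(np.mean((observed - expected) ** 2))

def mean_bias_percent(observed, expected):
    return 100.0 * np.mean(expected) / np.mean(observed)

obs = [2, 3, 5, 9, 14, 18, 21, 19, 12, 7, 4, 2]   # hypothetical observed outbreaks per month
exp = [2, 4, 5, 8, 13, 19, 20, 18, 13, 7, 4, 3]   # hypothetical model-expected outbreaks
print(f"RMSE = {rmse(obs, exp):.2f}, MB = {mean_bias_percent(obs, exp):.2f}%")
```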
297 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate
Abstract:
Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand was found by estimating the household appliances multiplied by their ratings and the duration of their operation; in this paper, by contrast, information available for electrified villages is used to predict the demand, since villages have broadly similar lifestyles. This paper describes a method used to predict the average energy consumed in each two-month period by every consumer living in a village with an Artificial Neural Network (ANN). The input data are collected using a regional survey of samples of consumers representing typical living conditions, household appliances, and energy consumption, and the output data are collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand for different consumers from four villages across the months of the year is approximately 12 kWh/day, and the model estimates the average demand per day for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which contains the Neural Network Toolbox, was used.
Keywords: Artificial neural network, load estimation, regional survey, rural electrification.
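The reported accuracy figure is a mean absolute percent error (MAPE); a minimal sketch of the metric, on hypothetical bi-monthly readings, follows.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percent error between metered and ANN-predicted consumption."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

actual    = [720, 680, 810, 905, 760]   # hypothetical kWh per two-month period
predicted = [700, 730, 790, 980, 735]
print(f"MAPE = {mape(actual, predicted):.1f}%")
```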
296 Effect of Different Media and Mannitol Concentrations on Growth and Development of Vandopsis lissochiloides (Gaudich.) Pfitz. under Slow Growth Conditions
Authors: J. Linjikao, P. Inthima, A. Kongbangkerd
Abstract:
In vitro conservation of orchid germplasm provides an effective technique for ex situ conservation of orchid diversity. In this study, an efficient protocol for in vitro conservation of Vandopsis lissochiloides (Gaudich.) Pfitz. plantlets under slow growth conditions was investigated. Plantlets were cultured on different strengths of Vacin and Went medium (½VW and ¼VW) supplemented with different concentrations of mannitol (0, 2, 4, 6 and 8%) and sucrose (0 and 3%), plus 50 g/L potato extract and 150 mL/L coconut water. The cultures were incubated at 25±2 °C and maintained under 20 µmol/m²s light intensity for 24 weeks without subculture. At the end of the preservation period, the plantlets were subcultured to fresh medium for growth recovery. The results showed that the highest leaf number per plantlet was observed on ¼VW medium without added sucrose and mannitol, while the highest root number per plantlet was found on ½VW with 3% sucrose and no mannitol after 24 weeks of in vitro storage. The maximum numbers of leaves (5.8 leaves) and roots (5.0 roots) of preserved plantlets were produced on ¼VW medium without added sucrose and mannitol. Therefore, ¼VW medium without added sucrose and mannitol provided the best minimum growth conditions for medium-term storage of V. lissochiloides plantlets.
Keywords: Preservation, Vandopsis, germplasm, in vitro.
295 Hiding Data in Images Using PCP
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
In recent years, everything is trending toward digitalization, and with the rapid development of Internet technologies, digital media needs to be transmitted conveniently over the network. Attacks, misuse, or unauthorized access to information is of great concern today, which makes the protection of documents transmitted through digital media a priority problem. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier such as an image or audio file. It is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work, the authors propose a novel steganographic method for hiding information within the spatial domain of a gray scale image. The proposed approach works by selecting the embedding pixels using a mathematical function, finding the 8-neighborhood of each selected pixel, and mapping each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is performed to find out whether the selected pixel or its neighbor lies on the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
Keywords: Cover Image, LSB, Pixel Coordinate Position (PCP), Stego Image.
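The abstract does not spell out the selection function or the exact bit-to-coordinate mapping, so the sketch below is only a simplified stand-in: it picks interior pixels pseudo-randomly from a shared seed and writes one message bit into the least significant bit of each of the 8 neighbours, avoiding boundary issues by construction. It reproduces the minimal-degradation property but is not the paper's PCP mapping.

```python
import numpy as np

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def select_pixels(img, n, seed=7):
    """Stand-in for the paper's mathematical selection function: pseudo-random
    interior pixels, reproducible from a seed shared by sender and receiver."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(1, img.shape[0] - 1, size=n)
    cols = rng.integers(1, img.shape[1] - 1, size=n)
    return list(zip(rows, cols))

def embed(img, bits, seed=7):
    """Write one message bit into the LSB of each neighbour of each selected pixel."""
    stego, k = img.copy(), 0
    for r, c in select_pixels(img, (len(bits) + 7) // 8, seed):
        for dr, dc in NEIGHBOURS:
            if k >= len(bits):
                return stego
            stego[r + dr, c + dc] = (stego[r + dr, c + dc] & 0xFE) | bits[k]
            k += 1
    return stego

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
bits = [int(b) for b in format(ord('A'), '08b')]
stego = embed(cover, bits)
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # <= 1: minimal degradation
```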
294 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies
Authors: Z. M. Najmi
Abstract:
Over the years, the question around Artificial Intelligence has always been one with many answers. Whether by means of use in business and industry or complicated algorithmic programming, management of these technologies has always been the core focus. More recently, technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and had a hand in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems, and Natural Language Processing will all be features to consider moving forward. Our previous interventions with AI applications have resulted in increased productivity but have raised concerns about the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; the benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, assembled to produce an understanding of the timeline of AI.
Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.
293 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms
Authors: Alper Akın, İbrahim Aydoğdu
Abstract:
This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift, and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm combining teaching-learning based optimization (TLBO) and harmony search (HS) is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool that mitigates their individual weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large-scale steel building is presented, and the results are compared to previously obtained results available in the literature.
Keywords: Optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame.
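To make the TLBO half of the hybrid more concrete, the sketch below performs one teacher phase over a population of candidate designs; the quadratic objective is a toy stand-in for frame weight, and the learner phase, the HS improvisation step, and the LRFD-AISC constraint checks of the actual method are omitted.

```python
import numpy as np

def tlbo_teacher_phase(pop, weights, rng):
    """One TLBO teacher phase: move every candidate toward the best (lightest) design.
    pop: (n_learners, n_vars) design variables; weights: objective value per candidate."""
    teacher = pop[np.argmin(weights)]          # best design found so far
    mean = pop.mean(axis=0)                    # class mean
    Tf = rng.integers(1, 3)                    # teaching factor, 1 or 2
    r = rng.random(pop.shape)
    return pop + r * (teacher - Tf * mean)     # candidate moves (accepted only if better)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (10, 2))              # toy 2-variable design space
w = (pop ** 2).sum(axis=1)                     # toy weight surrogate
new_pop = tlbo_teacher_phase(pop, w, rng)
print(((new_pop ** 2).sum(axis=1) < w).sum(), "of 10 learners improved")
```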
292 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition
Authors: J. K. Adedeji, S. T. Ijatuyi
Abstract:
The application of a neural network with pattern recognition to study fluid dynamics and predict groundwater reservoir properties is presented in this research. Conventional manual geophysical survey methods have proven inadequate in basement environments, hence the need for intelligent computing such as neural network prediction. A non-linear neural network with an XOR (exclusive OR) output in an 8-bit configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical basement crystalline rock. The control variables are the apparent resistivity of the weathered layer (p1), the fractured layer (p2), and the depth (h), while the dependent variable is the flow parameter (F=λ). The algorithm used to train the neural network is back-propagation, coded in C++ and run for 300 epochs. The neural network was able to map out the flow channels and detect how they behave to form viable storage within the strata. The neural network model showed that an important variable gr (gravitational resistance) can be deduced from the elevation and the apparent resistivity pa. The model results from SPSS showed that the coefficients a, b, and c are statistically significant, with reduced standard error at 5%.
Keywords: Neural network, gravitational resistance, pattern recognition, non-linear.
291 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems across engineering, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems that notoriously suffer from low SNR, at the cost of increased computational complexity.
Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.
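A minimal sketch of the comparison follows, assuming a flat AWGN channel only (no Ricean fading) and a fixed mid-point threshold standing in for the matched-filter decision, which is optimal on that simplified channel; the SVM settings are library defaults, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ook_samples(n, snr_db):
    """Noisy on-off keyed amplitudes in AWGN (unit 'on' level assumed)."""
    bits = rng.integers(0, 2, n)
    sigma = 10 ** (-snr_db / 20)
    return (bits + sigma * rng.standard_normal(n)).reshape(-1, 1), bits

X_tr, y_tr = ook_samples(2000, snr_db=3)
X_te, y_te = ook_samples(2000, snr_db=3)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
ber_svm = np.mean(svm.predict(X_te) != y_te)
ber_thr = np.mean((X_te.ravel() > 0.5).astype(int) != y_te)   # mid-point threshold detector
print(f"BER  SVM: {ber_svm:.3f}   threshold: {ber_thr:.3f}")
```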
290 Optimum Replacement Policies for Kuwait Passenger Transport Company Busses: Case Study
Authors: Hilal A. Abdelwali, Elsayed E.M. Ellaimony, Ahmad E.M. Murad, Jasem M.S. Al-Rajhi
Abstract:
As a vehicle accumulates operation over its life, some elements fail and deteriorate with time. This leads us to carry out maintenance, repair, tune-ups, or a full overhaul. After a certain period, the deterioration of vehicle elements increases with time, which causes a steep rise in maintenance operations and their costs. The logical decision at this point is to replace the current vehicle with a new one offering minimum failure and maximum income. The importance of studying vehicle replacement problems comes from the increase in out-of-service days due to the many deteriorations in vehicle parts. These deteriorations increase year after year, raising operating costs and decreasing vehicle income. Vehicle replacement aims to determine the optimum time to keep, maintain, overhaul, renew, and replace vehicles. This leads to an improvement in vehicle income, total operating costs, maintenance cost, fuel and oil costs, ton-kilometers, vehicle and engine performance, vehicle noise, vibration, and pollution. The aim of this paper is to find the optimum replacement policies for the Kuwait Passenger Transport Company (KPTCP) fleet of busses. The objective of these policies is to maximize the busses' pure profits. The dynamic programming (D.P.) technique is used to generate the optimal replacement policies for the busses.
Keywords: Replacement Problem, Automotive Replacement, Dynamic Programming, Equipment Replacement, K.P.T.C.
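A minimal sketch of the backward dynamic program over (year, vehicle age) states is given below; the price, income, upkeep, and salvage figures are hypothetical placeholders, not KPTCP data.

```python
def optimal_policy(horizon, price, income, upkeep, salvage):
    """Backward DP over (year, bus age): choose keep vs. replace to maximize profit.
    income[a], upkeep[a], salvage[a] are annual revenue, maintenance cost and resale
    value for a bus of age a; price is the cost of a new bus."""
    max_age = len(income) - 1
    V = [[0.0] * (max_age + 1) for _ in range(horizon + 1)]
    act = [[None] * (max_age + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for a in range(max_age + 1):
            keep = float("-inf")
            if a < max_age:                            # cannot keep a bus past max_age
                keep = income[a] - upkeep[a] + V[t + 1][a + 1]
            repl = salvage[a] - price + income[0] - upkeep[0] + V[t + 1][1]
            V[t][a], act[t][a] = max((keep, "keep"), (repl, "replace"))
    return V, act

V, act = optimal_policy(5, price=60,                   # hypothetical figures, thousand KD
                        income=[30, 28, 25, 21, 16],
                        upkeep=[4, 6, 9, 13, 18],
                        salvage=[60, 42, 30, 20, 12])
print("first-year decision for a 2-year-old bus:", act[0][2])
```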
289 Experimental Study on Strength and Durability Properties of Bio-Self-Cured Fly Ash Based Concrete under Aggressive Environments
Authors: R. Malathy
Abstract:
High performance concrete is characterized not only by its high strength, workability, and durability but also by its smartness in performing without human care from the first day. If the concrete can cure on its own, without external curing and without compromising its strength and durability, it is said to be high performance self-curing concrete. In this paper, an attempt is made to study the performance of internally cured concrete using biomaterials, namely Spinacea oleracea and Calatropis gigantea, as self-curing agents, and it is compared with the performance of concrete with an existing self-cure chemical, namely polyethylene glycol. The paper focuses on workability, strength, and durability studies on M20, M30, and M40 grade concretes in which 30% of the cement is replaced with fly ash. The optimum dosages of Spinacea oleracea, Calatropis gigantea, and polyethylene glycol were taken as 0.6%, 0.24%, and 0.3% by weight of cement from earlier research studies. From the slump tests performed, it was found that there is minimal variation between conventional concrete and self-cured concrete. The strength activity index is determined by taking the 28-day compressive strength of conventionally cured concrete as unity; for self-cured concrete it is more than 1 after 28 days and more than 1.15 after 56 days because of the secondary reaction of the fly ash. The performance of the concretes in aggressive environments such as acid attack, sea water attack, and chloride attack was studied, and the results are positive and encouraging for bio-self-cured concretes, which are ecofriendly, cost effective, and high performance materials.
Keywords: Biomaterials, Calatropis gigantea, polyethylene glycol, Spinacea oleracea, self-curing concrete.
288 Optimal Sliding Mode Controller for Knee Flexion During Walking
Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem
Abstract:
This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.
Keywords: Optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons.
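A hedged sketch of the control law on a simplified shank model is shown below; the inertia, friction, and mass parameters, the sliding-surface slope, and the switching gain are all assumed values, and the tanh term replaces the discontinuous sign function to soften chattering.

```python
import numpy as np

# Simplified shank: J*th'' + b*th' + m*g*l*sin(th) = u   (all parameter values assumed)
J, b, m, g, l = 0.35, 0.08, 3.5, 9.81, 0.25
lam, k = 8.0, 12.0                       # sliding-surface slope and switching gain

def smc_torque(th, dth, th_ref, dth_ref, ddth_ref):
    """Sliding mode law: equivalent torque plus a smoothed switching term on s."""
    e, de = th - th_ref, dth - dth_ref
    s = de + lam * e                                    # sliding surface
    u_eq = J * (ddth_ref - lam * de) + b * dth + m * g * l * np.sin(th)
    return u_eq - k * np.tanh(s / 0.05)

dt, th, dth = 1e-3, 0.0, 0.0                            # forward-Euler tracking test
for i in range(4000):
    t = i * dt
    ref, dref, ddref = 0.5 * np.sin(t), 0.5 * np.cos(t), -0.5 * np.sin(t)
    u = smc_torque(th, dth, ref, dref, ddref)
    ddth = (u - b * dth - m * g * l * np.sin(th)) / J
    th, dth = th + dt * dth, dth + dt * ddth
print(f"tracking error after 4 s: {abs(th - 0.5 * np.sin(4.0)):.4f} rad")
```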
287 Effect of Wood Vinegar for Controlling on Housefly (Musca domestica L.)
Authors: U. Pangnakorn, S. Kanlaya, C. Kuntha
Abstract:
Raw wood vinegar was purified by both standing and filtering methods. Toxicity tests were conducted under laboratory conditions by the topical application method (contact poison) and the feeding method (stomach poison). Larvicidal activities of wood vinegar at five different concentrations (10, 15, 20, 25, and 30%) were studied against second instar larvae of the housefly (Musca domestica L.). Four replicates were maintained for all treatments and controls. Larval mortality was recorded up to 96 hours and compared with larval survivability for the two methods of larvicidal bioassay. Percent pupation and percent adult emergence were observed in treated M. domestica. The study revealed that the feeding method gave higher efficiency than the topical application method. Larval mortality increased with increasing concentration of wood vinegar and duration of exposure. No mortality was found in treated M. domestica larvae at the minimum concentration of 10% wood vinegar throughout the experiments. The treated larvae were maintained up to pupation and adult emergence. At the maximum concentration of 30%, larval duration was extended to 11 days in M. domestica with the topical application method and 9 days with the feeding method. Similarly, the pupal durations also increased with increasing concentration of the treatments (16 and 24 days at 30% concentration for the topical application and feeding methods, respectively).
Keywords: Housefly (Musca domestica L.), wood vinegar, mortality, topical application, feeding
286 Longitudinal Shear Modulus of Single Aramid, Carbon and Glass Fibres by Torsion Pendulum Tests
Authors: I Prasanna Kumar, Satya Prakash Kushwaha, Preetamkumar Mohite, Sudhir Kamle
Abstract:
The longitudinal shear moduli of single aramid, carbon, and glass fibres are measured in the present study. The well-known concept of a freely oscillating torsion pendulum has been used to characterize the torsional modulus. A simple freely oscillating torsion pendulum setup is designed with two different types of plastic discs, horizontal and vertical, as the known mass of the pendulum. The time period of the torsional oscillation is measured to determine the torsional rigidity of the fibre, and the shear modulus of the fibre is then calculated from its torsional rigidity. The mean shear moduli of aramid, carbon, and glass fibres are 6.22±0.09, 18.5±0.91, and 38.1±3.55 GPa with the horizontal disc pendulum and 6.19±0.13, 18.1±1.34, and 39.5±1.83 GPa with the vertical disc pendulum, respectively. The results obtained by the two pendulums differed by less than 5% and agreed well with the results reported in the literature for these three types of fibres. Detailed uncertainty calculations are carried out for the measurements. It is seen that the scatter as well as the uncertainty (or error) in the measured shear modulus of these fibres is less than 10%. For aramid fibres, the effect of gauge length on the shear modulus value is also studied. It is verified that the scatter in the measured shear modulus value increases with gauge length and with scatter in fibre diameter.
Keywords: Aramid, carbon and glass fibres, longitudinal shear modulus, torsion pendulum.
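For a circular fibre, the shear modulus follows from the measured period through k_t = G*J/L with J = pi*d^4/32 and T = 2*pi*sqrt(I/k_t), giving G = 128*pi*I*L/(T^2*d^4). The sketch below evaluates this relation; the period, disc inertia, and fibre dimensions are hypothetical numbers chosen only to land near the reported carbon-fibre value.

```python
import math

def shear_modulus(period_s, inertia_kgm2, gauge_length_m, diameter_m):
    """G = 128*pi*I*L / (T^2 * d^4) for a circular fibre in a torsion pendulum."""
    return (128.0 * math.pi * inertia_kgm2 * gauge_length_m
            / (period_s ** 2 * diameter_m ** 4))

T = 4.2        # measured oscillation period, s (hypothetical)
I = 6.5e-10    # disc moment of inertia, kg*m^2 (hypothetical)
G = shear_modulus(T, I, gauge_length_m=0.025, diameter_m=12e-6)
print(f"G = {G / 1e9:.1f} GPa")
```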
285 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision systems are in intense demand for autonomous purposes in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, and it is integrated with a 7A6 Series Manipulator for automatic object gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted within a convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: Deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator.
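One step of such a pipeline, back-projecting a detected bounding-box centre and its depth reading into 3D camera coordinates with the pinhole model, is sketched below; the intrinsics and the hand-eye transform are assumed placeholders, not the SR300's calibration or the paper's values.

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth into 3D camera coordinates (pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0      # assumed VGA-style intrinsics
p_cam = pixel_to_camera_xyz(352, 261, 0.48, fx, fy, cx, cy)

T_base_cam = np.eye(4)                           # placeholder hand-eye calibration matrix
p_robot = (T_base_cam @ np.append(p_cam, 1.0))[:3]
print("camera frame:", p_cam, " robot frame:", p_robot)
```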
284 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS
Authors: E. Ramaraj, A. Padmapriya
Abstract:
In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPD. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network approach. We first illustrate our approach with a simple network and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any given network handled by the Bayesian approach, it identifies exactly the same result with minimal effort. We further extend the result to a more generic one: it holds for any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some consideration of hot spots in the networks.
Keywords: Conditional Probability Distribution, Congestion hot spots, Operational Networks, Traffic Engineering.
283 Controlling of Multi-Level Inverter under Shading Conditions Using Artificial Neural Network
Authors: Abed Sami Qawasme, Sameer Khader
Abstract:
This paper describes the effects of photovoltaic voltage changes on a multi-level inverter (MLI) due to solar irradiation variations, and methods to overcome these changes. The irradiation variation affects the generated voltage, which in turn varies the switching angles required to turn on the inverter power switches in order to obtain minimum harmonic content in the output voltage profile. A Genetic Algorithm (GA) is used to solve the harmonic elimination equations of eleven-level inverters with equal and unequal dc sources. After that, an artificial neural network (ANN) algorithm is proposed to generate the appropriate set of switching angles for the MLI at any level of input dc source voltage, minimizing the total harmonic distortion (THD) to an acceptable limit. The MATLAB/Simulink platform is used as a simulation tool, and Fast Fourier Transform (FFT) analyses are carried out on the output voltage profile to verify the reliability and accuracy of the applied technique for controlling the MLI harmonic distortion. According to the simulation results, the obtained THD for equal dc sources is 9.38%, while for variable or unequal dc sources it varies between 10.26% and 12.93% as the input dc voltage varies between 4.47 V and 11.43 V, respectively. The proposed ANN algorithm provides satisfactory simulation results that match those obtained by alternative algorithms.
Keywords: Multi level inverter, genetic algorithm, artificial neural network, total harmonic distortion.
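For a staircase multilevel waveform with equal dc sources, the odd-harmonic amplitudes are V_n = (4*Vdc/(n*pi)) * sum_k cos(n*theta_k), which is the relation behind the harmonic-elimination equations. The sketch below evaluates these amplitudes and the THD for a set of placeholder angles, not the GA/ANN solutions reported above.

```python
import numpy as np

def harmonic_amplitudes(theta_deg, vdc=1.0, n_max=49):
    """Odd-harmonic amplitudes of an equal-source staircase multilevel waveform."""
    th = np.radians(np.asarray(theta_deg, float))
    orders = np.arange(1, n_max + 1, 2)
    V = np.array([4 * vdc / (n * np.pi) * np.cos(n * th).sum() for n in orders])
    return orders, V

def thd_percent(theta_deg, vdc=1.0, n_max=49):
    _, V = harmonic_amplitudes(theta_deg, vdc, n_max)
    return 100.0 * np.sqrt(np.sum(V[1:] ** 2)) / abs(V[0])

angles = [6.57, 18.94, 27.18, 45.14, 62.24]   # placeholder angles for an 11-level inverter
print(f"THD up to the 49th harmonic: {thd_percent(angles):.2f}%")
```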
282 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizure is the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on the data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
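A toy version of the sliding-window feature extraction and MLP classification is sketched below on synthetic data; the features (mean, standard deviation, line length, theta-band power ratio) are generic choices and not the Training Builder feature set, and the rhythmic "seizure" signal is a stand-in for real EEG.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(signal, fs, win_s=2.0, step_s=1.0):
    """Slide a window over one channel and extract simple per-window features."""
    win, step, feats = int(win_s * fs), int(step_s * fs), []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        spec = np.abs(np.fft.rfft(w)) ** 2
        freqs = np.fft.rfftfreq(win, 1 / fs)
        theta = spec[(freqs >= 4) & (freqs < 8)].sum() / (spec.sum() + 1e-12)
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).sum(), theta])
    return np.array(feats)

fs, rng = 256, np.random.default_rng(0)
t = np.arange(60 * fs) / fs
background = rng.standard_normal(t.size)                      # "inter-ictal" stand-in
seizure = background + 3 * np.sin(2 * np.pi * 6 * t)          # rhythmic "ictal" stand-in
X = np.vstack([window_features(background, fs), window_features(seizure, fs)])
y = np.r_[np.zeros(len(X) // 2), np.ones(len(X) // 2)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```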
281 Evaluating the Nexus between Energy Demand and Economic Growth Using the VECM Approach: Case Study of Nigeria, China, and the United States
Authors: Rita U. Onolemhemhen, Saheed L. Bello, Akin P. Iwayemi
Abstract:
The effectiveness of energy demand policy depends on identifying the key drivers of energy demand in both the short run and the long run. This paper examines the influence of regional differences on the link between energy demand and other explanatory variables for Nigeria, China, and the USA using the Vector Error Correction Model (VECM) approach. The study employed annual time series data on energy consumption (ED), real gross domestic product (GDP) per capita (RGDP), real energy prices (P), and urbanization (N) for a thirty-six-year sample period. The time-series data are sourced from the World Bank's World Development Indicators (WDI, 2016) and the US Energy Information Administration (EIA). Results from the study show that all the independent variables (income, urbanization, and price) substantially affect long-run energy consumption in Nigeria, the USA, and China, whereas income has no significant effect on short-run energy demand in the USA and Nigeria. In addition, the long-run effect of urbanization is relatively stronger in China. Urbanization is a key factor in energy demand; it is therefore recommended that more attention be given to the development of rural communities to reduce the inflow of migrants into urban communities, which drives the increase in energy demand, and that energy excesses be penalized while energy management is incentivized.
Keywords: Economic growth, energy demand, income, real GDP, urbanization, VECM.
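A minimal sketch of the estimation step with statsmodels is given below on synthetic series; the variables only stand in for ln(ED), ln(RGDP), ln(P), and ln(N), which in the study come from the WDI and EIA sources.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng, n = np.random.default_rng(0), 36
trend = np.cumsum(0.02 + 0.01 * rng.standard_normal(n))       # shared stochastic trend
data = pd.DataFrame({
    "ln_ED":   trend + 0.05 * rng.standard_normal(n),
    "ln_RGDP": trend + 0.05 * rng.standard_normal(n),
    "ln_P":    np.cumsum(0.01 * rng.standard_normal(n)),
    "ln_N":    np.linspace(3.0, 3.6, n) + 0.02 * rng.standard_normal(n),
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=1).rank  # Johansen-based rank choice
res = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="ci").fit()
print(res.alpha)   # error-correction loadings: the short-run adjustment speeds
```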
280 An Algorithm Proposed for FIR Filter Coefficients Representation
Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman
Abstract:
Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e., power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply and accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output through the minimization of the non-zero coefficients.
Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.
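The paper's algorithm modifies both the values and the count of non-zero coefficients; as a simplified stand-in, the sketch below merely zeroes taps below an assumed threshold and checks how little the magnitude response moves, which illustrates the trade-off the algorithm exploits.

```python
import numpy as np
from scipy.signal import firwin, freqz

h = firwin(numtaps=101, cutoff=0.25)                # reference lowpass (pulse-shaping style)
threshold = 0.01 * np.abs(h).max()                  # assumed pruning rule, not the paper's
h_pruned = np.where(np.abs(h) >= threshold, h, 0.0)

w, H = freqz(h, worN=1024)
_, Hp = freqz(h_pruned, worN=1024)
deviation_db = 20 * np.log10(np.max(np.abs(np.abs(H) - np.abs(Hp))) + 1e-12)

print(f"non-zero taps: {np.count_nonzero(h)} -> {np.count_nonzero(h_pruned)}")
print(f"worst-case magnitude deviation: {deviation_db:.1f} dB")
```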
279 Spectral Amplitude Coding Optical CDMA: Performance Analysis of PIIN Reduction Using VC Code Family
Authors: Hassan Yousif Ahmed, Ibrahima Faye, N.M.Saad, S.A. Aljined
Abstract:
Multi-user interference (MUI) is the main reason for system deterioration in the Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. MUI increases with the number of simultaneous users, resulting in a higher bit error probability and limiting the maximum number of simultaneous users. On the other hand, the phase induced intensity noise (PIIN) problem, which originates from the spontaneous emission of the broadband source and from MUI, severely limits system performance and should be addressed as well. Since MUI is caused by the interference of simultaneous users, reducing the MUI value as much as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is presented. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit error probability. A comparison study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic-Congruence (MQC) has been carried out.
Keywords: FBG, MUI, PIIN, SAC-OCDMA, VCC.
278 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform
Authors: Vijaya Prakash.A.M, K.S.Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes the hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional discrete cosine transform blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is validated using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. Simulation is done using ModelSim, and the simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from CADENCE. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
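The row-column decomposition mentioned above (two 1-D DCT passes with a transposition in between) can be mirrored in software as sketched below; the coefficient thresholding is only a crude stand-in for the quantisation used in an actual JPEG-style chain.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2_rowcol(block):
    """8x8 2D DCT as two 1-D passes with a transposition, like the hardware datapath."""
    return dct(dct(block, axis=1, norm="ortho").T, axis=1, norm="ortho").T

def idct2_rowcol(coeffs):
    return idct(idct(coeffs, axis=1, norm="ortho").T, axis=1, norm="ortho").T

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
coeffs = dct2_rowcol(block)
coeffs[np.abs(coeffs) < 10] = 0                     # crude quantisation stand-in
recon = idct2_rowcol(coeffs)
print("max reconstruction error:", np.abs(recon - block).max())
```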
277 Evaluation of Mixed-Mode Stress Intensity Factor by Digital Image Correlation and Intelligent Hybrid Method
Authors: K. Machida, H. Yamada
Abstract:
Displacement measurement was conducted on compact normal and shear specimens made of an acrylic homogeneous material subjected to mixed-mode loading by digital image correlation. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress-intensity factor at the free surface is discussed from the viewpoint of both the experiment and 3-D finite element analysis. The surface images before and after deformation were taken by a CMOS camera, and we developed a system that enables real-time stress analysis based on digital image correlation and inverse problem analysis. The greater part of the processing time of this system was spent on displacement analysis, so we worked to speed up this portion. In the case of a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress-intensity factor. The 9-point elliptic paraboloid approximation could not analyze displacements of submicron order with high accuracy. The analysis accuracy of the displacement was improved considerably by introducing the Newton-Raphson method, which takes the deformation of a subset into consideration. The stress-intensity factor was evaluated with high accuracy, with an error of less than 1%.
Keywords: Digital image correlation, mixed mode, Newton-Raphson method, stress intensity factor.
276 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters adapted to the conditions existing in Morocco and in other developing countries. While several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rates to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It provides an improvement in accuracy over previous line source Gaussian plume formulas, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized source and analytical line source formulas remarkably minimizes the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: Air pollution, dispersion, emissions, line sources, road traffic, urban transport.
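The discretized line source idea can be sketched as below: the road is split into point sources whose Gaussian plume contributions are summed at the receptor. The power-law dispersion parameters and the source strength are rough assumptions for illustration, not DISPOLSPEM's formulation.

```python
import numpy as np

def point_plume(q, x, y, z, u, h, sigma_y, sigma_z):
    """Gaussian plume concentration from one point source of strength q (g/s)."""
    gy = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    gz = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
          + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * gy * gz

def line_source(q_per_m, x0, y0, x1, y1, receptor, u, h=0.5, n=200):
    """Discretize a road segment into n point sources and sum their contributions."""
    xs, ys = np.linspace(x0, x1, n), np.linspace(y0, y1, n)
    seg_len = np.hypot(x1 - x0, y1 - y0) / n
    xr, yr, zr = receptor
    c = 0.0
    for xs_i, ys_i in zip(xs, ys):
        dx, dy = xr - xs_i, yr - ys_i                 # wind assumed to blow along +x
        if dx <= 0:
            continue                                  # receptor not downwind of this element
        sigma_y, sigma_z = 0.22 * dx ** 0.9, 0.20 * dx ** 0.9   # assumed power laws
        c += point_plume(q_per_m * seg_len, dx, dy, zr, u, h, sigma_y, sigma_z)
    return c

# 1 km road along the y-axis, receptor 100 m downwind at breathing height
print(line_source(0.005, 0, -500, 0, 500, receptor=(100.0, 0.0, 1.5), u=3.0))
```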
275 Investigating Polynomial Interpolation Functions for Zooming Low Resolution Digital Medical Images
Authors: Maninder Pal
Abstract:
Medical digital images usually have low resolution because of the nature of their acquisition. Therefore, this paper focuses on zooming these images to obtain a better level of information for the purpose of medical diagnosis. For this purpose, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and utilizes a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gain information about the pixels of the surrounding area, called the neighborhood pixel region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented, and its performance is evaluated on many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan, and Magnetic Resonance Imaging (MRI). For illustration and simplicity, the results obtained from a CT scanned image of a head are presented. The performance of the algorithm is evaluated in comparison to various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information, and processing time. From the results, the proposed algorithm is found to give better performance than the traditional algorithms.
Keywords: Zooming, interpolation, medical images, resolution.
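For context, the sketch below zooms a synthetic low-resolution image back up with nearest-neighbour and cubic interpolation and compares them by PSNR, one of the metrics listed above; the smooth test pattern merely stands in for a CT slice.

```python
import numpy as np
from scipy.ndimage import zoom

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

yy, xx = np.mgrid[0:256, 0:256]
full = 127.5 * (1 + np.sin(xx / 9.0) * np.cos(yy / 13.0))   # smooth stand-in for a CT slice
low_res = full[::2, ::2]                                    # simulated low-resolution scan

nearest = zoom(low_res, 2, order=0)                         # nearest-neighbour zoom
cubic = zoom(low_res, 2, order=3)                           # cubic-interpolation zoom
print(f"PSNR nearest: {psnr(full, nearest):.1f} dB   cubic: {psnr(full, cubic):.1f} dB")
```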
274 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple and symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on data numerically simulated using SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: Artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis.
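The linked simulation-optimization idea can be sketched in two stages as below: a neural surrogate is fitted to (what would be) SEEPW outputs, then an evolutionary optimizer minimizes a cost proxy subject to a surrogate-predicted seepage constraint. The design variables, cost coefficients, constraint threshold, and the analytic data generator are all assumptions, and differential evolution stands in for the paper's genetic algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

# Stage 1 -- surrogate: learn exit gradient from design variables [floor length L, cutoff depth d].
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 2.0], [25.0, 10.0], size=(300, 2))
exit_grad = 0.9 / (0.08 * X[:, 0] + 0.25 * X[:, 1]) + 0.02 * rng.standard_normal(300)
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0).fit(X, exit_grad)

# Stage 2 -- linked optimization: minimize a cost proxy with a penalty on unsafe exit gradients.
def objective(v):
    L, d = v
    cost = 1.0 * L + 3.0 * d                               # assumed unit costs
    penalty = 1e3 * max(0.0, surrogate.predict([[L, d]])[0] - 0.25)
    return cost + penalty

result = differential_evolution(objective, bounds=[(5, 25), (2, 10)], seed=0, tol=1e-6)
print("optimum [L, d]:", np.round(result.x, 2), " cost:", round(result.fun, 2))
```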
273 Estimation of the Bit Side Force by Using Artificial Neural Network
Authors: Mohammad Heidari
Abstract:
Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast the well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is a difficult, if not impossible, task to identify the real reason why a horizontal well is not producing what was forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion or snaky well patterns created during drilling. Although drillers aim to drill a constant-inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors which play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant-inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force at the bit nil. This approach assumes that no formation force exists at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques. In this study, neural networks have been employed as a general approximation tool for estimation of the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameters, and the results are compared with exact analysis. A Back Propagation Neural network (BPN) is used for the approximation of bit side forces. The resulting low relative error of the test indicates the usability of the BPN in this area.
Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.
272 Formation of Protective Aluminum-Oxide Layer on the Surface of Fe-Cr-Al Sintered-Metal-Fibers via Multi-Stage Thermal Oxidation
Authors: Loai Ben Naji, Osama M. Ibrahim, Khaled J. Al-Fadhalah
Abstract:
The objective of this paper is to investigate the formation and adhesion of a protective aluminum-oxide (Al2O3, alumina) layer on the surface of iron-chromium-aluminum alloy (Fe-Cr-Al) sintered-metal-fibers. The oxide-scale layer was developed via multi-stage thermal oxidation at 930 °C for 1 hour, followed by 1 hour at 960 °C, and finally 2 hours at 990 °C. Scanning Electron Microscope (SEM) images show that the multi-stage thermal oxidation resulted in the formation of predominantly platelet-like and whisker-like Al2O3. SEM images also reveal non-uniform oxide-scale growth on the surface of the fibers. Furthermore, peeling/spalling of the alumina protective layer occurred after minimal handling, which indicates weak adhesion forces between the protective layer and the base metal alloy. Energy Dispersive Spectroscopy (EDS) analysis of the heat-treated Fe-Cr-Al sintered-metal-fibers confirmed the high aluminum content on the surface of the protective layer and the low aluminum content on the exposed base metal alloy surface. In conclusion, the failure of the oxide-scale protective layer exposes the base metal alloy to further oxidation, and the fragile non-uniform oxide scale is not suitable as a support for catalysts.
Keywords: High-temperature oxidation, alumina protective layer, iron-chromium-aluminum alloy, sintered-metal-fibers.