Search results for: BP algorithm
2186 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multiple-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networks are becoming overcrowded. To cope with this congestion, massive MIMO is designed to operate with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to obtain the beamforming gain, a major component of massive MIMO, by transmitting and receiving pilot sequences. All of these benefits are possible only if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS and MMSE; a linear version of MMSE has also been proposed in many research works. We optimized these methods using a genetic algorithm to minimize the mean squared error and to find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that GA works well on the existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that GA-optimized LS outperforms the existing algorithms, as GA reaches an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.
Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
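As a rough illustration of the idea in this abstract, the sketch below seeds a genetic algorithm with the LS channel estimate and evolves candidate channel matrices against a pilot-reconstruction MSE fitness. All dimensions, noise levels, and GA settings are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pilot model Y = H X + N: 4 receive antennas, 2 users, 16 pilot symbols.
nr, nt, npil = 4, 2, 16
H_true = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
X = (rng.standard_normal((nt, npil)) + 1j * rng.standard_normal((nt, npil))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((nr, npil)) + 1j * rng.standard_normal((nr, npil)))
Y = H_true @ X + N

# Conventional LS estimate, used to seed the GA population.
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

def mse(H):
    # Fitness: pilot-reconstruction mean squared error (lower is better).
    return np.mean(np.abs(Y - H @ X) ** 2)

def perturb(H, scale):
    return H + scale * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))

pop = [perturb(H_ls, 0.05) for _ in range(30)]
for _ in range(40):                               # a few GA generations suffice
    pop.sort(key=mse)
    parents = pop[:10]                            # truncation selection
    children = []
    for _ in range(20):
        i, j = rng.choice(10, size=2, replace=False)
        # Arithmetic crossover followed by a small Gaussian mutation.
        children.append(perturb(0.5 * (parents[i] + parents[j]), 0.01))
    pop = parents + children

best = min(pop, key=mse)
print("LS MSE   :", mse(H_ls))
print("GA-LS MSE:", mse(best))
```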
Procedia PDF Downloads 191
2185 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element
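Stage 5 of the pipeline above reduces to evaluating the probability that at least one shortest path of working elements exists. The sketch below computes that probability by inclusion-exclusion over minimal path sets, which yields the same reliability polynomial as the orthogonalization route (the orthogonalization algorithm itself is more efficient for large systems); the path sets and element reliabilities are made up for the example.

```python
from itertools import combinations

# Minimal path sets (element indices) of a toy system and element reliabilities.
paths = [{1, 2}, {1, 3}, {3, 4}]
p = {1: 0.9, 2: 0.8, 3: 0.85, 4: 0.7}

def system_reliability(paths, p):
    # P(at least one path has all elements working), elements independent:
    # inclusion-exclusion over the path events.
    total = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            union = set().union(*combo)
            term = 1.0
            for e in union:
                term *= p[e]
            total += (-1) ** (k + 1) * term
    return total

print(round(system_reliability(paths, p), 4))   # 0.9325 for these values
```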
Procedia PDF Downloads 73
2184 A New Intelligent, Dynamic and Real Time Management System of Sewerage
Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem
Abstract:
The current tools for real-time management of sewer systems are based on two software components: weather-forecasting software and hydraulic-simulation software. The former is an important source of imprecision and uncertainty, while the latter requires long decision time steps because of its computational cost. As a result, the outcomes obtained are generally different from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), replacing weather forecasts by directly using real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was built and tested with rainfalls of different return periods. The gains obtained in rejected volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% of the total volume rejected to the natural environment and 65% in the number of discharges.
Keywords: automation, optimization, paradigm, RTC
Procedia PDF Downloads 299
2183 Applying of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for Estimation of Flood Hydrographs
Authors: Amir Ahmad Dehghani, Morteza Nabizadeh
Abstract:
This paper presents the application of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to flood hydrograph modeling of the Shahid Rajaee reservoir dam located in Iran. The study used 11 flood hydrographs recorded at the Tajan river gauging station. From this dataset, 9 flood hydrographs were chosen to train the model and 2 to test it. Different architectures of the neuro-fuzzy model, varying the membership function and learning algorithm, were designed and trained with different numbers of epochs. The results were evaluated against the observed hydrographs, and the best model structure was chosen according to the least RMSE in each run. To evaluate the efficiency of the neuro-fuzzy model, statistical indices such as the Nash-Sutcliffe criterion and the flood peak discharge error were calculated. In this simulation, the coordinates of a flood hydrograph, including peak discharge, were estimated using the discharge values of earlier time steps as inputs to the neuro-fuzzy model. The results indicate the satisfactory efficiency of the neuro-fuzzy model for flood simulation and demonstrate the suitability of the implemented approach for flood management projects.
Keywords: adaptive neuro-fuzzy inference system, flood hydrograph, hybrid learning algorithm, Shahid Rajaee reservoir dam
Procedia PDF Downloads 478
2182 Optimization and Automation of Functional Testing with White-Box Testing Method
Authors: Reyhaneh Soltanshah, Hamid R. Zarandi
Abstract:
To remain efficient, industries that rely on computer systems must test their software, despite the time and money this consumes. In embedded-system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damage, and software tests increase the price of the final product. The aim of this article is to provide a method that reduces the time and cost of tests based on program structure. First, a complete review of eleven white-box test methods based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119 was carried out. The proposed algorithm is designed using these two versions of the standard, and white-box testing methods that are expensive or provide little coverage have been removed. White-box test methods were applied to each function according to the 29119 standard, and the proposed algorithm was then implemented on the same functions. To speed up the implementation of the proposed method, the Unity framework was used with some modifications; Unity is suitable for embedded software testing because it is open source and can implement white-box test methods. The test items obtained from the two approaches were evaluated using a mathematical ratio; across the various software under test, the proposed method reduced test cost by between 50% and 80% and reached the desired result with the minimum number of test items.
Keywords: embedded software, reduce costs, software testing, white-box testing
Procedia PDF Downloads 55
2181 PET Image Resolution Enhancement
Authors: Krzysztof Malczewski
Abstract:
PET is a widely applied scanning procedure in medical-imaging research. It delivers measurements of function in distinct areas of the human brain while the patient is comfortable, conscious and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving image resolution in clinical Positron Emission Tomography (PET) scanners. Motion artifacts are a well-known side effect in PET studies: images are acquired over a limited period of time, and since patients cannot hold their breath during data gathering, spatial blurring and motion artifacts are the usual result. These may lead to wrong diagnoses. It is shown that the presented approach improves PET spatial resolution in cases where Compressed Sensing (CS) sequences are used. CS aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. Applying CS to PET has the potential for significant scan-time reductions, with visible benefits for patients and health-care economics. In this study the goal is to combine a super-resolution image-enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error.
Keywords: PET, super-resolution, image reconstruction, pattern recognition
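The CS reconstruction step described above amounts to solving an l1-regularized least-squares problem. A minimal sketch using ISTA (iterative soft-thresholding), one standard solver for this problem, is shown below on a synthetic sparse signal; the sensing matrix and sizes are illustrative, not the PET acquisition model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                 # compressed measurements

def ista(A, y, lam=0.01, iters=500):
    # Iterative soft-thresholding: minimizes ||A x - y||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    xh = np.zeros(A.shape[1])
    for _ in range(iters):
        xh = xh - A.T @ (A @ xh - y) / L                  # gradient step
        xh = np.sign(xh) * np.maximum(np.abs(xh) - lam / L, 0.0)  # soft threshold
    return xh

x_hat = ista(A, y)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```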
Procedia PDF Downloads 371
2180 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between spatial and temporal features. By introducing the time dimension to the range-Doppler map, a frequency-modulated continuous-wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. First, a sequence of range-Doppler maps is generated from the echo signals of continuous human motion collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional Convolutional Neural Network (3D CNN), whose convolution and pooling layers extract the spatial and temporal features of the time-varying range-Doppler data simultaneously. Finally, the extracted features are passed to a fully connected layer for classification. Experimental results show that the proposed fall detection algorithm achieves a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-Doppler features
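A minimal version of the 3D CNN described above, written with PyTorch as an assumed framework, is sketched below: a stack of Conv3d/pooling layers over a (frames, range, Doppler) cube followed by a fully connected classifier. Layer counts, channel widths, and input sizes are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FallNet3D(nn.Module):
    """Minimal 3D CNN over a (frames, range, Doppler) data cube."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # joint space-time convolution
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling over T, R, D
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                 # x: (batch, 1, T, R, D)
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Batch of 4 cubes: 16 frames of 64x64 range-Doppler maps each.
logits = FallNet3D()(torch.randn(4, 1, 16, 64, 64))
print(logits.shape)                       # torch.Size([4, 2])
```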
Procedia PDF Downloads 123
2179 Optimization of Coefficients of Fractional Order Proportional-Integrator-Derivative Controller on Permanent Magnet Synchronous Motors Using Particle Swarm Optimization
Authors: Ali Motalebi Saraji, Reza Zarei Lamuki
Abstract:
Speed control and behavior improvement of permanent magnet synchronous motors (PMSM), which offer reliable performance, low loss and high power density, especially in industrial drives, are of great importance to researchers. This paper therefore presents the optimization of the coefficients of a fractional-order proportional-integral-derivative controller using the Particle Swarm Optimization (PSO) algorithm in order to improve the behavior of the PMSM speed-control loop. The improvement is simulated in MATLAB, and the proposed PSO-optimized fractional-order controller is compared with the same controller tuned by a genetic algorithm and with a full-order controller tuned by a classic optimization method. Simulation results show that the proposed controller outperforms the two other controllers in terms of rise time, overshoot, and settling time.
Keywords: speed control loop of permanent magnet synchronous motor, fractional and full order proportional-integrator-derivative controller, coefficients optimization, particle swarm optimization, improvement of behavior
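The core PSO update used for this kind of coefficient tuning is sketched below over a five-dimensional gain vector (Kp, Ki, Kd, λ, μ). The closed-loop performance index is abstracted as a placeholder cost function; in the paper's setting it would be computed from a simulated PMSM speed response, and all PSO settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(params):
    # Placeholder for the closed-loop performance index (e.g., ITAE of the
    # simulated speed response); here a smooth test function with a known optimum.
    return float(np.sum((params - np.array([1.2, 0.8, 0.05, 0.9, 1.1])) ** 2))

n_particles, dim, iters = 20, 5, 100      # dim = (Kp, Ki, Kd, lambda, mu)
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
x = rng.uniform(0.0, 2.0, (n_particles, dim))   # positions
v = np.zeros_like(x)                            # velocities
pbest = x.copy()
pbest_f = np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard PSO velocity/position update.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best gains:", np.round(gbest, 3), " cost:", round(float(min(pbest_f)), 6))
```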
Procedia PDF Downloads 146
2178 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms
Authors: Seulki Lee, Seoung Bum Kim
Abstract:
Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used SPC technique is the control chart, whose main goal is to detect any assignable changes that affect the quality of the output. Most conventional control charts, such as Hotelling's T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, modern, complicated manufacturing systems require control chart techniques that can efficiently handle nonnormal processes. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms with multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based and k-nearest-neighbors-based charts, have proven to perform better in nonnormal situations than the T2 chart. Besides nonnormality, time-varying operation is also quite common in real manufacturing because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. Traditional control charts cannot accommodate future changes in process conditions because they are built from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart capable of adaptively monitoring time-varying and nonnormal processes. We reformulate the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we define an updating region for an efficient model-updating structure. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. Its effectiveness and applicability were demonstrated through experiments with simulated data and real data from the metal-frame process in mobile-device manufacturing.
Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process
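One way to realize the time-weighting idea above is to give recent observations exponentially larger sample weights when fitting a one-class boundary. The sketch below uses scikit-learn's OneClassSVM (closely related to SVDD with an RBF kernel) with a forgetting factor; the drift model, forgetting factor, and kernel settings are invented for the example, and this is not the paper's exact time-adaptive SVDD formulation.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# In-control training data whose mean drifts slowly over time.
T = 300
drift = np.linspace(0.0, 1.5, T)[:, None]
X = rng.standard_normal((T, 2)) + drift

# Exponential forgetting: recent observations get larger weights,
# so the fitted boundary adapts to the current operating condition.
lam = 0.99
weights = lam ** np.arange(T - 1, -1, -1)

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)
model.fit(X, sample_weight=weights)

# Monitor new points: negative decision values signal out-of-control.
new = np.array([[1.4, 1.6],    # near the current (drifted) operating point
                [6.0, 6.0]])   # clearly abnormal
print(model.decision_function(new))
```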
Procedia PDF Downloads 299
2177 Identification of Biological Pathways Causative for Breast Cancer Using Unsupervised Machine Learning
Authors: Karthik Mittal
Abstract:
This study performs an unsupervised machine learning analysis to find clusters of related SNPs that highlight biological pathways important to the biological mechanisms of breast cancer. Studying genetic variations in isolation is illogical because these variations are known to modulate protein production and function, and the downstream effects of such modifications on biological outcomes are highly interconnected. After extracting the SNPs and their effects on different types of breast cancer from the MR-Base library, two unsupervised clustering algorithms were applied to the genetic variants: k-means and hierarchical clustering; furthermore, principal component analysis was executed to represent the data visually. The algorithms clustered on each SNP's beta value for the three types of breast cancer tested in this project (estrogen-receptor-positive breast cancer, estrogen-receptor-negative breast cancer, and breast cancer in general). Two significant genetic pathways validated the clustering produced by this project: the MAPK signaling pathway and the connection between the BRCA2 gene and the ESR1 gene. This study provides a first proof of concept showing the importance of unsupervised machine learning in interpreting GWAS summary statistics.
Keywords: breast cancer, computational biology, unsupervised machine learning, k-means, PCA
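A minimal sketch of the clustering pipeline described above, assuming the extracted data forms a matrix of per-SNP beta values for the three phenotypes: k-means assigns clusters and PCA projects the variants to two components for visualisation. The synthetic beta matrix below stands in for the MR-Base extraction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Rows: SNPs; columns: beta values for (ER+, ER-, overall) breast cancer.
betas = np.vstack([rng.normal(0.00, 0.02, (80, 3)),    # background SNPs
                   rng.normal(0.15, 0.03, (20, 3))])   # a correlated cluster

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(betas)
coords = PCA(n_components=2).fit_transform(betas)      # 2-D view for plotting

for k in range(2):
    print(f"cluster {k}: {np.sum(labels == k)} SNPs, "
          f"mean beta {betas[labels == k].mean():+.3f}")
```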
Procedia PDF Downloads 146
2176 A Comparison of South East Asian Face Emotion Classification based on Optimized Ellipse Data Using Clustering Technique
Authors: M. Karthigayan, M. Rizon, Sazali Yaacob, R. Nagarajan, M. Muthukumaran, Thinaharan Ramachandran, Sargunam Thirugnanam
Abstract:
In this paper, a set of irregular and regular ellipse-fitting equations, optimized with a genetic algorithm (GA), is applied to lip and eye features to classify human emotions. Two South East Asian (SEA) faces are considered in this work for emotion classification, with six emotions and one neutral state as the outputs. Each subject shows unique lip and eye characteristics for the various emotions. GA is adopted to optimize the irregular ellipse parameters of the lip and eye features for each emotion: the top portion of the lip contour is part of one ellipse and the bottom part of a different ellipse. Two ellipse-based fitness equations are proposed for the lip contour, and the relevant parameters that define the emotions are listed. The GA method achieved reasonably successful emotion classification; however, for some emotions the optimized data values overlapped with the ranges of other emotions. In order to overcome this overlapping problem and, at the same time, improve the classification, a fuzzy c-means (FCM) clustering approach was implemented. The GA-FCM approach offers reasonably good classification within the cluster ranges, as proven by applying it to the two SEA subjects, and it improved the classification rate.
Keywords: ellipse fitness function, genetic algorithm, emotion recognition, fuzzy clustering
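The FCM stage used to resolve overlapping emotion ranges can be sketched as the standard fuzzy c-means update loop below, applied here to a made-up one-dimensional feature (for example, one optimized ellipse parameter per sample); the cluster count and data are illustrative only.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

# Three overlapping groups of a one-dimensional "optimized ellipse parameter".
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(mu, 0.05, 30) for mu in (0.2, 0.5, 0.8)])[:, None]
centers, U = fuzzy_c_means(X, c=3)
print(np.sort(centers.ravel()).round(2))         # roughly [0.2, 0.5, 0.8]
```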
Procedia PDF Downloads 546
2175 The Algorithm to Solve the Extend General Malfatti’s Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti's problem asks for 3 circles fitted into a right triangle such that the circles are tangent to each other and each circle is tangent to a pair of the triangle's sides. The problem has been extended to any triangle (the general Malfatti problem) and, furthermore, to placing 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle's sides; we call this the extended general Malfatti problem. In the extended general Malfatti problem, denoted Tri(Tn), where Tn is a triangular number, there are closed-form solutions for the Tri(T₁) problem (the inscribed circle) and the Tri(T₂) problem (the 3 Malfatti circles). These problems become more complex when n is greater than 2, and algorithms have been proposed to solve them numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when the boundary of the triangle is not a straight line but a convex circular arc: we try to find Tn circles inside this convex circular triangle with the same tangency properties among the circles and the boundary arcs. We call these the Carc(Tn) problems. The CPU time taken for the Carc(T₁₆) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem
Procedia PDF Downloads 110
2174 Investigating Message Timing Side Channel Attacks on Networks on Chip with Ring Topology
Authors: Mark Davey
Abstract:
Communications on a Network on Chip (NoC) produce timing information: network injection delays, packet traversal times, throughput metrics, and other attributes of the traffic being sent across the chip. The security requirements of a platform demand that each node operate with confidentiality, integrity, and availability (ISO 27001). Inherently, a shared NoC interconnect is exposed to analysis of the timing patterns created by contention for network components, i.e., links and switches/routers. This phenomenon is defined as information leakage and represents a 'side channel' of sensitive information that can be correlated with platform activity. The key algorithm presented in this paper evaluates how an adversary controlling two platform nodes neighbouring a target node can obtain sensitive information about communication with that node; the information actually obtained is the period of a periodic task communication. This constitutes a breach of the expected confidentiality of a node operating in a multiprocessor platform. An experimental investigation of the side channel is undertaken to judge the level and significance of the information inferred from NoC access times. Results are presented for a series of expanding task-set scenarios to evaluate the efficacy of the side-channel detection algorithm as the network load increases.
Keywords: embedded systems, multiprocessor, network on chip, side channel
Procedia PDF Downloads 71
2173 Channel Sounding and PAPR Reduction in OFDM for WiMAX Using Software Defined Radio
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
WiMAX is a high-speed broadband wireless access technology that adopts OFDM/OFDMA techniques to deliver higher data rates with high spectral efficiency. However, OFDM suffers from a high peak-to-average power ratio (PAPR) and high sensitivity to synchronization errors. In this paper, the high-PAPR problem is solved by using phase modulation to obtain Constant-Envelope OFDM (CE-OFDM). Synchronization failures are reduced by employing a frequency-locked loop, a polyphase clock synchronizer, a Costas loop, and blind equalizers such as the Constant Modulus Algorithm (CMA) equalizer and the Sign Kurtosis Maximization Adaptive Algorithm (SKMAA) equalizer. The WiMAX physical layer is executed on a Software Defined Radio (SDR) prototype using a USRP N210 as the hardware and GNU Radio as the software platform. SNR estimation is performed on the signal received through the USRP N210. To characterize wireless propagation in specific environments, a sliding-correlator wireless channel sounding system is designed on the SDR testbed.
Keywords: BER, CMA equalizer, Kurtosis equalizer, GNU Radio, OFDM/OFDMA, USRP N210
Procedia PDF Downloads 349
2172 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features
Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari
Abstract:
An approach is proposed to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, the principal emblem and heart of Persian music, as performed by the most famous Iranian masters on two Iranian stringed instruments, the 'Tar' and the 'Setar'. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire, belonging to 12 modal systems (dastgâh and âvâz), we applied the fuzzy c-means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are sufficient for classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, k-nearest neighbors (KNN), Gaussian mixture models (GMM), hidden Markov models (HMM) and support vector machines (SVM) were employed. Finally, SVM shows the best performance in this study.
Keywords: Radif of Mirzâ Ábdollâh, gushe, mel-frequency cepstral coefficients, fuzzy c-means clustering algorithm, k-nearest neighbors (KNN), Gaussian mixture model (GMM), hidden Markov model (HMM), support vector machine (SVM)
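Assuming a per-recording feature vector built from MFCC variances, as in the study above, the sketch below extracts the first 7 MFCCs with librosa and reduces them to one variance per coefficient. The file name is hypothetical, and the frame settings are the library defaults rather than the paper's configuration.

```python
import librosa
import numpy as np

# Load a recording (the path is illustrative) and keep only the first 7 MFCCs,
# matching the V-7MFCC feature set discussed above.
y, sr = librosa.load("tar_gushe.wav", sr=22050)        # hypothetical file name
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=7)      # shape: (7, n_frames)

# Per-coefficient variance across frames: one 7-dimensional vector per
# recording, suitable as input to fuzzy c-means clustering or an SVM.
features = np.var(mfcc, axis=1)
print(features.round(3))
```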
Procedia PDF Downloads 446
2171 Deep Routing Strategy: Deep Learning based Intelligent Routing in Software Defined Internet of Things
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexity and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, the heterogeneity, dynamic traffic flows and complexity of IoT networks demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence and efficient resource utilization. To some extent, SDN, thanks to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet-loss rate during path selection, outperforms the benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flows.
Keywords: SDN, IoT, DL, ML, DRS
Procedia PDF Downloads 110
2170 Speed Control of DC Motor Using Optimization Techniques Based PID Controller
Authors: Santosh Kumar Suman, Vinod Kumar Giri
Abstract:
The goal of this paper is to design a speed controller for a DC motor by selecting PID parameters using genetic algorithms (GAs). The DC motor is used extensively in numerous applications such as steel plants, electric trains, cranes, and many more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered; to provide effective control, these nonlinearities and model uncertainties must be taken into account in the control design. Here the DC motor is treated as a third-order system, and three tuning techniques for the PID parameters are considered. A separately excited DC motor was modeled in MATLAB, and its speed was examined under the proportional, integral and derivative gains (KP, KI, KD) of the PID controller, since classically tuned PID controllers fail to control the drive when the load parameters also change. The main aim of this paper is to analyze the performance of optimization techniques, namely the genetic algorithm (GA), for tuning the PID controller parameters for DC-motor speed control, and to list their advantages over traditional tuning strategies. The results obtained from the GA were compared with those obtained from the traditional technique, and it was found that the optimization technique outperforms customary tuning practices of ordinary PID controllers.
Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE
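A compact sketch of GA-based PID tuning on a stand-in plant is shown below: the fitness is the IAE of the closed-loop unit-step response, simulated by forward-Euler integration of a generic third-order system (poles at -1, -2, -3, unity DC gain). The plant, GA settings, and gain ranges are illustrative and do not reproduce the paper's MATLAB motor model.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.002, 5.0
steps = int(T / dt)

def iae(gains):
    """IAE of the closed-loop unit-step response under a PID controller."""
    Kp, Ki, Kd = gains
    # Controllable canonical form of 6 / (s^3 + 6 s^2 + 11 s + 6).
    A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [-6.0, -11.0, -6.0]])
    B = np.array([0.0, 0.0, 6.0])
    C = np.array([1.0, 0.0, 0.0])
    x = np.zeros(3); integ = 0.0; e_prev = 1.0; cost = 0.0
    for _ in range(steps):
        e = 1.0 - C @ x                   # unit-step reference minus output
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
        x = x + dt * (A @ x + B * u)      # forward-Euler integration
        e_prev = e
        cost += abs(e) * dt               # integral of absolute error
    return cost                           # unstable gains simply yield huge IAE

# Tiny GA over (Kp, Ki, Kd): elitism plus mutated copies of the elite.
pop = rng.uniform(0.1, 20.0, (16, 3))
for _ in range(20):
    order = np.argsort([iae(g) for g in pop])
    elite = pop[order[:6]]
    kids = elite[rng.integers(0, 6, (10,))] + rng.normal(0.0, 0.5, (10, 3))
    pop = np.vstack([elite, np.clip(kids, 0.01, 50.0)])

best = min(pop, key=iae)
print("Kp, Ki, Kd =", np.round(best, 2), " IAE =", round(iae(best), 4))
```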
Procedia PDF Downloads 420
2169 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation-based X-ray imaging, especially for soft tissues in the medical imaging energy range, which can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely adopted. Recently, a random phase object has been proposed as an analyzer; this method requires a much less demanding experimental setup. However, previous studies used a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup requiring only a small modification of a commercial bench-top micro-CT (computed tomography) scanner: a piece of sandpaper is introduced as the phase analyzer in front of the X-ray source. This setup, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle-tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray-tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the fact that neural networks require large amounts of training data to produce high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast
Procedia PDF Downloads 257
2168 Optimal Location of Unified Power Flow Controller (UPFC) for Transient Stability: Improvement Using Genetic Algorithm (GA)
Authors: Basheer Idrees Balarabe, Aminu Hamisu Kura, Nabila Shehu
Abstract:
As power demand rapidly increases, generation and transmission systems are strained by inadequate resources, environmental restrictions, and other losses. The role of transient stability control in maintaining steady-state operation under large disturbances and faults is to ensure that the power system can survive a serious contingency in time. A Unified Power Flow Controller (UPFC) plays a vital role in controlling the active and reactive power flows in a transmission line. In this research, a genetic algorithm (GA) is applied to determine the optimal location of the UPFC device in a power system network so as to enhance transient stability. The optimal location of the UPFC significantly improved transient stability, damped oscillations, and reduced the peak overshoot. The proposed GA optimization technique iteratively searches for the optimal UPFC location while keeping the unusual bus voltages within satisfactory limits. The results indicate that transient stability is improved and steady state is reached faster. Simulations were performed on the IEEE 14-bus test system using the MATLAB/Simulink platform.
Keywords: UPFC, transient stability, GA, IEEE, MATLAB and SIMULINK
Procedia PDF Downloads 14
2167 Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring
Authors: Aftab Khan, Ashfaq Khan
Abstract:
This research paper focuses on an interesting challenge in Blind Image Deblurring (BID): the estimation of arbitrarily shaped, non-parametric Point Spread Functions (PSFs) of motion blur caused by camera shake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and remove it effectively. This work introduces a novel blind deblurring scheme designed for images corrupted by arbitrarily shaped PSFs. It is based on a Genetic Algorithm (GA) and utilises the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared against other single-image motion deblurring schemes as benchmarks, and validation has been carried out on various blurred images. Results for both benchmark and real images are presented, with no-reference image quality measures used to quantify the deblurring results. For benchmark images, the proposed BRISQUE-based BID scheme converges in close vicinity of the original blurring functions.
Keywords: blind deconvolution, blind image deblurring, genetic algorithm, image restoration, image quality measures
Procedia PDF Downloads 443
2166 The Influence of Audio on Perceived Quality of Segmentation
Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa
Abstract:
To evaluate the quality of a segmentation algorithm, authors use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users videos (generated from frames with segmentation errors) that simulate the environment of an application domain, and this user feedback is crucial information for metric definition. In the subjective experiments used to develop some state-of-the-art metrics for testing segmentation algorithms, the videos displayed during the experiments did not contain audio. Yet audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user's perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated from the data of those experiments. This work aims to identify whether audio influences the user's perception of segmentation quality in background-substitution applications with audio. The proposed approach used a subjective method based on formal video-quality assessment methods. The results showed that audio does influence the segmentation quality perceived by a user.
Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality
Procedia PDF Downloads 117
2165 A Comparative Analysis of Heuristics Applied to Collecting Used Lubricant Oils Generated in the City of Pereira, Colombia
Authors: Diana Fajardo, Sebastián Ortiz, Oscar Herrera, Angélica Santis
Abstract:
A problem is currently arising in Colombia related to collecting the used lubricant oils generated by the growing vehicle fleet. The situation prevents proper disposal of this type of waste, which in turn results in a negative environmental impact. Therefore, through a comparative analysis of various heuristics, the best solution to the VRP (Vehicle Routing Problem) was selected by comparing costs and times for the collection of used lubricant oils in the city of Pereira, Colombia, since there are no management companies engaged in directly administering the collection of this pollutant. To achieve this aim, six two-phase solution proposals were discussed. First, the previously identified waste-generating points were assigned to groups: proposals one and four base this assignment on the proximity of the points, proposals two and five use the sweep method, and proposals three and six consider the capacity restriction of the collection vehicle. Subsequently, the routes were constructed, in the first three proposals by the Clarke and Wright savings algorithm and in the remaining proposals by the Traveling Salesman optimization model. After applying these techniques, a comparative analysis of the results was performed to determine which proposals gave the most optimal values in terms of distance, cost, and travel time.
Keywords: heuristics, optimization model, savings algorithm, used vehicular oil, VRP
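The route-construction phase used in the first three proposals is the classic Clarke and Wright savings algorithm, sketched below for a single-depot capacitated problem; the distance matrix, demands, and capacity are toy values, not the Pereira data.

```python
def clarke_wright(dist, demand, capacity):
    """Parallel Clarke-Wright savings for a single-depot CVRP.
    dist: (n+1)x(n+1) matrix with depot at index 0; customers are 1..n.
    Returns a list of routes (each a list of customer indices)."""
    n = len(demand)
    routes = [[i] for i in range(1, n + 1)]      # one route per customer
    loads = [demand[i - 1] for i in range(1, n + 1)]

    def find(c):                                 # route having c as an endpoint
        for k, r in enumerate(routes):
            if r[0] == c or r[-1] == c:
                return k
        return None                              # c is interior: not mergeable

    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                     reverse=True)
    for s, i, j in savings:
        a, b = find(i), find(j)
        if a is None or b is None or a == b or loads[a] + loads[b] > capacity:
            continue
        ra, rb = routes[a], routes[b]
        if ra[-1] != i:
            ra.reverse()                         # orient so i is the tail of ra
        if rb[0] != j:
            rb.reverse()                         # ... and j is the head of rb
        routes[a] = ra + rb                      # merge the two routes
        loads[a] += loads[b]
        del routes[b], loads[b]
    return routes

# Toy instance: 4 collection points around a depot.
D = [[0, 4, 5, 7, 3],
     [4, 0, 2, 6, 5],
     [5, 2, 0, 3, 6],
     [7, 6, 3, 0, 4],
     [3, 5, 6, 4, 0]]
print(clarke_wright(D, demand=[2, 3, 2, 1], capacity=5))
```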
Procedia PDF Downloads 414
2164 Conception of a Regulated, Dynamic and Intelligent Sewerage in Ostrevent
Authors: Rabaa Tlili Yaakoubi, Hind Nakouri, Olivier Blanpain
Abstract:
The current tools for real-time management of sewer systems are based on two software components: weather-forecasting software and hydraulic-simulation software. The former is an important source of imprecision and uncertainty, while the latter requires long decision time steps because of its computational cost. As a result, the outcomes obtained are generally different from those expected. The major idea of the CARDIO project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), replacing weather forecasts by directly using real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was built and tested with rainfalls of different return periods. The gains obtained in rejected volume vary from 40 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 60% of the total volume rejected to the natural environment and 80% in the number of discharges.
Keywords: RTC, paradigm, optimization, automation
Procedia PDF Downloads 284
2163 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation
Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy
Abstract:
A concussion is complex and nuanced, and cognitive rest is a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery; however, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. A portable tool capable of alerting patients before cognitive overexertion occurs could therefore allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG provides a sensitive measure of cognitive exertion, but clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. Commercially available and affordable portable EEG headsets now exist, and they can potentially be used to continuously monitor cognitive exertion during mental tasks and alert the wearer of overexertion, with the aim of preventing symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected with a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), logic puzzles at three increasing levels of difficulty, multiplication questions at three increasing levels of difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant reported their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session, and the performance of each model was tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion and demonstrate that the algorithms developed in this study, used with portable EEG devices, have the potential to aid the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
Keywords: cognitive activity, EEG, machine learning, personalized recovery
Procedia PDF Downloads 220
2162 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The present power control method, a Feedback Control Algorithm (FCA) based on a conventional proportional-integral (PI) controller, is used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and with minimum settling time. At present the system's power-tracking performance can be considered not well posed; there is still potential to improve it by developing a next-generation, novel core power control design. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) are presented in a state-space model to control the core power. The model for core power control is based on mathematical models of the reactor core, OMPC, and a control-rod selection algorithm; the reactor-core models comprise neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, is based on implementing a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summations in the cost function over an infinite horizon, is intended to eliminate some fundamental weaknesses of MPC. This paper shows how OMPC deals with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC are compared by numerical simulation. In conclusion, the proposed OMPC shows significant performance in load tracking and regulating core power for a nuclear reactor, with guaranteed stability in the closed loop.
Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
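The terminal-mode ingredient of the dual-mode scheme above is an infinite-horizon LQR law. The sketch below computes such a gain for a stand-in discrete-time model by solving the discrete algebraic Riccati equation with SciPy; the matrices are placeholders, not the RTP core model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time linearised core model (placeholder values,
# not the RTP reactor parameters).
A = np.array([[0.98, 0.10],
              [0.00, 0.90]])
B = np.array([[0.00],
              [0.05]])
Q = np.diag([10.0, 1.0])                  # penalise power deviation most
R = np.array([[0.1]])                     # penalise control-rod effort

# Terminal-mode LQR gain used inside the dual-mode MPC: u = -K x.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop check: spectral radius < 1 means the terminal law stabilises.
eig = np.linalg.eigvals(A - B @ K)
print("K =", K.round(3), " |eig| =", np.abs(eig).round(3))
```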
Procedia PDF Downloads 162
2161 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection
Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye
Abstract:
Text line segmentation is an important step in document image processing. It is a labeling process that assigns the same label, using a distance-metric probability, to spatially aligned units. Text line detection techniques have been implemented successfully mainly for printed documents; processing handwritten text, especially unconstrained documents, remains a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed, the spaces between text lines may not be obvious, and the task is complicated by the nature of handwriting and by the overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation represents a leading challenge in handwritten document image processing. Methods that rely on the traditional global projection profile of the text document cannot efficiently handle variable skew angles between different text lines, so formulating a horizontal line as a separator is often not efficient. This paper presents a technique to segment a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the initial text image across its width into vertical strips of about 5% each. For each strip, the histogram of horizontal runs is projected, working under the assumption that text within a single strip is almost parallel. The algorithm slides a window through the first vertical strip on the left side of the page, running through it to identify each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and its ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected component by associating it with either the line above or the line below; this association decision is made using the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation and is robust to skewed documents and to lines running into each other.
Keywords: connected component, projection profile, segmentation, text line
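The strip-wise projection stage described above can be sketched as follows: split the binary page image into vertical strips, smooth each strip's horizontal projection profile, and collect the local minima (valleys) as candidate line separators. The smoothing width, strip count, and the random test image are illustrative; the association of obstructing connected components to lines is omitted.

```python
import numpy as np

def strip_line_boundaries(binary_img, n_strips=20, smooth=9):
    """Per-strip horizontal projection profiles and their valley positions.
    binary_img: 2-D array with text pixels = 1. Returns one list of
    candidate line-separator rows per vertical strip."""
    h, w = binary_img.shape
    edges = np.linspace(0, w, n_strips + 1, dtype=int)
    boundaries = []
    for s in range(n_strips):
        profile = binary_img[:, edges[s]:edges[s + 1]].sum(axis=1).astype(float)
        kernel = np.ones(smooth) / smooth
        profile = np.convolve(profile, kernel, mode="same")   # light smoothing
        # Valleys: local minima of the smoothed profile.
        valleys = [r for r in range(1, h - 1)
                   if profile[r] <= profile[r - 1] and profile[r] < profile[r + 1]]
        boundaries.append(valleys)
    return boundaries

# Random stand-in for a binarised page image.
img = (np.random.default_rng(6).random((200, 400)) > 0.97).astype(np.uint8)
print(len(strip_line_boundaries(img)), "strips processed")
```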
Procedia PDF Downloads 124
2160 Novel Adaptive Radial Basis Function Neural Networks Based Approach for Short-Term Load Forecasting of Jordanian Power Grid
Authors: Eyad Almaita
Abstract:
In this paper, a novel adaptive Radial Basis Function Neural Network (RBFNN) algorithm is used to forecast the hour-by-hour electrical load demand in Jordan. A small and effective RBFNN model forecasts the hourly total load demand from a small number of features: the load in the previous day, the load at the same hour in the previous week, the temperature in the same hour, the hour number, the day number, and the day type. The proposed adaptive RBFNN model enhances the reliability of the conventional RBFNN after the network is embedded in the system. This is achieved by introducing an adaptive algorithm that allows the weights of the RBFNN to change after the training process is completed, which eliminates the need to retrain the model. The data used in this paper are real data measured by the National Electric Power Company (Jordan). Data for the period Jan. 2012 - April 2013 were used to train the RBFNN models, and data for the period May 2013 - Sep. 2013 were used to validate the models' effectiveness.
Keywords: load forecasting, adaptive neural network, radial basis function, short-term, electricity consumption
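A minimal sketch of the adaptive idea above: a Gaussian RBF network whose output weights continue to be updated online by an LMS rule after initial training, so full retraining is unnecessary. The two-feature toy load model and learning rate are invented for the example and do not correspond to the Jordanian data.

```python
import numpy as np

rng = np.random.default_rng(7)

class AdaptiveRBFNN:
    """Gaussian RBF network whose output weights keep adapting online (LMS),
    so the model tracks slow changes without retraining from scratch."""
    def __init__(self, centers, sigma, lr=0.05):
        self.centers, self.sigma, self.lr = centers, sigma, lr
        self.w = np.zeros(len(centers))

    def _phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.sigma ** 2))   # Gaussian activations

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, target):          # one LMS step on the output weights
        phi = self._phi(x)
        err = target - phi @ self.w
        self.w += self.lr * err * phi
        return err

# Toy stand-in for (previous-day load, hour-of-day) -> next-hour load.
centers = rng.uniform(0, 1, (25, 2))
net = AdaptiveRBFNN(centers, sigma=0.25)
for _ in range(2000):
    x = rng.uniform(0, 1, 2)
    y = 0.6 * x[0] + 0.3 * np.sin(2 * np.pi * x[1]) + 0.05 * rng.standard_normal()
    net.update(x, y)
print("prediction at [0.5, 0.5]:", round(float(net.predict(np.array([0.5, 0.5]))), 3))
```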
Procedia PDF Downloads 344
2159 Integrating Process Planning, WMS Dispatching, and WPPW Weighted Due Date Assignment Using a Genetic Algorithm
Authors: Halil Ibrahim Demir, Tarık Cakar, Ibrahim Cil, Muharrem Dugenci, Caner Erden
Abstract:
Conventionally, the process planning, scheduling, and due-date assignment functions are performed separately and sequentially, but their interdependence requires integration. Although integrated process planning and scheduling, and scheduling with due-date assignment, are popular research topics, only a few works address the integration of all three functions. This work focuses on the integration of process planning, WMS scheduling, and WPPW due-date assignment. Another novelty of this work is the use of weighted due-date assignment: in the literature, due dates are generally assigned without considering the importance of customers, whereas in this study more important customers get closer due dates. Typically only tardiness is punished, but the JIT philosophy punishes both earliness and tardiness, so in this study all weighted earliness, tardiness, and due-date-related costs are penalized. Since no customer desires a distant due date, distant due dates are penalized as well. Various levels of integration of the three functions are tested, and genetic search and random search are compared both with each other and with ordinary solutions. Higher integration levels are superior, while search is always useful, and genetic searches outperform random searches.
Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic algorithm, random search
Procedia PDF Downloads 394
2158 De-Novo Structural Elucidation from Mass/NMR Spectra
Authors: Ismael Zamora, Elisabeth Ortega, Tatiana Radchenko, Guillem Plasencia
Abstract:
Structure elucidation of unknown substances based on mass spectrometry (MS) data is an unresolved problem that affects many different fields of application. A recent overview of software available for structure elucidation of small molecules has shown the demand for an efficient computational tool able to elucidate the structure of unknown small molecules and peptides. We developed an algorithm for de novo fragment analysis based on MS data that proposes a set of scored and ranked structures compatible with the MS and MS/MS spectra. Several different variants of the algorithm were developed, depending on the initial set of fragments and the structure-building process, and in all cases several scores for the final molecule ranking were computed. They were validated with small and medium-sized databases (DB) on the eleven test-set compounds, and similar results were obtained from any of the databases that contained the fragments of the expected compound. In summary, we present an algorithm for de novo fragment analysis based on mass spectrometry data alone that proposes a set of scored and ranked structures, validated on different types of databases with good results as a proof of concept. Moreover, the structures proposed from the mass spectrometry data were submitted to NMR spectrum prediction in order to elucidate which of the proposed structures was compatible with the collected NMR spectra.
Keywords: de novo, structure elucidation, mass spectrometry, NMR
Procedia PDF Downloads 295
2157 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ in this equation, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these nonlinear differential equations, numerical solutions are usually preferred over the search for analytic solutions, and the finite difference method is a commonly used numerical technique. Using velocity and pressure gradients instead of stress tensors in these equations decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations matches the number of unknowns. In this situation, the velocity components and the pressure emerge as the two key parameters, and in solving the differential equation system they must be solved together. However, when pressure and velocity values are solved jointly at the same nodal points of the considered grid, some problems arise. To overcome this problem, the staggered grid system is the preferred solution method, and various algorithms have been developed for its computerized solution, the two most commonly used being SIMPLE and SIMPLER. In this study, the Navier-Stokes equations were solved numerically for a Newtonian, incompressible, laminar flow, with mass and gravitational forces neglected, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central-difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method, with SIMPLE and SIMPLER used as solution algorithms. The results obtained with the central-difference and hybrid discretization methods were compared, as were the SIMPLE and SIMPLER solution algorithms. It was observed that the hybrid discretization method gave better results over a larger area, and that, despite some disadvantages, the SIMPLER algorithm is more practical and produces results in a shorter time. For this study, a code was developed in the DELPHI programming language. The values obtained by the program were converted into graphs and discussed; during plotting, the quality of the graphs was increased by adding intermediate values to the computed results using the Lagrange interpolation formula. The number of grid points and nodes required for the solution was estimated, and, to show that the obtained results are sufficiently accurate, a grid-independence (GCI) analysis of the solution domain was performed for coarse, medium, and fine grids. When the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
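The inner linear solve referred to above is the Gauss-Seidel iteration, sketched below for a small diagonally dominant system of the kind produced by central-difference discretization; the 1-D tridiagonal example stands in for the 2-D momentum-equation systems of the study, and the SIMPLE/SIMPLER pressure-correction outer loop is omitted.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=10_000):
    """Gauss-Seidel iteration for A x = b (A should be diagonally dominant,
    as discretized transport-equation coefficient matrices typically are)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the already-updated values x[0..i-1] within the same sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter

# 1-D diffusion-like system from a central-difference discretization.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = gauss_seidel(A, b)
print(f"converged in {iters} sweeps, max residual "
      f"{np.max(np.abs(A @ x - b)):.2e}")
```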
Procedia PDF Downloads 391