Search results for: pseudo affine projection algorithm
Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis
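As a rough illustration of the prescribed pre-screening step, the sketch below computes a sparseness index for each measurement's magnitude spectrum and discards flat (noisy) channels. The Hoyer definition of sparseness and the threshold value are assumptions, since the abstract does not give the exact formula.

```python
import numpy as np

def hoyer_sparseness(x):
    """Sparseness in [0, 1]: 0 for a flat spectrum, 1 for a single spike."""
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    l1, l2 = x.sum(), np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def prescreen(signals, threshold=0.6):
    """Keep only measurements whose spectra are sparse enough to be oscillatory.

    signals: array of shape (n_channels, n_samples); the threshold is hypothetical.
    """
    keep = []
    for i, s in enumerate(signals):
        spectrum = np.abs(np.fft.rfft(s - np.mean(s)))
        if hoyer_sparseness(spectrum) >= threshold:
            keep.append(i)
    return keep
```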
A Hybrid Fuzzy AGC in a Competitive Electricity Environment
Authors: H. Shayeghi, A. Jalili
Abstract:
This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. In order for a fuzzy rule based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theories to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill climbing method. The motivation for using the modified GA is to reduce the fuzzy system design effort and take large parametric uncertainties into account. The global optimum value is guaranteed using the proposed method, and the speed of the algorithm's convergence is greatly improved, too. This newly developed control strategy combines the advantages of GAs and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi Stage Fuzzy (MSF) controller, a robust mixed H2/H∞ controller, and classical PID controllers through several performance indices to illustrate its robust performance for a wide range of system parameters and load changes.
Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.
FPGA Hardware Implementation and Evaluation of a Micro-Network Architecture for Multi-Core Systems
Authors: Yahia Salah, Med Lassaad Kaddachi, Rached Tourki
Abstract:
This paper presents the design, implementation, and evaluation of a micro-network, or Network-on-Chip (NoC), based on a generic pipeline router architecture. The router is designed to efficiently support traffic generated by multimedia applications on embedded multi-core systems. It employs a simple routing mechanism and implements a round-robin scheduling strategy to resolve output port contentions and minimize latency. Virtual channel flow control is applied to avoid the head-of-line blocking problem and enhance performance in the NoC. The hardware design of the router architecture has been implemented at the register transfer level; its functionality is evaluated for the two-dimensional Mesh/Torus topology, and performance results are derived from the ModelSim simulator and the Xilinx ISE 9.2i synthesis tool. An example of a multi-core image processing system utilizing the NoC structure has been implemented and validated to demonstrate the capability of the proposed micro-network architecture. To reduce the complexity of the image compression and decompression architecture, the system uses an image processing algorithm based on the classical discrete cosine transform with an efficient zonal processing approach. The experimental results have confirmed that both the proposed image compression scheme and the NoC architecture can achieve reasonable image quality with lower processing time.
Keywords: Generic Pipeline Network-on-Chip Router Architecture, JPEG Image Compression, FPGA Hardware Implementation, Performance Evaluation.
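The round-robin output-port arbitration described above can be illustrated with a short behavioral model; the actual design is RTL on an FPGA, so this Python sketch, including the five-port configuration, is an assumption for illustration only.

```python
class RoundRobinArbiter:
    """Behavioral model of a round-robin arbiter for output-port contention."""

    def __init__(self, n_ports=5):  # 5 ports: north, south, east, west, local (assumed)
        self.n = n_ports
        self.last = self.n - 1      # last granted port

    def grant(self, requests):
        """requests: list of bools, one per input port. Returns granted index or None."""
        for offset in range(1, self.n + 1):
            candidate = (self.last + offset) % self.n
            if requests[candidate]:
                self.last = candidate  # granted port gets lowest priority next round
                return candidate
        return None

# Usage: ports 0 and 2 both request the same output port.
arb = RoundRobinArbiter()
print(arb.grant([True, False, True, False, False]))  # grants 0
print(arb.grant([True, False, True, False, False]))  # grants 2 next
```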
Fake Account Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world. For those users, their social life, and even their practical life, has become intertwined with these sites, and their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and the determined factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent researches in the same area, and this comparison has proved the accuracy of the proposed study. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, the study can be applied to different social network sites such as Facebook, with minor changes according to the nature of the social network, which are discussed in this paper.
Keywords: Fake accounts detection, classification algorithms, Twitter accounts analysis, features based techniques.
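The compare-and-select step, in which several classifiers are trained on the determined factors and the most accurate one is kept, might look like the following sketch; the feature set and the particular classifiers are hypothetical, as the abstract does not list them.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical account features: followers/friends ratio, account age,
# tweets per day, profile completeness; label 1 = fake, 0 = genuine.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = (X[:, 0] < 0.2).astype(int)  # toy labels for the sketch

candidates = {
    "decision_tree": DecisionTreeClassifier(),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)  # the most accurate algorithm is selected
print(scores, "->", best)
```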
Laser Registration and Supervisory Control of neuroArm Robotic Surgical System
Authors: Hamidreza Hoshyarmanesh, Hosein Madieh, Sanju Lama, Yaser Maddahi, Garnette R. Sutherland, Kourosh Zareinia
Abstract:
This paper illustrates the concept of an algorithm to register specified markers on the neuroArm surgical manipulators, an image-guided, MR-compatible, tele-operated robot for microsurgery and stereotaxy. Two range-finding algorithms, namely time-of-flight and phase-shift, are evaluated for registration and supervisory control. The time-of-flight approach is implemented in a semi-field experiment to determine the precise position of a tiny retro-reflective moving object. The moving object simulates a surgical tool tip. The tool is a target that would be connected to the neuroArm end-effector during surgery inside the magnet bore of the MR imaging system. In order to apply the time-of-flight approach, a 905-nm pulsed laser diode and an avalanche photodiode are utilized as the transmitter and receiver, respectively. For the experiment, a high-frequency time-to-digital converter was designed using a field-programmable gate array. In the phase-shift approach, a continuous green laser beam with a wavelength of 530 nm was used as the transmitter. Results showed that a positioning error of 0.1 mm occurred when the scanner-target distance was in the range of 2.5 to 3 meters. The effectiveness of this non-contact approach showed that the method could be employed as an alternative to a conventional mechanical registration arm. Furthermore, the approach is not limited by physical contact or the extension of joint angles.
Keywords: 3D laser scanner, intraoperative MR imaging, neuroArm, real time registration, robot-assisted surgery, supervisory control.
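For reference, the two range-finding principles reduce to simple relations: time-of-flight gives d = c·t/2 for a round-trip delay t, while phase-shift gives d = c·Δφ/(4π·f_mod) for a modulation frequency f_mod (unambiguous only within half a modulation wavelength). A minimal sketch, with the modulation frequency an assumed value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight: the pulse travels to the target and back."""
    return C * round_trip_time_s / 2

def phase_shift_distance(delta_phi_rad, f_mod_hz):
    """Phase-shift ranging; unambiguous only within c / (2 * f_mod)."""
    return C * delta_phi_rad / (4 * math.pi * f_mod_hz)

# A 20 ns round trip corresponds to about 3 m:
print(tof_distance(20e-9))                       # ~2.998 m
# A pi/2 phase shift at a 10 MHz modulation frequency (assumed value):
print(phase_shift_distance(math.pi / 2, 10e6))   # ~3.75 m
```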
Accurate Calculation of Free Frequencies of Beams and Rectangular Plates
Authors: R. Lassoued, M. Guenfoud
Abstract:
An accurate procedure to determine the free vibrations of beams and plates is presented. The natural frequencies are exact solutions of the governing vibration equations, which lead to a nonlinear homogeneous system. The bilinear and linear structures considered simulate a bridge. Its dynamic behavior is analyzed using the theory of an orthotropic plate simply supported on two sides and free on the two others. The plate can be excited by a convoy of constant or harmonic loads. Determining the dynamic response of the structures considered requires knowledge of the free frequencies and the mode shapes of vibration, and our work is set in this context. Indeed, we are interested in developing a self-consistent calculation of the eigenfrequencies. The formulation is based on determining the solution of the differential equations of vibration. The boundary conditions corresponding to the mode shapes lead to a homogeneous system, and determining its nontrivial solutions leads to a nonlinear problem in the eigenfrequencies. We thus develop a computer code for the determination of the eigenvalues. It is based on a bisection method with interpolation whose precision reaches 10^-12. Moreover, to determine the corresponding modes, the calculation algorithm we develop uses Gaussian elimination with partial pivoting combined with an inverse power procedure. The eigenfrequencies of a plate simply supported along two opposite sides, with the two other sides free, are thus analyzed. The results can be generalized to the case of a beam by regarding it as a plate of small width. We give, in this paper, some examples of treated cases. The comparison with results presented in the literature is completely satisfactory.
Keywords: Free frequencies, beams, rectangular plates.
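The eigenvalue search reduces to locating sign changes of a frequency determinant; a minimal sketch of bisection refined by linear interpolation, with the 10^-12 tolerance taken from the abstract and the determinant left as a stand-in, is shown below.

```python
import math

def find_eigenfrequency(det, a, b, tol=1e-12, max_iter=200):
    """Bisection with linear interpolation for a root of det(omega) in [a, b].

    det: frequency determinant (changes sign across an eigenfrequency).
    """
    fa, fb = det(a), det(b)
    assert fa * fb < 0, "interval must bracket a root"
    for _ in range(max_iter):
        # Linear-interpolation (false-position) candidate, guarded by bisection.
        m = b - fb * (b - a) / (fb - fa)
        if not (a < m < b):
            m = 0.5 * (a + b)
        fm = det(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Toy usage with a stand-in determinant whose root is known:
print(find_eigenfrequency(math.sin, 3.0, 3.3))  # ~pi
```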
LIDAR Obstacle Warning and Avoidance System for Unmanned Aircraft
Authors: Roberto Sabatini, Alessandro Gardi, Mark A. Richardson
Abstract:
The availability of powerful eye-safe laser sources and the recent advancements in electro-optical and mechanical beam-steering components have allowed laser-based Light Detection and Ranging (LIDAR) to become a promising technology for obstacle warning and avoidance in a variety of manned and unmanned aircraft applications. LIDAR's outstanding angular resolution and accuracy are coupled with good detection performance over a wide range of incidence angles and weather conditions, providing an ideal obstacle avoidance solution, which is especially attractive for low-level flying platforms such as helicopters and small-to-medium size Unmanned Aircraft (UA). The Laser Obstacle Avoidance Marconi (LOAM) system is one such system, jointly developed and tested by SELEX-ES and the Italian Air Force Research and Flight Test Centre. The system was originally conceived for military rotorcraft platforms and, in this paper, we briefly review the previous work and discuss in more detail some of the key development activities required for the integration of LOAM on UA platforms. The main hardware and software design features of this LOAM variant are presented, including a brief description of the system interfaces and sensor characteristics, together with the system performance models and data processing algorithms for obstacle detection, classification, and avoidance. In particular, the paper focuses on the algorithm proposed for optimal avoidance trajectory generation in UA applications.
Keywords: LIDAR, Low-Level Flight, Nap-of-the-Earth Flight, Near Infra-Red, Obstacle Avoidance, Obstacle Detection, Obstacle Warning System, Sense and Avoid, Trajectory Optimisation, Unmanned Aircraft.
Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging
Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan
Abstract:
In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. The qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation, and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors to delineate the normal and the abnormal groups.
Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.
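A minimal sketch of the wavelet step, assuming the PyWavelets package and a grayscale image array; the paper's own toolchain is not stated, so this is illustrative only.

```python
import numpy as np
import pywt

def haar_level4_stats(image):
    """Level-4 Haar decomposition; statistics of approximation and horizontal bands."""
    coeffs = pywt.wavedec2(image.astype(float), "haar", level=4)
    approx = coeffs[0]          # level-4 approximation coefficients
    horizontal = coeffs[1][0]   # level-4 horizontal detail coefficients

    def stats(c):
        c = c.ravel()
        return {
            "mean": c.mean(),
            "median": np.median(c),
            "std": c.std(),
            "mad_mean": np.mean(np.abs(c - c.mean())),
            "mad_median": np.median(np.abs(c - np.median(c))),
        }

    return stats(approx), stats(horizontal)

# Usage with a toy 64x64 "radiograph":
img = np.random.default_rng(1).random((64, 64))
print(haar_level4_stats(img)[0]["mean"])
```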
Information Retrieval in Domain Specific Search Engine with Machine Learning Approaches
Authors: Shilpy Sharma
Abstract:
As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which include information on a specific domain only, were proposed. As more information becomes available on the World Wide Web, it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: browsers (clicking and following hyperlinks) and query engines (queries in the form of a set of keywords showing the topic of interest) [2]. Better support is needed for expressing one's information need and returning high-quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves. In a multi-view problem, the features of the domain can be partitioned into disjoint subsets (views) that are sufficient to learn the target concept. Semi-supervised, multi-view algorithms, which reduce the amount of labeled data required for learning, rely on the assumptions that the views are compatible and uncorrelated. This paper describes the use of a semi-supervised machine learning approach with active learning for domain-specific search engines. A domain-specific search engine is an information access system that allows access to all the information on the web that is relevant to a particular domain. The proposed work shows that, with the help of this approach, relevant data can be extracted with the minimum number of queries fired by the user. It requires a small amount of labeled data and a pool of unlabelled data on which the learning algorithm is applied to extract the required data.
Keywords: Search engines, machine learning, information retrieval, active logic.
Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction
Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal
Abstract:
The noise requirements for naval and research vessels reflect an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.
ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based On Li-ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is saving as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by a routing protocol with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before batteries need changing.
In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
Keywords: ZigBee, Li-ion battery, solar panel, CC2530.
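The constant-current/constant-voltage (CC/CV) charging logic can be sketched as a simple control rule: charge at a fixed current until the cell reaches its voltage limit, then hold that voltage and stop once the current tapers off. The numeric limits below are typical Li-ion values and are assumptions, not taken from the paper.

```python
def cccv_step(v_cell, i_cell, i_cc=0.6, v_max=4.2, i_term=0.06):
    """One control step of a CC/CV charger (volts/amps; values assumed).

    Returns the commanded mode for the charging circuit.
    """
    if v_cell < v_max:
        return ("constant_current", i_cc)   # CC phase: regulate current
    if i_cell > i_term:
        return ("constant_voltage", v_max)  # CV phase: regulate voltage
    return ("done", 0.0)                    # taper current reached: stop

# Usage: cell at 4.0 V still charging at full current, then tapering at 4.2 V.
print(cccv_step(4.00, 0.60))  # ('constant_current', 0.6)
print(cccv_step(4.20, 0.30))  # ('constant_voltage', 4.2)
print(cccv_step(4.20, 0.05))  # ('done', 0.0)
```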
Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel
Abstract:
In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel defined as the arithmetic-to-geometric mean ratio field over a circular neighborhood set and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional-order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative density function of the probability density function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images are relatively new and of interest to clinicians.
Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.
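A minimal sketch of the arithmetic-to-geometric mean ratio kernel, assuming a circular neighborhood realized as a disk mask; the window radius is an assumption.

```python
import numpy as np

def am_gm_ratio_field(image, radius=3, eps=1e-12):
    """Arithmetic-to-geometric mean ratio over a circular neighborhood.

    The ratio stays near 1 in homogeneous speckle and rises across edges.
    """
    h, w = image.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy ** 2 + xx ** 2) <= radius ** 2
    offsets = np.argwhere(disk) - radius
    out = np.ones_like(image, dtype=float)
    img = image.astype(float) + eps  # geometric mean needs positive values
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            vals = np.array([img[y + dy, x + dx] for dy, dx in offsets])
            am = vals.mean()
            gm = np.exp(np.log(vals).mean())
            out[y, x] = am / gm
    return out

# Usage on a toy speckled image with a brighter region (edge response rises):
img = np.random.default_rng(2).gamma(4.0, 1.0, (40, 40))
img[20:, :] *= 4.0
print(am_gm_ratio_field(img).max())
```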
Uncertainty Multiple Criteria Decision Making Analysis for Stealth Combat Aircraft Selection
Authors: C. Ardil
Abstract:
Fuzzy set theory and its extensions (intuitionistic fuzzy sets, picture fuzzy sets, and neutrosophic sets) have been widely used to address imprecision and uncertainty in complex decision-making. However, they may struggle with inherent indeterminacy and inconsistency in real-world situations. This study introduces uncertainty sets as a promising alternative, offering a structured framework for incorporating both types of uncertainty into decision-making processes. This work explores the theoretical foundations and applications of uncertainty sets. A novel decision-making algorithm based on uncertainty set-based proximity measures is developed and demonstrated through a practical application: selecting the most suitable stealth combat aircraft.
The results highlight the effectiveness of uncertainty sets in ranking alternatives under uncertainty. Uncertainty sets offer several advantages, including structured uncertainty representation, robust ranking mechanisms, and enhanced decision-making capabilities due to their ability to account for ambiguity. Future research directions are also outlined, including comparative analysis with existing MCDM methods under uncertainty, sensitivity analysis to assess the robustness of rankings, and broader application to various MCDM problems with diverse complexities. By exploring these avenues, uncertainty sets can be further established as a valuable tool for navigating uncertainty in complex decision-making scenarios.
Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty proximity analysis.
Iris Recognition Based On the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biological feature of the human body and has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied on grayscale images. The gradient-based features are robust, up to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. At first, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it depends on statistical analysis of the image to mark the eyelash and eyelid pixels as noise points. In order to cover feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The applied gradient operators are taken along the horizontal, vertical, and diagonal directions. The low-order norms of the gradient components were used to establish the feature vector. A Euclidean distance based classifier was used as a matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached up to 99.92%.
Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.
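A minimal sketch of the feature idea, assuming first-order differences along the horizontal, vertical, and two diagonal directions and block-averaged magnitudes as the low-order-norm features; the block size, overlap, and norm order are assumptions.

```python
import numpy as np

def gradient_density_features(strip, block=(20, 30)):
    """Average directional gradient magnitudes (L1 norm) per overlapped block.

    strip: normalized iris strip, e.g. a 60x360 grayscale array.
    """
    g = strip.astype(float)
    grads = [
        np.abs(np.diff(g, axis=1)),        # horizontal
        np.abs(np.diff(g, axis=0)),        # vertical
        np.abs(g[1:, 1:] - g[:-1, :-1]),   # main diagonal
        np.abs(g[1:, :-1] - g[:-1, 1:]),   # anti-diagonal
    ]
    bh, bw = block
    feats = []
    for d in grads:
        for y in range(0, d.shape[0] - bh + 1, bh // 2):   # 50% overlap
            for x in range(0, d.shape[1] - bw + 1, bw // 2):
                feats.append(d[y:y + bh, x:x + bw].mean())
    return np.array(feats)

def match(query, templates):
    """Euclidean-distance classifier: index of the closest stored template."""
    dists = [np.linalg.norm(query - t) for t in templates]
    return int(np.argmin(dists))
```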
Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic
Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman
Abstract:
An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle, and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction, and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect if a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.
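For reference, the two reported figures follow directly from the confusion counts of the template matcher; the counts below are made up to reproduce the stated percentages.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for 'driver looked at this cell' detections:
sens, spec = sensitivity_specificity(tp=843, fn=157, tn=850, fp=150)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 84.3%, 85.0%
```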
The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body
Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi
Abstract:
The issue of high blood sugar levels, the effects of which might end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to give those living with high blood glucose an alert of the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language with 1000 epoch runs to reduce the error to a bare minimum. The internal circuitry of the device comprises the compatible hardware requirements that match the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow colors are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
Keywords: Accu-Chek, diabetes, neural network, pattern recognition.
Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: Autonomous vehicles, deformable part model, DPM, pedestrian recognition.
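Computing filter responses in the frequency domain rests on the convolution theorem; a minimal sketch using SciPy's FFT-based convolution on a HOG-like feature map (all shapes are assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy HOG-like feature map (height x width x orientation channels) and one
# DPM part filter; shapes are illustrative, not from the paper.
rng = np.random.default_rng(3)
features = rng.random((60, 80, 9))
part_filter = rng.random((6, 6, 9))

# Correlation = convolution with a flipped kernel; responses sum over channels.
response = sum(
    fftconvolve(features[:, :, c], part_filter[::-1, ::-1, c], mode="valid")
    for c in range(features.shape[2])
)
print(response.shape)  # (55, 75) score map for this part
```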
A Robust Method for Finding Nearest-Neighbor using Hexagon Cells
Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth
Abstract:
In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in areas of research such as remote sensing, computer vision, pattern recognition, and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active regions of interest (AROI). Furthermore, it is needed to compute spatial metric relationships of diverse areas of imaging based on pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point by constructing a virtual grid of hexagon cells and then locating every point beneath them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagons outward from the cell containing Φ, and searching is repeated until the AROI of Φ is covered. If a point Υ is located, then searching continues in the nearest hexagons in a circular way: the first hexagon is considered level 0 (L0) and the surrounding hexagons are level 1 (L1). If Υ is located in L1, then the search proceeds to the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of minimizing the time complexity required for searching the neighbors; in turn, the efficiency of classification is improved sufficiently.
Keywords: Hexagon cells, k-nearest neighbors, nearest neighbor, pattern recognition, query pattern, virtual grid.
Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model
Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi
Abstract:
Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed based on multiple linear regression equations and neural network models in order to estimate saturated hydraulic conductivity from readily available soil properties, e.g., sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural networks (NNs) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e., θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NNs model. A commercial neural network toolbox of the MATLAB software with a multi-layer perceptron model and the back-propagation algorithm was used for the training procedure. Statistical parameters such as the correlation coefficient (R2) and the mean square error (MSE) were used to evaluate the developed NNs model. The best number of neurons in the middle layer of the NNs model for methods (1) and (2) was calculated as 44 and 6, respectively. The R2 and MSE values of the test phase were determined as 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
Keywords: Neural network, saturated hydraulic conductivity, soil physical properties.
A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Authors: F. M. Pisano, M. Ciminello
Abstract:
Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, that can progressively compromise the structure's integrity. The occurrence of any local change in material properties that can degrade the structure's performance is to be monitored using so-called Structural Health Monitoring (SHM) systems, in charge of comparing the structure's states before and after damage occurs. SHM seeks any "anomalous" response collected by means of sensor networks, which is then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health conditions of the structure under investigation. Visual Analytics can support the processing of monitored measurements, offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system by the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis based algorithm.
Keywords: Interactive dashboards, optical fibers, structural health monitoring, visual analytics.
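A minimal sketch of the analysis step feeding such dashboards, assuming PCA fitted on baseline (healthy) sensor snapshots with anomalies flagged by reconstruction error; the library choice and the percentile threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
healthy = rng.normal(0.0, 1.0, (200, 12))     # 12 strain sensors, baseline data
pca = PCA(n_components=3).fit(healthy)

def anomaly_score(snapshot):
    """Reconstruction error of a sensor snapshot in the healthy subspace."""
    recon = pca.inverse_transform(pca.transform(snapshot.reshape(1, -1)))
    return float(np.linalg.norm(snapshot - recon))

threshold = np.percentile(
    [anomaly_score(s) for s in healthy], 99)  # assumed 99th-percentile rule
damaged = healthy[0].copy()
damaged[5] += 6.0                             # simulated local property change
print(anomaly_score(damaged) > threshold)     # True: flag it on the dashboard
```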
CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the total CO2 emissions of the world, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology, based on nonlinear analysis, has been developed intensively after the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are recent rising issues in the construction industry and structural engineering, there has been little or no research considering these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize CO2 emissions and cost and to satisfy a specific seismic performance objective (collapse prevention under the maximum considered earthquake) while meeting prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design showed that minimized CO2 emissions and cost were achieved while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design.
Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier, due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying (ASK), On-Off Keying (OOK), and Binary Phase Shift Keying (BPSK) modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit, and the bit rate are optimized. Experimental results are compared based on the average bit error percentage. The results show high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit-rate decrease. The BPSK technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.
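A minimal sketch of coherent BPSK demodulation, assuming perfect carrier synchronization: multiply by the reference carrier, integrate over each bit, and take the sign. The carrier frequency, bit rate, and sampling rate are assumed values, not the experimental ones.

```python
import numpy as np

fs, fc, bit_rate = 1_000_000, 100_000, 10_000   # Hz; assumed values
spb = fs // bit_rate                            # samples per bit

def bpsk_modulate(bits):
    t = np.arange(len(bits) * spb) / fs
    phases = np.repeat(np.where(np.array(bits) == 1, 0.0, np.pi), spb)
    return np.cos(2 * np.pi * fc * t + phases)

def bpsk_demodulate(signal):
    """Coherent demodulation: correlate with the in-phase carrier per bit."""
    t = np.arange(signal.size) / fs
    mixed = signal * np.cos(2 * np.pi * fc * t)   # synchronized reference
    sums = mixed.reshape(-1, spb).sum(axis=1)     # integrate over each bit
    return (sums > 0).astype(int)

bits = [1, 0, 1, 1, 0, 0, 1]
noise = 0.3 * np.random.default_rng(5).normal(size=len(bits) * spb)
print(bpsk_demodulate(bpsk_modulate(bits) + noise))  # recovers [1 0 1 1 0 0 1]
```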
Experimental and Theoretical Investigation of Rough Rice Drying in Infrared-assisted Hot Air Dryer Using Artificial Neural Network
Authors: D. Zare, H. Naderi, A. A. Jafari
Abstract:
The drying characteristics of rough rice (Lenjan variety) with an initial moisture content of 25% dry basis (db) were studied in a hot air dryer assisted by infrared heating. Three inlet air temperatures (30, 40, and 50 °C), four infrared radiation intensities (0, 0.2, 0.4, and 0.6 W/cm2), and three inlet air speeds (0.1, 0.15, and 0.2 m/s) were studied. The bending strength of the brown rice kernel, the percentage of cracked kernels, and the drying time were measured and evaluated. The results showed that increasing the drying air temperature and the infrared radiation intensity resulted in a decrease in drying time. High bending strength and a low percentage of cracked kernels were obtained when paddy was dried in the infrared-assisted hot air dryer. There was a significant difference between these factors and their interactive effect (p<0.01). An intensity level of 0.2 W/cm2 was found to be optimum for radiation drying. Furthermore, in the present study, the application of an Artificial Neural Network (ANN) for predicting the moisture content during drying (the output parameter for ANN modeling) was investigated. The infrared radiation intensity, drying air temperature, inlet air speed, and drying time were considered as input parameters for the model. An ANN model with two hidden layers of 8 and 14 neurons was selected for studying the influence of transfer functions and training algorithms. The results revealed that a network with the tansig (hyperbolic tangent sigmoid) transfer function and the trainlm (Levenberg-Marquardt) back-propagation algorithm made the most accurate predictions for the paddy drying system. The mean square error (MSE) was calculated, and it was found that the random errors were within an acceptable range of ±5%, with a coefficient of determination (R2) of 99%.
Keywords: Rough rice, Infrared-hot air, Artificial Neural Network
Evaluating the Validity of Computational Fluid Dynamics Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for winds incident from the north and northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the mean geometric bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow.
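The three statistical measures have standard definitions in the dispersion-model validation literature; they are assumed here, since the abstract does not restate the formulas.

```python
import numpy as np

def validation_metrics(c_obs, c_pred):
    """FB, MG, and NMSE as commonly defined for dispersion-model validation.

    Standard Chang-and-Hanna-style definitions, assumed here because the
    abstract does not restate them; concentrations must be positive for MG.
    """
    c_obs, c_pred = np.asarray(c_obs, float), np.asarray(c_pred, float)
    fb = (c_obs.mean() - c_pred.mean()) / (0.5 * (c_obs.mean() + c_pred.mean()))
    mg = np.exp(np.log(c_obs).mean() - np.log(c_pred).mean())
    nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())
    return fb, mg, nmse

# A perfect model gives FB = 0, MG = 1, NMSE = 0:
print(validation_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))
```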
Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modeling and Solving
Authors: Yasin Tadayonrad, Alassane Ballé Ndiaye
Abstract:
Most supply chain networks have many nodes, starting from the suppliers' side up to the customers' side, where each node sends/receives raw materials/products to/from the other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network, by which all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity in each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods (FMCG) industry), there are different planners/planning teams working separately in different nodes to determine the loading/unloading time slots in the source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best time slots for loading/unloading the shipments, minimizing the overall delays subject to respecting the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, which helps to implement the model on larger data sets in real business cases including more nodes and shipments.
Keywords: Supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming.
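A minimal Python-MIP skeleton of this kind of formulation, assuming binary variables assigning each shipment to a time slot, a per-slot capacity, and a lateness objective; all data and variable names are hypothetical and not the paper's actual model.

```python
from mip import Model, xsum, minimize, BINARY

shipments = range(4)    # hypothetical shipments
slots = range(5)        # hypothetical time slots (e.g., days)
due = [1, 2, 2, 4]      # required delivery slot per shipment (hypothetical)
capacity = 2            # max shipments handled per slot at the node (assumed)

m = Model()
x = [[m.add_var(var_type=BINARY) for t in slots] for s in shipments]

for s in shipments:     # each shipment is assigned exactly one slot
    m += xsum(x[s][t] for t in slots) == 1
for t in slots:         # respect the node's loading/unloading capacity
    m += xsum(x[s][t] for s in shipments) <= capacity

# Minimize total lateness beyond each shipment's due slot.
m.objective = minimize(
    xsum(max(0, t - due[s]) * x[s][t] for s in shipments for t in slots))
m.optimize()
print([[int(x[s][t].x) for t in slots] for s in shipments])
```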
Optimal Design of Selective Excitation Pulses in Magnetic Resonance Imaging using Genetic Algorithms
Authors: Mohammed A. Alolfe, Abou-Bakr M. Youssef, Yasser M. Kadah
Abstract:
The proper design of RF pulses in magnetic resonance imaging (MRI) has a direct impact on the quality of acquired images and is needed for many applications. Several techniques have been proposed to obtain the RF pulse envelope given the desired slice profile. Unfortunately, these techniques do not take into account the limitations of practical implementation, such as limited amplitude resolution. Moreover, implementing constraints for special RF pulses is not possible in most techniques. In this work, we propose an approach for designing optimal RF pulses under, theoretically, any constraints. The new technique poses the RF pulse design problem as a combinatorial optimization problem and uses efficient techniques from this area, such as genetic algorithms (GA), to solve it. In particular, an objective function is proposed as the norm of the difference between the desired profile and the one obtained from solving the Bloch equations for the current RF pulse design values. The proposed approach is verified using analytical-solution-based RF simulations and compared to previous methods such as the Shinnar-Le Roux (SLR) method. The options and parameters that control the GA, which can significantly affect its performance, are analyzed, selected, and tested to obtain the best results and to compare with previous works in this field. The results show a significant improvement over conventional design techniques, identify the best GA options and parameters, and suggest the practicality of using the new technique for important applications such as slice selection for large flip angles, unconventional spatial encoding, and other clinical uses.
Keywords: Selective excitation, magnetic resonance imaging, combinatorial optimization, pulse design.
Accurate And Efficient Global Approximation using Adaptive Polynomial RSM for Complex Mechanical and Vehicular Performance Models
Authors: Y. Z. Wu, Z. Dong, S. K. You
Abstract:
Global approximation using a metamodel for a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex multiphysics systems. To overcome the limitations of the existing response surface (RS), surrogate, or metamodel modeling methods for complex models over a large variable domain, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process, divides the entire domain of design variables into multiple cells, identifies rougher cells with large modeling error, and further divides these cells along the roughest dimension direction. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
Keywords: Global approximation, polynomial response surface, domain decomposition, domain combination, multiphysics modeling, hybrid powertrain optimization.
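The Latin hypercube sampling step of the RS update process can be sketched with SciPy's quasi-Monte Carlo module; the dimension, sample count, and bounds are assumptions.

```python
import numpy as np
from scipy.stats import qmc

# Draw a space-filling sample over a 3-variable design domain (assumed bounds).
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=20)                    # points in [0, 1]^3
lower = np.array([0.0, 10.0, -1.0])
upper = np.array([1.0, 50.0, 1.0])
design_points = qmc.scale(unit_sample, lower, upper)  # map to the real bounds

# Each design point would then be evaluated on the expensive model and used
# to fit or update the local quadratic response surface.
print(design_points.shape)  # (20, 3)
```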
The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streams. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error, which can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we have performed tests on videos that contain a single freezing event and also on videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to give a comparison of the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of the videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
Keywords: Objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA).
Investigation of Combined use of MFCC and LPC Features in Speech Recognition Systems
Authors: K. R. Aida-Zade, C. Ardil, S. S. Rustamov
Abstract:
The statement of the automatic speech recognition problem, the purpose of speech recognition, and its application fields are presented in the paper. At the same time, for Azerbaijani speech, the principles of establishing a speech recognition system and the problems arising in the system are investigated. The algorithms for computing speech features, which form the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients, which express the basic speech features, are developed. The combined use of MFCC and LPC cepstral features in the speech recognition system is suggested to improve its reliability. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. The training and recognition processes are realized in both subsystems separately, and the recognition system reaches a decision when both subsystems produce the same result, which decreases the error rate during recognition. The training and recognition processes are realized by artificial neural networks in the automatic speech recognition system, and the neural networks are trained by the conjugate gradient method. The paper investigates the problems observed with the number of speech features when training the neural networks of the MFCC- and LPC-based speech recognition subsystems. The variety of results of neural networks trained from different initial points in the training process is analyzed. A methodology for the combined use of neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase recognition quality, and the practical results obtained are shown.
Keywords: Speech recognition, cepstral analysis, voice activity detection algorithm, Mel Frequency Cepstral Coefficients, features of speech, Cepstral Mean Subtraction, neural networks, Linear Predictive Coding.
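A minimal sketch of extracting both feature families for one utterance, assuming the librosa package; the feature orders and frame handling are assumptions, and the paper's own implementation is not librosa-based.

```python
import librosa

def mfcc_lpc_features(path, n_mfcc=13, lpc_order=12):
    """Per-utterance MFCC and LPC features for the two parallel recognizers."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    lpc = librosa.lpc(y, order=lpc_order)                   # LPC coefficients
    return mfcc.mean(axis=1), lpc[1:]  # averaged MFCCs; drop the leading 1.0

# Each subsystem is trained on its own feature set, and a decision is accepted
# only when both subsystems agree (the combination rule described above).
```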
Clustering for Detection of Population Groups at Risk from Anticholinergic Medication
Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca
Abstract:
Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to the anticholinergic burden scores of the multiple medicines prescribed to them, in order to facilitate clinical decision-making. To do so, the anticholinergic burden scores of drugs were extracted from the literature, which categorizes the risk on a scale of 1 to 3. Given the patients' prescription data in the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters representing different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of the other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed towards female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well equipped with tools implementing appropriate techniques of artificial intelligence.
Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.
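A minimal sketch of the scoring-and-clustering pipeline, assuming per-drug burden scores of 1 to 3 and scikit-learn's mean-shift implementation; the drugs, their scores, and the summation rule shown are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Hypothetical burden scores (1-3) for a few anticholinergic drugs.
burden = {"amitriptyline": 3, "paroxetine": 3, "loratadine": 1, "ranitidine": 1}

def patient_score(prescriptions):
    """Weighted anticholinergic risk: sum of burden scores (assumed rule)."""
    return sum(burden.get(drug, 0) for drug in prescriptions)

patients = [["loratadine"], ["ranitidine", "loratadine"],
            ["amitriptyline"], ["amitriptyline", "paroxetine", "ranitidine"]]
scores = np.array([[patient_score(p)] for p in patients], dtype=float)

labels = MeanShift().fit_predict(scores)  # groups patients into risk clusters
print(dict(zip(map(tuple, patients), labels)))
```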