Search results for: Parallel algorithm
116 An Integrated Design Evaluation and Assembly Sequence Planning Model using a Particle Swarm Optimization Approach
Authors: Feng-Yi Huang, Yuan-Jye Tseng
Abstract:
In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed in a sequential way. The drawback is that the considerations in design may contradict the considerations in manufacturing and assembly. Different designs of components can lead to different assembly sequences; therefore, in some cases, a good design may result in a high cost in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and if a different design case is selected, the assembly sequence for constructing the product can be different. In this paper, the designed components are first represented using graph-based models. The graph-based models are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is presented by encoding a particle using a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost. As a result, the design cases and the assembly sequences can both be optimized. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and the new PSO solution method. The test results on an example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
Keywords: assembly sequence planning, design evaluation, design for assembly, particle swarm optimization
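The paper's exact position-matrix encoding is not reproduced in this abstract, so the sketch below illustrates the general idea under assumptions: a common random-key decoding is used, where one continuous key selects the design case, the remaining keys are sorted to yield an assembly sequence, and a standard PSO update minimizes a hypothetical assembly-cost function (precedence constraints are omitted for brevity).

```python
# Hypothetical sketch of joint design-case selection and assembly sequencing
# with PSO; random-key decoding stands in for the paper's position matrix.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_parts, n_particles, iters = 3, 5, 20, 100

# Hypothetical assembly cost: one (part x position) cost matrix per design case.
case_costs = rng.uniform(1.0, 10.0, size=(n_cases, n_parts, n_parts))

def fitness(x):
    case = int(np.clip(x[0], 0, n_cases - 1e-9))   # first key selects the design case
    seq = np.argsort(x[1:])                        # remaining keys decode a sequence
    return sum(case_costs[case, part, pos] for pos, part in enumerate(seq))

X = rng.uniform(0, n_cases, (n_particles, n_parts + 1))
V = np.zeros_like(X)
P, P_f = X.copy(), np.array([fitness(x) for x in X])
g = P[P_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)  # inertia + cognitive + social
    X = X + V
    f = np.array([fitness(x) for x in X])
    better = f < P_f
    P[better], P_f[better] = X[better], f[better]
    g = P[P_f.argmin()].copy()

print("best design case:", int(np.clip(g[0], 0, n_cases - 1e-9)),
      "sequence:", np.argsort(g[1:]), "cost:", P_f.min())
```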
115 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields, one of which is the biomedical field, for both prognosis and diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among the techniques existing in the field, our proposed system tends to be more effective, as it depends on new methodologies that have proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent result, which helps improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in an efficient and effective way. During preprocessing, noise is removed from the images; segmentation then locates the region of interest, and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy, and correlation are then extracted, along with bone mineral density (BMD). All extracted features are stored in a database. This dataset is trained with inflamed and non-inflamed values; with the help of a neural network, all new images are checked for their status, and rough set theory is applied for further feature reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
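As a concrete illustration of the texture-feature step, the following sketch computes GLCM statistics with recent scikit-image (older releases spell the functions greycomatrix/greycoprops) on a stand-in region of interest; mean and median come directly from the pixel values, while BMD would come from scanner calibration and is not computable from the image alone.

```python
# A minimal sketch of the GLCM feature extraction, assuming an 8-bit grayscale
# bone-region image `roi` has already been segmented.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)  # stand-in ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {
    "mean": float(roi.mean()),
    "median": float(np.median(roi)),
    "energy": float(graycoprops(glcm, "energy").mean()),
    "correlation": float(graycoprops(glcm, "correlation").mean()),
}
print(features)  # feature vector to be stored in the database for NN training
```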
114 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications
Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber
Abstract:
Availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research only focuses on a specific dataset, and there is a lack of generic comparison between classifiers that might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs stably and reaches a higher AUC compared to the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the proportion of discriminating features and perform better when a higher number of discriminators is present.
Keywords: Classification, High dimensional data, Machine learning
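A minimal sketch of this comparison protocol is shown below, with scikit-learn's synthetic data generator standing in for the simulated biomarker datasets (the n_informative parameter plays the role of the proportion of real discriminating biomarkers; all settings are illustrative).

```python
# Compare the six classifiers named above by cross-validated AUC on mock
# high-dimensional data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=1000, n_informative=20,
                           n_redundant=0, random_state=0)  # mock biomarker data

classifiers = {
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name:20s} AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```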
113 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-dof, lumped-parameter structural dynamics coupled with a quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and successively validate, such a model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m and mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88° is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation, and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s), usually added to an existing feedback control system, which contains a time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). While the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures, since with phase shifting the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shifting of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s) controller, in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller.
In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
Keywords: Wind turbines, aeroelasticity, repetitive control, periodic systems.
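The repetitive update law can be illustrated with a toy discrete-time loop. The sketch below is a simplified stand-in for the scheme described above, under assumed plant and gains: the delayed, phase-shifted correction is filtered through a first-order low-pass that plays the role of q(z), and the PD term is reduced to a plain proportional gain for brevity.

```python
# Toy discrete repetitive controller rejecting a disturbance of known period N.
import numpy as np

N, shift, kr, alpha = 50, 3, 0.5, 0.3          # period (samples), phase shift, gains
a, b = 0.95, 0.05                              # toy first-order plant y[k+1] = a*y + b*u
steps = 20 * N
u_hist, e_hist = np.zeros(steps), np.zeros(steps)
y = 0.0
for k in range(steps - 1):
    d = 0.2 * np.sin(2 * np.pi * k / N)        # purely periodic disturbance
    e = 1.0 - y                                # tracking error against a unit reference
    e_hist[k] = e
    j = k - N + shift                          # delayed, phase-shifted index
    u_rc = u_hist[j] + kr * e_hist[j] if j >= 0 else 0.0
    u_hist[k] = (1 - alpha) * u_hist[k - 1] + alpha * u_rc  # low-pass stands in for q(z)
    y = a * y + b * (u_hist[k] + 2.0 * e) + d  # plant + proportional term + disturbance

print("max |error| over last period:", np.abs(e_hist[-N:-1]).max())
```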
112 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is designed from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller controls and tracks the load power effectively and smoothly compared to the PSO-PID control technique. This study should benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
111 Budget Optimization for Maintenance of Bridges in Egypt
Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham
Abstract:
Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network given the limited available budget using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire survey is conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented, and the results are reasonable, feasible, and within an acceptable domain.
Keywords: Bridge Management Systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain.
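The optimization core can be sketched as a binary GA over "maintain or defer" decisions per bridge. Costs, condition gains, and GA settings below are hypothetical stand-ins for the framework's inputs; the objective maximizes average network condition subject to the budget limit.

```python
# Minimal GA sketch: select which of ten bridges to maintain within a budget.
import numpy as np

rng = np.random.default_rng(0)
n_bridges, budget = 10, 100.0
cost = rng.uniform(5, 40, n_bridges)           # hypothetical repair cost per bridge
gain = rng.uniform(0.5, 2.0, n_bridges)        # hypothetical condition improvement
base = rng.uniform(4, 8, n_bridges)            # current condition ratings

def fitness(sel):                              # average condition; over-budget plans penalized
    if cost[sel.astype(bool)].sum() > budget:
        return -1.0
    return (base + gain * sel).mean()

pop = rng.integers(0, 2, (40, n_bridges))
for _ in range(200):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-20:]]                      # truncation selection
    cut = rng.integers(1, n_bridges, 20)
    kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % 20, c:]]
                     for i, c in enumerate(cut)])           # one-point crossover
    mut = rng.random(kids.shape) < 0.05
    kids[mut] ^= 1                                          # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("maintain bridges:", np.flatnonzero(best), "cost:", cost[best.astype(bool)].sum())
```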
110 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest, and such pneumonic conditions can lead to loss of life due to the severity of high congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model performs feature learning and produces an effective 1D feature vector for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
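The two-stage pipeline can be sketched as follows, under assumptions: a small illustrative CNN (not the paper's HDLA architecture) produces 1D feature vectors, Min-Max scaling normalizes them, and an SVM stands in as one possible second-stage classifier; the CT tensors and labels are random stand-ins.

```python
# Sketch of the hybrid CNN-feature + classical-classifier scheme.
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),          # -> 16*4*4 = 256-dim feature vector
)

cts = torch.randn(32, 1, 128, 128)                  # stand-in preprocessed CT scans
labels = torch.randint(0, 3, (32,)).numpy()         # viral / bacterial / COVID-19 classes
with torch.no_grad():
    feats = cnn(cts).numpy()                        # automatic 1D features per image

feats = MinMaxScaler().fit_transform(feats)         # Min-Max normalization step
clf = SVC().fit(feats, labels)                      # one choice of second-stage classifier
print("training accuracy:", clf.score(feats, labels))
```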
109 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band
Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava
Abstract:
An approach was evaluated for the retrieval of soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between scattering coefficients and soil moisture content to select the suitable incidence angle for retrieval of soil moisture content; the 25° incidence angle was found most suitable. Support vector regression analysis was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe Efficiency (NSE). The values of %Bias, RMSE, and NSE were found to be 2.9451, 1.0986, and 0.9214, respectively, at HH-polarization, and 3.6186, 0.9373, and 0.9428, respectively, at VV-polarization.
Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.
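A sketch of the regression and evaluation steps follows, with synthetic pairs of scattering coefficient (dB) and soil moisture standing in for the scatterometer observations at the selected 25° incidence angle; the metric formulas match the indices named above.

```python
# SVR fit plus %Bias, RMSE, and NSE evaluation on stand-in data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
sigma0 = rng.uniform(-25, -5, 60)                     # scattering coefficients (dB)
sm_obs = 0.8 * sigma0 + 30 + rng.normal(0, 1.0, 60)   # soil moisture (%), toy relation

model = SVR(kernel="rbf", C=10.0).fit(sigma0.reshape(-1, 1), sm_obs)
sm_est = model.predict(sigma0.reshape(-1, 1))

bias = 100 * (sm_est - sm_obs).sum() / sm_obs.sum()               # %Bias
rmse = np.sqrt(np.mean((sm_est - sm_obs) ** 2))                   # RMSE
nse = 1 - ((sm_obs - sm_est) ** 2).sum() / ((sm_obs - sm_obs.mean()) ** 2).sum()  # NSE
print(f"%Bias={bias:.3f}  RMSE={rmse:.3f}  NSE={nse:.3f}")
```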
108 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration
Authors: H. B. Kekre, Sudeep D. Thepade
Abstract:
The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible ways to obtain image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; thus, they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies like rotation and scaling are bound to happen. To nullify the effect of rotation of partial images on the process of vista creation, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce a missing region in the vista. To correct this error, that is, to fill the missing region, we apply an image inpainting method on the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts, and errors in digitization [35]. The method of missing view regeneration generates the missing view of the vista using the information present in the vista itself.
Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.
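The missing-view regeneration step can be sketched with OpenCV's inpainting routine; the paper's specific inpainting method is not named in this abstract, so Telea's algorithm is used here as a representative choice, and the vista and hole below are synthetic stand-ins.

```python
# Fill a missing region of a stitched vista from the vista's own content.
import cv2
import numpy as np

vista = np.full((200, 300, 3), 128, np.uint8)        # stand-in stitched vista
cv2.circle(vista, (150, 100), 60, (40, 90, 200), -1)

mask = np.zeros((200, 300), np.uint8)                # non-zero where the view is missing
cv2.rectangle(mask, (140, 90), (180, 130), 255, -1)
vista[mask > 0] = 0                                  # simulate a hole left by rotation

filled = cv2.inpaint(vista, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("vista_filled.png", filled)              # missing view regenerated
```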
107 Cumulative Learning based on Dynamic Clustering of Hierarchical Production Rules (HPRs)
Authors: Kamal K.Bharadwaj, Rekha Kandwal
Abstract:
An important structuring mechanism for knowledge bases is building clusters based on the content of their knowledge objects. The objects are clustered based on the principle of maximizing the intraclass similarity and minimizing the interclass similarity. Clustering can also facilitate taxonomy formation, that is, the organization of observations into a hierarchy of classes that group similar events together. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. A HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>.
Keywords: Cumulative learning, clustering, data mining, hierarchical production rules.
106 A Control Strategy Based on UTT and ISCT for 3P4W UPQC
Authors: Yash Pal, A.Swarup, Bhim Singh
Abstract:
This paper presents a novel control strategy for a three-phase four-wire Unified Power Quality Conditioner (UPQC) for an improvement in power quality. The UPQC is realized by integration of series and shunt active power filters (APFs) sharing a common DC bus capacitor. The shunt APF is realized using a three-phase, four-leg voltage source inverter (VSI), and the series APF is realized using a three-phase, three-leg VSI. A control technique based on the unit vector template technique (UTT) is used to obtain the reference signals for the series APF, while instantaneous sequence component theory (ISCT) is used for the control of the shunt APF. The performance of the implemented control algorithm is evaluated in terms of power-factor correction, load balancing, neutral source current mitigation, and mitigation of voltage and current harmonics, voltage sag, and swell in a three-phase four-wire distribution system for different combinations of linear and non-linear loads. In this proposed control scheme of the UPQC, the current/voltage control is applied over the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay and the required sensors. MATLAB/Simulink based simulations are obtained, which support the functionality of the UPQC.
Keywords: Power Quality, UPQC, Harmonics, Load Balancing, Power Factor Correction, voltage harmonic mitigation, current harmonic mitigation, voltage sag, swell
105 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam
Authors: N. Chegeni, M. J. Tahmasebi Birgani
Abstract:
Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use, and low-cost dose distribution algorithms are desirable to test and verify the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and depth dose on the central axis based on the equivalent square concept. This method was then generalized to calculate the profiles at any depth and for several field shapes, regular or irregular, under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for the irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies, and for 45° wedges, which were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields. The calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points; however, the maximum γ observed was about 1.10, in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm to calculate dose distribution for some treatment fields in conventional radiotherapy.
Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.
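The gamma-index comparison used for validation can be sketched in one dimension under the same 3%/3 mm criterion; the measured and calculated profiles below are toy stand-ins.

```python
# 1D gamma index with a 3% dose / 3 mm distance-to-agreement criterion.
import numpy as np

x = np.linspace(-50, 50, 201)                        # off-axis position (mm)
measured = np.exp(-(x / 30) ** 4)                    # toy measured profile (normalized dose)
calculated = np.exp(-((x - 1.0) / 30) ** 4) * 1.01   # toy calculated profile

dose_crit, dta_crit = 0.03, 3.0                      # 3% and 3 mm
gamma = np.empty_like(x)
for i, (xm, dm) in enumerate(zip(x, measured)):
    # Gamma is the minimum combined dose/distance metric over all calculated points.
    dist2 = ((x - xm) / dta_crit) ** 2
    dose2 = ((calculated - dm) / dose_crit) ** 2
    gamma[i] = np.sqrt((dist2 + dose2).min())

print("gamma pass rate (<=1):", (gamma <= 1).mean(), " max gamma:", gamma.max())
```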
104 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified the statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas that had temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means for identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: Thermal remote sensing, insolation model, land surface temperature, geothermal anomalies.
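The final labeling step reduces to simple statistical thresholding; a minimal numpy sketch follows, with a synthetic residual field standing in for the ASTER-minus-model temperature residuals.

```python
# Flag pixels whose residual exceeds 2-sigma (strong) or 1-sigma (weak) anomalies.
import numpy as np

rng = np.random.default_rng(0)
residual = rng.normal(0, 1.5, (500, 500))            # stand-in temperature residuals (K)
residual[240:260, 240:260] += 8.0                    # hypothetical geothermal hot spot

mu, sigma = residual.mean(), residual.std()
strong = residual > mu + 2 * sigma                   # anomalies above 2-sigma
weak = (residual > mu + sigma) & ~strong             # anomalies between 1 and 2 sigma
print("strong anomaly pixels:", strong.sum(), " weak anomaly pixels:", weak.sum())
```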
103 CFD-Parametric Study in Stator Heat Transfer of an Axial Flux Permanent Magnet Machine
Authors: Alireza Rasekh, Peter Sergeant, Jan Vierendeels
Abstract:
This paper deals with the numerical simulation of convective heat transfer in the stator disk of an axial flux permanent magnet (AFPM) electrical machine. Overheating is one of the main issues in the design of AFPMs, and it mainly occurs in the stator disk, so it needs to be prevented. A rotor-stator configuration with 16 magnets at the periphery of the rotor is considered. Air is allowed to flow through openings in the rotor disk and through the channels formed between the magnets and in the gap region between the magnets and the stator surface. The rotating channels between the magnets act as a driving force for the air flow. The significant non-dimensional parameters are the rotational Reynolds number, the gap size ratio, the magnet thickness ratio, and the magnet angle ratio. The goal is to find correlations for the Nusselt number on the stator disk in terms of these non-dimensional numbers. Therefore, CFD simulations have been performed with the multiple reference frame (MRF) technique to model the rotary motion of the rotor and the flow around and inside the machine. A minimization method based on a pattern-search algorithm is introduced to find the appropriate values of the reference temperature. The resulting correlations are fast, robust, and capable of predicting the stator heat transfer with good accuracy. The results reveal that the magnet angle ratio diminishes the stator heat transfer, whereas the rotational Reynolds number and the magnet thickness ratio improve the convective heat transfer. On the other hand, there is a certain gap size ratio at which the stator heat transfer reaches a maximum.
Keywords: Axial flux permanent magnet, CFD, magnet parameters, stator heat transfer.
102 Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats, which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes, and procedural notes, which are written in human narrative format and have neither a relational model nor a standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
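Q-Map's indexing is not described in detail in this abstract; the sketch below only illustrates the general dictionary-backed, longest-match idea, with a tiny hypothetical term index standing in for the curated UMLS-derived sources (the concept codes are illustrative).

```python
# Greedy longest-match extraction of clinical concepts from free-text notes.
clinical_terms = {                                   # hypothetical curated knowledge source
    "myocardial infarction": "C0027051",
    "hypertension": "C0020538",
    "type 2 diabetes": "C0011860",
}
max_len = max(len(t.split()) for t in clinical_terms)

def extract(note):
    tokens = note.lower().replace(",", " ").replace(".", " ").split()
    hits, i = [], 0
    while i < len(tokens):                           # try the longest n-gram first
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n])
            if span in clinical_terms:
                hits.append((span, clinical_terms[span]))
                i += n
                break
        else:
            i += 1
    return hits

note = "Pt with hypertension and type 2 diabetes, r/o myocardial infarction."
print(extract(note))  # [('hypertension', ...), ('type 2 diabetes', ...), ...]
```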
101 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that introduces a perceptual weighting of the wavelet coefficients prior to the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
100 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security, and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes that are capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version of the Hamming code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and Cyclic Redundancy Check (CRC), as well as the limitations of this scheme. This particular version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead compared to the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.
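A compact software sketch of the extended Hamming (16, 11, 4) SECDED scheme is given below (in Python rather than the paper's MATLAB): parity bits sit at power-of-two positions, and one overall parity bit distinguishes single-bit errors, which are corrected, from double-bit errors, which are only detected.

```python
# Hamming (16, 11, 4) SECDED: encode 11 data bits, correct 1 flip, detect 2.
def encode(data):                                   # data: list of 11 bits
    code = [0] * 17                                 # 1-indexed positions 1..15 + overall
    j = 0
    for pos in range(1, 16):
        if pos & (pos - 1):                         # not a power of two -> data position
            code[pos] = data[j]; j += 1
    for p in (1, 2, 4, 8):                          # parity over positions containing bit p
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[16] = sum(code[1:16]) % 2                  # overall parity for double detection
    return code[1:]

def decode(code):
    code = [0] + code
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(code[i] for i in range(1, 16) if i & p) % 2:
            syndrome |= p                           # syndrome = position of a single error
    overall = sum(code[1:17]) % 2
    if syndrome and overall:                        # single-bit error: correct it
        code[syndrome] ^= 1
    elif syndrome and not overall:
        raise ValueError("double-bit error detected (uncorrectable)")
    return [code[i] for i in range(1, 16) if i & (i - 1)]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
word = encode(msg)
word[5] ^= 1                                        # inject a single-event upset
assert decode(word) == msg
print("corrected single-bit upset; data recovered:", msg)
```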
99 A Microcontroller Implementation of Model Predictive Control
Authors: Amira Abbes Kheriji, Faouzi Bouani, Mekki Ksouri, Mohamed Ben Ahmed
Abstract:
Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared to the PID controller, implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very small scale due to its complexity of implementation and its computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recently, advances in microelectronics and software have allowed such techniques to be implemented in embedded systems. In this work, we take advantage of these recent advances in the deployment of one of the most studied and applied control techniques in industrial engineering. In fact, in this paper, we propose an efficient framework for the implementation of Generalized Predictive Control (GPC) on the STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface, and the STM32 board were used to implement the proposed GPC firmware. Besides the GPC, a PID anti-windup algorithm was also implemented using Keil development tools designed for ARM processor-based microcontroller devices, working with the C language. A performance comparison study was done between both firmwares, showing good execution speed and low computational burden. These results encourage the development of simple predictive algorithms to be programmed on industrial standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
Keywords: Embedded systems, Model Predictive Control, microcontroller, Keil tool.
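The unconstrained GPC law at the heart of such firmware can be sketched numerically (not the paper's STM32 implementation): the receding-horizon gain row K is computed once from the plant's step response and is exactly the kind of constant that can be precomputed offline and stored on the microcontroller. The toy plant, horizons, and weighting below are illustrative assumptions.

```python
# Numeric sketch of unconstrained GPC via the dynamic (step-response) matrix.
import numpy as np

a, b = 0.9, 0.1                            # toy plant: y[k+1] = a*y[k] + b*u[k]
N, Nu, lam = 10, 3, 0.1                    # prediction horizon, control horizon, move weight

step = np.array([b * sum(a**j for j in range(i + 1)) for i in range(N)])  # step response
G = np.zeros((N, Nu))                      # dynamic matrix of step-response coefficients
for i in range(N):
    for j in range(min(i + 1, Nu)):
        G[i, j] = step[i - j]
K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)[0]   # first-move gain row

y, u, w = 0.0, 0.0, 1.0                    # output, input, setpoint
for k in range(40):
    f, yp = [], y
    for _ in range(N):                     # free response: future outputs if u is held
        yp = a * yp + b * u
        f.append(yp)
    u += float(K @ (w - np.array(f)))      # apply only the first control increment
    y = a * y + b * u                      # plant update
print("output after 40 steps:", round(y, 4))
```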
98 Real-time Haptic Modeling and Simulation for Prosthetic Insertion
Authors: Catherine A. Todd, Fazel Naghdy
Abstract:
In this work, a surgical simulator is produced which enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human scala tympani (ST). The parametric model is derived from measured data as published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles are acquired from the insertion studies conducted in the work to validate the haptic model. The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with focus on simulator design, development, optimization, and validation. The techniques may be transferable to other medical applications that involve prosthetic device insertions where user vision is obstructed.
Keywords: Haptic modeling, medical device insertion, real-time visualization of prosthetic implantation, surgical simulation.
97 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN) which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, NC, EMCI, and LMCI. Using 10-fold cross validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.
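A brief sketch of such a setup follows, with random stand-ins for the multimodal features and labels; XGBoost handles NaN entries natively by learning default split directions during training, so the roughly 28% missing values need no imputation (hyperparameters are illustrative, assuming a recent xgboost release).

```python
# 4-class classification with ~28% missing values, no imputation step.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1631, 50))                     # stand-in multimodal features
X[rng.random(X.shape) < 0.28] = np.nan              # ~28% missing, as in the study
y = rng.integers(0, 4, 1631)                        # CN / EMCI / LMCI / AD labels

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross validation
print("accuracy: %.2f%% +/- %.2f" % (100 * scores.mean(), 100 * scores.std()))
```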
96 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D Mannequin is a kind of assisting tool which can rapidly realize design concepts. When the concept of the 3D Mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D Mannequin module that corresponds to the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D Mannequin system with Kinect. In this system, ergonomic measurements of human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, to obtain an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothed, a complete human feature is reconstructed by the ICP algorithm together with image processing methods. The objective human feature can then be recognized, analyzed, and measured. Furthermore, the ergonomic measurements can be applied to shape morphing for the division of the 3D Mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D Mannequin can be generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D Mannequin system was constructed with a JAVA program in this study, and experiments verify its practicability.
Keywords: 3D Mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision.
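The scan-alignment step can be illustrated with an off-the-shelf ICP implementation; the sketch below uses Open3D (not the JAVA system described above) on two synthetic point clouds standing in for partial Kinect scans.

```python
# Rigid alignment of two stand-in scans with point-to-point ICP.
import numpy as np
import open3d as o3d

rng = np.random.default_rng(0)
pts = rng.random((2000, 3))                          # stand-in body-surface points
src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts + [0.05, 0.0, 0.0]))

result = o3d.pipelines.registration.registration_icp(
    src, tgt, 0.1,                                   # max correspondence distance
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)                         # rigid transform aligning the scans
```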
95 Spatial Query Localization Method in Limited Reference Point Environment
Authors: Victor Krebss
Abstract:
The task of object localization is one of the major challenges in creating intelligent transportation. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces a large error, or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of the wireless ad-hoc network. Such a network allows estimating the distance between objects by measuring received signal levels, and constructing a graph of distances in which nodes are the objects to be localized and edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map, available in digital format, can provide localization routines with valuable additional information to narrow the node location search. However, despite an abundance of well-known algorithms for solving the localization problem and significant research efforts, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the distance graph onto digital road map data. In effect, the problem is reduced to embedding the distance graph into the graph representing the area's geolocation data, which makes it possible to localize objects in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines, and geometry functions.
Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.
94 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules
Authors: Tamanna Siddiqui, M. Afshar Alam
Abstract:
Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. A HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>.
Keywords: Knowledge discovery in database, quantification, dempster shafer theory, genetic programming, hierarchy, subsumption matrix.
93 Improved Fuzzy Neural Modeling for Underwater Vehicles
Authors: O. Hassanein, Sreenatha G. Anavatti, Tapabrata Ray
Abstract:
The dynamics of Autonomous Underwater Vehicles (AUVs) are highly nonlinear and time-varying, and the hydrodynamic coefficients of vehicles are difficult to estimate accurately because these coefficients vary with navigation conditions and external disturbances. This study presents on-line system identification of AUV dynamics to obtain the coupled nonlinear dynamic model of the AUV as a black box. This black box has an input-output relationship based upon an on-line adaptive fuzzy model and an adaptive neural fuzzy network (ANFN) model, to overcome uncertain external disturbances and the difficulties of modelling the hydrodynamic forces of AUVs, instead of using a mathematical model with hydrodynamic parameter estimation. The models' parameters are adapted according to the back propagation algorithm based upon the error between the identified model and the actual output of the plant. The proposed ANFN model adopts a functional link neural network (FLNN) as the consequent part of the fuzzy rules; thus, the consequent part of the ANFN model is a nonlinear combination of input variables. A fuzzy control system is applied to guide and control the AUV using both the adaptive models and the mathematical model. Simulation results show the superiority of the proposed adaptive neural fuzzy network (ANFN) model in tracking the behavior of the AUV accurately, even in the presence of noise and disturbance.
Keywords: AUV, AUV dynamic model, fuzzy control, fuzzy modelling, adaptive fuzzy control, back propagation, system identification, neural fuzzy model, FLNN.
92 Verification of On-Line Vehicle Collision Avoidance Warning System using DSRC
Authors: C. W. Hsu, C. N. Liang, L. Y. Ke, F. Y. Huang
Abstract:
Many accidents happen because of fast driving, habitual overtime work, or fatigue. This paper presents a remote warning solution for vehicle collision avoidance using vehicular communication. The developed system integrates dedicated short range communication (DSRC) and the global positioning system (GPS) with an embedded system into a powerful remote warning system. DSRC communication technology is adopted as the bridge to transmit the vehicular information and broadcast vehicle positions. The proposed system is divided into two parts: the positioning and vehicular units in a vehicle. The positioning unit provides the position and heading information from the GPS module, while the vehicular unit receives the brake, throttle, and other signals via a controller area network (CAN) interface connected to each mechanism. The mobile hardware is built as an embedded system using an x86 processor running Linux. A vehicle communicates with other vehicles via DSRC in a non-addressed protocol with the wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provides a conflict detection algorithm that performs time separation and remote warning with error-bubble consideration, and the warning information is displayed on-line on the screen. This system is able to enhance driver assistance services and realize critical safety by using vehicular information from neighboring vehicles.
Keywords: Dedicated short range communication, GPS, Control area network, Collision avoidance warning system.
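The time-separation test can be sketched as a closest-point-of-approach (CPA) computation under assumptions: each DSRC message carries a position (m, local frame) and velocity (m/s), and the "error bubble" inflates the protected radius to absorb GPS and timing uncertainty (all numeric values are illustrative).

```python
# Closest point of approach between two constant-velocity vehicles.
import numpy as np

def conflict(p1, v1, p2, v2, radius=5.0, error_bubble=3.0, horizon=10.0):
    dp, dv = np.asarray(p2, float) - p1, np.asarray(v2, float) - v1
    t_cpa = 0.0 if dv @ dv < 1e-9 else -float(dp @ dv) / float(dv @ dv)
    t_cpa = min(max(t_cpa, 0.0), horizon)            # only look ahead a few seconds
    miss = np.linalg.norm(dp + dv * t_cpa)           # separation at the CPA
    return miss < radius + error_bubble, t_cpa, miss

warn, t, d = conflict(p1=(0, 0), v1=(20, 0), p2=(120, 3), v2=(0, 0))
print(f"warn={warn} time-to-CPA={t:.1f}s miss-distance={d:.1f}m")
```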
91 Numerical Investigation of Developing Mixed Convection in Isothermal Circular and Annular Sector Ducts
Authors: Ayad A. Abdalla, Elhadi I. Elhadi, Hisham A. Elfergani
Abstract:
Developing mixed convection in circular and annular sector ducts is investigated numerically for steady laminar flow of an incompressible Newtonian fluid with Pr = 0.7 and a wide range of Grashof number (0 ≤ Gr ≤ 10^7). Investigation is limited to the case of heating in circular and annular sector ducts with an apex angle of 2ϕ = π/4 for the thermal boundary condition of uniform wall temperature axially and peripherally. A numerical, finite control volume approach based on the SIMPLER algorithm is employed to solve the 3D governing equations. Numerical analysis is conducted using a marching technique in the axial direction; axial conduction, axial mass diffusion, and viscous dissipation within the fluid are assumed negligible. The results include developing secondary flow patterns, developing temperature and axial velocity fields, local Nusselt number, local friction factor, and local apparent friction factor. Comparisons are made with the literature and satisfactory agreement is obtained. It is found that free convection enhances the local heat transfer in some cases by up to 2.5 times relative to predictions which account for forced convection only, and the enhancement increases as the Grashof number increases. Duct geometry and Grashof number strongly influence the heat transfer and pressure drop characteristics.
Keywords: Mixed convection, annular and circular sector ducts, heat transfer enhancement, pressure drop.
90 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis
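The spectral decomposition step these methods share can be sketched with scikit-learn's NMF on synthetic data: power spectra of several measurements are stacked row-wise and factored into non-negative source signatures. The unit-dependent mixing weights below deliberately mimic the non-uniform filtering issue raised above.

```python
# NMF separation of two synthetic oscillatory sources from mixed spectra.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
f = np.linspace(0, 0.5, 256)                         # normalized frequency axis
s1 = np.exp(-((f - 0.10) / 0.005) ** 2)              # oscillatory source at f = 0.10
s2 = np.exp(-((f - 0.23) / 0.005) ** 2)              # oscillatory source at f = 0.23

# Each measurement weights the sources differently (non-uniform frequency response).
mix = rng.uniform(0, 1, (8, 2))
spectra = mix @ np.vstack([s1, s2]) + 0.01 * rng.random((8, 256))

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(spectra)                     # unit-wise contribution of each source
H = model.components_                                # estimated source spectral signatures
print("estimated source peaks at f =", f[H.argmax(axis=1)])
```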
89 A Hybrid Fuzzy AGC in a Competitive Electricity Environment
Authors: H. Shayeghi, A. Jalili
Abstract:
This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. For a fuzzy rule based control system to perform well, the fuzzy sets must be carefully designed; a major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theories to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill climbing method. The motivation for using the modified GA is to reduce fuzzy system effort and take large parametric uncertainties into account. The global optimum value is guaranteed using the proposed method, and the speed of the algorithm's convergence is greatly improved, too. This newly developed control strategy combines the advantages of GAs and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi Stage Fuzzy (MSF) controller, robust mixed H2/H∞ and classical PID controllers through several performance indices to illustrate its robust performance for a wide range of system parameters and load changes.
Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.
88 FPGA Hardware Implementation and Evaluation of a Micro-Network Architecture for Multi-Core Systems
Authors: Yahia Salah, Med Lassaad Kaddachi, Rached Tourki
Abstract:
This paper presents the design, implementation, and evaluation of a micro-network, or Network-on-Chip (NoC), based on a generic pipeline router architecture. The router is designed to efficiently support traffic generated by multimedia applications on embedded multi-core systems. It employs a simple routing mechanism and implements the round-robin scheduling strategy to resolve output port contentions and minimize latency. Virtual channel flow control is applied to avoid the head-of-line blocking problem and enhance performance in the NoC. The hardware design of the router architecture has been implemented at the register transfer level; its functionality is evaluated in the case of the two-dimensional Mesh/Torus topology, and performance results are derived from the ModelSim simulator and the Xilinx ISE 9.2i synthesis tool. An example of a multi-core image processing system utilizing the NoC structure has been implemented and validated to demonstrate the capability of the proposed micro-network architecture. To reduce the complexity of the image compression and decompression architecture, the system uses an image processing algorithm based on the classical discrete cosine transform with an efficient zonal processing approach. The experimental results confirm that both the proposed image compression scheme and the NoC architecture achieve reasonable image quality with lower processing time.
Keywords: Generic Pipeline Network-on-Chip Router Architecture, JPEG Image Compression, FPGA Hardware Implementation, Performance Evaluation.
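The zonal DCT idea can be sketched in a few lines: transform each 8x8 block and keep only a low-frequency zone of coefficients. The block size and triangular zone below are illustrative assumptions, and SciPy stands in for the hardware implementation described above.

```python
# Zonal DCT compression: discard high-frequency coefficients per 8x8 block.
import numpy as np
from scipy.fft import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

img = np.random.default_rng(0).random((64, 64))      # stand-in image tile
zone = np.add.outer(range(8), range(8)) < 4          # triangular low-frequency zone
out = np.zeros_like(img)
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        coeffs = dct2(img[r:r+8, c:c+8]) * zone      # zonal processing: drop high bands
        out[r:r+8, c:c+8] = idct2(coeffs)
print("kept coefficients per block:", int(zone.sum()), "of 64")
```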
87 Fake Account Detection in Twitter Based on Minimum Weighted Feature set
Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users their social life, and even their practical life, has become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to various problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during life events. This situation can result in huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and the determined factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area, and this comparison has proved the accuracy of the proposed study. We claim that this study can be continuously applied on Twitter to automatically detect fake accounts; moreover, it can be applied on different social network sites, such as Facebook, with minor changes according to the nature of the social network, as discussed in this paper.
Keywords: Fake accounts detection, classification algorithms, Twitter accounts analysis, features based techniques.