Search results for: robust estimator
79. A Software Framework for Predicting Oil-Palm Yield from Climate Data
Authors: Mohd. Noor Md. Sap, A. Majid Awan
Abstract:
Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work draws inspiration from the notion that a non-linear transformation of data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, for effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield.
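A minimal sketch of the clustering step described above, using plain (unweighted, unconstrained) kernel k-means with an RBF kernel; the paper's weighted, spatially constrained variant is not reproduced here, and the function names, gamma value, and stand-in data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k, n_iter=50, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue  # empty cluster stays at infinite distance
            nc = mask.sum()
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new = dist.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels

X = np.random.default_rng(1).normal(size=(100, 4))  # stand-in climate features
print(kernel_kmeans(rbf_kernel(X, gamma=0.5), k=3)[:10])
```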
78. ECG Based Reliable User Identification Using Deep Learning
Authors: R. N. Begum, Ambalika Sharma, G. K. Singh
Abstract:
Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Therefore, automatic biometric recognition systems are the need of the hour, and electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the current state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using the densely connected CNN (DenseNet) framework. The proposed research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, trained on a dataset of recordings collected from eight popular ECG databases. With a highest False Acceptance Rate (FAR) of 0.04% and a highest False Rejection Rate (FRR) of 5%, the best performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient; thus, they might also be implemented in real-time ECG-based human recognition systems.
Keywords: Biometrics, dense networks, identification rate, train/test split ratio.
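For context, a small sketch of how the FAR and FRR figures quoted above are typically computed from matcher scores at a decision threshold; this is a generic illustration, not code from the study, and the score distributions are made-up stand-ins.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores accepted; FRR: fraction of genuine scores rejected."""
    far = np.mean(np.asarray(impostor_scores) >= threshold)
    frr = np.mean(np.asarray(genuine_scores) < threshold)
    return far, frr

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)   # hypothetical same-person scores
impostor = rng.normal(0.3, 0.1, 1000)  # hypothetical different-person scores
print(far_frr(genuine, impostor, threshold=0.6))
```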
77. Hybrid Collaborative-Context Based Recommendations for Civil Affairs Operations
Authors: Patrick Cummings, Laura Cassani, Deirdre Kelliher
Abstract:
In this paper we present findings from a research effort to apply a hybrid collaborative-context approach for a system focused on Marine Corps civil affairs data collection, aggregation, and analysis called the Marine Civil Information Management System (MARCIMS). The goal of this effort is to provide operators with information to make sense of the interconnectedness of entities and relationships in their area of operation and to discover existing data to support civil military operations. Our approach to building a recommendation engine was designed to overcome several technical challenges, including 1) ensuring models were robust to the relatively small amount of data collected by the Marine Corps civil affairs community; 2) finding methods to recommend novel data for which there are no interactions captured; and 3) overcoming confirmation bias by ensuring content was recommended that was relevant for the mission despite being obscure or less well known. We solve this by implementing a combination of collective matrix factorization (CMF) and graph-based random walks to provide recommendations to civil military operations users. We also present a method to resolve the challenge of computational complexity inherent in highly connected nodes through a precomputation process.
Keywords: Recommendation engine, collaborative filtering, context based recommendation, graph analysis, coverage, civil affairs operations, Marine Corps.
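A toy illustration of the graph-based half of such a hybrid recommender: random walk with restart over an item graph, scoring nodes by their stationary visit probability. This is a generic sketch with an invented adjacency matrix, not the MARCIMS implementation.

```python
import numpy as np

def random_walk_with_restart(A, seed_idx, restart=0.15, n_iter=100):
    # Column-normalize the adjacency matrix into a transition matrix.
    P = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)
    r = np.zeros(A.shape[0]); r[seed_idx] = 1.0   # restart distribution
    p = r.copy()
    for _ in range(n_iter):
        p = (1 - restart) * P @ p + restart * r
    return p  # higher score = more relevant to the seed node

# 5 hypothetical data items connected by shared entities/relationships
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(np.argsort(-random_walk_with_restart(A, seed_idx=0)))  # ranked recommendations
```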
76. Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns
Authors: Christian Arcos, Marley Vellasco, Abraham Alcaim
Abstract:
In this paper, we present a wavelet coefficients masking approach based on Local Binary Patterns (WLBP) to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed by binary labels through local binary pattern masking, which encodes the ratio between the original value of each coefficient and the values of the neighbouring coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation of an objective measure for estimating subjective speech quality. Informal listening tests also show that the proposed method improves the quality of speech in an acoustic context, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN-based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
Keywords: Binary labels, local binary patterns, mask, wavelet coefficients, speech enhancement, speech recognition.
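A rough sketch of the general idea, masking detail (high-frequency) wavelet coefficients with binary labels derived from each coefficient's ratio to its neighbours, using a single-level Haar transform on a 1-D signal; the paper's actual WLBP encoding and pyramidal decomposition are more elaborate, and the neighbour-mean rule and attenuation factor here are assumptions.

```python
import numpy as np

def haar_dwt(x):
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def binary_mask(detail, radius=2, attenuate=0.2):
    # Label 1 (keep) if a coefficient is at least the mean of its neighbours,
    # instead of hard-thresholding the high-frequency coefficients away.
    mask = np.full(len(detail), attenuate)
    mag = np.abs(detail)
    for i in range(len(detail)):
        lo, hi = max(0, i - radius), min(len(detail), i + radius + 1)
        neigh = np.concatenate([mag[lo:i], mag[i + 1:hi]])
        if mag[i] >= neigh.mean():
            mask[i] = 1.0
    return detail * mask

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 20, 512)) + 0.3 * rng.normal(size=512)
a, d = haar_dwt(noisy)
print(np.round(haar_idwt(a, binary_mask(d))[:8], 3))  # enhanced signal, first samples
```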
75. Multiple Subcarrier Indoor Geolocation System in MIMO-OFDM WLAN APs Structure
Authors: Abdul Hafiizh, Shigeki Obote, Kenichi Kagoshima
Abstract:
This report aims to utilize the characteristics of existing and future Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing Wireless Local Area Network (MIMO-OFDM WLAN) systems, such as multiple subcarriers, multiple antennas, and channel estimation, for indoor location estimation systems based on the Direction of Arrival (DOA) and Received Signal Strength Indication (RSSI) methods. A hybrid of the DOA and RSSI methods is also evaluated. The experimental results show that location estimation accuracy can be increased by minimizing the multipath fading effect. This is done using multiple subcarrier frequencies over wideband frequencies to estimate one location. The proposed methods are analyzed in both a wide indoor environment and a typical room-sized office. In the experiments, WLAN terminal locations are estimated by measuring multiple subcarriers from arrays of three dipole antennas of access points (APs). This research demonstrates highly accurate, robust and hardware-free add-on software for indoor location estimation based on a MIMO-OFDM WLAN system.
Keywords: Direction of Arrival (DOA), Indoor location estimation method, Multipath Fading, MIMO-OFDM, Received Signal Strength Indication (RSSI), WLAN, Hybrid DOA-RSSI
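An illustrative sketch of the RSSI half of such a system: converting received signal strength to range with a log-distance path-loss model, then trilaterating the terminal position by linearized least squares. The path-loss parameters and AP coordinates are invented for the example.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    # Log-distance model: RSSI = P_tx - 10 * n * log10(d)
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    # Linearize by subtracting the last anchor's circle equation from the others.
    x0, y0 = anchors[-1]; d0 = distances[-1]
    A, b = [], []
    for (x, y), d in zip(anchors[:-1], distances[:-1]):
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(d0**2 - d**2 + x**2 - x0**2 + y**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical AP positions (m)
rssi = [-62.0, -55.0, -68.0]                  # hypothetical per-AP RSSI (dBm)
print(trilaterate(aps, [rssi_to_distance(r) for r in rssi]))  # (x, y) estimate
```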
74. Simulation and Analysis of Control System for a Solar Desalination System
Authors: R. Prakash, B. Meenakshipriya, R. Kumaravelan
Abstract:
Fresh water is one of the resources being depleted day by day. A wise method to address this issue is the application of renewable energy (solar irradiation): by means of decentralized, cheap, energetically self-sufficient, robust and simple-to-operate plants, distillate can be obtained from sea, river or even sewage water. Solar desalination is a technique used to desalinate water using solar energy. The present work deals with the comprehensive design and simulation of a solar tracking system using LabVIEW, temperature and mass flow rate control of the solar desalination plant using LabVIEW, and analysis of a single-phase inverter circuit with LC filters for the solar pumping system in MATLAB. The main objective of this work is to improve the performance of the solar desalination system using an automatic tracking system, output control using temperature and mass flow rate control, and to reduce the harmonic distortion in the solar pumping system by means of LC filters. The simulation of the single-phase inverter was carried out using MATLAB and the output waveforms were analyzed. Simulations were performed for optimum output temperature control, which in turn controls the mass flow rate of water in the thermal collectors. The solar tracking system was implemented using LabVIEW and was tested successfully. The thermal collectors are tracked in accordance with the sun's irradiance levels, thereby increasing the efficiency of the thermal collectors.
Keywords: Desalination, electrodialysis, LabVIEW, MATLAB, PWM inverter, reverse osmosis.
73. Using Environmental Sensitivity Index (ESI) to Assess and Manage Environmental Risks of Pipelines in GIS Environment: A Case Study of a Near Coastline and Fragile Ecosystem Located Pipeline
Authors: Jahangir Jafari, Nematollah Khorasani, Afshin Danehkar
Abstract:
With a great number of pipelines all over the country, Iran is one of the countries comprising various ecosystems with variable degrees of fragility and robustness as well as diverse geographical conditions. This study presents a state-of-the-art method to estimate the environmental risks of pipelines by recommending rational equations, including FES, URAS, SRS, RRS, DRS, LURS and IRS as well as FRS, to calculate the risks. The study was carried out using a relative semi-quantitative approach based on land uses and HVAs (High-Value Areas). GIS was used as a tool to create the maps of environmental risks, land uses and distances. The main logic behind the formulas was distance-based approaches and the ESI as well as intersections. Summarizing the results of the study, a geographical risk map based on the ESIs and the final risk score (FRS) was created. The results showed that the most sensitive, and thus highest-risk, area is an area of mangrove forests in the neighborhood of the pipeline, while salty lands were the most robust land-use units in the event of pipeline failure. Moreover, the study showed that mapping pipeline risks with the applied method is more reliable, convenient and comprehensive than present non-holistic methods for assessing the environmental risks of pipelines. The focus of the present study is "assessment" rather than "management"; it is suggested that new policies be implemented to reduce the negative effects of the pipeline, which has not yet been completely constructed.
Keywords: ERM, ESI, ERA, Pipeline, Assalouyeh.
72. Modern Vibration Signal Processing Techniques for Vehicle Gearbox Fault Diagnosis
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
This paper presents modern vibration signal processing techniques for vehicle gearbox fault diagnosis via wavelet analysis and the Squared Envelope (SE) technique. Wavelet analysis is regarded as a powerful tool for the detection of sudden changes in non-stationary signals, while the Squared Envelope (SE) technique has been extensively used for rolling bearing diagnostics. In the present work, a scheme using the Squared Envelope technique for early detection of gear tooth pitting is presented. The pitting defect is manufactured on the tooth side of a fifth-speed gear on the intermediate shaft of a vehicle gearbox. The objective is to supplement the current techniques of gearbox fault diagnosis based on the raw vibration and ordered signals. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The gearbox used for the experimental measurements is the type most commonly used in modern small to mid-sized passenger cars with transversely mounted powertrain and front wheel drive: a five-speed gearbox with final drive gear and front wheel differential. The results show that the proposed methods are effective for detecting and diagnosing localized gear faults at an early stage under different operating conditions, and are more sensitive and robust than current gear diagnostic techniques.
Keywords: Wavelet analysis, Squared Envelope, gear faults.
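A brief sketch of the Squared Envelope computation mentioned above, via the analytic signal (Hilbert transform) followed by an envelope spectrum; the signal here is synthetic, and the band-pass filtering step a real diagnostic chain would apply first is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    env2 = np.abs(hilbert(x)) ** 2          # squared envelope
    env2 -= env2.mean()                     # drop the DC component
    spec = np.abs(np.fft.rfft(env2)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, spec

fs = 10_000
t = np.arange(0, 1, 1 / fs)
# Synthetic fault signature: 3 kHz carrier amplitude-modulated at 120 Hz
x = (1 + 0.5 * np.sin(2 * np.pi * 120 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = squared_envelope_spectrum(x, fs)
print(f"dominant envelope frequency: {freqs[spec.argmax()]:.0f} Hz")  # ~120 Hz
```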
71. An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes, transmit diversity (TD) and spatial multiplexing (SM), using the fuzzy logic technique. In this method, two channel quality indicators (CQIs), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision (an inference). The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: Channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity.
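A toy sketch of fuzzy-logic scheme switching of this kind: triangular membership functions over the two CQIs feed a tiny rule base whose output selects TD or SM. The membership breakpoints and rules are invented for illustration, not the paper's tuned design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mimo_scheme(rsnr_db, rssi_dbm):
    # Fuzzify both channel quality indicators (breakpoints are assumptions).
    snr_high = tri(rsnr_db, 10, 25, 40)
    snr_low = tri(rsnr_db, -5, 5, 15)
    rssi_high = tri(rssi_dbm, -70, -50, -30)
    rssi_low = tri(rssi_dbm, -100, -85, -65)
    # Rule base: good channel -> spatial multiplexing; poor -> transmit diversity.
    sm_strength = min(snr_high, rssi_high)
    td_strength = max(snr_low, rssi_low)
    return "SM" if sm_strength > td_strength else "TD"

print(mimo_scheme(rsnr_db=30, rssi_dbm=-45))  # SM on a strong channel
print(mimo_scheme(rsnr_db=8, rssi_dbm=-90))   # TD on a weak channel
```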
70. Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset containing generated fake images was recorded, again noting the accuracy and the mean average error rate. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
69. Does Material Choice Drive Sustainability of 3D Printing?
Authors: Jeremy Faludi, Zhongyin Hu, Shahd Alrashed, Christopher Braunholz, Suneesh Kaul, Leulekal Kassaye
Abstract:
Environmental impacts of six 3D printers using various materials were compared to determine whether material choice drove sustainability, or whether other factors such as machine type, machine size, or machine utilization dominate. Cradle-to-grave life-cycle assessments were performed, comparing a commercial-scale FDM machine printing in ABS plastic, a desktop FDM machine printing in ABS, a desktop FDM machine printing in PET and PLA plastics, a polyjet machine printing in its proprietary polymer, an SLA machine printing in its polymer, and an inkjet machine hacked to print in salt and dextrose. All scenarios were scored using the ReCiPe Endpoint H methodology to combine multiple impact categories, comparing environmental impacts per part made for several scenarios per machine. Results showed that most printers' ecological impacts were dominated by electricity use, not materials, and the changes in electricity use due to different plastics were not significant compared to variation from one machine to another. Variation in machine idle time determined impacts per part most strongly. However, material impacts were quite important for the inkjet printer hacked to print in salt: in its optimal scenario, it had as little as 1/38th the impact per part of the worst-performing machine in the same scenario. If salt parts were infused with epoxy to make them more physically robust, then much of this advantage disappeared, and material impacts actually dominated or equaled electricity use. Future studies should also measure DMLS and SLS processes and materials.
Keywords: 3D printing, Additive Manufacturing, Sustainability, Life-cycle assessment, Design for Environment.
68. Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model
Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady
Abstract:
The successful realization of complex systems depends not only on the technology issues and the process for implementing them, but on the management issues as well. Managing the systems development lifecycle requires technical management; systems engineering management is that technical management. Systems engineering management is accomplished by incorporating many activities, of which the three major ones are development phasing, the systems engineering process and lifecycle integration. Systems engineering management activities are performed across the system development lifecycle. Due to the ever-increasing complexity of systems as well as the difficulty of managing and tracking the development activities, new ways to achieve systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) Model has been modified and integrated with Quality Function Deployment (QFD). Hereinafter, the systematic approach is named the Transdisciplinary Quality System Development Lifecycle (TQSDL) Model. The QFD translates the voice of customers (VOC) into measurable technical characteristics. The modified TSDL model is based on Axiomatic Design developed by Suh, which is applicable to all designs: products, processes, systems and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that the customer requirements are fulfilled and satisfies all the systems engineering manager's roles and activities.
Keywords: Axiomatic design, quality function deployment, systems engineering management, system development lifecycle.
67. Study of Integrated Vehicle Image System Including LDW, FCW, and AFS
Authors: Yi-Feng Su, Chia-Tseng Chen, Hsueh-Lung Liao
Abstract:
The objective of this research is to develop an advanced driver assistance system characterized by the functions of lane departure warning (LDW), forward collision warning (FCW) and an adaptive front-lighting system (AFS). The system mainly consists of a CCD/CMOS camera that acquires the images of the roadway ahead, in association with the analysis made by an image-processing unit concerning the lane ahead and the preceding vehicles. The input image captured by the camera is used to recognize the lane and the preceding vehicle positions by image detection and DROI (Dynamic Range of Interesting) algorithms. The system is therefore able to issue real-time auditory and visual warnings when a driver is departing the lane or unwittingly driving too close to the preceding vehicle, so that danger can be prevented. During the nighttime, in addition to the foregoing warning functions, the system is able to control the bending light of the headlamp to provide immediate illumination when making a turn on a curved lane, and to adjust the level automatically to reduce the lighting interference against oncoming vehicles driving in the opposite direction, using the lane curvature and vanishing point estimations. The experimental results show that the integrated vehicle image system is robust in most environments: the lane detection and preceding vehicle detection average accuracies are both above 90%.
Keywords: Lane mark detection, lane departure warning (LDW), dynamic range of interesting (DROI), forward collision warning (FCW), adaptive front-lighting system (AFS).
66. NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: first, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB and G_BA and two discriminator networks D_A and D_B, connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images A-hat and B-hat, which were registered to B-tilde and A-tilde, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.
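Since the evaluation above is reported in Dice scores, here is a minimal sketch of that metric for binary segmentation masks; the arrays are invented stand-ins for warped and reference liver masks.

```python
import numpy as np

def dice_score(mask_a, mask_b, eps=1e-8):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for boolean masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

ref = np.zeros((64, 64), bool); ref[16:48, 16:48] = True       # reference organ mask
warped = np.zeros((64, 64), bool); warped[18:50, 14:46] = True  # registered mask
print(f"Dice: {dice_score(ref, warped):.3f}")
```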
65. Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions lead to loss of life due to the severity of high congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease classification. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. The simulation outcomes on a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
64. Degradation of Heating, Ventilation, and Air Conditioning Components across Locations
Authors: Timothy E. Frank, Josh R. Aldred, Sophie B. Boulware, Michelle K. Cabonce, Justin H. White
Abstract:
Materials degrade at different rates in different environments depending on factors such as temperature, aridity, salinity, and solar radiation. Therefore, predicting asset longevity depends, in part, on the environmental conditions to which the asset is exposed. Heating, ventilation, and air conditioning (HVAC) systems are critical to building operations yet are responsible for a significant proportion of their energy consumption. HVAC energy use increases substantially with slight operational inefficiencies. Understanding the environmental influences on HVAC degradation in detail will inform maintenance schedules and capital investment, reduce energy use, and increase lifecycle management efficiency. HVAC inspection records spanning 14 years from 21 locations across the United States were compiled and associated with the climate conditions to which they were exposed. Three environmental features were explored in this study: average high temperature, average low temperature, and annual precipitation, as well as four non-environmental features. Initial insights showed no correlations between individual features and the rate of HVAC component degradation. Using neighborhood component analysis, however, the most critical features related to degradation were identified. Two models were considered, and results varied between them. However, longitude and latitude emerged as potentially the best predictors of average HVAC component degradation. Further research is needed to evaluate additional environmental features, increase the resolution of the environmental data, and develop more robust models to achieve more conclusive results.
Keywords: Climate, infrastructure degradation, HVAC, neighborhood component analysis.
63. CFD-Parametric Study in Stator Heat Transfer of an Axial Flux Permanent Magnet Machine
Authors: Alireza Rasekh, Peter Sergeant, Jan Vierendeels
Abstract:
This paper deals with the numerical simulation of convective heat transfer in the stator disk of an axial flux permanent magnet (AFPM) electrical machine. Overheating is one of the main issues in the design of AFPMs, occurring mainly in the stator disk, so it needs to be prevented. A rotor-stator configuration with 16 magnets at the periphery of the rotor is considered. Air is allowed to flow through openings in the rotor disk and through channels formed between the magnets and in the gap region between the magnets and the stator surface. The rotating channels between the magnets act as a driving force for the air flow. The significant non-dimensional parameters are the rotational Reynolds number, the gap size ratio, the magnet thickness ratio, and the magnet angle ratio. The goal is to find correlations for the Nusselt number on the stator disk in terms of these non-dimensional numbers. Therefore, CFD simulations have been performed with the multiple reference frame (MRF) technique to model the rotary motion of the rotor and the flow around and inside the machine. A minimization method is introduced via a pattern-search algorithm to find the appropriate values of the reference temperature. It is found that the correlations are fast, robust and capable of predicting the stator heat transfer with good accuracy. The results reveal that the magnet angle ratio diminishes the stator heat transfer, whereas the rotational Reynolds number and the magnet thickness ratio improve the convective heat transfer. On the other hand, there is a certain gap size ratio at which the stator heat transfer reaches a maximum.
Keywords: Axial flux permanent magnet, CFD, magnet parameters, stator heat transfer.
62. Q-Map: Clinical Concept Mining from Clinical Documents
Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala
Abstract:
Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.
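A highly simplified sketch of indexed string matching for concept retrieval of the sort described: a token index built over a small curated vocabulary, scanned against free-text clinical notes. The vocabulary, note, and matching rule are invented; the real Q-Map system and UMLS are far richer.

```python
import re
from collections import defaultdict

# Hypothetical curated knowledge source: surface form -> concept ID
VOCAB = {
    "myocardial infarction": "C0027051",
    "diabetes mellitus": "C0011849",
    "hypertension": "C0020538",
}

# Index concept phrases by their first token for fast candidate lookup.
INDEX = defaultdict(list)
for phrase, cid in VOCAB.items():
    INDEX[phrase.split()[0]].append((phrase.split(), cid))

def extract_concepts(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        for phrase_toks, cid in INDEX.get(tok, []):
            if tokens[i:i + len(phrase_toks)] == phrase_toks:
                hits.append((" ".join(phrase_toks), cid))
    return hits

note = "Pt with h/o diabetes mellitus and hypertension, r/o myocardial infarction."
print(extract_concepts(note))
```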
61. Personalized Applications for Advanced Healthcare through AI-ML and Blockchain
Authors: Anuja Vyas, Aikel Indurkhya, Hari Krishna Garg
Abstract:
Nearly 25 years have passed since the landmark publication of the Human Genome Project, yet scientists have only begun to scratch the surface of its potential benefits. To bridge this gap, a personalized genomic application has been envisioned as a transformative tool accessible to people worldwide. This innovative solution proposes an integrated framework combining blockchain technology, genome-specific applications, and data compression techniques, ensuring that operations are swift, secure, transparent, and space-efficient. The software harnesses advanced Artificial Intelligence and Machine Learning methodologies, such as neural networks, evaluation matrices, fuzzy logic, and expert systems, to analyze individual genomic data. It generates personalized reports by comparing a user's genome with a reference genome, highlighting significant differences. Blockchain technology, with its inherent security, encryption, and immutability features, is leveraged for robust data transport and storage. In addition, a 'Data Abbreviation' technique ensures that genetic data and reports occupy minimal space. This integrated approach promises to be a significant leap forward, potentially transforming human health and well-being on a global scale.
Keywords: Artificial intelligence in genomics, blockchain technology, data abbreviation, data compression, data security in genomics, data storage, expert systems, fuzzy logic, genome applications, genomic data analysis, human genome project, neural networks, personalized genomics.
60. A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
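A drastically simplified sketch in the same spirit: a small integer program that allocates limited search effort across cells to maximize the total probability of detection, solved with the open-source PuLP/CBC stack rather than CPLEX. It omits the paper's path, movement, and feedback modeling; all numbers are invented.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

cells = range(6)
p_detect = [0.30, 0.12, 0.25, 0.05, 0.40, 0.18]  # hypothetical per-cell detection prob.
effort = [2, 1, 2, 1, 3, 1]                      # hypothetical time cost to search a cell
budget = 5                                       # total agent time available

prob = LpProblem("sar_effort_allocation", LpMaximize)
x = [LpVariable(f"search_cell_{c}", cat="Binary") for c in cells]
prob += lpSum(p_detect[c] * x[c] for c in cells)          # expected detections
prob += lpSum(effort[c] * x[c] for c in cells) <= budget  # time budget

prob.solve()
chosen = [c for c in cells if value(x[c]) > 0.5]
print("search cells:", chosen, "expected detections:", value(prob.objective))
```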
59. Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can be easily copied and distributed in an illegal way, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and to maintain data quality. In this paper, we discuss two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO), and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately for the embedding process in the cover image transformation. Each of PSO and GA uses the correlation coefficient to detect the high-energy coefficient watermark bit in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. According to the results, we note that a small block size can affect the quality of image watermarking based on PSO/GA because a small block size increases the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.
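For reference, a short sketch of the MSE and PSNR figures of merit quoted above, computed between an original and a watermarked image; the images here are random stand-ins.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (256, 256)).astype(np.uint8)
# Simulate a faint embedding perturbation of at most one gray level
watermarked = np.clip(original.astype(int) + rng.integers(-1, 2, original.shape), 0, 255)
print(f"MSE: {mse(original, watermarked):.4f}, PSNR: {psnr(original, watermarked):.1f} dB")
```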
58. Accurate Control of a Pneumatic System using an Innovative Fuzzy Gain-Scheduling Pattern
Authors: M. G. Papoutsidakis, G. Chamilothoris, F. Dailami, N. Larsen, A Pipe
Abstract:
Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has been known as a complex control problem. A methodology for obtaining high position accuracy with a linear pneumatic actuator is presented. During experimentation with a number of classical PID control approaches over many operations of the pneumatic system, the need for frequent manual re-tuning of the controller could not be eliminated. The reason for this problem is thermal and energy losses inside the cylinder body due to the complex friction forces developed by the piston displacements. Although PD controllers performed very well over short periods, it was necessary in our research project to introduce some form of automatic gain-scheduling to achieve good long-term performance. We chose a fuzzy logic system to do this, which proved to be an easily designed and robust approach. Since the PD approach showed very good behaviour in terms of position accuracy and settling time, it was incorporated into a modified form of the first-order Takagi-Sugeno fuzzy method to build an overall controller. This fuzzy gain-scheduler uses an input variable which automatically changes the PD gain values of the controller according to the frequency of repeated system operations. The performance of the new controller was significantly improved, and the need for manual re-tuning was eliminated without a decrease in performance. The performance of the controller operating with the above method is going to be tested through a high-speed web network (GRID) for research purposes.
Keywords: Fuzzy logic, gain scheduling, leaky integrator, pneumatic actuator.
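A bare-bones sketch of the idea of scheduling PD gains from an operating-condition variable: a scalar scheduling signal (standing in for the frequency of repeated operations) scales nominal PD gains before the control law is applied to a toy plant. The plant, gains, and scheduling curve are all invented; the paper's Takagi-Sugeno scheduler is fuzzy, not this simple linear blend.

```python
def schedule_gains(ops_frequency, kp0=8.0, kd0=4.0):
    # Blend factor rises with repeated-operation frequency (assumed heuristic),
    # compensating for the friction/thermal drift described in the abstract.
    alpha = min(ops_frequency / 100.0, 1.0)
    return kp0 * (1 + 0.5 * alpha), kd0 * (1 + 0.8 * alpha)

def pd_step(setpoint, pos, prev_err, dt, kp, kd):
    err = setpoint - pos
    return kp * err + kd * (err - prev_err) / dt, err

# Toy double-integrator "cylinder": acceleration proportional to control signal.
pos, vel, prev_err, dt = 0.0, 0.0, 0.0, 0.01
kp, kd = schedule_gains(ops_frequency=40)
for _ in range(500):
    u, prev_err = pd_step(1.0, pos, prev_err, dt, kp, kd)
    vel += u * dt
    pos += vel * dt
print(f"final position: {pos:.3f}")  # should settle near the 1.0 setpoint
```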
57. Signing the First Packet in Amortization Scheme for Multicast Stream Authentication
Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa
Abstract:
Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of amortization schemes is their hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks; other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.
Keywords: Multicast stream authentication, hash chain construction, signature amortization, authentication probability.
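A compact sketch of the generic amortization idea: hash each packet, fold the hashes into a single digest carried by the first packet of the block, and sign only that packet (an HMAC stands in for a real digital signature). The chain here is a plain linear fold, not the multiple connected chain (MCF) construction proposed in the paper.

```python
import hashlib, hmac

SIGNING_KEY = b"demo-key"  # stand-in for the sender's private key

def sign_block(packets):
    hashes = [hashlib.sha256(p).digest() for p in packets]
    block_digest = hashlib.sha256(b"".join(hashes)).digest()
    signature = hmac.new(SIGNING_KEY, block_digest, hashlib.sha256).digest()
    # First packet carries the signature plus the per-packet hashes.
    return [signature + b"".join(hashes)] + list(packets)

def verify_block(stream):
    header, packets = stream[0], stream[1:]
    signature, hash_blob = header[:32], header[32:]
    hashes = [hash_blob[i:i + 32] for i in range(0, len(hash_blob), 32)]
    block_digest = hashlib.sha256(b"".join(hashes)).digest()
    ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, block_digest, hashlib.sha256).digest())
    return ok and all(hashlib.sha256(p).digest() == h for p, h in zip(packets, hashes))

stream = sign_block([b"pkt-1", b"pkt-2", b"pkt-3"])
print(verify_block(stream))  # True
```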
56. Numerical Investigation of Nozzle Shape Effect on Shock Wave in Natural Gas Processing
Authors: Esam I. Jassim, Mohamed M. Awad
Abstract:
Natural gas flow contains undesirable solid particles, liquid condensation, and/or oil droplets and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compactness and reliability of process equipment. Since conventional means are sophisticated in design, poor in efficiency, and lack robustness, a supersonic nozzle has been introduced as an alternative means to meet such demands. A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to illustrate the position of the shock wave, as this spot can be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal, and hexagonal nozzles, all with the same cross-sectional area, are simulated using the Fluent code. The simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of the nozzle configuration on the flow properties. The CFD simulation results, however, show that the nozzle geometry influences the flow structures, including the location of the shock wave. The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries, located at the lower area ratio (Ae/At). Simulation results show that the shock wave in the elliptical nozzle has the farthest distance from the throat among the shapes at relatively small NPR; as NPR increases, the hexagonal nozzle has the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.
Keywords: CFD, particle separation, shock wave, supersonic nozzle.
55. Low Temperature Biological Treatment of Chemical Oxygen Demand for Agricultural Water Reuse Application Using Robust Biocatalysts
Authors: Vedansh Gupta, Allyson Lutz, Ameen Razavi, Fatemeh Shirazi
Abstract:
The agriculture industry is especially vulnerable to forecasted water shortages. In the fresh and fresh-cut produce sector, conventional flume-based washing with recirculation exhibits high water demand. This leads to a large water footprint and possible cross-contamination of pathogens. These can be alleviated through advanced water reuse processes, such as membrane technologies including reverse osmosis (RO). Water reuse technologies effectively remove dissolved constituents but can easily foul without pre-treatment. Biological treatment is effective for the removal of organic compounds responsible for fouling, but not at the low temperatures encountered at most produce processing facilities. This study showed that the Microvi MicroNiche Engineering (MNE) technology effectively removes organic compounds (> 80%) at low temperatures (6-8 °C) from wash water. The MNE technology uses synthetic microorganism-material composites with negligible solids production, making it advantageously situated as an effective bio-pretreatment for RO. A preliminary technoeconomic analysis showed 60-80% savings in operation and maintenance costs (OPEX) when using the Microvi MNE technology for organics removal. This study and the accompanying economic analysis indicated that the proposed technology process will substantially reduce the cost barrier for adopting water reuse practices, thereby contributing to increased food safety and furthering sustainable water reuse processes across the agricultural industry.
Keywords: Biological pre-treatment, innovative technology, vegetable processing, water reuse, agriculture, reverse osmosis, MNE biocatalysts.
54. Efficient Compact Micro DBD Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems
Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. B. Zimmerman
Abstract:
Ozone is well known as a powerful, fast-reacting oxidant. Ozone-based processes produce no residual by-product, as non-reacted ozone decomposes to molecular oxygen. Therefore, the application of ozone is widely accepted as one of the main approaches for the development of sustainable and clean technologies.
There are a number of technologies which require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of a mini/micro scale ozone generator which can be directly incorporated into production units.
Our report presents a feasibility study of a new micro scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented.
At the selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by the reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28×10⁻⁶ mol/L at 5 kV. This is in line with the data from the numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure.
The MROG construction makes it applicable for both submerged and dry systems. With its robust, compact design, the MROG can be used as an integrated module for production lines of high complexity.
Keywords: DBD, micro reactor, ozone, plasma.
53. Security Analysis of Password Hardened Multimodal Biometric Fuzzy Vault
Authors: V. S. Meenakshi, G. Padmavathi
Abstract:
Biometric techniques are gaining importance for personal authentication and identification compared to traditional authentication methods. Biometric templates are vulnerable to a variety of attacks due to their inherent nature; when a person's biometric is compromised, his identity is lost. In contrast to a password, a biometric is not revocable. Therefore, providing security to the stored biometric template is crucial. Crypto-biometric systems are authentication systems which blend the ideas of cryptography and biometrics. The fuzzy vault is a proven crypto-biometric construct used to secure biometric templates. However, the fuzzy vault suffers from certain limitations, such as non-revocability and cross-matching, and its security is affected by the non-uniform nature of biometric data. A fuzzy vault hardened with a password overcomes these limitations: the password provides an additional layer of security and enhances user privacy. Retina has certain advantages over other biometric traits; retinal scans are used in high-end security applications like access control to areas or rooms in military installations, power plants, and other high-risk security areas. This work applies the idea of the fuzzy vault to the retinal biometric template. Multimodal biometric systems perform well compared to single-modal biometric systems. The proposed multimodal biometric fuzzy vault includes combined feature points from retina and fingerprint. The combined vault is hardened with a user password to achieve a high level of security, and the security of the combined vault is measured using min-entropy. The proposed password-hardened multi-biometric fuzzy vault is robust against stored biometric template attacks.
Keywords: Biometric template security, crypto-biometric systems, hardening fuzzy vault, min-entropy.
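The security figure mentioned above, min-entropy, has a one-line definition: H_min = -log2(max_i p_i), the bits of security against an attacker's single best guess. A tiny sketch, with an invented guess distribution:

```python
import math

def min_entropy(probabilities):
    """H_min = -log2(p_max): bits of security against an optimal single guess."""
    return -math.log2(max(probabilities))

# Hypothetical success probabilities of an attacker's candidate vault decodings
guesses = [0.002, 0.0005, 0.00125, 0.0001]
print(f"min-entropy: {min_entropy(guesses):.1f} bits")  # -log2(0.002) ~ 9.0
```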
52. Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization
Authors: V. H. Mankar, T. S. Das, S. K. Sarkar
Abstract:
In this paper, we propose a novel blind watermarking architecture towards its hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. The CA has already been accepted as an attractive structure for VLSI implementation because of its modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number in spite of their resiliency against signal impairments, because of the computational cost and complexity associated with their different filter banks and lifting techniques. The concept of cellular automata theory has therefore been incorporated to form a new transform-domain technique, the Cellular Automata Transform (CAT). Since CA provide spreading sequences having very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual blocks of the algorithm using CA provide better results than some other methods, irrespective of hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the allocated bandwidth in the data transmission channel in a more efficient manner.
Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.
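As a concrete illustration of a CA-based pseudorandom sequence generator of the kind the watermarking scheme relies on, here is an elementary Rule 30 cellular automaton emitting one bit per generation from its center cell; the register width and single-cell seeding are arbitrary choices for the sketch.

```python
def rule30_pn_sequence(n_bits, width=64):
    cells = [0] * width
    cells[width // 2] = 1  # single-seed start
    out = []
    for _ in range(n_bits):
        out.append(cells[width // 2])  # tap the center cell each generation
        # Rule 30: new = left XOR (center OR right), with wrap-around edges
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return out

bits = rule30_pn_sequence(64)
print("".join(map(str, bits)))
print("ones fraction:", sum(bits) / len(bits))  # should hover near 0.5
```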
51. Uncertainty Multiple Criteria Decision Making Analysis for Stealth Combat Aircraft Selection
Authors: C. Ardil
Abstract:
Fuzzy set theory and its extensions (intuitionistic fuzzy sets, picture fuzzy sets, and neutrosophic sets) have been widely used to address imprecision and uncertainty in complex decision-making. However, they may struggle with inherent indeterminacy and inconsistency in real-world situations. This study introduces uncertainty sets as a promising alternative, offering a structured framework for incorporating both types of uncertainty into decision-making processes. This work explores the theoretical foundations and applications of uncertainty sets. A novel decision-making algorithm based on uncertainty set-based proximity measures is developed and demonstrated through a practical application: selecting the most suitable stealth combat aircraft.
The results highlight the effectiveness of uncertainty sets in ranking alternatives under uncertainty. Uncertainty sets offer several advantages, including structured uncertainty representation, robust ranking mechanisms, and enhanced decision-making capabilities due to their ability to account for ambiguity. Future research directions are also outlined, including comparative analysis with existing MCDM methods under uncertainty, sensitivity analysis to assess the robustness of rankings, and broader application to various MCDM problems with diverse complexities. By exploring these avenues, uncertainty sets can be further established as a valuable tool for navigating uncertainty in complex decision-making scenarios.
Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty proximity analysis.
50. Iris Recognition Based On the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biological feature of the human body, and it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; these variations mostly occur due to lighting differences and camera changes. At first, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it depends on statistical analysis of the image to mark the eyelash and eyelid points as noise. In order to cover feature localization variations, the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of different average directional gradient density values is calculated to be used as a texture feature vector. The applied gradient operators are taken along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as a matching metric to determine the degree of similarity between the feature vector extracted from a tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.
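A rough sketch of the feature extraction described above: partition a normalized iris strip into overlapping blocks and, per block, average a low-order norm of the directional gradient components into a feature vector matched by Euclidean distance. The block size, overlap, and norm order are illustrative guesses, and the random strips stand in for real normalized iris images.

```python
import numpy as np

def block_gradient_features(strip, block=(20, 15), step=(10, 10)):
    gy, gx = np.gradient(strip.astype(np.float64))
    gd = (gx + gy) / np.sqrt(2)          # diagonal gradient component
    feats = []
    h, w = strip.shape
    for r in range(0, h - block[0] + 1, step[0]):
        for c in range(0, w - block[1] + 1, step[1]):
            sl = (slice(r, r + block[0]), slice(c, c + block[1]))
            # First-order norm (mean absolute value) per direction
            feats += [np.abs(g[sl]).mean() for g in (gx, gy, gd)]
    return np.array(feats)

def match(f1, f2):
    return np.linalg.norm(f1 - f2)  # smaller = more similar

rng = np.random.default_rng(0)
iris_a = rng.integers(0, 256, (60, 360)).astype(np.uint8)  # normalized 360x60 strip
iris_b = np.clip(iris_a.astype(int) + rng.integers(-5, 6, iris_a.shape), 0, 255)
iris_c = rng.integers(0, 256, (60, 360))                   # unrelated iris
fa = block_gradient_features(iris_a)
print("near-duplicate:", round(match(fa, block_gradient_features(iris_b)), 2))
print("different iris:", round(match(fa, block_gradient_features(iris_c)), 2))
```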