Search results for: fast Fourier algorithms
3659 Radiation Annealing of Radiation Embrittlement of the Reactor Pressure Vessel
Authors: E. A. Krasikov
Abstract:
The influence of neutron irradiation on RPV steel degradation is examined with reference to the possible reasons for the substantial scatter in the experimental data and, furthermore, for the nonstandard (non-monotonic) and oscillatory embrittlement behavior. In our view, this phenomenon may be explained by the presence of a wavelike component in the embrittlement kinetics. We suppose that the main factor affecting anomalous steel embrittlement is the fast neutron intensity (dose rate or flux), and that the manifestation of the flux effect depends on the attained fluence level: at low fluences radiation degradation exceeds the normative value, then approaches it, and finally becomes sub-normative. Data on the evolution of radiation damage, including data from ex-service RPVs, were obtained and analyzed taking into account the chemical factor, fast neutron fluence and neutron flux. This interpretation also explains the controversy in estimates of the impact of neutron flux on radiation degradation. Moreover, as a hypothesis, we suppose that at some stages of irradiation the damaged metal is partially restored by the irradiation itself, i.e., by neutron bombardment. The structure that forms during irradiation undergoes, once or periodically, transformations in the direction of both degradation and recovery of the initial properties. According to our hypothesis, at some stage(s) of metal structure degradation, neutron bombardment becomes a recovery factor. As a result, an oscillation arises that in turn leads to enhanced data scatter.
Keywords: annealing, embrittlement, radiation, RPV steel
Procedia PDF Downloads 342
3658 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks
Authors: Alaa Allakany, Koji Okamura
Abstract:
Multicast is an efficient and scalable technology for data distribution that optimizes network resources. In IP networks, however, responsibility for the management of multicast groups is distributed among the network routers, which causes limitations such as delays in processing group events, high bandwidth consumption and redundant tree calculation. Software Defined Networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: in SDN, the control plane and the data plane are separated by shifting control and management to a remote centralized controller, and the routers are used as forwarders only. In this paper we propose a fast switching mechanism for handling link failure in a multicast tree, based on the Tabu Search heuristic and on modified OpenFlow switch functions that switch quickly to the backup subtree rather than reporting to the controller. We implement a multicasting OpenFlow controller, the core of our approach, which is responsible for (1) constructing the multicast tree and (2) handling multicast group events and maintaining multicast state; finally, the OpenFlow switch functions are modified to switch quickly to backup paths. Forwarders forward multicast packets according to multicast routing entries generated by the centralized controller. Tabu search is used as the heuristic for constructing a near-optimum multicast tree and for keeping the tree near optimum when members join or leave the multicast group (group events).
Keywords: multicast tree, software defined networks, tabu search, OpenFlow
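For illustration, here is a minimal, generic tabu-search skeleton in the spirit of the mechanism described above; the cost function, move set and parameters are invented placeholders, not the authors' multicast-tree formulation.

```python
# Generic tabu-search skeleton: explore neighbor solutions while forbidding
# recently visited ones, keeping the best solution seen overall. The "tree"
# below is abstracted to a bit vector of candidate links with invented costs.
import random

def tabu_search(initial, neighbors, cost, iterations=200, tabu_size=10):
    current = best = initial
    tabu = [initial]
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)                  # mark as recently visited
        if len(tabu) > tabu_size:
            tabu.pop(0)                       # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

random.seed(1)
link_cost = [random.randint(1, 20) for _ in range(8)]  # toy link costs

def cost(solution):
    used = [c for bit, c in zip(solution, link_cost) if bit]
    return sum(used) + (100 if len(used) < 3 else 0)   # need >= 3 links

def neighbors(solution):
    # flip one candidate link in or out of the "tree"
    return [tuple(b ^ (i == j) for j, b in enumerate(solution))
            for i in range(len(solution))]

best = tabu_search(tuple([1] * 8), neighbors, cost)
print("best link set:", best, "cost:", cost(best))
```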
Procedia PDF Downloads 264
3657 Artificial Intelligence for Generative Modelling
Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta
Abstract:
As technology advances towards high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the use of 'generative design using artificial intelligence' to build better models that apply operations such as selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, whereas the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and iterates over multiple solutions to arrive at a sturdy design with the most suitable parameters, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry, which has evolved for current habitation over millions of years. The computer uses parametric models to generate newer models through an iterative approach and uses cloud computing to store these iterative designs. The latter part of the paper compares topology optimization, which has previously been used to generate CAD models, with generative design. Finally, this paper shows the performance of the algorithms and how they help in designing resource-efficient models.
Keywords: genetic algorithm, biomimicry, generative modeling, non-dominant techniques
Procedia PDF Downloads 150
3656 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model
Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms can outperform that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma
Procedia PDF Downloads 82
3655 Diagnosis of Static Eccentricity in 400 kW Induction Machine Based on the Analysis of Stator Currents
Authors: Saleh Elawgali
Abstract:
Current spectra of a four-pole-pair, 400 kW induction machine were calculated for the cases of full symmetry and static eccentricity. The calculations involve the integration of 93 electrical plus four mechanical ordinary differential equations; the electrical equations account for variable inductances affected by slotting and eccentricities. The calculations were followed by Fourier analysis of the stator currents in steady-state operation. Zooms of the current spectra around the 50 Hz fundamental harmonic, as well as around the main slot-harmonic zone, are included; the spectra refer to both calculated and measured currents.
Keywords: diagnostic, harmonic, induction machine, spectrum
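As a rough illustration of the Fourier-analysis step, the sketch below computes the spectrum of a simulated stator current and "zooms" around the 50 Hz fundamental; the sideband frequencies and amplitudes are invented stand-ins for eccentricity-related components.

```python
# FFT of a simulated stator current with small sideband components around
# the 50 Hz fundamental (amplitudes/frequencies are illustrative only).
import numpy as np

fs, T = 10_000.0, 10.0                              # sample rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)               # fundamental
           + 0.01 * np.sin(2 * np.pi * 37.5 * t)    # assumed lower sideband
           + 0.01 * np.sin(2 * np.pi * 62.5 * t))   # assumed upper sideband

n = len(current)
spectrum = np.abs(np.fft.rfft(current)) / n * 2     # single-sided amplitude
freqs = np.fft.rfftfreq(n, 1 / fs)

# "Zoom" around the fundamental: report components between 30 and 70 Hz
zoom = (freqs > 30) & (freqs < 70)
for f, a in zip(freqs[zoom], spectrum[zoom]):
    if a > 1e-3:                                    # ignore numerical leakage
        print(f"{f:7.2f} Hz  amplitude {a:.4f}")
```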
Procedia PDF Downloads 525
3654 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System
Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin
Abstract:
The most robust and economical method for the laboratory diagnosis of TB is to identify acid-fast bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor-intensiveness. Though digital pathology is becoming popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective is to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection, and 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. Results showed that, over a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142), whereas the automated system achieved 74.6% (106/142), which is significantly higher; this is the first such automated microscope system for TB smear testing in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access over the internet and can be deployed in areas with limited medical resources.
Keywords: TB smears, automated microscope, artificial intelligence, medical imaging
Procedia PDF Downloads 230
3653 Improved Classification Procedure for Imbalanced and Overlapped Situations
Authors: Hankyu Lee, Seoung Bum Kim
Abstract:
The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (the major class) heavily exceeds the number of observations of the other class (the minor class). An overlapped dataset is the case where many observations are shared between the two classes. Imbalanced and overlapped data are frequently found in real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The combined class imbalance and overlap problem is challenging because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts: non-overlapping, lightly overlapping, and severely overlapping, and applying a classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
Keywords: classification, imbalanced data with class overlap, split data space, support vector machine
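A minimal sketch of the split-then-classify idea follows; here the magnitude of a linear SVM decision value stands in for the paper's Hausdorff-distance and margin criteria, and both thresholds are invented for illustration.

```python
# Partition the data space by distance from an SVM boundary into
# non-/light-/severe-overlap regions, then fit one classifier per region.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                           class_sep=0.8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

probe = SVC(kernel="linear").fit(X_tr, y_tr)            # overlap probe

def region(data):
    d = np.abs(probe.decision_function(data))
    return np.where(d > 1.0, 0, np.where(d > 0.3, 1, 2))  # non/light/severe

models, r_tr = {}, region(X_tr)
for r in range(3):
    mask = r_tr == r
    if mask.sum() > 10:                                  # region big enough
        clf = RandomForestClassifier(
            class_weight="balanced" if r == 2 else None, random_state=0)
        models[r] = clf.fit(X_tr[mask], y_tr[mask])
    else:
        models[r] = probe                                # fallback

r_te, pred = region(X_te), np.empty_like(y_te)
for r in range(3):
    if (r_te == r).any():
        pred[r_te == r] = models[r].predict(X_te[r_te == r])
print("accuracy:", (pred == y_te).mean())
```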
Procedia PDF Downloads 308
3652 The 10,000 Fold Effect Retrograde Neurotransmission: A Newer Concept for Paraplegia’s Physiological Revival by the Use of Intrathecal Sodium Nitroprusside
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
Methylprednisolone, with level-1 evidence of benefit (20%), is usually given in paraplegia, but only within 8 hours; otherwise patients wait a long time for physiological recovery. Intrathecal sodium nitroprusside (ITSNP) has been used in vasospasm due to subarachnoid hemorrhage. ITSNP is studied here for its wide treatment window, fast recovery and affordability. Two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, are proposed for physiological recovery: retrograde neurotransmission, vasospasm and long-term potentiation (LTP). This is a case-control prospective study of 82 paraplegia patients (10 patients as the control group, receiving no superfusion or dextrose 5% superfusion, and 72 patients in the ITSNP group). The mean time to superfusion was 14.11 days. ITSNP was administered at a dosage of 0.2 mg/kg body weight, with SSEP/MEP monitoring before and after ITSNP. In the ITSNP group, the mean change from the baseline ASIA motor/sensory score was 13.84%/13.10% after 2 hours; after 24 hours the motor score decreased by 1.27 points (3.77%) while the sensory score increased by 10.5 points (6.22%), whereas no change was noted in the control group up to 24 hours. At 7 days, the ITSNP motor/sensory change was 11.56%/6.22% versus 7.60%/4.48% in the control group; at 2 months, it was 27.69%/6.22% in the ITSNP group versus 16.02%/4.5% in the control group. SSEP/MEP-documented improvements were noted. ITSNP, a swift-acting drug in the treatment of paraplegia, is effective within two hours (mean change: motor 13.84%, sensory 13.10%) at a mean of 14.11 days post-paraplegia, with a small detrimental response after 24 hours that recovers quickly.
Keywords: paraplegias, intrathecal sodium nitroprusside, retrograde transmission, the 10,000 fold effect, perforators, vasodilatations, long term potentiations
Procedia PDF Downloads 409
3651 Automated Feature Detection and Matching Algorithms for Breast IR Sequence Images
Authors: Chia-Yen Lee, Hao-Jen Wang, Jhih-Hao Lai
Abstract:
In recent years, infrared (IR) imaging has been considered a potential tool for assessing the efficacy of chemotherapy and for early detection of breast cancer. Regions of tumor growth with a high metabolic rate and angiogenesis exhibit elevated temperatures, and observing differences between heat maps over the long term helps assess the growth of breast cancer cells and detect breast cancer earlier; multi-session infrared image alignment is therefore a necessary step. Detection and matching of representative feature points are essential for good image registration and quantitative analysis. However, there are no clear boundaries in infrared images, the subject's posture differs between acquisitions, adhesive markers cannot remain on the body surface for long periods, and anatomical fiducial markers are hard to find on the body surface. In other words, it is difficult to detect and match features in an IR image sequence. In this study, automated feature detection and matching algorithms are developed for two types of automatic feature points: vascular branch points and modified Harris corners. Preliminary results show that the proposed method identifies representative feature points on IR breast images with 98% accuracy and matches them with 93% accuracy.
Keywords: Harris corner, infrared image, feature detection, registration, matching
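The sketch below illustrates only the corner-based half of the pipeline, using plain Harris detection in OpenCV on a synthetic image with a toy nearest-neighbour matching step; the paper's modified Harris detector and vascular-branch-point detector are not reproduced.

```python
# Harris corner detection on a synthetic "IR frame" plus a toy match step.
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame_a = np.zeros((200, 200), dtype=np.float32)
frame_a[60:140, 50:150] = 1.0                       # bright region, 4 corners
frame_a += rng.normal(0, 0.02, frame_a.shape).astype(np.float32)
frame_b = np.roll(frame_a, (5, 3), axis=(0, 1))     # second "acquisition"

def harris_points(img, rel_thresh=0.1):
    response = cv2.cornerHarris(img, blockSize=5, ksize=3, k=0.04)
    return np.argwhere(response > rel_thresh * response.max())  # (row, col)

pts_a, pts_b = harris_points(frame_a), harris_points(frame_b)
print(f"{len(pts_a)} corner pixels in frame A, {len(pts_b)} in frame B")

# Toy matching: nearest corner in B for each corner in A (a real matcher
# would compare local descriptors, not just coordinates)
for p in pts_a[:3]:
    q = pts_b[np.argmin(np.linalg.norm(pts_b - p, axis=1))]
    print("A", p.tolist(), "-> B", q.tolist())
```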
Procedia PDF Downloads 304
3650 Spectral Efficiency Improvement in 5G Systems by Polyphase Decomposition
Authors: Wilson Enríquez, Daniel Cardenas
Abstract:
This article proposes a filter bank format combined with the mathematical tool called polyphase decomposition and the discrete Fourier transform (DFT), with the purpose of improving the performance of fifth-generation (5G) communication systems. We start with a review of the literature and a study of filter bank theory and its combination with the DFT, which improves the performance of wireless communications by reducing the computational complexity of these systems. With the proposed technique, several experiments were carried out to evaluate the structures in 5G systems. Finally, the results are presented graphically in terms of bit error rate against the ratio of bit energy to noise power spectral density (BER vs. Eb/No).
Keywords: multi-carrier system (5G), filter bank, polyphase decomposition, FIR equalizer
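The following sketch checks numerically the core identity that makes polyphase implementations efficient: filtering and then decimating by M equals running M shorter polyphase branches at the low rate. It is only the building block, not the authors' full DFT filter-bank transceiver.

```python
# Polyphase identity check: direct filter-then-decimate vs. M branches.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
M = 4                                     # decimation factor / branch count
h = rng.normal(size=32)                   # prototype FIR filter
x = rng.normal(size=1024)                 # input signal

# Direct form: full-rate filtering followed by decimation
direct = lfilter(h, 1.0, x)[::M]

# Polyphase form: split h and x into M phases, filter at the low rate
y = np.zeros(len(x) // M)
for p in range(M):
    hp = h[p::M]                          # p-th polyphase component of h
    if p == 0:
        xp = x[::M]                       # x[mM]
    else:
        xp = np.concatenate(([0.0], x[M - p::M]))   # x[mM - p], causal pad
    y += lfilter(hp, 1.0, xp)[:len(y)]

print("max |direct - polyphase|:", np.abs(direct - y).max())  # ~1e-15
```

Each branch runs a length-8 filter at one quarter of the input rate, which is where the complexity reduction mentioned in the abstract comes from.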
Procedia PDF Downloads 204
3649 Applications of Artificial Intelligence (AI) in Cardiac Imaging
Authors: Angelis P. Barlampas
Abstract:
The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Current applications of AI in cardiac imaging include the following.
Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, determination of ejection fraction and longitudinal strain through speckle tracking; determining the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identifying myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; predicting all-cause mortality.
CT: reducing radiation doses; calculating the calcium score; diagnosing coronary artery disease (CAD); predicting all-cause 5-year mortality; predicting major cardiovascular events in patients with suspected CAD.
MRI: segmenting cardiac structures and infarct tissue; calculating cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects, potentially reducing costs by precluding the need for gadolinium-enhanced CMR; predicting 4-year survival in patients with pulmonary hypertension.
Nuclear imaging: classifying normal and abnormal myocardium in CAD; detecting locations of abnormal myocardium; predicting cardiac death; ML was comparable to or better than two experienced readers in predicting the need for revascularization.
AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.
Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine
Procedia PDF Downloads 79
3648 A Supervised Approach for Detection of Singleton Spam Reviews
Authors: Atefeh Heydari, Mohammadali Tavakoli, Naomie Salim
Abstract:
In recent years, we have witnessed that online reviews are the most important source of customers' opinions; they are increasingly used by individuals and organisations to make purchase and business decisions. Unfortunately, for profit or fame, fraudsters produce deceptive reviews to hoodwink potential customers. Their activities mislead not only potential customers in making appropriate purchasing decisions and organisations in reshaping their business, but also opinion mining techniques, preventing them from reaching accurate results. Spam reviews can be divided into two main groups, i.e. multiple and singleton spam reviews. Detecting a singleton spam review, i.e. the only review written by a user ID, is extremely challenging due to the lack of clues for detection purposes. Singleton spam reviews are very harmful, and the various features and proofs used in detecting multiple spam reviews are not applicable in this case. The current research aims to propose a novel supervised technique to detect singleton spam reviews. To achieve this, various features are proposed in this study, to be combined with the most appropriate features extracted from the literature and employed in a classifier. To compare the performance of different classifiers, SVM and naive Bayes classification algorithms were used for model building. The results revealed that SVM was more accurate than naive Bayes and that our proposed technique is capable of detecting singleton spam reviews effectively.
Keywords: classification algorithms, naive Bayes, opinion review spam detection, singleton review spam detection, support vector machine
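A minimal sketch of the classifier-comparison step is shown below: TF-IDF features feeding an SVM and a naive Bayes model. The six invented one-line reviews are placeholders for a real labeled corpus and for the paper's proposed features.

```python
# SVM vs. naive Bayes on TF-IDF features of (toy) review text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = ["great product totally recommend buy now",
           "amazing best purchase ever five stars",
           "arrived late packaging damaged poor quality",
           "battery died after two days very disappointed",
           "best deal ever everyone must buy today",
           "honest review decent value some minor flaws"]
labels = [1, 1, 0, 0, 1, 0]               # 1 = suspected spam, 0 = genuine

for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB())]:
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(reviews[:4], labels[:4])    # toy train/test split
    print(name, "predictions:", model.predict(reviews[4:]).tolist())
```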
Procedia PDF Downloads 309
3647 Using Wavelet Uncertainty Relations in Quantum Mechanics: From Trajectories Foam to Newtonian Determinism
Authors: Paulo Castro, J. R. Croca, M. Gatta, R. Moreira
Abstract:
Owing to the development of quantum mechanics, we contextualize the foundations of the theory within the Fourier analysis framework, thus stating the unavoidable philosophical conclusions drawn by Niels Bohr. We then introduce an alternative way of describing the undulatory aspects of quantum entities using Gaussian Morlet wavelets. The description has its roots in de Broglie's realistic program for quantum physics. It so happens that, using wavelets, it is possible to formulate a more general set of uncertainty relations, from which both ends of the behavioral spectrum in reality can be theoretically described: the indeterministic quantum trajectorial foam and the perfectly drawn Newtonian trajectories.
Keywords: philosophy of quantum mechanics, quantum realism, Morlet wavelets, uncertainty relations, determinism
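As a small numerical companion, the sketch below computes the time-frequency spread product of a Gaussian Morlet wavelet for several envelope widths; it reproduces only the standard Fourier (Gabor) bound, not the more general wavelet uncertainty relations proposed by the authors.

```python
# Time-frequency uncertainty product for a Gaussian Morlet wavelet.
import numpy as np

def spread(axis, density):
    w = density / density.sum()                  # normalized distribution
    mean = (axis * w).sum()
    return np.sqrt((((axis - mean) ** 2) * w).sum())

t = np.linspace(-40, 40, 2 ** 14)
dt = t[1] - t[0]
for sigma in (0.5, 1.0, 2.0):                    # envelope widths
    psi = np.exp(1j * 5 * t - t ** 2 / (2 * sigma ** 2))   # Morlet wavelet
    omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, dt))
    psi_hat = np.fft.fftshift(np.fft.fft(psi))
    s_t = spread(t, np.abs(psi) ** 2)            # temporal spread
    s_w = spread(omega, np.abs(psi_hat) ** 2)    # spectral spread
    print(f"sigma = {sigma}:  dt*dw = {s_t * s_w:.4f}  (Gabor limit = 0.5)")
```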
Procedia PDF Downloads 172
3646 Commuters Trip Purpose Decision Tree Based Model of Makurdi Metropolis, Nigeria and Strategic Digital City Project
Authors: Emmanuel Okechukwu Nwafor, Folake Olubunmi Akintayo, Denis Alcides Rezende
Abstract:
Decision tree models are versatile and interpretable machine learning algorithms widely used for both classification and regression tasks, and they can be related to cities, whether physical or digital. The aim of this research is to assess how well decision tree algorithms can predict trip purposes in Makurdi, Nigeria, while also exploring their connection to the strategic digital city initiative. The methodology involves formalizing household demographic and trip information datasets obtained from an extensive survey process. Modelling and prediction were carried out in the Python programming language, and evaluation metrics such as R-squared and mean absolute error (MAE) were used to assess the decision tree algorithm's performance. The results indicate that the model performed well, with accuracies of 84% and 68% and low MAE values of 0.188 and 0.314 on training and validation data, respectively, suggesting the model can be relied upon for future prediction. This model will assist decision-makers, including urban planners, transportation engineers, government officials, and commuters, in making informed decisions on transportation planning and management within the framework of a strategic digital city. Its application will enhance the efficiency, sustainability, and overall quality of transportation services in Makurdi, Nigeria.
Keywords: decision tree algorithm, trip purpose, intelligent transport, strategic digital city, travel pattern, sustainable transport
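A minimal sketch of the modelling step follows; the features, purpose codes and synthetic data are placeholders for the Makurdi household survey dataset.

```python
# Decision tree predicting a trip-purpose code from household features.
import numpy as np
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([rng.integers(18, 70, n),     # age
                     rng.integers(0, 2, n),       # employed (0/1)
                     rng.integers(1, 8, n)])      # household size
# Synthetic rule: employed adults mostly make work trips (purpose 0);
# others split between school (1) and shopping (2)
y = np.where(X[:, 1] == 1, 0, rng.integers(1, 3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("MAE (on purpose codes):", mean_absolute_error(y_te, pred))
```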
Procedia PDF Downloads 24
3645 Identifying Pathogenic Mycobacterium Species Using Multiple Gene Phylogenetic Analysis
Authors: Lemar Blake, Chris Oura, Ayanna C. N. Phillips Savage
Abstract:
Improved DNA sequencing technology has greatly enhanced bacterial identification, especially for organisms that are difficult to culture. Mycobacteriosis, with consistent hyphema, bilateral exophthalmia, open-mouth gape and ocular lesions, was observed in various fish populations at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Objective: to identify the species of Mycobacterium affecting aquarium fish at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Method: a total of 13 fish samples were collected and analyzed via Ziehl-Neelsen staining, conventional polymerase chain reaction (PCR) and real-time PCR; these tests were carried out simultaneously for confirmation. The following combination of conventional primers was used: 16S rRNA (564 bp), rpoB (396 bp), sod (408 bp). Concatenation of the gene fragments was carried out to classify the organism phylogenetically. Results: acid-fast, non-branching bacilli were detected in all samples from homogenized internal organs. All 13 acid-fast samples were positive for Mycobacterium via real-time PCR. Partial gene sequences using all three primer sets were obtained from two samples and demonstrated a novel strain. A strain 99% related to Mycobacterium marinum was also confirmed in one sample using the 16S rRNA and rpoB genes. The two novel strains clustered with the rapid growers and with strains known to affect humans. Conclusions: phylogenetic analysis demonstrated two novel Mycobacterium strains with zoonotic potential and one strain 99% related to Mycobacterium marinum.
Keywords: polymerase chain reaction, phylogenetic, DNA sequencing, zoonotic
Procedia PDF Downloads 144
3644 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. The complete records might, moreover, be strongly distorted. In addition, the reason data are missing may itself contain information, which that approach ignores. An interesting question is therefore whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline), and how different algorithms affect that result. The missing data are imputed using unsupervised learning. Among the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information from all data in the model, the distortion of the first training set, the complete data, vanishes. In a next step, the performance of the algorithms is measured by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms under several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate by fitting price distributions for properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
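A minimal sketch of the evaluation loop described above: hide known values at random, impute them, and score the estimates against the truth. Mean and kNN imputation stand in here for the clustering, PCA and neural-network imputers used in the study.

```python
# Mask known cells, impute, and score the imputations against the truth.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.impute import KNNImputer, SimpleImputer

rng = np.random.default_rng(0)
X = load_diabetes().data.copy()            # stand-in for real-estate data
mask = rng.random(X.shape) < 0.2           # hide 20% of the entries
X_missing = X.copy()
X_missing[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    X_hat = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
    print(f"{name:>4} imputation RMSE on held-out cells: {rmse:.4f}")
```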
Procedia PDF Downloads 290
3643 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset for 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 continents (Asia, Europe, Australia, North America and South America) are examined. These countries are the ones most affected by the attributes relating to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian-motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes are identified; (b) the significant and self-similar attributes are applied to artificial neural network (ANN) algorithms (feed-forward back propagation (FFBP) and cascade-forward back propagation (CFBP)); (c) the classification accuracies are compared with respect to the attributes that affect the countries' financial development. This study reveals, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
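As a simplified companion to the regularity analysis above, the sketch below estimates a global Holder/Hurst-type exponent from the scaling of increments of simulated Brownian motion, where the true exponent is 0.5; the paper's pointwise (local) Holder estimation is more elaborate than this.

```python
# Estimate a regularity exponent from the scaling law E|X(t+s) - X(t)| ~ s^H.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=100_000))   # Brownian motion, H = 0.5

lags = np.array([1, 2, 4, 8, 16, 32, 64])
mean_abs_inc = np.array([np.mean(np.abs(series[lag:] - series[:-lag]))
                         for lag in lags])

# Slope of log E|increment| against log(lag) estimates the exponent
slope, _ = np.polyfit(np.log(lags), np.log(mean_abs_inc), 1)
print(f"estimated exponent: {slope:.3f} (theory: 0.5)")
```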
Procedia PDF Downloads 247
3642 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring consistent product quality. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
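A minimal sketch of the regression step follows: support-vector regression on spectra, scored with R² and RMSE as in the paper. The Gaussian-band "spectra" are synthetic stand-ins for real NIR scans, and no CARS/GA/UVE wavelength selection is performed here.

```python
# SVR predicting a "gel strength" target from synthetic NIR-like spectra.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
wavelengths = np.linspace(1000, 2500, 200)          # nm grid
strength = rng.uniform(20, 80, size=300)            # target values
# One absorption band whose height tracks gel strength, plus noise
spectra = (strength[:, None] / 80.0
           * np.exp(-((wavelengths - 1450) / 60.0) ** 2)
           + rng.normal(0, 0.01, (300, len(wavelengths))))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, strength, random_state=0)
model = SVR(kernel="rbf", C=100.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2 :", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```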
Procedia PDF Downloads 40
3641 Properties of Poly(Amide-Imide) with Low Residual Stress for Electronic Material
Authors: Kwangin Kim, Taewon Yoo, Haksoo Han
Abstract:
Polyimide is a superior polymer for the electronics industry, and we conducted a study on synthesizing poly(amide-imide) at low temperatures. Poly(amide-imide) was synthesized with low-temperature curing to offer a thermally stable membrane with low residual stress and good processability. As a result, this low-crack polymer with good processability can be used in various applications such as semiconductors, integrated circuits, coating materials, membranes, and displays. The low-temperature synthesis of poly(amide-imide) was confirmed by Fourier transform infrared spectroscopy (FT-IR). The thermal stability of the polymer was confirmed by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC).
Keywords: poly(amide-imide), residual stress, thermal stability
Procedia PDF Downloads 420
3640 General Architecture for Automation of Machine Learning Practices
Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain
Abstract:
Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data need to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing after acquisition, which often entails actions such as scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. The pre-processed data are then used to train a model with some machine learning algorithm. After the model has been trained, it is assessed by computing metrics such as accuracy, precision, and recall on a test dataset. Every time a new model is built, both data pre-processing and model training—two crucial processes in the machine learning (ML) workflow—must be carried out. Thus, various machine learning algorithms can be paired with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of selected features, a different algorithm can be used. As a result, to get the best outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow, which simplifies the task of carrying out all possibilities.
Keywords: machine learning, automation, AutoML, architecture, operator pool, configuration, scheduler
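A minimal sketch of the "operator pool" idea described above: enumerate every combination of pre-processing operators and learning algorithms as a pipeline and score each candidate. The pools below are small illustrative examples.

```python
# Enumerate (imputer x scaler x model) combinations and score each pipeline.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

imputers = [SimpleImputer(strategy="mean"),
            SimpleImputer(strategy="median")]
scalers = [StandardScaler(), MinMaxScaler()]
models = [LogisticRegression(max_iter=5000), DecisionTreeClassifier()]

results = []
for imp, sc, mdl in product(imputers, scalers, models):   # the pool
    pipe = make_pipeline(imp, sc, mdl)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    results.append((score, pipe))

best_score, best_pipe = max(results, key=lambda r: r[0])
print(f"best CV accuracy {best_score:.3f} with:\n{best_pipe}")
```

In a full system, a scheduler would distribute these evaluations instead of looping sequentially, but the enumeration logic is the same.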
Procedia PDF Downloads 58
3639 Rank-Based Chain-Mode Ensemble for Binary Classification
Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu
Abstract:
In the field of machine learning, ensembling is commonly employed to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", namely the strong interference among the predictions produced by the base classifiers. In addition, existing practices still cannot effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. We therefore create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed method with 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in terms of improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.
Keywords: consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble
Procedia PDF Downloads 138
3638 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier
Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu
Abstract:
Melanoma remains a public health crisis, with incidence rates increasing rapidly over the past decades. Improvements in diagnostic accuracy to decrease misdiagnosis using artificial intelligence (AI) continue to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter over darker skin tones, have increasingly been recognized as a problem, resulting in noted limitations in the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithms, based on a data augmentation method that optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations including rotations, translations, scale changes, flipping, and shearing, resulting in a CNN model with modified input data that could learn subtle racial features. Optimal selection of the momentum and batch-size hyperparameters increased our model's accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
Keywords: bias, augmentation, melanoma, convolutional neural network
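A minimal sketch of the geometric-augmentation recipe (rotations, translations, scale change, flipping, shearing) using Keras; the parameter values are illustrative, not the tuned ones from the study.

```python
# Geometric data augmentation for image classification with Keras.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,        # rotations
    width_shift_range=0.1,    # translations
    height_shift_range=0.1,
    zoom_range=0.15,          # scale change
    horizontal_flip=True,     # flipping
    shear_range=10,           # shearing (degrees)
)

# Fake batch of dermoscopy-like images: 8 RGB images, 64x64
images = np.random.rand(8, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=8)

batch_x, batch_y = next(augmenter.flow(images, labels, batch_size=8))
print("augmented batch:", batch_x.shape, "labels:", batch_y.shape)
```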
Procedia PDF Downloads 213
3637 Measurement of Innovation Performance
Authors: M. Chobotová, Ž. Rylková
Abstract:
A time full of changes, associated with globalization, tougher competition, shifts in market structures and economic downturn, forces companies to think about their competitive advantages. These changes can bring a company a competitive advantage and help improve its competitive position in the market. European Union policy is focused on fast-growing innovative companies which respond quickly to market demands and consequently increase their competitiveness. To meet those objectives, companies need the right conditions and the support of their state.
Keywords: innovation, performance, measurement metrics, indices
Procedia PDF Downloads 376
3636 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem
Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly
Abstract:
We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard
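As a small illustration of the 1-hot encoding mentioned above, the sketch below encodes one integer variable as four binary variables with a quadratic one-hot penalty and minimizes the resulting QUBO by brute force, a classical stand-in for the annealer.

```python
# 1-hot encoding of an integer variable x in {0,1,2,3} as binary bits b_i,
# with a quadratic penalty enforcing exactly one "hot" bit; the toy
# objective and penalty weight are invented for illustration.
import itertools

import numpy as np

values = np.array([0, 1, 2, 3])          # levels of the integer variable
penalty = 10.0                           # weight of the one-hot constraint

def energy(bits):
    bits = np.asarray(bits)
    x = float(values @ bits)             # decoded integer value
    objective = (x - 2.5) ** 2           # toy objective: minimize (x-2.5)^2
    constraint = (bits.sum() - 1) ** 2   # zero only when exactly one bit set
    return objective + penalty * constraint

best = min(itertools.product([0, 1], repeat=4), key=energy)
print("best bit string:", best, "-> x =", int(values @ np.array(best)))
```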
Procedia PDF Downloads 527
3635 Impact of Harmonic Resonance and V-THD in Sohar Industrial Port–C Substation
Authors: R. S. Al Abri, M. H. Albadi, M. H. Al Abri, U. K. Al Rasbi, M. H. Al Hasni, S. M. Al Shidi
Abstract:
This paper presents an analysis of the impacts of changes in the capacitor banks, the loss of a transformer, and the installation of distributed generation on the voltage total harmonic distortion (V-THD) and harmonic resonance. The study is applied to a real system in Oman, the Sohar Industrial Port–C substation network. The frequency scan method and Fourier series analysis are used with the help of EDSA software, and the results are compared with the limits specified by the national Oman distribution code.
Keywords: power quality, capacitor bank, voltage total harmonic distortion, harmonic resonance, frequency scan
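For reference, the sketch below shows the standard V-THD computation on a simulated waveform: the RMS of the harmonic components divided by the fundamental. The harmonic amplitudes are invented for illustration.

```python
# V-THD from the FFT of a simulated 50 Hz voltage with 5th/7th harmonics.
import numpy as np

fs = 12_800.0                              # samples per second
t = np.arange(0, 0.2, 1 / fs)              # 10 cycles at 50 Hz
v = (230 * np.sin(2 * np.pi * 50 * t)      # fundamental
     + 12 * np.sin(2 * np.pi * 250 * t)    # 5th harmonic
     + 8 * np.sin(2 * np.pi * 350 * t))    # 7th harmonic

spectrum = np.abs(np.fft.rfft(v)) / len(v) * 2
freqs = np.fft.rfftfreq(len(v), 1 / fs)

fund = spectrum[np.argmin(np.abs(freqs - 50))]
harmonics = [spectrum[np.argmin(np.abs(freqs - 50 * h))] for h in range(2, 41)]
vthd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"V-THD = {100 * vthd:.2f} %")       # expect ~ sqrt(12^2 + 8^2) / 230
```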
Procedia PDF Downloads 619
3634 Decision Support: How Explainable A.I. Can Improve Transparency and Trust with Human Users
Authors: Devon Brown, Liu Chunmei
Abstract:
This paper will present an analysis, as part of the researcher's dissertation topic, focusing on the intersection of affective and analytical directed acyclic graphs (DAGs) in the context of decision support systems (DSS). The work involves analyzing decision theory models, such as affective and Bayesian decision theory, and how they could be implemented under an affective computing framework using information fusion and human-centered design. The researcher is also beginning work on an Affective-Analytic Decision Framework (AADF) model for the dissertation research, looking to merge logic and analytic models with empathetic insights into affective DAGs. Data-collection efforts begin in Fall 2024; in preparation, this paper analyzes previous research in this area, introduces the AADF framework, and proposes conceptual models for consideration. The emphasis is placed on analyzing Bayesian networks and Markov models, which offer probabilistic techniques for decision-making under uncertainty. Ideally, incorporating affect into analytic models will allow algorithms to increase user trust by including emotional states and the user's experience, with the goal of developing emotionally intelligent AI systems that can begin to navigate the complex fabric of human emotion during decision-making.
Keywords: decision support systems, explainable AI, HCAI techniques, affective-analytical decision framework
Procedia PDF Downloads 25
3633 Efficient Study of Substrate Integrated Waveguide Devices
Authors: J. Hajri, H. Hrizi, N. Sboui, H. Baudrand
Abstract:
This paper presents a study of substrate integrated waveguide (SIW) circuits using a rigorous and fast original approach based on an iterative process (WCIP). The suggested theoretical study is validated by the simulation of two different examples of SIW circuits. The obtained results are in good agreement with measurements and with the HFSS software.
Keywords: convergence study, HFSS, modal decomposition, SIW circuits, WCIP method
Procedia PDF Downloads 498
3632 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same locations and sharing a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a multivariate geostatistical formulation that relies on shared spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared spatial random effect, while the other response variables depend on this shared spatial term in addition to specific spatial random effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function; to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest-neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross-blocks capture the spatial dependence at a large scale, while each individual block captures the spatial dependence at a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
Procedia PDF Downloads 98
3631 Synthesis, Structural and Magnetic Properties of CdFe2O4 Ferrite
Authors: Justice Zakhele Msomi
Abstract:
Nanoparticles of CdFe2O4 with a particle size of about 10 nm have been synthesized by high-energy ball milling and co-precipitation processes. The synthesis route appears to have some effect on the properties. The compounds have been characterized by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, transmission electron microscopy (TEM), Mössbauer spectroscopy and magnetization measurements. The XRD pattern of CdFe2O4 indicates single-phase formation of the spinel structure with cubic symmetry. The FTIR measurements between 400 and 4000 cm-1 indicate intrinsic cation vibrations of the spinel structure. The Mössbauer spectra were recorded at 4 K and 300 K; the hyperfine fields appear to be highly sensitive to particle size. The evolution of the properties as a function of particle size is also presented.
Keywords: ferrite, nanoparticles, magnetization, Mössbauer
Procedia PDF Downloads 403
3630 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades
Authors: E. Tandis, E. Assareh
Abstract:
The design of optimal shapes for MW wind turbine blades has been approached in a number of cases through evolutionary algorithms combined with mathematical modeling (Blade Element Momentum theory). Among optimization methods, evolutionary algorithms enjoy many advantages, particularly stability; however, they usually need a large number of function evaluations, and since there are a large number of local extremes, the optimization method has to find the global extreme accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA), meant to design the optimal shape for MW wind turbine blades. The method employs crossover and neighborhood-searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide good accuracy and convergence speed. Twenty-one different blade designs were considered based on chord length, twist angle and tip speed ratio using GA results; they were compared with the BA and GBBA optimum designs targeting the power coefficient and solidity. The results suggest that the final shape obtained by the proposed hybrid algorithm performs better than either BA or GA. Furthermore, accuracy and convergence speed increase when the GBBA is employed.
Keywords: blade design, optimization, genetic algorithm, bees algorithm, genetic-based bees algorithm, large wind turbine
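A minimal sketch of the hybrid idea follows: a population loop combining GA-style crossover with BA-style neighborhood search around elite sites, applied to a toy objective; the real method evaluates blade shapes through Blade Element Momentum theory, which is not reproduced here.

```python
# Hybrid GA/BA loop on a toy objective (stand-in for -power coefficient).
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum((x - 0.7) ** 2)       # minimum at x = (0.7, ..., 0.7)

dim, pop_size, elite = 5, 30, 5
pop = rng.uniform(0, 1, (pop_size, dim))

for generation in range(100):
    pop = pop[np.argsort([objective(x) for x in pop])]  # rank by fitness
    new_pop = [pop[0]]                                  # keep the best
    # Bees-style neighborhood search around the elite sites
    for site in pop[:elite]:
        new_pop.append(np.clip(site + rng.normal(0, 0.05, dim), 0, 1))
    # GA-style uniform crossover among the top half, plus mutation
    while len(new_pop) < pop_size:
        a, b = pop[rng.integers(0, pop_size // 2, 2)]
        child = np.where(rng.random(dim) < 0.5, a, b)
        if rng.random() < 0.2:                          # mutation
            child[rng.integers(dim)] = rng.uniform(0, 1)
        new_pop.append(child)
    pop = np.array(new_pop)

best = min(pop, key=objective)
print("best solution:", np.round(best, 3), "objective:", objective(best))
```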
Procedia PDF Downloads 317