Search results for: slice thickness accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5256

3696 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
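As a rough illustration of the deconvolution idea described above, the following minimal sketch applies van Cittert iterations to a filtered 1D periodic field and compares the reconstructed SFS stress against the exact one. The toy velocity field, the spectral Gaussian filter, and the choice of FGR = 4 are illustrative assumptions; this is not the authors' solver.

```python
# Minimal 1D sketch of the direct/approximate deconvolution idea behind DDM.
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi          # integer wavenumbers
u = np.sin(x) + 0.5 * np.sin(7.0 * x) + 0.2 * np.sin(23.0 * x)  # toy "DNS" field

delta = 4.0 * (2.0 * np.pi / N)                 # filter width (FGR = 4 here)
G_hat = np.exp(-(k * delta) ** 2 / 24.0)        # Gaussian filter transfer function

def filt(f):
    """Apply the Gaussian filter in spectral space."""
    return np.real(np.fft.ifft(G_hat * np.fft.fft(f)))

u_bar = filt(u)                                 # resolved (filtered) field

# Van Cittert iterations: u*_{n+1} = u*_n + (u_bar - G * u*_n)
u_star = u_bar.copy()
for _ in range(5):
    u_star = u_star + (u_bar - filt(u_star))

# SFS stress: exact versus deconvolution-model estimate
tau_exact = filt(u * u) - u_bar * u_bar
tau_ddm = filt(u_star * u_star) - u_bar * u_bar
corr = np.corrcoef(tau_exact, tau_ddm)[0, 1]
print(f"a priori correlation of reconstructed SFS stress: {corr:.3f}")
```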

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 76
3695 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods to do so. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, due to the complexity of the process, as explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have been used as a standard method to predict protein secondary structure. Recently published methods that use this technique, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, as well as other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method uses the same training and testing process that was used by Huang to validate his method, which comprises the use of the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
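A minimal sketch of the comparison protocol described above (three-fold cross-validation of an ANN against an SVM), using synthetic stand-in data rather than the CB513 set or Huang's physicochemical features:

```python
# Hedged sketch: three-fold cross-validation of an ANN and an SVM on stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic 3-state problem standing in for helix / strand / coil labels.
X, y = make_classification(n_samples=1500, n_features=40, n_informative=20,
                           n_classes=3, random_state=0)

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
svm = SVC(kernel="rbf", C=1.0)

for name, model in [("ANN", ann), ("SVM", svm)]:
    q3 = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean Q3-style accuracy = {q3.mean():.3f} +/- {q3.std():.3f}")
```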

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 622
3694 Seismic Bearing Capacity Estimation of Shallow Foundations on Dense Sand Underlain by Loose Sand Strata by Using Finite Elements Limit Analysis

Authors: Pragyan Paramita Das, Vishwas N. Khatri

Abstract:

By using lower- and upper-bound finite elements limit analysis in conjunction with second-order conic programming (SOCP), the effect of seismic forces on the bearing capacity of a surface strip footing resting on dense sand underlain by a loose sand deposit is explored. The soil is assumed to obey the Mohr-Coulomb yield criterion and an associated flow rule. The angle of internal friction (ϕ) of the top and the bottom layer is varied from 42° to 44° and 32° to 34°, respectively. The coefficient of seismic acceleration is varied from 0 to 0.3. The variation of bearing capacity with different thicknesses of the top layer for various seismic acceleration coefficients is generated. A comparison will be made with the available solutions from the literature wherever applicable.

Keywords: bearing capacity, conic programming, finite elements, seismic forces

Procedia PDF Downloads 173
3693 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The objective of this paper is firstly to determine whether an automated stock investment system, using machine learning techniques, may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock’s price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price – portfolio 1. Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two – portfolio 2. Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up – portfolio 3. The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis stock portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the K-means cluster portfolio, while the stock market return was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
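A hedged sketch of the three portfolio-construction steps described above (clustering, principal components, logistic regression) on synthetic technical-indicator data; the feature names and selection thresholds are illustrative assumptions, not the paper's trading system:

```python
# Illustrative three-portfolio pipeline on synthetic data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 6)),
                 columns=["momentum", "rsi", "volume_chg", "volatility", "macd", "roc"])
y = (X["momentum"] + 0.5 * X["macd"] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Portfolio 1: cluster stocks and keep the cluster with the best average momentum.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
best_cluster = max(range(3), key=lambda c: X.loc[clusters == c, "momentum"].mean())

# Portfolio 2: keep stocks scoring high on the first two principal components.
scores = PCA(n_components=2).fit_transform(X)
portfolio2 = np.argsort(scores[:, 0] + scores[:, 1])[-3:]

# Portfolio 3: keep stocks with the highest predicted probability of a price rise.
proba = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
portfolio3 = np.argsort(proba)[-3:]

print("cluster portfolio size:", int((clusters == best_cluster).sum()))
print("PCA portfolio indices:", portfolio2, "LR portfolio indices:", portfolio3)
```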

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 159
3692 A Closed-Form Solution and Comparison for a One-Dimensional Orthorhombic Quasicrystal and Crystal Plate

Authors: Arpit Bhardwaj, Koushik Roy

Abstract:

The work includes the derivation of the exact closed-form solution for simply supported quasicrystal and crystal plates under surface loading and free vibration by using the propagator matrix method. As a numerical example, a quasicrystal and a crystal plate are considered, and after investigation, the variation of the displacement and stress fields along the thickness of these two plates is presented. Further, it includes analyzing the displacement and stress fields for two plates having two different stacking arrangements, i.e., quasicrystal/crystal/quasicrystal and crystal/quasicrystal/crystal, and comparing their results. This not only shows how the displacement and stress fields behave in the two different materials but also how they change for different combinations of the two. For the free vibration case, crystal and quasicrystal plates, along with their different stacking arrangements, are considered, and displacements are plotted in all directions for different mode shapes.

Keywords: free vibration, multilayered plates, surface loading, quasicrystals

Procedia PDF Downloads 149
3691 Calculation of Stress Intensity Factors in Rotating Disks Containing 3D Semi-Elliptical Cracks

Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi

Abstract:

Initiation and propagation of cracks may cause catastrophic failures in rotating disks, and hence the determination of fracture parameters in rotating disks under different working conditions is a very important issue. In this paper, a comprehensive study of stress intensity factors in rotating disks containing 3D semi-elliptical cracks under different working conditions is presented. In this regard, after verification of the modeling and analytical procedure, the effects of mechanical properties, rotational velocity, and crack orientation on the stress intensity factors (SIF) in rotating disks under centrifugal loading are investigated. Also, the effect of using a composite patch to reduce the SIF in rotating disks is studied. In this way, the effects of patch design variables such as mechanical properties, thickness, and ply angle are investigated individually.

Keywords: stress intensity factor, semi-elliptical crack, rotating disk, finite element analysis (FEA)

Procedia PDF Downloads 368
3690 Computation of Thermal Stress Intensity Factor for Bonded Composite Repairs in Aircraft Structures

Authors: Fayçal Benyahia, Abdelmohsen Albedah, Bel Abbes Bachir Bouiadjra

Abstract:

In this study, the finite element method is used to analyse the effect of thermal residual stresses resulting from adhesive curing on the performance of bonded composite repairs in aircraft structures. The stress intensity factor at the crack tip is chosen as the fracture criterion in order to estimate the repair performance. The obtained results show that the presence of thermal residual stresses considerably reduces the repair performance and consequently decreases the fatigue life of cracked structures. The effects of the curing temperature, the adhesive properties, and the adhesive thickness on the stress intensity factor (SIF) variation with thermal stresses are also analysed.

Keywords: bonded composite repair, residual stress, adhesion, stress transfer, finite element analysis

Procedia PDF Downloads 421
3689 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method

Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi

Abstract:

This paper presents a performance comparison of computational software for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving the advanced mathematical models that predict the production of oil wells in arbitrarily shaped, multiple-lease reservoirs. The limited data validation available for ensuring that a program meets the accuracy of the mathematical model is the research motivation of this paper. Based on this limitation, three steps are involved in validating the accuracy of the oil production simulation process. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson-elliptic type, is identified to perform the BEM discretization. In the second step, the 2D BEM discretization is simulated using the COMSOL Multiphysics and MATLAB programming environments. In the last step, the numerical performance indicators of both implementations are analyzed against a validating implementation in Fortran. The numerical performance is compared in terms of percentage error, comparison graphs, and 2D visualization of the pressure on oil production of the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is a suitable alternative for implementing an accurate numerical simulation of the BEM. In conclusion, the high-level numerical computation and the performance evaluation show that Fortran is well suited for capturing the visualization of the production of oil wells in arbitrarily shaped reservoirs.

Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure

Procedia PDF Downloads 493
3688 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges to increase speed and data processing capacity while trying to keep devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting those already existing would be a great advance in this type of industry. The installation of a temperature sensor matrix distributed in the structure of each server would provide the necessary information for collecting the required data to obtain a temperature profile inside them instantly. However, the number of temperature probes required to obtain the temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, where each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection techniques but increasing the time to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy regardless of the characteristic truncation error.
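As a concrete instance of the backward/forward/central choices mentioned above, the following sketch discretizes the 1D viscous Burgers' equation with a backward (upwind) difference for the convective term and a central difference for the diffusive term. It is a toy configuration, not the data center model itself.

```python
# Finite-difference sketch of u_t + u u_x = nu u_xx on a 1D grid.
import numpy as np

nx, nt = 101, 400
L, nu = 2.0 * np.pi, 0.07
dx = L / (nx - 1)
dt = 0.001
x = np.linspace(0.0, L, nx)
u = np.where((x > 1.0) & (x < 2.0), 2.0, 1.0)   # step initial condition

for _ in range(nt):
    un = u.copy()
    # backward (upwind) difference for convection, central difference for diffusion
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))
    u[0], u[-1] = u[1], u[-2]                    # simple outflow boundaries

print("max / min of the discretized solution:", u.max(), u.min())
```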

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 175
3687 The Study of Groundcover for Heat Reduction

Authors: Winai Mankhatitham

Abstract:

This research investigated groundcover on the roof (green roof), which can reduce temperature and carbon monoxide. The study is divided into 3 main aspects: 1) types of groundcover affecting heat reduction, 2) the heat reduction efficiency of 3 types of groundcover, i.e., lawn, Arachis pintoi, and purslane, and 3) a database for designing green roofs. The study was designed as an experimental research by simulating the 3 types of groundcover in 3 trays placed in a greenhouse and recording the temperature change for 24 hours. The results showed that the groundcover with the highest heat reduction efficiency was lawn. The density of the lawn can block heat transfer to the soil. For further study, there should be a comparative study of soil thickness and soil types to obtain more information on suitable types of groundcover and soil for designing an energy-saving green roof.

Keywords: green roof, heat reduction, groundcover, energy saving

Procedia PDF Downloads 520
3686 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis

Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin

Abstract:

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, and they raise concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for the auxiliary classification output. Class activation maps are a method of providing insight into the feature maps of a convolutional neural network that lead to its classification, but in the case of lung diseases, the region of interest is enhanced by U-Net-assisted class activation map (CAM) visualization. Therefore, our proposed model combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex Dataset, which includes both diseased and healthy lungs.
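A brief sketch of two of the ingredients mentioned above, the Dice coefficient used to score segmentation and the masking of a class activation map by the predicted lung mask, using random stand-in arrays rather than chest X-ray data:

```python
# Dice coefficient and lung-masked CAM on illustrative arrays.
import numpy as np

def dice(pred_mask, true_mask, eps=1e-7):
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

rng = np.random.default_rng(0)
true_mask = rng.random((64, 64)) > 0.5          # stand-in ground-truth lung mask
pred_mask = true_mask.copy()
pred_mask[:2] = ~pred_mask[:2]                   # introduce a small segmentation error

cam = rng.random((64, 64))                       # stand-in class activation map
lung_only_cam = cam * pred_mask                  # predicted mask crops the CAM to the lungs

print(f"Dice coefficient: {dice(pred_mask, true_mask):.3f}")
print("CAM energy kept inside the lung mask:",
      round(float(lung_only_cam.sum() / cam.sum()), 3))
```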

Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis

Procedia PDF Downloads 205
3685 Numerical Prediction of Entropy Generation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

The second-law concept is important for optimizing energy losses in heat exchangers. The present study is devoted to the numerical prediction of entropy generation due to heat transfer and friction in a double-tube heat exchanger partly or fully filled with a porous medium. The goal of this work is to find the optimal conditions that allow minimizing entropy generation. For this purpose, numerical modeling based on the control volume method is used to describe the flow and heat transfer phenomena in the fluid and the porous medium. The effects of the porous layer thickness, its permeability, and the effective thermal conductivity have been investigated. Unexpectedly, the fully porous heat exchanger yields a lower entropy generation than the partly porous case or the fluid-only case, even though the friction increases the entropy generation.

Keywords: heat exchangers, porous medium, second law approach, turbulent flow

Procedia PDF Downloads 302
3684 Ulnar Nerve Changes Associated with Carpal Tunnel Syndrome and Effect on Median Versus Ulnar Comparative Studies

Authors: Emmanuel K. Aziz Saba, Sarah S. El-Tawab

Abstract:

Objectives: Carpal tunnel syndrome (CTS) has been found to be associated with high pressure within Guyon's canal. The aim of this study was to assess the involvement of sensory and/or motor ulnar nerve fibers in patients with CTS and whether this affects the accuracy of the median versus ulnar sensory and motor comparative tests. Patients and methods: The present study included 145 CTS hands and 71 asymptomatic control hands. Clinical examination was done for all patients. The following tests were done for the patients and controls: (1) Sensory conduction studies: median nerve, ulnar nerve, dorsal ulnar cutaneous nerve, and median versus ulnar digit (D) four sensory comparative study; (2) Motor conduction studies: median nerve, ulnar nerve, and median versus ulnar motor comparative study. Results: There were no statistically significant differences between the patients and the control group as regards the parameters of the ulnar motor study and the dorsal ulnar cutaneous sensory conduction study. It was found that 17 CTS hands (11.7%) had ulnar sensory abnormalities in 17 different patients. The median versus ulnar sensory and motor comparative studies were abnormal in all these 17 CTS hands. There were statistically significant negative correlations between median motor latency and both ulnar sensory amplitudes recording D5 and D4. There were statistically significant positive correlations between median sensory conduction velocity and both ulnar sensory nerve action potential amplitudes recording D5 and D4. Conclusions: There is ulnar sensory nerve abnormality among CTS patients. This abnormality affects the amplitude of the ulnar sensory nerve action potential. Abnormalities in the ulnar nerve occur in moderate and severe degrees of CTS. This does not affect the accuracy and validity of the median versus ulnar sensory and motor comparative tests for use in the electrophysiological diagnosis of CTS.

Keywords: carpal tunnel syndrome, ulnar nerve, median nerve, median versus ulnar comparative study, dorsal ulnar cutaneous nerve

Procedia PDF Downloads 570
3683 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework

Authors: Ma Cecilia Siva

Abstract:

This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
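A hedged sketch of the multi-label setup described above, a BERT classifier with a sigmoid (independent per-label) output head; the strand labels and the example text are illustrative assumptions, not the study's data or fine-tuned weights:

```python
# Multi-label BERT setup with sigmoid outputs (downloads the base model on first run).
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

strands = ["STEM", "HUMSS", "ABM", "TVL"]          # assumed label set
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(strands),
    problem_type="multi_label_classification",     # BCE-with-logits, i.e. sigmoid per label
)

texts = ["I enjoy physics experiments and building robots."]
labels = torch.tensor([[1.0, 0.0, 0.0, 1.0]])      # a student may fit several strands

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)                # out.loss is the training objective
probs = torch.sigmoid(out.logits)                  # independent per-strand probabilities
print({s: round(float(p), 3) for s, p in zip(strands, probs[0])})
```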

Keywords: tokenized, sigmoid activation, transformer, multi-category classification

Procedia PDF Downloads 14
3682 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of custom design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of light passage according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved type A uncertainty of the measured distance to reflectors 64 m (N•D/2, where N is an integer) away and spaced apart relative to each other at a distance of 1 m does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
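A quick check of the pulse-spacing relation quoted above, D/2 = c/(2nF), with illustrative values assumed for the refractive index and the repetition rate:

```python
# Worked check of D/2 = c / (2 n F) with assumed parameter values.
c = 299_792_458.0      # speed of light in vacuum, m/s
n = 1.00027            # assumed refractive index of air
F = 60e6               # assumed femtosecond-pulse repetition rate, Hz

half_D = c / (2.0 * n * F)
print(f"D/2 = {half_D:.3f} m")   # about 2.5 m, matching the abstract
```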

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 62
3681 Performance of Stiffened Slender Built up Steel I-Columns

Authors: M. E. Abou-Hashem El Dib, M. K. Swailem, M. M. Metwally, A. I. El Awady

Abstract:

The present work illustrates a parametric study of the effect of stiffeners on the performance of slender built-up steel I-columns. To achieve the desired analysis, the finite element technique is used to develop nonlinear three-dimensional models representing the investigated columns. The finite element program (ANSYS 13.0) is used as a calculation tool for the necessary nonlinear analysis. A validation of the obtained numerical results is achieved. The parameters considered in the study are the column slenderness ratio and the horizontal stiffeners' dimensions, as well as the number of stiffeners. The stiffener dimensions considered in the analysis are the stiffener width and the stiffener thickness. Numerical results signify a considerable effect of stiffeners on the performance and failure load of slender built-up steel I-columns.

Keywords: columns, local buckling, slender, stiffener, thin walled section

Procedia PDF Downloads 320
3680 Examination of Internally and Externally Coated Cr3C2 Exhaust Pipe of a Diesel Engine via Plasma Spray Method

Authors: H. Hazar, S. Sap

Abstract:

In this experimental study, the internal and external parts of an exhaust pipe were coated with a chromium carbide (Cr3C2) material having a thickness of 100 microns by using the plasma spray method. A diesel engine was used as the test engine. Thus, the results of the continuing chemical reactions in coated and uncoated exhaust pipes were investigated. The internally and externally coated exhaust pipe was compared with the standard exhaust system. External heat transfer occurring as a result of coating the internal and external parts of the exhaust pipe was reduced, and its effects on harmful exhaust emissions were investigated. As a result of the experiments, a remarkable improvement was determined in emission values as a result of the delay in cooling of the exhaust gases due to the coating.

Keywords: chrome carbide, diesel engine, exhaust emission, thermal barrier

Procedia PDF Downloads 270
3679 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as a process of finding patterns and relationships in the data in a database in order to build predictive models. The application of data mining has extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This method applies various techniques and algorithms, which have different accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user: it starts with creating an understanding of the scope and the application of previous knowledge in this area and identifying the knowledge discovery process from the point of view of the stakeholders, and it finishes with acting on the discovered knowledge by conducting the knowledge, integrating the knowledge with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree, and CN2 algorithms and related similar studies were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that we can accurately diagnose asthma, with approximately ninety percent accuracy, based on the demographic and clinical data. The study also showed that the methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should be the basis of diagnostic methods; therefore, it is recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
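A minimal sketch of the evaluation step described above, computing sensitivity, specificity, accuracy, and ROC AUC for several classifiers on synthetic stand-in data rather than the patient records:

```python
# Classifier evaluation sketch: sensitivity, specificity, accuracy, ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

X, y = make_classification(n_samples=800, n_features=15, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

models = {"KNN": KNeighborsClassifier(), "SVM": SVC(probability=True),
          "NaiveBayes": GaussianNB(), "Tree": DecisionTreeClassifier(random_state=1)}

for name, m in models.items():
    m.fit(Xtr, ytr)
    tn, fp, fn, tp = confusion_matrix(yte, m.predict(Xte)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f} "
          f"accuracy={acc:.2f} AUC={auc:.2f}")
```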

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 450
3678 Thin and Flexible Zn-Air Battery by Inexpensive Screen Printing Technique

Authors: Sira Suren, Soorathep Kheawhom

Abstract:

This work focuses on the development of a thin and flexible zinc-air battery. The battery, with an overall thickness of about 300 μm, was fabricated by an inexpensive screen-printing technique. Commercial nano-silver ink was used as both the current collectors and the catalyst layer. Carbon black ink was used to fabricate the cathode electrode. A polypropylene membrane was used as the cathode substrate and separator. 9 M KOH was used as the electrolyte. A mixture of Zn powder and ZnO was used to prepare the anode electrode. The types of conductive materials (Bi2O3, Na2O3Si, and carbon black) for the anode and their concentrations were investigated. Results showed that the battery using 29% carbon black gave the best performance. The open-circuit voltage and energy density observed were 1.6 V and 694 Wh/kg, respectively. When the battery was discharged at 10 mA/cm2, the observed voltage was 1.35 V. Furthermore, the battery was tested for its flexibility. Upon bending, no significant loss in performance was observed.

Keywords: flexible, gel electrolyte, screen printing, thin battery, Zn-air battery

Procedia PDF Downloads 213
3677 ARABEX: Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder and Custom Convolutional Recurrent Neural Network

Authors: Hozaifa Zaki, Ghada Soliman

Abstract:

In this paper, we introduce an approach for Automated Dotted Arabic Expiration Date Extraction using an Optimized Convolutional Autoencoder (ARABEX) with a bidirectional LSTM. This approach is used for translating Arabic dot-matrix expiration dates into their corresponding filled-in dates. A custom lightweight Convolutional Recurrent Neural Network (CRNN) model is then employed to extract the expiration dates. Due to the lack of available dataset images for the Arabic dot-matrix expiration date, we generated synthetic images by creating an Arabic dot-matrix TrueType Font (TTF) to address this limitation. Our model was trained on a realistic synthetic dataset of 3287 images, covering the period from 2019 to 2027, represented in the format yyyy/mm/dd. We then trained our custom CRNN model using the generated synthetic images to assess the performance of our model (ARABEX) by extracting expiration dates from the translated images. Our proposed approach achieved an accuracy of 99.4% on the test dataset of 658 images, while also achieving a Structural Similarity Index (SSIM) of 0.46 for image translation on our dataset. The ARABEX approach demonstrates its ability to be applied to various downstream learning tasks, including image translation and reconstruction. Moreover, this pipeline (ARABEX+CRNN) can be seamlessly integrated into automated sorting systems to extract expiry dates and sort products accordingly during the manufacturing stage. By eliminating the need for manual entry of expiration dates, which can be time-consuming and inefficient for merchants, our approach offers significant gains in terms of efficiency and accuracy for Arabic dot-matrix expiration date recognition.
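A hedged sketch of a lightweight CRNN of the kind described above, convolutional feature extraction followed by a bidirectional LSTM over the width axis; the layer sizes and character set are illustrative assumptions, not the ARABEX architecture or weights:

```python
# Minimal CRNN skeleton: CNN features -> BiLSTM over width -> per-timestep logits.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, num_chars=12):            # 10 digits plus '/' and a blank for CTC
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.rnn = nn.LSTM(input_size=64 * 8, hidden_size=128,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, num_chars)

    def forward(self, x):                         # x: (batch, 1, 32, width)
        f = self.cnn(x)                           # (batch, 64, 8, width // 2)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)
        out, _ = self.rnn(seq)
        return self.fc(out)                       # character logits per timestep (e.g. for CTC)

logits = TinyCRNN()(torch.randn(2, 1, 32, 128))
print(logits.shape)                               # (2, 64, 12)
```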

Keywords: computer vision, deep learning, image processing, character recognition

Procedia PDF Downloads 85
3676 Radiation Dosimetry Using Sintered Pellets of Yellow Beryl (Heliodor) Crystals

Authors: Lucas Sátiro Do Carmo, Betzabel Noemi Silva Carrera, Shigueo Watanabe, J. F. D. Chubaci

Abstract:

Beryl is a silicate with the chemical formula Be₃Al₂(SiO₃)₆, commonly found in Brazil. It has a few colored variations used as jewelry, such as aquamarine (bluish), emerald (green), and heliodor (yellow). The color of each variation depends on the dopant that is naturally present in the crystal lattice. In this work, heliodor pellets of 5 mm diameter and 1 mm thickness have been produced and investigated using thermoluminescence (TL) to evaluate their potential for use as gamma-ray dosimeters. The results show that the pellets exhibit a prominent TL peak at 205 °C that grows linearly with dose when irradiated from 1 Gy to 1000 Gy. A comparison has been made between powdered and sintered dosimeters. The results show that sintered pellets have higher sensitivity than the powder dosimeter. The TL response of this mineral is satisfactory for radiation dosimetry applications in the studied dose range.

Keywords: dosimetry, beryl, gamma rays, sintered pellets, new material

Procedia PDF Downloads 100
3675 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software signifies a meticulous advancement, particularly enhancing precision related to vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the distinctive feature-based speech recognition systems domain. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, harnessing self-attention mechanisms and positional embeddings. This allows the proposed model to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only signifies a significant advancement in Italian speech vowel landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 65
3674 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter

Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball

Abstract:

The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
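A minimal sketch of the fusion step described above: two noisy range sensors (stand-ins for camera and radar) combined by a constant-velocity Kalman filter, the linear special case of the EKF; all noise variances and the target motion are illustrative assumptions:

```python
# Constant-velocity Kalman filter fusing two range measurements per step.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [distance, closing speed]
Q = np.diag([1e-3, 1e-2])                       # plant (process) noise
H = np.array([[1.0, 0.0], [1.0, 0.0]])          # both sensors measure distance
R = np.diag([4.0, 0.25])                        # camera noisier than radar

x = np.array([20.0, 0.0])
P = np.eye(2)
rng = np.random.default_rng(0)

for step in range(50):
    true_d = 20.0 - 1.0 * dt * step             # target closing at 1 m/s
    z = np.array([true_d + rng.normal(0, 2.0), true_d + rng.normal(0, 0.5)])

    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the stacked camera/radar measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"true distance {true_d:.2f} m, fused estimate {x[0]:.2f} m")
```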

Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS

Procedia PDF Downloads 50
3673 Evaluation of Subsurface Drilling and Geo Mechanic Properties Based on Stratum Index Factor for Humanities Environment

Authors: Abdull Halim Abdul, Muhaimin Sulam

Abstract:

This paper presents a subsurface study of Taman Pudu Ulu, Cheras, Kuala Lumpur, with emphasis on geomechanical properties based on the stratum index factor in a humanities environment. Subsurface drilling and seismic data were used to understand the subsurface conditions of the study area, such as the type and thickness of the strata. Borehole and soil samples were recovered, and the geomechanical properties of the area were determined by conducting a number of experiments. Taman Pudu Ulu overlies the Kuala Lumpur Limestone formation, which is known for its karstic features such as caves and cavities. Hence, by knowing the geomechanical properties such as the normal strain and shear strain, we can plan a safer and more economical construction at the area in the future.

Keywords: stratum index factor, geomechanical properties, humanities environment

Procedia PDF Downloads 497
3672 Semiconductor Nanofilm Based Schottky-Barrier Solar Cells

Authors: Mariyappan Shanmugam, Bin Yu

Abstract:

Schottky-barrier solar cells are demonstrated employing 2D-layered MoS2 and WS2 semiconductor nanofilms as photo-active material candidates, synthesized by the chemical vapor deposition method. Large-area MoS2 and WS2 nanofilms are stacked by a layer transfer process to achieve a thicker photo-active material, studied by atomic force microscopy and showing a thickness in the range of ~200 nm. Two major vibration-active modes associated with 2D-layered MoS2 and WS2 are studied by the Raman spectroscopic technique to estimate the quality of the nanofilms. Schottky-barrier solar cells employing MoS2 and WS2 active materials exhibited photoconversion efficiencies of 1.8% and 1.7%, respectively. Fermi-level pinning at the metal/semiconductor interface, electronic transport, and possible recombination mechanisms are studied in the Schottky-barrier solar cells.

Keywords: two-dimensional nanosheet, graphene, hexagonal boron nitride, solar cell, Schottky barrier

Procedia PDF Downloads 334
3671 Static Test Pad for Solid Rocket Motors

Authors: Svanik Garg

Abstract:

Static test pads are stationary mechanisms that hold a solid rocket motor and measure the different parameters of its operation, including thrust and temperature, to better calibrate it for launch. This paper outlines a specific STP designed to test high-powered rocket motors with a thrust upwards of 4000 N and limited to 6500 N. The design includes a specific portable mechanism, with cost an integral part of the design process to make it accessible to small-scale rocket developers with limited resources. Using curved surfaces and an ergonomic design, the STP has a delicately engineered façade/case with a focus on stability and axial calibration of thrust. This paper describes the design, operation, and working of the STP and its wide-scale uses given the growing market of aviation enthusiasts. Simulations on the CAD model in Fusion 360 provided promising results, with a safety factor of 2 established and stress limited along with the load coefficient. A PCB was also designed as part of the test pad design process to help obtain results, with visual output and various virtual terminals to collect data on different parameters. The circuitry was simulated using 'Proteus', and a special virtual interface with auditory commands was also created for accessibility and wide-scale implementation. Along with this description of the design, the paper also emphasizes the design principles behind the STP, including a description of its vertical orientation to maximize thrust accuracy along with a stable base to prevent micromovements. Given the rise of students and professionals alike building high-powered rockets, the STP described in this paper is an appropriate option, with limited cost, portability, accuracy, and versatility. There are two types of STPs, vertical and horizontal; the one discussed in this paper is vertical to utilize the axial component of thrust.

Keywords: static test pad, rocket motor, thrust, load, circuit, avionics, drag

Procedia PDF Downloads 390
3670 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be established through microscopic analyses by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. Accurately quantifying transportation-related emissions from vehicles is therefore essential. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited access highway in Orlando, Florida. First (at the most basic level), emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.
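A hedged sketch of deriving an operating-mode-style classification from a second-by-second speed trace via vehicle specific power (VSP); the coefficients are typical light-duty values and the bins are simplified, not the full MOVES OPMODE table:

```python
# Second-by-second VSP and coarse operating-mode grouping for a toy speed trace.
import numpy as np

speed_mph = np.array([0, 0, 5, 12, 20, 28, 35, 35, 30, 22, 10, 0], dtype=float)
v = speed_mph * 0.44704                     # m/s
a = np.gradient(v)                          # m/s^2, 1-second spacing

# simplified road-load form with typical light-duty coefficients, kW/tonne
vsp = v * (1.1 * a + 0.132) + 0.000302 * v ** 3

def opmode_group(v_i, vsp_i):
    if v_i < 0.45:
        return "idle"
    if vsp_i < 0.0:
        return "braking/coasting"
    return "cruise/acceleration"

for t, (v_i, vsp_i) in enumerate(zip(v, vsp)):
    print(f"t={t:2d}s  v={v_i:5.1f} m/s  VSP={vsp_i:6.2f} kW/t  "
          f"mode={opmode_group(v_i, vsp_i)}")
```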

Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)

Procedia PDF Downloads 341
3669 Studying the Effect of Silicon Substrate Intrinsic Carrier Concentration on Performance of ZnO/Si Solar Cells

Authors: Syed Sadique Anwer Askari, Mukul Kumar Das

Abstract:

Zinc oxide (ZnO) solar cells have drawn great attention due to their enhanced efficiency and low-cost fabrication process. In this study, a ZnO thin film is used as the active layer, hole blocking layer, and antireflection coating (ARC), as well as the transparent conductive oxide. To improve the conductivity of ZnO, the top layer of ZnO is doped with aluminum for the top contact. The intrinsic carrier concentration of the silicon substrate plays an important role in the power conversion efficiency (PCE) of the ZnO/Si solar cell. With the increase of intrinsic carrier concentration, the PCE decreases due to the increase in dark current in the solar cell. At 80 nm ZnO and 160 µm silicon substrate thickness, power conversion efficiencies of 26.45% and 21.64% are achieved with intrinsic carrier concentrations of 1x10⁹/cm³ and 1.4x10¹⁰/cm³, respectively.

Keywords: hetero-junction solar cell, solar cell, substrate intrinsic carrier concentration, ZnO/Si

Procedia PDF Downloads 602
3668 Dielectric Thickness Modulation Based Optically Transparent Leaky Wave Antenna Design

Authors: Waqar Ali Khan

Abstract:

A leaky-wave antenna design is proposed that is based on the realization of a certain kind of surface impedance profile which allows the existence of a perturbed surface wave (fast wave) that radiates. The antenna is realized by using the optically transparent material Plexiglas. Plexiglas behaves as a dielectric at radio frequencies and is transparent at optical frequencies. In order to have a ground plane for the microwave frequencies, metal strips are used parallel to the E field of the operating mode. The microwave wavelength chosen is large enough that it does not resolve the metal strip ground plane and sees it as a uniform ground plane. At optical frequencies, the metal strips do have some shadowing effect; however, about 62% of the optical power can still be transmitted through the antenna.

Keywords: Plexiglas, surface wave, optically transparent, metal strip

Procedia PDF Downloads 147
3667 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between the test step specification and the test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
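A short sketch of the 'Subset Accuracy' criterion mentioned above: a test-step prediction only counts as correct when the full set of component labels matches exactly; the component names are illustrative placeholders:

```python
# Subset accuracy (exact match) versus Hamming loss for multi-label predictions.
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss

components = ["open_app", "login", "set_speed", "check_hmi"]   # placeholder labels

# rows = test steps, columns = whether each component is part of the implementation
y_true = np.array([[1, 1, 0, 0],
                   [0, 0, 1, 1],
                   [1, 0, 0, 1]])
y_pred = np.array([[1, 1, 0, 0],
                   [0, 0, 1, 0],     # one missing label -> whole step counts as wrong
                   [1, 0, 0, 1]])

print("subset accuracy:", accuracy_score(y_true, y_pred))   # exact-match ratio, 2/3 here
print("hamming loss   :", hamming_loss(y_true, y_pred))     # per-label error rate
```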

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 134