Search results for: empathic accuracy

1315 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm

Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri

Abstract:

This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper part. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
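
For readers unfamiliar with IRKA, the sketch below shows one form of the basic fixed-point iteration for a standard, strictly proper SISO state-space system (A, b, c). It is a minimal illustration of the interpolatory idea, not the authors' descriptor-system implementation; the initial shifts, tolerance, and convergence test are assumptions. In the paper's setting, such an iteration would be applied only to the strictly proper part obtained after splitting off the polynomial part of the descriptor system.

```python
import numpy as np

def irka_siso(A, b, c, r, tol=1e-6, max_iter=100):
    """Minimal IRKA sketch: build rational Krylov bases at shifts sigma,
    project, and update the shifts as the mirrored reduced-order poles."""
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.logspace(-1, 1, r).astype(complex)      # assumed initial shifts
    for _ in range(max_iter):
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * I - A.T, c) for s in sigma])
        Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
        new_sigma = np.sort_complex(-np.linalg.eigvals(Ar))
        if np.max(np.abs(new_sigma - np.sort_complex(sigma))) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    # final Petrov-Galerkin projection at the converged shifts
    V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
    W = np.column_stack([np.linalg.solve(s * I - A.T, c) for s in sigma])
    E = W.conj().T @ V
    Ar = np.linalg.solve(E, W.conj().T @ A @ V)
    br = np.linalg.solve(E, W.conj().T @ b)
    cr = c @ V
    return Ar, br, cr

# Tiny usage example on a stable diagonal test system
A = np.diag([-1.0, -2.0, -3.0, -4.0]); b = np.ones(4); c = np.ones(4)
Ar, br, cr = irka_siso(A, b, c, r=2)
```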

Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction

Procedia PDF Downloads 31
1314 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results of comparing the units under investigation to estimate their performance. The Malmquist Productivity Index (MPI) is an important index for evaluating overall productivity, as it considers technological development and technical efficiency at the same time. This article proposes a model based on a multistage MPI that accounts for limited data (Grey's system theory). The model can evaluate the performance of units in a multistage process using limited and uncertain data. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. Results from solving the model showed that Grey's system theory improved the accuracy of estimating the future performance of the units under investigation. The model can be used in any case study in which MPI is applied and the data are limited or uncertain.
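
As background, the sketch below computes the conventional Malmquist productivity index and its usual decomposition from four distance-function values; it illustrates only the classical definition and does not include the multistage or Grey extensions proposed in the paper.

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """d_a_b = distance function D^a evaluated at the period-b input/output
    mix, e.g. d_t_t1 = D^t(x^{t+1}, y^{t+1}), typically obtained from DEA."""
    mpi = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
    efficiency_change = d_t1_t1 / d_t_t
    technical_change = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))
    return mpi, efficiency_change, technical_change

# MPI > 1 indicates productivity growth between the two periods
print(malmquist(0.80, 0.95, 0.70, 0.90))
```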

Keywords: Malmquist Index, Grey's Theory, CCR Model, network data envelopment analysis, Iran electricity power chain

Procedia PDF Downloads 165
1313 Major Depressive Disorder: Diagnosis based on Electroencephalogram Analysis

Authors: Wajid Mumtaz, Aamir Saeed Malik, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin

Abstract:

In this paper, a technique based on electroencephalogram (EEG) analysis is presented, aimed at diagnosing major depressive disorder (MDD) in a population of MDD patients and healthy controls. EEG is a recognized clinical modality in applications such as seizure diagnosis, anesthesia monitoring, and detection of brain death or stroke; however, its usability for psychiatric illnesses such as MDD is less studied. Therefore, two groups of participants were recruited for this study: (1) MDD patients and (2) healthy controls. EEG data acquired from both groups were analyzed in terms of inter-hemispheric asymmetry and the composite permutation entropy index (CPEI). To automate the process, the derived quantities were used as inputs to classifiers such as logistic regression (LR) and the support vector machine (SVM). The trained models were evaluated on a test dataset, and their performance is reported as the accuracy of classifying MDD patients versus controls, together with the corresponding sensitivities and specificities (LR = 81.7% and SVM = 81.5%). Based on the results, it is concluded that the derived measures are useful indicators for diagnosing MDD against normal controls. In addition, the results motivate further exploration of other measures for the same purpose.
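
As a rough illustration of the pipeline described (EEG-derived features fed to LR and SVM classifiers), the sketch below computes a hemispheric alpha-band asymmetry feature and trains both classifiers. The channel pair, band limits, placeholder data, and the omission of the CPEI feature are assumptions made here for brevity.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def alpha_asymmetry(eeg_left, eeg_right, fs=256):
    """Inter-hemispheric alpha-band (8-13 Hz) asymmetry for one channel pair
    (e.g. F3/F4); the CPEI feature used in the paper is not reproduced here."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)
        band = (f >= 8) & (f <= 13)
        return np.sum(pxx[band]) * (f[1] - f[0])
    return np.log(band_power(eeg_right)) - np.log(band_power(eeg_left))

# Placeholder feature matrix (e.g. asymmetry values for several channel pairs)
# and labels (1 = MDD, 0 = control) stand in for the recorded EEG data.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = rng.integers(0, 2, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "test accuracy:", clf.score(X_te, y_te))
```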

Keywords: major depressive disorder, diagnosis based on EEG, EEG derived features, CPEI, inter-hemispheric asymmetry

Procedia PDF Downloads 546
1312 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 of the 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One way to avert NCDs is early detection of disease. To this end, we developed the 'U-Healthcare Mirror', which measures vital signs such as heart rate (HR) and respiration rate without physical contact and without requiring the user's active participation. To measure HR in the mirror, we used a digital camera that records red, green, and blue (RGB) discoloration from sequences of the user's facial images. We extracted the blood volume pulse (BVP) from the RGB discoloration, because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration and adaptive filters to remove noise, implementing both the BSS and the adaptive filters with the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We measured HR with our method and with a previous method based on independent component analysis (ICA), and compared both against HR readings from a commercial oximeter. The experiment was conducted at distances of 30-110 cm and light intensities of 5-2,000 lux, with seven measurements per condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73, an improvement in accuracy over the previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm at medium light intensity, around 550 lux.
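
The sketch below illustrates the general idea of such a pipeline: per-frame mean RGB traces are separated with an SVD-based blind source separation step, band-pass filtered, and the dominant spectral peak is converted to a heart rate. It is a generic approximation; the adaptive-filtering details of the paper are not reproduced, and the filter band and frame rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt

def estimate_hr(rgb_traces, fps=30.0):
    """rgb_traces: array (n_frames, 3) of mean R, G, B values over the facial
    region per frame. SVD is used as a simple blind source separation step,
    followed by a band-pass filter and a spectral peak search."""
    x = detrend(rgb_traces, axis=0)
    x = (x - x.mean(axis=0)) / x.std(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    sources = u * s                                            # candidate sources
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)     # 42-240 bpm band
    best_hr, best_power = 0.0, -np.inf
    for src in sources.T:
        filtered = filtfilt(b, a, src)
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(filtered))
        k = np.argmax(spectrum[1:]) + 1                        # skip the DC bin
        if spectrum[k] > best_power:
            best_power, best_hr = spectrum[k], freqs[k] * 60.0
    return best_hr

# Example with a synthetic 72-bpm pulse buried in noise
t = np.arange(0, 30, 1 / 30.0)
rng = np.random.default_rng(0)
traces = rng.normal(size=(t.size, 3)) + 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None]
print("estimated HR:", round(estimate_hr(traces), 1), "bpm")
```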

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 329
1311 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available worldwide has increased rapidly. This coincided with the emergence of concepts such as big data and the Internet of Things, which have made data available all over the world. However, managing this massive amount of data remains a challenge because of the wide variety of data types and their distribution. Locating a required file on the first attempt is therefore not easy, since many files distributed on the web have very similar names; as a result, the accuracy and speed of search have been negatively affected. This work presents a method that uses electroencephalography signals to locate files based on their contents. Building on the idea of processing natural brain waves, this work analyzes the brain-wave signals of different people, extracts the most appropriate features using a multi-objective metaheuristic algorithm, and classifies them with an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it with real participants demonstrated its ability to find the desired files accurately, in noticeably less time, and to return them as the first choice for the user.

Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization

Procedia PDF Downloads 175
1310 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use human body parts for unique and reliable identification based on physiological traits. The iris recognition system is a biometric-based method of identification. The human iris has discriminating characteristics that make the method efficient; achieving this efficiency requires extracting distinctive features from the iris in order to authenticate persons accurately. In this study, an iris recognition approach using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter produces the patterns used for training, which are then passed to the Hamming distance matching technique for recognition. Results are compared for two iris image subjects at matching indices of 1, 2, 3, 4, and 5 filters, based on the CASIA iris image database. By comparing the results for the two subjects, the computational time of the developed models, measured as training time and average testing time of the Hamming distance classifier, is reported, with a best recognition accuracy of 96.11%. Iris localization and segmentation are performed with Daugman's integro-differential operator, and normalization is confined to Daugman's rubber sheet model.
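
A minimal sketch of the matching stage is shown below, assuming the iris has already been segmented and normalized with Daugman's rubber sheet model: 2D Gabor responses are phase-quantized into a binary code and compared with the fractional Hamming distance. The filter frequencies and the use of scikit-image's gabor function are assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.filters import gabor

def iris_code(normalized_iris, frequencies=(0.1, 0.2, 0.3)):
    """Phase-quantize 2D Gabor responses of a rubber-sheet-normalized iris
    image into a binary code; the frequencies are illustrative."""
    bits = []
    for f in frequencies:
        real, imag = gabor(normalized_iris, frequency=f)
        bits.append((real > 0).ravel())
        bits.append((imag > 0).ravel())
    return np.concatenate(bits)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes (lower = closer)."""
    return np.mean(code_a != code_b)

# Example with two random "normalized iris" strips of the same size
rng = np.random.default_rng(0)
a, b = rng.random((32, 256)), rng.random((32, 256))
print(hamming_distance(iris_code(a), iris_code(a)))   # 0.0 for identical images
print(hamming_distance(iris_code(a), iris_code(b)))   # near 0.5 for unrelated ones
```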

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 65
1309 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults

Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu

Abstract:

A strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be utilized to estimate the age of an individual by measuring the pulp cavity size on dental radiographs, a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 - (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on the results of this research, it was concluded that Cameriere's method is suitable for dental age estimation and can be used in forensic procedures for Turkish adults.
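
The derived regression can be applied directly, as in the short sketch below; the example AR value is illustrative only.

```python
def estimate_age(pulp_tooth_area_ratio):
    """Age estimate from the pulp/tooth area ratio (AR) of a maxillary canine,
    using the regression equation derived in this study."""
    return 77.365 - 351.193 * pulp_tooth_area_ratio

print(round(estimate_age(0.15), 1))   # an AR of 0.15 gives roughly 24.7 years
```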

Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method

Procedia PDF Downloads 450
1308 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in its early stages. Datasets from the University Hospital Centre of Coimbra were chosen, and different machine learning (ML)-based and neural network (NN) classifiers were studied. We selected favorable features among the nine provided attributes of the clinical dataset using a random forest algorithm. The dataset consists of both healthy controls and BC patients, and glucose, BMI, resistin, and age were found to be the most important features, in that order. We then analyzed these features with various ML-based classifiers, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with the NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that among these techniques, the SVM and MLP classifiers achieve the highest accuracy, 96% and 92%, respectively. These results indicate that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigation with more data for other types of cancer.
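
A compact sketch of the workflow (random forest feature ranking followed by SVM and MLP classification) is given below; the placeholder data, hyperparameters, and cross-validation setup are assumptions standing in for the Coimbra dataset and the paper's exact protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: the nine clinical attributes (age, BMI, glucose, insulin, HOMA, leptin,
# adiponectin, resistin, MCP-1); y: 0 = healthy control, 1 = BC patient.
# Placeholder values stand in for the Coimbra dataset.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(116, 9)), rng.integers(0, 2, size=116)

ranking = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top4 = np.argsort(ranking.feature_importances_)[-4:]   # paper: glucose, BMI, resistin, age

for model in (SVC(kernel="rbf"), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)):
    clf = make_pipeline(StandardScaler(), model)
    acc = cross_val_score(clf, X[:, top4], y, cv=5).mean()
    print(type(model).__name__, "CV accuracy:", round(acc, 3))
```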

Keywords: breast cancer, diagnosis, machine learning, biomarker classification, neural network

Procedia PDF Downloads 136
1307 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service

Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

In telemedicine, the image repository service is important for increasing the accuracy of diagnostic support for medical personnel. This study compares two routing algorithms in terms of quality of service (QoS) in order to analyze optimal performance when uploading and/or downloading medical images, as part of a broader comparison of Tabu Search with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. Tabu Search and Simulated Annealing were chosen for their wide use in this type of application, and QoS was measured with the following metrics: delay, throughput, jitter, and latency. Routing tests were carried out on ten 40 MB images in Digital Imaging and Communications in Medicine (DICOM) format. Each test ran for ten minutes under different traffic conditions, for a total of 25 tests, from a server at Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH), Chile. The results show that Tabu Search achieves better QoS performance than Simulated Annealing and optimizes the routing of medical images, a basic requirement for offering diagnostic imaging services in telemedicine.
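
To indicate the kind of search involved, the sketch below is a generic tabu search over orderings of intermediate nodes that minimizes a weighted delay/jitter cost; the cost weights, neighborhood, and tabu tenure are assumptions and do not represent the authors' actual network model or their Simulated Annealing counterpart.

```python
import itertools

def qos_cost(route, delay, jitter, w_delay=0.7, w_jitter=0.3):
    """Weighted QoS cost of a route (list of node indices), computed from
    per-link delay and jitter matrices; the weights are illustrative."""
    return sum(w_delay * delay[a][b] + w_jitter * jitter[a][b]
               for a, b in zip(route[:-1], route[1:]))

def tabu_search(route, delay, jitter, iters=200, tabu_len=10):
    """Minimal tabu search: move to the best non-tabu swap of two intermediate
    nodes (even if it worsens the cost) and remember the best route seen."""
    current = route[:]
    best, best_cost = current[:], qos_cost(current, delay, jitter)
    tabu = []
    swaps = list(itertools.combinations(range(1, len(route) - 1), 2))
    for _ in range(iters):
        chosen, chosen_cost, chosen_swap = None, float("inf"), None
        for i, j in swaps:
            if (i, j) in tabu:
                continue
            cand = current[:]
            cand[i], cand[j] = cand[j], cand[i]
            cost = qos_cost(cand, delay, jitter)
            if cost < chosen_cost:
                chosen, chosen_cost, chosen_swap = cand, cost, (i, j)
        if chosen is None:
            break
        current, tabu = chosen, (tabu + [chosen_swap])[-tabu_len:]
        if chosen_cost < best_cost:
            best, best_cost = current[:], chosen_cost
    return best, best_cost

# Toy usage with a 5-node path and synthetic per-link delay/jitter matrices
n = 5
delay = [[abs(i - j) + 1 for j in range(n)] for i in range(n)]
jitter = [[1.0 for _ in range(n)] for _ in range(n)]
print(tabu_search(list(range(n)), delay, jitter))
```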

Keywords: medical image, QoS, simulated annealing, Tabu search, telemedicine

Procedia PDF Downloads 219
1306 Experimental Validation of a Mathematical Model for Sizing End-of-Production-Line Test Benches for Electric Motors of Electric Vehicle

Authors: Emiliano Lustrissimi, Bonifacio Bianco, Sebastiano Caravaggi, Antonio Rosato

Abstract:

A mathematical framework has been designed to enhance the configuration of an end-of-production-line (EOL) test bench. Such a bench is used to assess the performance of electric motors or axles intended for electric vehicles. The model has been developed to predict the behaviour of EOL test benches and electric motors/axles under various boundary conditions, eliminating the need for extensive physical testing and reducing the corresponding power consumption. The suggested model is versatile: it can be applied to various types of electric motors or axles and adapted to different power ratings. The maximum performance to be guaranteed by the electric motors according to the carmaker's specifications is taken as the model input. The required performance of each main EOL test bench component is then calculated, and the corresponding systems available on the market are selected from manufacturers' catalogues. In this study, an EOL test bench was designed according to the proposed model outputs for testing a low-power (about 22 kW) electric axle. The performance of the designed EOL test bench was measured and used to validate the proposed model and to assess both the consistency of the constraints and the accuracy of the predicted electric demands. The comparison between experimental and predicted data showed reasonable agreement, demonstrating that, despite some discrepancies, the model gives an accurate representation of EOL test bench performance.

Keywords: electric motors, electric vehicles, end-of-production-line test bench, mathematical model, field tests

Procedia PDF Downloads 51
1305 Principal Component Analysis on Colon Cancer Detection

Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti

Abstract:

Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system; lymphoma and carcinoma are the types of cancer that attack the human colon. Colon cancer causes about half a million deaths every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as low fiber consumption, lack of exercise, and low awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal tissue. The designed system used 198 colon tissue pathology images, consisting of 66 images of lymphoma, 66 images of carcinoma, and 66 images of normal/healthy colon tissue. The system classifies colon cancer through image preprocessing, feature extraction using Principal Component Analysis (PCA), and classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, RGB-to-grayscale conversion, edge detection, and histogram equalization. Tests were performed with several K-NN parameter settings. The result of this project is an image processing system that detects and classifies the type of colon cancer with high accuracy and low computation time.
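
The preprocessing-PCA-KNN chain can be sketched as below; the image size, number of principal components, neighbor count, and placeholder arrays are assumptions, and the edge-detection stage mentioned in the abstract is omitted for brevity.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.exposure import equalize_hist
from skimage.transform import resize
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def preprocess(img_rgb, size=(128, 128)):
    """Resize, convert to grayscale, and equalize the histogram (the edge-
    detection stage mentioned in the abstract is omitted here)."""
    gray = rgb2gray(resize(img_rgb, size, anti_aliasing=True))
    return equalize_hist(gray).ravel()

# Placeholder feature matrix standing in for the 198 preprocessed pathology
# images; labels: 0 = normal, 1 = lymphoma, 2 = carcinoma.
rng = np.random.default_rng(2)
X = rng.random(size=(30, 128 * 128))
y = np.repeat([0, 1, 2], 10)

clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```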

Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis

Procedia PDF Downloads 205
1304 A Finite Element Based Predictive Stone Lofting Simulation Methodology for Automotive Vehicles

Authors: Gaurav Bisht, Rahul Rathnakumar, Ravikumar Duggirala

Abstract:

Predictive simulations are one of the key focus areas in safety-critical industries such as aerospace and high-performance automotive engineering. The stone-chipping study is one such effort taken up by the industry to predict and evaluate the damage caused by gravel impact on vehicles. This paper describes a finite element based method that can simulate the ejection of gravel chips from a vehicle tire. The FE simulations were used to obtain the initial ejection velocity of the stones for various driving conditions using a computational contact mechanics approach. To verify the accuracy of the tire model, several parametric studies were conducted. The FE simulations resulted in stone loft velocities ranging from 0 to 8 m/s, regardless of tire speed. The stress on the tire at the instant of initial contact with the stone increased linearly with vehicle speed. Mesh convergence studies indicated that a highly resolved tire mesh tends to result in better momentum transfer between the tire and the stone. A fine tire mesh also showed a linearly increasing relationship between the tire forward speed and the stone lofting speed, which was not observed with coarser meshes. However, it also highlighted a potential challenge: the ejection velocity vector of the stone seemed to be sensitive to the mesh, owing to the FE-based contact mechanical formulation of the problem.

Keywords: abaqus, contact mechanics, foreign object debris, stone chipping

Procedia PDF Downloads 263
1303 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network

Authors: Sandesh Achar

Abstract:

Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes. These complexities impede the swift adoption of personalized advertisement on a large scale. Personalized advertisement, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-Score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster widespread personalized advertisement adoption in marketing.
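
A minimal Keras sketch of a BiLSTM text classifier of the kind described is shown below; the vocabulary size, embedding and hidden dimensions, and training settings are assumptions, not the tuned D3Advert architecture.

```python
import tensorflow as tf

VOCAB_SIZE, NUM_CLASSES = 20000, 16        # 16 MBTI personality types

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train: integer-encoded, padded social-media posts; y_train: MBTI class ids (0-15)
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```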

Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP

Procedia PDF Downloads 44
1302 The Map of Cassini: An Accurate View of Current Border Between Spain and France

Authors: Barbara Polo Martin

Abstract:

During the 18th century, the border between Spain and France underwent various changes, primarily due to territorial agreements, wars, and treaties between the two nations and other European powers. For studying these changes, the Cassini maps remain valuable historical documents, offering a glimpse into the landscape and geography of 18th-century France and its neighboring regions, including the border between Spain and France. However, it is essential to recognize that these maps may not reflect modern political boundaries or territorial changes that have occurred since their creation. The project was initiated by King Louis XV in 1744 and continued by his successor, Louis XVI. The primary objective was to produce accurate maps of France, which would serve various purposes, including military, administrative, and scientific ones. The Cassini maps were groundbreaking for their time, as they were among the earliest attempts to create topographic maps on a national scale. They covered the entirety of France and were based on meticulous surveying and cartographic techniques. The maps featured precise geographic details, including elevation contours, rivers, roads, forests, and settlements. This study aims to analyze this rich and little-known cartography of France, to examine the wealth of place names it offers, and to assess the accuracy of the boundary delimitations created over time between the two empires, both historically and through a Geographic Information System. The study will thus offer deeper knowledge of the cartography that marks the beginning of topography in Europe.

Keywords: cartography, engineering, borders, Spain, France, Cassini

Procedia PDF Downloads 60
1301 Estimation of Consolidating Settlement Based on a Time-Dependent Skin Friction Model Considering Column Surface Roughness

Authors: Jiang Zhenbo, Ishikura Ryohei, Yasufuku Noriyuki

Abstract:

Improvement of soft clay deposits by the combination of surface stabilization and floating-type cement-treated columns is one of the most popular techniques worldwide. On the basis of a one-dimensional consolidation model, a time-dependent skin friction model for the column-soil interaction is proposed. The nonlinear relationship between the column shaft shear stresses and the effective vertical pressure of the surrounding soil can be described by this model. The influence of column-soil surface roughness can be represented using a roughness coefficient R, which plays an important role in the design of column length. Based on the homogenization method, part of the floating-type improved ground is treated as an unimproved portion, whose length αH1 is defined as a time-dependent equivalent skin friction length. The compression settlement of this unimproved portion can be predicted using the soft clay parameters alone. Apart from calculating the settlement of this composite ground, the load transfer mechanism is discussed using model tests. The proposed model is validated by comparison with calculations and laboratory results of model and ring shear tests, which indicate the suitability and accuracy of the solutions in this paper.

Keywords: floating type improved foundation, time-dependent skin friction, roughness, consolidation

Procedia PDF Downloads 468
1300 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample error was also compared under different Vapnik-Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases, and the accuracy increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and to design prediction tools for these enterprises.
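
The comparison of in-sample and out-of-sample errors can be sketched as follows with synthetic data standing in for the customer features; the feature counts and model hyperparameters are assumptions. Restricting model capacity (for example through the kernel parameters) is what controls the effective VC dimension and hence overfitting.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder customer features (demographic / behavioral / social) and a
# binary "prefers genre X" label standing in for the SME data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Gaussian-kernel SVM", SVC(kernel="rbf", gamma="scale")),
                    ("Logistic regression", LogisticRegression(max_iter=1000))]:
    clf = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    print(name,
          "in-sample accuracy:", round(clf.score(X_tr, y_tr), 3),
          "out-of-sample accuracy:", round(clf.score(X_te, y_te), 3))
```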

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 260
1299 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir using regression-based modelling. Empirical equations, support vector machine models (with polynomial and radial basis function kernels), and linear regression techniques were applied to the triangular sharp-crested weir, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. Good agreement was observed between the measured values and the values obtained with the empirical equations, the support vector machine (polynomial and RBF) models, and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models, along with the empirical equations and linear regression techniques, provide acceptable predictions of the measured air entrainment rate and aeration efficiency with reasonable accuracy. A sensitivity analysis was also performed to study the impact of each input parameter on the air entrainment rate and the aeration efficiency.
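
A minimal regression sketch in the spirit of the paper is shown below, with placeholder measurements; the kernels match those named in the abstract, while the SVR hyperparameters and scoring choice are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X columns: drop height, discharge, vertex angle; y: aeration efficiency.
# Placeholder values stand in for the laboratory measurements.
rng = np.random.default_rng(3)
X = rng.random(size=(80, 3))
y = rng.random(size=80)

for kernel in ("poly", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0, epsilon=0.01))
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(kernel, "mean CV R^2:", round(score, 3))
```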

Keywords: air entrainment rate, dissolved oxygen, weir, SVM, regression

Procedia PDF Downloads 436
1298 A New Family of Integration Methods for Nonlinear Dynamic Analysis

Authors: Shuenn-Yih Chang, Chiu-Li Huang, Ngoc-Cuong Tran

Abstract:

A new family of structure-dependent integration methods, whose coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the time step size, is proposed in this work. This family of methods simultaneously combines controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and zero damping can be achieved. In addition, it can provide high-frequency damping to suppress, or even remove, the spurious oscillations of high-frequency modes, whereas the low-frequency modes are integrated very accurately owing to the almost zero damping applied to them. It is shown herein that the proposed family of methods can have exactly the same numerical properties as the HHT-α method for linear elastic systems. In addition, it preserves the most important property of a structure-dependent integration method, namely an explicit formulation for each time step. Consequently, it can save substantial computational effort in solving inertial problems compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family of methods is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, and this falls to 0.16% for a 1000-DOF system. The saving in computational effort is therefore very significant.

Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy

Procedia PDF Downloads 639
1297 Impacts of the Modification of a Two-Blade Agitator on the Agitation of Newtonian Fluids

Authors: Abderrahim Sidi Mohammed Nekrouf, Sarra Youcefi

Abstract:

Fluid mixing plays a crucial role in numerous industries as it has a significant impact on the final product quality and performance. In certain cases, the circulation of viscous fluids presents challenges, leading to the formation of stagnant zones. To overcome this issue, stirring devices are employed for fluid mixing. This study focuses on a numerical analysis aimed at understanding the behavior of Newtonian fluids when agitated by a two-blade agitator in a cylindrical vessel. We investigate the influence of the agitator shape on fluid motion. Bi-blade agitators of this type are commonly used in the food, cosmetic, and chemical industries to agitate both viscous and non-viscous liquids. Numerical simulations were conducted using Computational Fluid Dynamics (CFD) software to obtain velocity profiles, streamlines, velocity contours, and the associated power number. The obtained results were compared with experimental data available in the literature, validating the accuracy of our numerical approach. The results clearly demonstrate that modifying the agitator shape has a significant impact on fluid motion. This modification generates an axial flow that enhances the efficiency of the fluid flow. The various velocity results convincingly reveal that the fluid is more uniformly agitated with this modification, resulting in improved circulation and a substantial reduction in stagnant zones.

Keywords: Newtonian fluids, numerical modeling, two-blade agitator, CFD

Procedia PDF Downloads 78
1296 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that have been characterized experimentally are used. In this study, a validation case consisting of the experimental characterization of a specific class of total temperature probes is selected from the literature, and a numerical conjugate heat transfer analysis methodology is developed to study the probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is then used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for total temperature measurement errors for this class of probes under different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
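
The size of the effect being characterized can be illustrated with the standard recovery-factor relation; the sketch below uses an assumed flow condition and an assumed recovery factor, and it is not the paper's thermal model.

```python
def indicated_temperature(t_static_k, velocity_ms, recovery_factor, cp=1005.0):
    """Temperature indicated by a probe with recovery factor r:
    T_ind = T_static + r * V^2 / (2 * cp), with cp for air in J/(kg K).
    r = 1 recovers the true total temperature."""
    return t_static_k + recovery_factor * velocity_ms ** 2 / (2.0 * cp)

t_static, v = 288.15, 250.0                            # assumed flow condition
t_total = indicated_temperature(t_static, v, 1.0)
t_probe = indicated_temperature(t_static, v, 0.96)     # assumed recovery factor
print("true total T:", round(t_total, 2), "K; probe reads:", round(t_probe, 2),
      "K; error:", round(t_total - t_probe, 2), "K")
```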

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 134
1295 Defect Identification in Partial Discharge Patterns of Gas Insulated Switchgear and Straight Cable Joint

Authors: Chien-Kuo Chang, Yu-Hsiang Lin, Yi-Yun Tang, Min-Chiu Wu

Abstract:

As technology advances, the harm caused by power outages becomes substantial, and outages are mostly due to problems in the power grid; this highlights the need to further improve the reliability of the power system. In the power system, gas-insulated switchgear (GIS) and power cables play a crucial role. Long-term operation under high voltage can cause the insulation materials in this equipment to crack, potentially leading to partial discharges. If these partial discharges (PD) can be analyzed, preventive maintenance and replacement of equipment can be carried out, thereby improving the reliability of the power grid. This research diagnoses defects by identifying three different defects in GIS and three different defects in straight cable joints, for a total of six defect types. The measured partial discharge data are converted through phase-resolved analysis diagrams and pulse sequence analysis. Discharge features are extracted using convolutional image processing, and three different deep learning models, CNN, ResNet18, and MobileNet, are used for training and evaluation. Class Activation Mapping is used to interpret the black-box behavior of the deep learning models, each of which achieves an accuracy rate of over 95%. Lastly, the overall performance is enhanced through an ensemble learning voting method.
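
One common way to set up such image classifiers is transfer learning from a pretrained backbone, sketched below for the six defect classes; the use of torchvision's ResNet-18 weights, the optimizer, and the image size are assumptions rather than the paper's exact training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_DEFECTS = 6   # three GIS defects + three straight-cable-joint defects

# Discharge-pattern images are treated as ordinary 3-channel images here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_DEFECTS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, 224, 224) tensor of PRPD/pulse-sequence images,
    labels: (N,) tensor of defect class ids in [0, 5]."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```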

Keywords: partial discharge, gas-insulated switches, straight cable joint, defect identification, deep learning, ensemble learning

Procedia PDF Downloads 78
1294 Development of Partial Discharge Defect Recognition and Status Diagnosis System with Adaptive Deep Learning

Authors: Chien-Kuo Chang, Bo-Wei Wu, Yi-Yun Tang, Min-Chiu Wu

Abstract:

This paper proposes a power equipment diagnosis system based on partial discharge (PD), characterized by improved readability of experimental data and convenient operation. The system integrates a variety of analysis programs with different data formats and programming languages and establishes a set of interfaces that can follow and extend this structure, which also helps subsequent maintenance and innovation. This study presents a case in which a Convolutional Neural Network (CNN) developed for this purpose is integrated with the system, using the designed model architecture to simplify the complex training process. The simplified training process is expected to support an adaptive deep learning experimental structure: by selecting different test data for repeated training, the accuracy of the identification system can be enhanced. On this platform, the measurement status and partial discharge pattern of each piece of equipment can be checked in real time, real-time identification can be enabled, and various trained models can be used to carry out real-time partial discharge insulation defect identification and insulation state diagnosis. When power equipment enters a dangerous period, it can be replaced early to avoid unexpected electrical accidents.

Keywords: partial discharge, convolutional neural network, partial discharge analysis platform, adaptive deep learning

Procedia PDF Downloads 78
1293 Experimental Modeling and Simulation of Zero-Surface Temperature of Controlled Water Jet Impingement Cooling System for Hot-Rolled Steel Plates

Authors: Thomas Okechukwu Onah, Onyekachi Marcel Egwuagu

Abstract:

The zero-surface temperature, which controls the cooling profile, was modeled and used to investigate the effect of process parameters on hot-rolled steel plates. The parameters include impingement gaps of 40 mm to 70 mm; pipe diameters of 20 mm to 45 mm feeding a jet nozzle with 30 holes of 8 mm diameter each; and flow rates between 2.896×10⁻⁶ m³/s and 3.13×10⁻⁵ m³/s. The developed simulation model of the zero-surface temperature, upon validation, showed 99% prediction accuracy, with dimensional homogeneity established. The evaluated zero-surface temperature of the controlled water jet impingement on steel plates showed a high cooling rate of 36.31 °C/s at an optimal nozzle diameter of 20 mm, an impingement gap of 70 mm, and a flow rate of 1.77×10⁻⁵ m³/s, giving a Reynolds number of 2758.586 in the turbulent regime. It was also deduced that as the nozzle diameter increased, the impingement gap decreased. This achieved a faster cooling rate down to an optimum temperature of 300 °C irrespective of the starting surface temperature. The results additionally showed that, with a test-plate initial temperature of 550 °C, a controlled cooling temperature of about 160 °C produced film and nucleate boiling heat extraction that was particularly beneficial at the end of controlled cooling and influenced the microstructural properties of the test plates.

Keywords: temperature, mechanistic-model, plates, impingements, dimensionless-numbers

Procedia PDF Downloads 46
1292 Iterative Estimator-Based Nonlinear Backstepping Control of a Robotic Exoskeleton

Authors: Brahmi Brahim, Mohammad Habibur Rahman, Maarouf Saad, Cristóbal Ochoa Luna

Abstract:

Repetitive training movements are an efficient way to improve the ability and movement performance of stroke survivors and to help them recover lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which makes assistive/robotic systems suffer from repetitive/periodic uncertainties and external perturbations induced by the high-order dynamic model (seven DOF) and the interaction with human muscle; these affect the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping control was implemented, with designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system; the estimator does not need a precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbations and the effectiveness of the iterative estimator in rejecting repetitive/periodic disturbances.
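
To make the backstepping idea concrete, the sketch below applies it to a toy double integrator tracking a sinusoidal reference; the gains and reference are assumptions, and this is not the 7-DOF ETS-MARSE controller or its iterative disturbance estimator.

```python
import numpy as np

def backstepping_sim(k1=4.0, k2=4.0, dt=0.001, t_end=5.0):
    """Backstepping tracking control for a toy double integrator
    x1' = x2, x2' = u, following the reference xd(t) = sin(t)."""
    x1, x2 = 0.5, 0.0
    errors = []
    for ti in np.arange(0.0, t_end, dt):
        xd, xd_dot, xd_ddot = np.sin(ti), np.cos(ti), -np.sin(ti)
        z1 = x1 - xd
        alpha = xd_dot - k1 * z1              # virtual control for x2
        z2 = x2 - alpha
        alpha_dot = xd_ddot - k1 * (x2 - xd_dot)
        u = alpha_dot - z1 - k2 * z2          # Lyapunov-based control law
        x1 += dt * x2                         # forward-Euler plant update
        x2 += dt * u
        errors.append(abs(z1))
    return max(errors[-int(1.0 / dt):])       # worst error over the last second

print("tracking error over the last second:", backstepping_sim())
```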

Keywords: backstepping control, iterative control, rehabilitation, ETS-MARSE

Procedia PDF Downloads 286
1291 Effects of Pore-Water Pressure on the Motion of Debris Flow

Authors: Meng-Yu Lin, Wan-Ju Lee

Abstract:

Pore-water pressure, which mediates effective stress and shear strength at grain contacts, has a great influence on the motion of debris flow, and the factors that control the diffusion of excess pore-water pressure play very important roles in debris-flow motion. This research investigates these effects by solving numerically for the distribution of pore-water pressure in an unsteady, surging motion of debris flow. The governing equations are the depth-averaged equations for the motion of debris-flow surges coupled with the one-dimensional diffusion equation for excess pore-water pressures. The pore-pressure diffusion equation is solved using a Fourier series, which may improve the accuracy of the solution, and the motion of the debris-flow surge is modelled using a Lagrangian particle method. From the computational results, the effects of the pore-pressure diffusivity and the initial excess pore pressure on the formation of debris-flow surges are investigated. The results show that the presence of pore water can increase surge velocities and change the depth distribution profiles. Because the vertical component of the pore-water velocity is linearly distributed, pore pressure dissipates rapidly near the bottom and forms a parabolic distribution in the vertical direction. Increases in the diffusivity of pore-water pressure cause the pore pressures to decay more rapidly and thus decrease the mobility of the surge.
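
For reference, the classical Fourier-series solution of the 1-D excess pore-pressure diffusion equation (drained surface, impermeable base, uniform initial pressure) is sketched below; the boundary conditions and parameter values are assumptions used only to illustrate the form of series solution adopted in the paper.

```python
import numpy as np

def excess_pore_pressure(z, t, u0, D, H, n_terms=100):
    """Classical Fourier-series solution of du/dt = D * d2u/dz2 with an
    initially uniform excess pressure u0, a drained surface at z = 0 and
    an impermeable base at z = H (D is the pore-pressure diffusivity)."""
    z = np.asarray(z, dtype=float)
    u = np.zeros_like(z)
    for m in range(n_terms):
        M = np.pi * (2 * m + 1) / 2.0
        u += (2.0 * u0 / M) * np.sin(M * z / H) * np.exp(-(M / H) ** 2 * D * t)
    return u

depths = np.linspace(0.0, 1.0, 5)                 # positions in a 1 m layer
print(excess_pore_pressure(depths, t=50.0, u0=10.0, D=1e-3, H=1.0))
```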

Keywords: debris flow, diffusion, Lagrangian particle method, pore-pressure diffusivity, pore-water pressure

Procedia PDF Downloads 143
1290 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

With environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality, so accurate and timely forecasts of the AQI (air quality index) are of great value. In order to simplify the factors influencing a city's air quality and to forecast the city's AQI for the next day, this study used MATLAB software and constructed a PCA-GA-BP mathematical model. Specifically, the study first applied principal component analysis (PCA) to the factors influencing tomorrow's AQI, including today's weather, industrial waste gas, and IAQI data. A back propagation neural network model (BP), optimized by a genetic algorithm (GA), was then used to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GA-BP model's forecasts, the study uses two statistical indices to evaluate the AQI forecast results, the normalized mean square error and the fractional bias. The mean square error is further reduced by optimizing the gene structure of individuals in the genetic algorithm and adjusting the parameters of the back propagation model. In conclusion, the model's AQI forecasting performance is convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
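
A much-simplified sketch of the PCA-plus-neural-network part of the pipeline is shown below; the placeholder data, number of components, and network size are assumptions, and the genetic-algorithm optimization of the BP network is replaced here by ordinary training.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: today's weather, industrial waste-gas and IAQI indicators; y: tomorrow's
# AQI. Placeholder values stand in for the monitoring data, and the GA step
# is replaced by ordinary gradient-based training of the network.
rng = np.random.default_rng(4)
X = rng.normal(size=(365, 12))
y = rng.normal(loc=100.0, scale=30.0, size=365)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
print("cross-validated R^2:", round(cross_val_score(model, X, y, cv=5, scoring="r2").mean(), 3))
```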

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model

Procedia PDF Downloads 228
1289 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Mathematical and statistical applications are nowadays developed with greater complexity and accuracy, and as a result they need more computational power to execute quickly. Multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow more parallelism within a node. However, taking advantage of this parallelism is not easy, because of issues such as core communication, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the tool's performance metrics. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: in the best case tested, the time is reduced by around 96% between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 369
1288 Numerical Determination of Transition of Cup Height between Hydroforming Processes

Authors: H. Selcuk Halkacı, Mevlüt Türköz, Ekrem Öztürk, Murat Dilmec

Abstract:

Various approaches to the low formability of lightweight materials such as aluminium and magnesium alloys are being investigated in many studies, and advanced forming processes such as hydroforming are one of them. In recent decades, the sheet hydroforming process has attracted increasing interest, particularly in the automotive and aerospace industries. This process has many advantages, such as enhanced formability, the capability to form complex parts, higher dimensional accuracy and surface quality, reduced tool costs, and reduced die wear compared to conventional sheet metal forming processes. There are two types of sheet hydroforming. One of them is hydromechanical deep drawing (HDD), a special drawing process in which a pressurized fluid medium is used in place of one of the die halves of the conventional deep drawing (CDD) process. The other is sheet hydroforming with die (SHF-D), in which the blank is formed by fluid pressure and takes the shape of the die half. In this study, the transition of cup height with cup diameter between the processes was determined by simulating the processes with finite element analysis. First, the SHF-D process was simulated for a 40 mm cup diameter at cup heights changing from 10 mm to 30 mm, and the cup height-to-diameter ratio at which successful forming is no longer possible was determined. The same ratio was then checked for a cup diameter of 60 mm. Finally, the thickness distributions of the cups formed by the SHF-D and HDD processes were compared for these cup heights. It was found that the thickness distribution in the HDD process was more uniform in the analyses.

Keywords: finite element analysis, HDD, hydroforming, sheet metal forming, SHF-D

Procedia PDF Downloads 429
1287 The Attentional Focus Impact on the Decision Making in Three-Game Situations in Tennis

Authors: Marina Tsetseli, Eleni Zetou, Maria Michalopoulou, Nikos Vernadakis

Abstract:

Game performance, besides the accuracy and quality of skill execution, depends heavily on where athletes focus their attention while performing a skill. The purpose of the present study was to examine and compare the effect of internal and external focus of attention instructions on decision making in tennis among players 8-9 years old (M=8.4, SD=0.49). The participants (N=40) were divided into two groups and followed an intervention training program that lasted 4 weeks; the first group (N=20) trained under internal focus of attention instructions and the second group (N=20) under external focus of attention instructions. Three measurements took place (pre-test, post-test, and retention test), in which the participants were video recorded while playing matches in real scoring conditions. The GPAI (Game Performance Assessment Instrument) was used to evaluate decision making in three game situations: service, return of the service, and baseline game. Repeated-measures ANOVA (2 groups x 3 measurements) revealed a significant interaction between groups and measurements. Specifically, the data analysis showed the superiority of the group that was instructed to focus externally. The high scores of the external attention group were maintained at the third measurement as well, which indicates that the effect concerned not only performance but also learning. Thus, cues that lead to an external focus of attention enhance the decision-making skill and therefore the game performance of young tennis players.

Keywords: decision making, evaluation, focus of attention, game performance, tennis

Procedia PDF Downloads 351
1286 Prediction Factors of Recurrent Supraventricular Tachycardia After Adenosine Treatment in the Emergency Department

Authors: Chaiyaporn Yuksen

Abstract:

Background: Supraventricular tachycardia (SVT) is an abnormally fast atrial tachycardia characterized by a narrow (≤ 120 ms) and constant QRS complex. Adenosine is the drug of choice; the first dose is 6 mg, which can be repeated with second and third doses of 12 mg, with greater than 90% success. Previous work found that patients who remained in normal sinus rhythm after 4 hours of observation had no recurrence within 24 hours. The objective of this study was to investigate the factors that influence the recurrence of SVT after adenosine in the emergency department (ED). Method: This was a retrospective, exploratory prognostic study conducted at the ED of the Faculty of Medicine, Ramathibodi Hospital, a university-affiliated super-tertiary care hospital in Bangkok, Thailand, over a ten-year period between 2010 and 2020. The inclusion criteria were age > 15 years, visiting the ED with SVT, and treatment with adenosine. Recurrence of SVT in the ED was recorded for these patients, and a multivariable logistic regression model was used to develop the predictive model and prediction score for recurrent PSVT. Result: 264 patients met the study criteria; of those, 24 patients (10%) had recurrent PSVT. The independent predictive factors were age > 65 years, heart rate (after adenosine) > 100 per minute, structural heart disease, and the dose of adenosine. The clinical risk score developed to predict recurrent PSVT had an accuracy of 74.41%, and a score of > 6 was associated with a likelihood ratio of 5.71 for recurrent PSVT. Conclusion: A clinical prediction score of > 6 was associated with recurrent PSVT in the ED.
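
The step from a multivariable logistic regression to an integer clinical risk score can be sketched as below; the predictors follow the abstract, but the placeholder data, the coefficient-scaling rule, and the resulting points are illustrative assumptions, not the study's derived score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age > 65 (0/1), post-adenosine HR > 100 (0/1), structural heart
# disease (0/1), adenosine dose (mg). Placeholder records stand in for the
# 264 ED patients; outcome y = recurrent PSVT in the ED.
rng = np.random.default_rng(5)
X = np.column_stack([rng.integers(0, 2, 264),
                     rng.integers(0, 2, 264),
                     rng.integers(0, 2, 264),
                     rng.choice([6, 12, 18], 264)])
y = rng.integers(0, 2, 264)

model = LogisticRegression(max_iter=1000).fit(X, y)

# One simple scoring rule: scale the log-odds coefficients so the largest
# maps to 4 points, round to integers, and sum the points per patient.
points = np.round(4 * model.coef_[0] / np.abs(model.coef_[0]).max()).astype(int)
print("points per predictor:", points)
print("score for the first patient:", int(points @ X[0]))
```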

Keywords: clinical prediction score, SVT, recurrence, emergency department

Procedia PDF Downloads 155