Search results for: computational accuracy
5194 Impact of Virtual Reality Training on Real-World Hockey Skill: An Intervention Trial
Authors: Matthew Buns
Abstract:
Training specificity is imperative for successful performance of the elite athlete. Virtual reality (VR) has been successfully applied to a broad range of training domains. However, to date there is little research investigating the use of VR for sport training. The purpose of this study was to address the question of whether VR training can improve real-world hockey shooting performance. Twenty-four volunteers were recruited and randomly assigned either to complete the virtual training intervention or to enter a control group with no training. Four primary types of data were collected: 1) participants' experience with video games and hockey, 2) participants' motivation toward video game use, 3) participants' technical performance in real-world hockey, and 4) participants' technical performance in virtual hockey. A one-way analysis of variance (ANOVA) indicated that the intervention group demonstrated significantly greater real-world hockey accuracy [F(1,24) = 15.43, p < .01, E.S. = 0.56] while shooting on goal than their control group counterparts [intervention M accuracy = 54.17%, SD = 12.38; control M accuracy = 46.76%, SD = 13.45]. A one-way repeated-measures multivariate analysis of variance (MANOVA) indicated significantly higher outcome scores on real-world accuracy (35.42% versus 54.17%; E.S. = 1.52) and velocity (51.10 mph versus 65.50 mph; E.S. = 0.86) of hockey shooting on goal. This research supports the idea that virtual training is an effective tool for increasing real-world hockey skill.
Keywords: virtual training, hockey skills, video game, esports
Procedia PDF Downloads 147
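The reported between-group effect size can be reproduced from the published means and standard deviations. A minimal sketch in Python, assuming an even 12/12 split of the 24 volunteers (the split is not stated in the abstract):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Reported shooting-accuracy means/SDs; 12 per group is an assumption.
d = cohens_d(54.17, 12.38, 12, 46.76, 13.45, 12)
print(f"Cohen's d = {d:.2f}")  # ~0.57, close to the reported E.S. of 0.56
```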
5193 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis
Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen
Abstract:
Hepatitis is one of the most common and dangerous diseases that affects humankind, and it exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an "indirect approach" is used for fuzzy system modeling by implementing the exponential compactness and separation index for determining the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision; this imprecise knowledge is managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.
Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection
Procedia PDF Downloads 306
5192 Use of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors
Authors: V. Rashtchi, H. Bizhani, F. R. Tatari
Abstract:
This paper presents a new online loss-minimization scheme for an induction motor drive. Among the many loss minimization algorithms (LMAs) for an induction motor, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses, and in the development of the loss model there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization
Procedia PDF Downloads 633
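To illustrate the optimization step, here is a minimal PSO sketch that searches for a flux level minimizing a stand-in loss curve. The inverse-plus-quadratic loss function and all PSO coefficients are assumptions for illustration, not the loss model derived from the d-q equations in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def motor_loss(flux):
    # Stand-in loss model (copper losses fall, iron losses rise with flux);
    # the paper derives the real model from the d-q motor equations.
    return 50.0 / flux + 80.0 * flux**2

n_particles, n_iter = 20, 50
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
pos = rng.uniform(0.2, 1.2, n_particles)   # candidate flux levels (p.u.)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = motor_loss(pbest)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.2, 1.2)
    val = motor_loss(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print(f"optimum flux ≈ {gbest:.3f} p.u., loss ≈ {motor_loss(gbest):.2f} (arbitrary units)")
```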
5191 Computational Model of Human Cardiopulmonary System
Authors: Julian Thrash, Douglas Folk, Michael Ciracy, Audrey C. Tseng, Kristen M. Stromsodt, Amber Younggren, Christopher Maciolek
Abstract:
The cardiopulmonary system comprises the heart, lungs, and many dynamic feedback mechanisms that control its function based on a multitude of variables. The next generation of cardiopulmonary medical devices will involve adaptive control and smart pacing techniques. However, testing these smart devices on living systems may be unethical and exceedingly expensive. As a solution, a comprehensive computational model of the cardiopulmonary system was implemented in Simulink. The model contains over 240 state variables and over 100 equations previously described in a series of published articles. Simulink was chosen because of the ease of introducing machine learning elements into it. Initial results indicate that physiologically correct waveforms of pressures and volumes were obtained in the simulation. With this comprehensive computational model, we hope to pioneer the future of predictive medicine by applying our research to the initial stages of smart devices. After validation, we will introduce and train reinforcement learning agents using the cardiopulmonary model to assist in adaptive control system design. With our cardiopulmonary model, we will accelerate the design and testing of smart and adaptive medical devices to better serve those with cardiovascular disease.
Keywords: adaptive control, cardiopulmonary, computational model, machine learning, predictive medicine
Procedia PDF Downloads 181
5190 MRI Quality Control Using Texture Analysis and Spatial Metrics
Authors: Kumar Kanudkuri, A. Sandhya
Abstract:
Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. It is therefore important to give MRI technologists proper guidelines so that they do not change parameters that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might change among operators, as well as for the same operator at various times; overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To address this problem, we propose a robust, open-source, automated MRI image quality control tool. The automatic analysis tool we designed and developed measures MRI IQ metrics such as Signal-to-Noise Ratio (SNR), Signal-to-Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray-Level Co-occurrence Matrix (GLCM) features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy
Procedia PDF Downloads 170
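As a sketch of one such metric, the single-image SNR of a phantom scan can be estimated as the mean of a signal region over the standard deviation of a background region. A minimal Python example on synthetic data; the ROI positions and the synthetic image are assumptions, and the tool's exact ACR/NEMA procedure may differ:

```python
import numpy as np

def snr_single_image(image, signal_roi, noise_roi):
    """SNR from one phantom image: mean of a signal ROI over the standard
    deviation of a background (air) ROI. A Rayleigh correction factor
    (~0.655) is often applied for magnitude images."""
    sr, sc = signal_roi          # (row slice, col slice)
    nr, nc = noise_roi
    signal = image[sr, sc].mean()
    noise = image[nr, nc].std(ddof=1)
    return signal / noise

# Synthetic stand-in for an ACR phantom slice.
img = np.full((256, 256), 900.0) + np.random.default_rng(1).normal(0, 9, (256, 256))
roi_signal = (slice(96, 160), slice(96, 160))   # centre of the phantom
roi_noise = (slice(0, 32), slice(0, 32))        # background air corner
print(f"SNR = {snr_single_image(img, roi_signal, roi_noise):.1f}")
```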
5189 Artificial Neural Networks with Decision Trees for Diagnosis Issues
Authors: Y. Kourd, D. Lefebvre, N. Guersi
Abstract:
This paper presents a new idea for a fault detection and isolation (FDI) technique applied to industrial systems. The technique is based on neural network models of fault-free and faulty behaviors (NNFMs). The NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built with data collected from the NNFMs' outputs and is used to isolate detectable faults depending on computed thresholds. Each part of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behavior by evaluating only a small number of residuals. In comparison to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for online diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
Keywords: neural networks, decision trees, diagnosis, behaviors
Procedia PDF Downloads 505
5188 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, spurred the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved an NP-hard problem, due to the complexity of the process, as explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Because of their good results, artificial neural networks have become a standard method for this task. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method has the same training and testing process used by Huang to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
Procedia PDF Downloads 621
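Q3, the accuracy measure quoted above, is simply the fraction of residues assigned the correct helix/strand/coil (H/E/C) state. A toy sketch:

```python
def q3_accuracy(predicted, observed):
    """Q3: fraction of residues assigned the correct H/E/C state."""
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

pred = "HHHHCCCEEEECCHHH"
true = "HHHHCCCEEECCCHHH"
print(f"Q3 = {q3_accuracy(pred, true):.1%}")  # 93.8% on this toy pair
```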
5187 Neural Network-Based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of dyslexia and dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for dyslexia and dysgraphia frequently rely on subjective assessments, which makes broad screening difficult and time-consuming; as a result, diagnoses may be delayed and opportunities for early intervention may be lost. This project developed a hybrid model that utilizes various deep learning techniques for detecting the risk of dyslexia and dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using grid search cross-validation, which allowed the optimal values to be identified. This approach proved effective in accurately predicting the risk of dyslexia and dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved a training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of dyslexia and dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 114
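The grid-search step can be sketched with scikit-learn's GridSearchCV; the parameter grid and the synthetic stand-in features below are illustrative assumptions, not the study's actual search space or data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Stand-in features: in the paper these are CNN outputs (ResNet50, VGG16,
# YOLOv8) concatenated with other student input data.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

param_grid = {                      # illustrative grid, not the authors' values
    "hidden_layer_sizes": [(64,), (128,), (128, 64)],
    "alpha": [1e-4, 1e-3],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, f"CV accuracy = {search.best_score_:.3f}")
```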
5186 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between their characters. Recognizing characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created: the Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a convolutional neural network (CNN) and a visual attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada characters when tested on these languages separately. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
Procedia PDF Downloads 53
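The edge-channel idea can be sketched with OpenCV: compute a Canny map and stack it with the grayscale image as an extra input channel. The blur kernel, thresholds, and the synthetic glyph below are assumptions for illustration:

```python
import cv2
import numpy as np

def add_canny_channel(gray_image):
    """Stack a Canny edge map onto a grayscale character image so the
    network sees stroke boundaries explicitly (thresholds are illustrative)."""
    blurred = cv2.GaussianBlur(gray_image, (3, 3), 0)   # suppress noise first
    edges = cv2.Canny(blurred, 50, 150)
    return np.dstack([gray_image, edges])               # H x W x 2 input

# Synthetic stand-in glyph (a real pipeline would load a character image).
img = np.zeros((64, 64), np.uint8)
cv2.putText(img, "3", (12, 48), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 2)
two_channel = add_canny_channel(img)
print(two_channel.shape)   # (64, 64, 2)
```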
5185 A Comparison of Clinical and Pathological TNM Staging in a COVID-19 Era
Authors: Sophie Mills, Leila L. Touil, Richard Sisson
Abstract:
Introduction: The TNM classification is the global standard for the staging of head and neck cancers. Accurate clinical-radiological staging of tumours (cTNM) is essential to predict prognosis, facilitate surgical planning, and determine the need for other therapeutic modalities. This study aims to determine the accuracy of pre-operative cTNM staging using pathological TNM (pTNM) and to consider possible causes of TNM stage migration, noting any variation throughout the COVID-19 pandemic. Materials and Methods: A retrospective cohort study examined records of patients with surgical management of head and neck cancer at a tertiary head and neck centre from November 2019 to November 2020. Data were extracted from the Somerset Cancer Registry and histopathology reports. cTNM and pTNM were compared before and during the first wave of COVID-19, as well as against other potential prognostic factors such as tumour site and tumour stage. Results: 119 cases were identified, of which 52.1% (n=62) were male and 47.9% (n=57) were female, with a mean age of 67 years. Clinical and pathological staging differed in 54.6% (n=65) of cases. Of the patients with stage migration, 40.4% (n=23) were up-staged and 59.6% (n=34) were down-staged compared with pTNM. There was no significant difference in the accuracy of cTNM staging with respect to age, sex, or tumour site. There was a highly statistically significant (p < 0.001) correlation between cTNM accuracy and tumour stage, with the accuracy of cTNM staging decreasing as pTNM staging advanced. No statistically significant variation was noted between patients staged prior to and during COVID-19. Conclusions: Discrepancies in staging can impact management and outcomes for patients. This study found that the higher the pTNM, the more likely stage migration is to occur. These findings are concordant with the oncology literature, which highlights the need to improve the accuracy of cTNM staging for more advanced tumours.
Keywords: COVID-19, head and neck cancer, stage migration, TNM staging
Procedia PDF Downloads 109
5184 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19; this underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images; the training images are cropped beforehand to eliminate distractions. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated using the external dataset for validation. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
Procedia PDF Downloads 73
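The transfer-learning backbone described above can be sketched in Keras: a frozen, ImageNet-pretrained DenseNet201 feeding a small classification head. The head sizes are assumptions, and the paper's autoencoder stage is omitted for brevity:

```python
import tensorflow as tf

# Frozen DenseNet201 backbone plus a small dense head (illustrative sizes).
base = tf.keras.applications.DenseNet201(weights="imagenet",
                                         include_top=False,
                                         input_shape=(224, 224, 3),
                                         pooling="avg")
base.trainable = False              # transfer learning: keep ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```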
5183 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while trying to keep devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of these facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques, or perfecting existing ones, would be a great advance for this industry. The installation of a matrix of temperature sensors distributed in the structure of each server would provide the data required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain profiles with sufficient accuracy is very high and expensive. Therefore, less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, depending on the characteristic truncation error.
Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile
Procedia PDF Downloads 169
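As a sketch of the discretization step, the 1D viscous Burgers' equation u_t + u·u_x = ν·u_xx can be advanced with a backward (upwind) difference for the convective term and a central difference for the diffusive term, two of the three schemes named above. Grid sizes and the initial profile are illustrative assumptions:

```python
import numpy as np

# 1D viscous Burgers' equation, explicit time stepping.
nx, nt = 101, 500
dx, dt, nu = 2.0 / (nx - 1), 0.001, 0.07
x = np.linspace(0.0, 2.0, nx)
u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)   # square-wave initial profile

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt / dx * un[1:-1] * (un[1:-1] - un[:-2])             # backward difference
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))  # central difference
    u[0], u[-1] = 1.0, 1.0                        # fixed boundary values

print(f"peak after {nt} steps: {u.max():.3f}")
```

The truncation error mentioned above is what separates the schemes: the backward difference is first-order accurate in dx, while the central difference is second-order.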
5182 Rapid Algorithm for GPS Signal Acquisition
Authors: Fabricio Costa Silva, Samuel Xavier de Souza
Abstract:
A Global Positioning System (GPS) receiver is responsible for determining position, velocity, and timing information by using satellite information. To obtain this information, it is necessary to combine an incoming signal with a locally generated one. The procedure, called acquisition, needs to find two parameters: the frequency and the code phase of the incoming signal. This is very time-consuming, so there are several techniques to reduce the computational complexity, but each of them puts project requirements in conflict. In this paper, we present a method that can reduce the computational complexity by reducing the search space and parallelizing the search.
Keywords: GPS, acquisition, complexity, parallelism
Procedia PDF Downloads 538
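A common way to parallelize the code-phase dimension of the search is circular correlation via the FFT: all code phases for one Doppler bin are tested with a single FFT/IFFT pair. A toy sketch, assuming a stand-in PRN code and sampling rate rather than the paper's actual algorithm:

```python
import numpy as np

def acquire(incoming, code, fs, doppler_bins):
    """Parallel code-phase search: one FFT correlation per Doppler bin
    instead of a serial search over every code phase (toy signal sizes)."""
    n = len(incoming)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(code))
    best = (0.0, None, None)
    for fd in doppler_bins:
        wiped = incoming * np.exp(-2j * np.pi * fd * t)  # remove carrier + Doppler
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))
        k = int(np.argmax(corr))
        if corr[k] > best[0]:
            best = (corr[k], fd, k)                      # peak, Doppler, code phase
    return best

fs, n = 1.023e6, 1023                 # toy rates; real C/A processing differs
rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], n)     # stand-in PRN code
true_fd, true_shift = 2000.0, 300
t = np.arange(n) / fs
rx = np.roll(code, true_shift) * np.exp(2j * np.pi * true_fd * t)
peak, fd, shift = acquire(rx, code, fs, np.arange(-5000, 5001, 500))
print(f"Doppler ≈ {fd:.0f} Hz, code phase ≈ {shift} samples")
```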
5181 Investigation and Analysis of Vortex-Induced Vibrations in Sliding Gate Valves Using Computational Fluid Dynamics
Authors: Kianoosh Ahadi, Mustafa Ergil
Abstract:
In this study, vortex-induced vibrations and the distribution of the hydrodynamic forces that vortices induce on sliding gate valves have been investigated. For this purpose, a sliding valve was simulated in two dimensions (2D) with the help of computational fluid dynamics (CFD) software, and the flow and turbulence equations were solved for three different valve openings (full, half, and 16.7%). The variety of vortices formed within the vicinity of the valve structure was investigated over time, and the trend of the fluctuations and their regions of occurrence were detected. From the solution dataset gathered from the numerical simulations, the pressure coefficient (CP), the lift force coefficient (CL), the drag force coefficient (CD), and the momentum coefficient due to hydrodynamic forces (CM) were examined, relevant figures were generated, and from these results the vortex-induced vibrations were analyzed.
Keywords: induced vibrations, computational fluid dynamics, sliding gate valves, vortices
Procedia PDF Downloads 120
5180 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins
Authors: Navab Karimi, Tohid Alizadeh
Abstract:
An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the grey-level co-occurrence matrix (GLCM), and local binary patterns (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). The GLCM feature set was found to have the highest accuracy (92.4%) among the sets. Subsequently, multiple feature combinations from the previous stage were fed into a second, linear regression to increase accuracy, wherein a combination of 16 features was found to be the optimum. Finally, a support vector machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively.
Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan
Procedia PDF Downloads 73
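GLCM texture descriptors of the kind ranked highest above can be computed with scikit-image and fed to an SVM. The distances, angles, and synthetic patches below are illustrative assumptions, not the paper's 108-feature configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_img):
    """Haralick-style GLCM descriptors; distances/angles are illustrative."""
    glcm = graycomatrix(gray_img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(3)
# Stand-in patches: two texture classes with different intensity ranges.
X = np.array([glcm_features(rng.integers(0, 256 if lbl else 64, (32, 32),
                                          dtype=np.uint8))
              for lbl in (0, 1) * 20])
y = np.array([0, 1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(f"training accuracy = {clf.score(X, y):.2f}")
```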
5179 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework
Authors: Ma Cecilia Siva
Abstract:
This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data were gathered from a sample of 200 grade 10 students, including personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data were tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the dataset, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
Keywords: tokenized, sigmoid activation, transformer, multi-category classification
Procedia PDF Downloads 9
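A multi-label BERT head with sigmoid outputs, as described above, can be sketched with Hugging Face transformers. The checkpoint, strand labels, and example essay are assumptions, and the head shown is untrained (the study fine-tunes on its 200 student profiles):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

STRANDS = ["STEM", "ABM", "HUMSS", "GAS"]  # hypothetical strand labels

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(STRANDS),
    problem_type="multi_label_classification",  # sigmoid + BCE loss
)

essay = "I enjoy building small machines and solving math puzzles."
inputs = tok(essay, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # independent per strand

for strand, p in zip(STRANDS, probs):
    print(f"{strand}: {p:.2f}")
```

The sigmoid activation is what makes the predictions independent per strand, unlike a softmax layer, which would force the strand probabilities to compete.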
5178 Small Text Extraction from Documents and Chart Images
Authors: Rominkumar Busa, Shahira K. C., Lijiya A.
Abstract:
Text recognition is an important area of computer vision that deals with detecting and recognising text in an image. Optical Character Recognition (OCR) is a saturated area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text in chart images, the recognition rate is less than 30%. This work aims to extract small text in images using a deep learning model, a CRNN with CTC loss. The text recognition accuracy is found to improve by applying image enhancement through super-resolution prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% by applying the proposed method, which involves super-resolution and character segmentation followed by a CRNN with CTC loss. The efficiency of the proposed method indicates that further pre-processing of chart-image text and other small-text images will improve the accuracy further, thereby helping text extraction from chart images.
Keywords: small text extraction, OCR, scene text recognition, CRNN
Procedia PDF Downloads 126
5177 Syntactic Analyzer for Tamil Language
Authors: Franklin Thambi Jose.S
Abstract:
Computational linguistics is a branch of linguistics which deals with the computer and linguistic levels; it can also be described as a branch of language studies that applies computer techniques to the field of linguistics. Within computational linguistics, Natural Language Processing, which came into existence with the invention of information technology, plays an important role. In computational syntax, the syntactic analyser breaks a sentence into phrases and clauses and identifies the sentence with the syntactic information. Tamil is one of the major Dravidian languages, with a very long written history of more than 2000 years. It is mainly spoken in Tamilnadu (in India), Sri Lanka, Malaysia, and Singapore, and it is an official language in Tamilnadu (in India), Sri Lanka, Malaysia, and Singapore; in Malaysia, Tamil-speaking people are considered an ethnic group. In Tamil syntax, sentences are classified into four types for this research, namely: 1. main sentences, 2. interrogative sentences, 3. equational sentences, and 4. elliptical sentences. In computational syntax, the first step is to provide the required information regarding the head and the constituents of each sentence. This information is incorporated into the system using programming languages, after which the system can easily analyse a given sentence with the criteria or mechanisms given to it. Providing the needed criteria or mechanisms for the computer to identify the basic types of sentences using a syntactic parser for the Tamil language is the major objective of this paper.
Keywords: Tamil, syntax, criteria, sentences, parser
Procedia PDF Downloads 517
5176 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique
Authors: Nishant Shrivastava, D. K. Sehgal
Abstract:
In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern for the computational cost of shape optimization in engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, where good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, and some points have also been located on the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. The results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that, in local coordinates, the optimal stress locations lie at the origin and at a distance of 0.577 from the origin in both directions; on the boundaries, they lie at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation: five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered the standard. Root-mean-square errors were then plotted with respect to various locations within the elements, as well as on the boundaries, and conclusions were drawn. After numerical verification, it is noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses calculated within the boundary line enclosed by the 0.577 midpoints are also very good, with very small errors; when sampling points move away from these points, the line-zone error increases rapidly. Thus, it is established that there are unique points at the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
Keywords: finite elements, Lagrangian, optimal stress location, serendipity
Procedia PDF Downloads 105
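The reported coordinate 0.577 is recognizably 1/√3 ≈ 0.57735, the abscissa of the 2-point Gauss-Legendre quadrature rule; these are the classic superconvergent (Barlow) stress-sampling points for quadratic elements. A one-line check in Python:

```python
import numpy as np

# 2-point Gauss-Legendre abscissae on [-1, 1]: +/- 1/sqrt(3) = +/- 0.57735,
# matching the optimal stress locations reported above.
points, weights = np.polynomial.legendre.leggauss(2)
print(points)            # [-0.57735027  0.57735027]
print(1 / np.sqrt(3))    # 0.5773502691896258
```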
5175 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems
Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu
Abstract:
In research on automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph- and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP
Procedia PDF Downloads 40
5174 Artificial Steady-State-Based Nonlinear MPC for Wheeled Mobile Robot
Authors: M. H. Korayem, Sh. Ameri, N. Yousefi Lademakhi
Abstract:
To ensure the stability of closed-loop nonlinear model predictive control (NMPC) within a finite horizon, appropriately designed terminal ingredients are needed, which can be a time-consuming and challenging effort; otherwise, ensuring the stability of the control system requires an infinite prediction horizon, and increasing the prediction horizon increases the computational demand and slows down the implementation of the method. In this study, a new technique is proposed to ensure system stability without terminal ingredients. This technique has been employed in the design of the NMPC algorithm, reducing both the complexity of designing terminal ingredients and the computational burden. The studied system is a wheeled mobile robot (WMR) subject to non-holonomic constraints. Simulations were carried out for two problems: trajectory tracking and the adjustment mode.
Keywords: wheeled mobile robot, nonlinear model predictive control, stability, without terminal ingredients
Procedia PDF Downloads 91
5173 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods, because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames from emotion classification, saves computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, constrained local model, procrustes analysis, local binary pattern histogram, statistical model
Procedia PDF Downloads 338
5172 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple's Index, Myers' Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and fewer with respect to sex data, because of the biological differences between the genders. Noting the misreporting of age data, and given its significant importance in demographic and epidemiological studies, this study aims to assess the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple's index, Myers' blended index, and the age-sex accuracy index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lower index, about 152.65, while females had the higher, about 156.07. For Myers' blended index, the preferences were for digits '0' and '5', while the avoidances were of digits '1' and '3', for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex-ratio score was 5.82 and the age-ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preference/avoidance. Thus, innovative methods of data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
Keywords: age heaping, digit preference/avoidance, summary indices, Whipple's index, Myers' index, age-sex accuracy index
Procedia PDF Downloads 476
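Whipple's index, quoted above, measures heaping on ages ending in 0 or 5 over the age range 23-62. A minimal sketch on synthetic counts (the data here are illustrative, not the census figures):

```python
def whipples_index(pop_by_age):
    """Whipple's index over ages 23-62: ratio of counts reported at ages
    ending in 0 or 5 to one-fifth of the total, times 100. A value of 100
    means no heaping; 500 means everyone reported a 0/5 age."""
    total = sum(pop_by_age[a] for a in range(23, 63))
    heaped = sum(pop_by_age[a] for a in range(25, 61) if a % 5 == 0)
    return heaped / (total / 5) * 100

# Synthetic counts with mild heaping on multiples of five (illustrative only).
pop = {a: 1000 for a in range(0, 100)}
for a in range(25, 61, 5):
    pop[a] += 600
print(f"Whipple's index = {whipples_index(pop):.1f}")  # ~142.9, i.e. heaping
```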
5171 Rapid Parallel Algorithm for GPS Signal Acquisition
Authors: Fabricio Costa Silva, Samuel Xavier de Souza
Abstract:
A Global Positioning System (GPS) receiver is responsible for determining position, velocity, and timing information by using satellite information. To obtain this information, it is necessary to combine an incoming signal with a locally generated one. The procedure, called acquisition, needs to find two parameters: the frequency and the code phase of the incoming signal. This is very time-consuming, so there are several techniques to reduce the computational complexity, but each of them puts project requirements in conflict. In this paper, we present a method that can reduce the computational complexity by reducing the search space and parallelizing the search.
Keywords: GPS, acquisition, low complexity, parallelism
Procedia PDF Downloads 501
5170 Empirical Study of Correlation between the Cost Performance Index Stability and the Project Cost Forecast Accuracy in Construction Projects
Authors: Amin AminiKhafri, James M. Dawson-Edwards, Ryan M. Simpson, Simaan M. AbouRizk
Abstract:
Earned value management (EVM) has been introduced as an integrated method combining schedule, budget, and the work breakdown structure (WBS). EVM provides various indices to demonstrate project performance, including the cost performance index (CPI). The CPI is also used to forecast the final project cost at completion based on cost performance during project execution, and knowing the final cost during execution can trigger corrective actions that enhance project outputs. The CPI, however, is not constant during the project, and calculating the final project cost using a variable index is an inaccurate and challenging task for practitioners. Since the CPI is based on cumulative progress values, and because of the learning-curve effect, CPI variation dampens and stabilizes as the project progresses. Although various definitions of CPI stability have been proposed in the literature, many scholars have agreed on the definition that considers a project stable if the CPI at 20% completion varies by less than 0.1 from the final CPI. While the 20% completion point is recognized as the stability point for military development projects, the stability of construction projects has not been studied. In the current study, an empirical study was first conducted using construction project data to determine the stability point for construction projects. Early findings demonstrate that a majority of construction projects stabilize towards completion (i.e., after the 70% completion point). To investigate the effect of CPI stability on cost forecast accuracy, the correlation between CPI stability and the accuracy of the forecast cost at completion was also investigated. It was determined that as projects progress closer towards completion, the variation of the CPI decreases and the accuracy of the final cost forecast increases. Most projects were found to have 90% accuracy in the final cost forecast at the 70% completion point, which is in line with the CPI stability findings. It can be concluded that early stabilization of the project CPI results in more accurate cost-at-completion forecasts.
Keywords: cost performance index, earned value management, empirical study, final project cost
Procedia PDF Downloads 156
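The forecasting relationship referred to above is the standard EVM identity CPI = EV / AC, with the estimate at completion EAC = BAC / CPI, valid under the common assumption that the current CPI persists for the remaining work. A worked sketch with illustrative figures (not the study's project data):

```python
def cpi(earned_value, actual_cost):
    """Cost performance index: value of work performed per unit spent."""
    return earned_value / actual_cost

def eac(budget_at_completion, cpi_value):
    """Estimate at completion, assuming the current CPI persists."""
    return budget_at_completion / cpi_value

# Illustrative figures only.
bac, ev, ac = 10_000_000, 4_200_000, 4_700_000
c = cpi(ev, ac)
print(f"CPI = {c:.3f}, EAC = ${eac(bac, c):,.0f}")  # CPI < 1: over budget
```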
5169 Presenting a Job Scheduling Algorithm Based on Learning Automata in Computational Grid
Authors: Roshanak Khodabakhsh Jolfaei, Javad Akbari Torkestani
Abstract:
As a cooperative environment for problem-solving, grids must develop efficient job scheduling patterns with regard to their goals, domains, and structure. Since Grid environments facilitate distributed calculations, job scheduling is a critical problem for the management of Grid resources that severely influences the efficiency of the whole Grid environment. Due to characteristics such as resource dynamicity and the condition of the network in a Grid, the algorithm presented should be adjustable and scalable as the network grows. For this purpose, this paper presents a job scheduling algorithm based on learning automata in a computational Grid, whose performance was compared with the FPSO algorithm (Fuzzy Particle Swarm Optimization algorithm) and the GJS algorithm (Grid Job Scheduling algorithm). The numerical results obtained indicated the superiority of the suggested algorithm in comparison with FPSO and GJS, and classified FPSO and GJS in second and third position, respectively, after the suggested algorithm.
Keywords: computational grid, job scheduling, learning automata, dynamic scheduling
Procedia PDF Downloads 343
5168 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach
Authors: Riznaldi Akbar
Abstract:
In this study, we compared the performance of an Artificial Neural Network (ANN) model with a back-propagation algorithm in correctly predicting in-sample and out-of-sample external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The ANN's in-sample performance provides relatively superior results: the model is able to classify crises correctly 89.12 per cent of the time, with reasonably low false alarms of 7.01 per cent. Out of sample, the prediction performance deteriorates considerably compared to the in-sample performance. This could be explained by the ANN model tending to over-fit the in-sample data while not fitting the out-of-sample data very well. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to the size of the samples, as this can yield a higher total misclassification error and lower prediction accuracy. The ANN model can be used to identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
Keywords: debt crisis, external debt, artificial neural network, ANN
Procedia PDF Downloads 443
5167 High Accuracy Analytic Approximation for Special Functions Applied to Bessel Functions J₀(x) and Its Zeros
Authors: Fernando Maass, Pablo Martin, Jorge Olivares
Abstract:
The Bessel function J₀(x) is very important in electrodynamics and physics, as are its zeros. In this work, a method to obtain high-accuracy approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. Analytic approximations for this function have been obtained that are valid for all positive values of the variable x and that have high accuracy both for the function and for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. As in the Padé method, rational functions are used, but here they are combined with elementary functions such as fractional powers and hyperbolic or trigonometric functions, among others. The reason for this is that the power series of the exact function is now used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions, and other types of elementary functions; the approximation must be a bridge between both expansions, and this cannot be accomplished using rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006 at x ∼ 4.9. In this case, the maximum relative error for the zeros is less than 0.003, occurring at the second zero, and that value decreases rapidly for the other zeros. The same kind of behaviour occurs for the relative error of the maxima and minima of the function. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x, and they can be calculated easily.
Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations
Procedia PDF Downloads 252
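For benchmarking such approximations, reference values of J₀(x) and its zeros are available in SciPy; the abstract's figures (e.g., the relative error at the second zero) can be checked against these:

```python
import numpy as np
from scipy.special import j0, jn_zeros

# Reference values an analytic approximation would be benchmarked against.
zeros = jn_zeros(0, 5)           # first five zeros of J0
print(zeros)                     # [ 2.4048  5.5201  8.6537 11.7915 14.9309]

x = np.linspace(0.1, 20, 5)
print(j0(x))                     # exact J0(x) for relative-error checks
```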
5166 Computational Fluid Dynamics-Coupled Optimisation Strategy for Aerodynamic Design
Authors: Anvar Atayev, Karl Steinborn, Aleksander Lovric, Saif Al-Ibadi, Jorg Fliege
Abstract:
In this paper, we present results obtained from optimising the aerodynamic performance of aerostructures in external flow. The optimisation method used was developed to efficiently handle multi-variable problems with numerous black-box objective functions and constraints. To demonstrate these capabilities, a series of CFD problems were considered: (1) a two-dimensional NACA aerofoil with three variables, (2) a two-dimensional morphing aerofoil with 17 variables, and (3) a three-dimensional morphing aeroplane tail with 33 variables. The objective functions considered were related to combinations of the mean aerodynamic coefficients, as well as their relative variations/oscillations. For each CFD problem, an improved objective value was found. Notably, the scale-up in variables for the latter problems did not greatly hinder optimisation performance. This makes the method promising for scaled-up CFD problems, which require considerable computational resources.
Keywords: computational fluid dynamics, optimisation algorithms, aerodynamic design, engineering design
Procedia PDF Downloads 120
5165 Electrodermal Activity Measurement Using Constant Current AC Source
Authors: Cristian Chacha, David Asiain, Jesús Ponce de León, José Ramón Beltrán
Abstract:
This work explores and characterizes the behavior of the AFE AD5941 in impedance measurement using an embedded algorithm with a constant-current AC source. The main aim of this research is to improve the accuracy of impedance measurement for application in EDA-focused wearable devices. Through comprehensive study and characterization, it has been observed that employing a measurement sequence with a constant current source produces results with increased dispersion but higher accuracy. As a result, this approach leads to a more accurate system for impedance measurement.
Keywords: EDA, constant current AC source, wearable, precision, accuracy, impedance
Procedia PDF Downloads 107