Search results for: classifiers accuracy

3413 Evaluate the Possibility of Using ArcGIS Basemaps as GCP for Large Scale Maps

Authors: Jali Octariady, Ida Herliningsih, Ade K. Mulyana, Annisa Fitria, Diaz C. K. Yuwana

Abstract:

Awareness of the importance of large-scale maps for the development of a country is growing in all walks of life, especially among governments in Indonesia. Various parties, especially local governments throughout Indonesia, have demanded the immediate availability of large-scale 1:5000 maps for regional development. In fact, 1:5000 maps are available for less than 5% of the entire territory of Indonesia. The unavailability of precise ground control points (GCPs) across the territory is one cause of the slow production of large-scale 1:5000 maps. This research was conducted to find an alternative solution to this problem: assessing the accuracy of ArcGIS basemap coordinates when used as GCPs for creating maps at a scale of 1:5000. The study compared GCP coordinates from field surveys using geodetic GPS with the corresponding coordinates from ArcGIS basemaps at various locations in Indonesia. The study areas were Lombok Island, Kupang City, Surabaya City, and Kediri District. The differences between the coordinate values served as the basis for assessing the accuracy of the ArcGIS basemap coordinates. The results show that the coordinate differences vary across the study areas, ranging from millimeters (mm) to meters (m). This inconsistency in the quality of ArcGIS basemap coordinates indicates that they cannot be relied upon as GCPs for large-scale mapping over the entire territory of Indonesia.
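As a minimal sketch of the comparison performed in studies like this, the snippet below computes per-point horizontal differences and the horizontal RMSE between surveyed and basemap coordinates. The coordinate values and the 0.3 mm-at-map-scale tolerance used as a pass/fail check are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical paired coordinates (easting, northing) in meters: one row per
# check point, surveyed with geodetic GPS vs. read off the ArcGIS basemap.
gps_xy = np.array([[543210.812, 9101234.507],
                   [543876.140, 9102011.332]])
basemap_xy = np.array([[543210.805, 9101234.521],
                       [543877.020, 9102010.410]])

diff = basemap_xy - gps_xy                    # per-point coordinate differences
horiz_err = np.hypot(diff[:, 0], diff[:, 1])  # horizontal error per point
rmse = np.sqrt(np.mean(horiz_err ** 2))       # horizontal RMSE over all points

# A common rule of thumb ties allowable horizontal error to map scale,
# e.g. 0.3 mm at map scale = 1.5 m on the ground at 1:5000.
print(horiz_err, rmse, bool(rmse <= 0.0003 * 5000))
```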

Keywords: accuracy, ArcGIS base maps, GCP, large scale maps

Procedia PDF Downloads 374
3412 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. To date, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement from including AOD as an input was tested, and the models were evaluated using mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM models improved when AOD was included as an input parameter. The improvement was particularly pronounced for the CNN-LSTM model in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model improved slightly, while that of the CNN-LSTM model decreased slightly. This work pioneers the use of H8 AOD data for PM₂.₅ prediction in UB and demonstrates that including AOD input data improves the performance of both proposed deep learning models.
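A minimal CNN-LSTM sketch in Keras, assuming hourly windows of PM₂.₅, AOD, and meteorological features; the layer sizes, window length, and training data below are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative shapes: 24 hourly time steps, 5 features per step
# (e.g. PM2.5, AOD, temperature, wind speed, humidity).
n_steps, n_features = 24, 5

model = models.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local temporal patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # longer-range dependencies
    layers.Dense(1),                                      # next-hour PM2.5
])
model.compile(optimizer="adam", loss="mae")

# Dummy arrays just to show the training call; real inputs would be windowed
# hourly observations from June 2018 to April 2020.
X = np.random.rand(256, n_steps, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

pred = model.predict(X[:8], verbose=0)
mae = np.mean(np.abs(pred - y[:8]))           # MAE, as in the paper
rmse = np.sqrt(np.mean((pred - y[:8]) ** 2))  # RMSE, as in the paper
```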

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 48
3411 Tuning Cubic Equations of State for Supercritical Water Applications

Authors: Shyh Ming Chern

Abstract:

Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature, and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoS's around the critical region has been examined against the P-v-T data of water. Both display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure, and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can improve its performance above the critical point dramatically. Adopting a retuned acentric factor of 0.5491, instead of its genuine value of 0.344 for water, in the PR EoS, and a new F of 0.8854, instead of its original value of 0.6898 for water, in the PT EoS, reduces the discrepancies to about one third or less.
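A short sketch of where the retuned parameter enters: in the Peng-Robinson EoS, the acentric factor ω affects only the temperature function α(T) through κ(ω). The state point below is an arbitrary supercritical example, not a value from the paper.

```python
import numpy as np

R = 8.314462618             # J/(mol K)
Tc, Pc = 647.096, 22.064e6  # critical point of water (K, Pa)

def pr_pressure(T, v, omega):
    """Peng-Robinson pressure; omega is the acentric factor being retuned."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return R * T / (v - b) - a / (v**2 + 2 * b * v - b**2)

T, v = 700.0, 1.0e-4  # arbitrary supercritical state (K, m^3/mol)
p_genuine = pr_pressure(T, v, omega=0.344)   # water's tabulated acentric factor
p_retuned = pr_pressure(T, v, omega=0.5491)  # retuned value from the paper
print(p_genuine, p_retuned)
```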

Keywords: equation of state, EoS, supercritical water, SCW

Procedia PDF Downloads 537
3410 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); and (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with a 67.5% accuracy rate. This reduction appears particularly important for the design of a low-cost, simple-to-use BCI trained for several words.
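A runnable sketch of the WPT-features plus one-hidden-layer-network pipeline, using PyWavelets and scikit-learn on synthetic EEG epochs; the wavelet (db4), decomposition level (4 here, versus 8 in the study), and network size are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

# Synthetic EEG epochs: (n_trials, n_channels, n_samples). Four channels
# stand in for the electrodes nearest the Wernicke area.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 4, 1024))
y = rng.integers(0, 5, size=100)  # labels for 5 unspoken words

def wpt_features(epoch, level=4, keep=slice(None)):
    """Log-energy of wavelet-packet subbands per channel; `keep` selects a
    subset of subbands (the paper keeps the 6 middle ones)."""
    feats = []
    for ch in epoch:
        wp = pywt.WaveletPacket(ch, wavelet="db4", maxlevel=level)
        for node in wp.get_level(level, order="freq")[keep]:
            feats.append(np.log(np.sum(node.data ** 2) + 1e-12))
    return np.array(feats)

X = np.array([wpt_features(e) for e in X_raw])
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300)  # one hidden layer
clf.fit(X, y)
print(clf.score(X, y))
```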

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, wernicke area

Procedia PDF Downloads 272
3409 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields of natural language processing (NLP). The NLI task is mainly concerned with predicting the native language of an author from their writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: the baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within-corpus and cross-corpus.
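A compact cross-corpus sketch with scikit-learn: TF-IDF word n-grams stand in for the content-based features, and the TOEFL/Reddit texts are made-up placeholders. Training on one corpus and scoring on the other mirrors the evaluation described.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical corpora: train on TOEFL-style essays, evaluate on Reddit posts.
train_texts = ["I am agree with this statement ...", "In my country we say ..."]
train_langs = ["es", "de"]          # authors' native languages (labels)
reddit_texts = ["tbh i am agree, it happen a lot", "we say this at home too"]
reddit_langs = ["es", "de"]

for clf in (LinearSVC(), LogisticRegression(max_iter=1000)):
    # Word n-grams are a content-based feature set; swapping in character
    # n-grams or function-word counts would give content-independent features.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(train_texts, train_langs)
    print(type(clf).__name__, model.score(reddit_texts, reddit_langs))
```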

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 139
3408 Forward Stable Computation of Roots of Real Polynomials with Only Real Distinct Roots

Authors: Nevena Jakovčević Stor, Ivan Slapničar

Abstract:

Any polynomial can be expressed as the characteristic polynomial of a complex symmetric arrowhead matrix. This expression is not unique. If the polynomial is real with only real distinct roots, the matrix can be chosen to be real. By using an accurate forward stable algorithm for computing eigenvalues of real symmetric arrowhead matrices, we derive a forward stable algorithm for computing the roots of such polynomials in O(n²) operations. The algorithm computes each root to almost full accuracy. In some cases, the algorithm invokes extended-precision routines, but only in the non-iterative part. Our examples include numerically difficult problems, such as the well-known Wilkinson polynomials. Our algorithm compares favorably to other methods for polynomial root-finding, such as MPSolve or Newton's method.
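The arrowhead-based algorithm itself is not reproduced here. As a quick illustration of why the Wilkinson polynomial is a standard hard test case, the sketch below runs the usual companion-matrix eigenvalue route (what np.roots does) and measures how much accuracy is lost relative to the known roots 1, ..., 20.

```python
import numpy as np

# Wilkinson's polynomial: the roots are exactly 1..20, but in the monomial
# basis the root-finding problem is notoriously ill-conditioned.
true_roots = np.arange(1.0, 21.0)
coeffs = np.poly(true_roots)               # expand prod (x - k) into coefficients

computed = np.sort(np.roots(coeffs).real)  # companion-matrix eigenvalue method
rel_err = np.abs(computed - true_roots) / true_roots
print(rel_err.max())  # several digits are typically lost for the middle roots
```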

Keywords: roots of polynomials, eigenvalue decomposition, arrowhead matrix, high relative accuracy

Procedia PDF Downloads 418
3407 Particle Filter Implementation of a Non-Linear Dynamic Fall Model

Authors: T. Kobayashi, K. Shiba, T. Kaburagi, Y. Kurihara

Abstract:

For the elderly living alone, falls can be a serious problem encountered in daily life. Some elderly people are unable to stand up without the assistance of a caregiver. They may become unconscious after a fall, which can lead to serious aftereffects such as hypothermia, dehydration, and sometimes even death. We treat the subject as an inverted pendulum and model its angle from the equilibrium position and its angular velocity. As the model is non-linear, we implement the filtering method with a particle filter, which can estimate the true states of a non-linear model. In order to evaluate the accuracy of the particle filter estimates, we calculate the root mean square error (RMSE) between the estimated angle/angular velocity and the true values generated by the simulation. The experimental results give a best-case RMSE of 0.0141 rad and 0.1311 rad/s for the angle and angular velocity, respectively.
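A minimal bootstrap particle filter for an inverted-pendulum state (angle, angular velocity), with made-up dynamics constants, noise levels, and observation model; it shows the predict/weight/resample loop and the RMSE evaluation described, not the authors' exact model.

```python
import numpy as np

g, L, dt = 9.81, 1.0, 0.01
n_particles, n_steps = 1000, 200
rng = np.random.default_rng(1)

def step(x):
    """Inverted-pendulum dynamics for rows of x = (theta, omega)."""
    theta, omega = x[:, 0], x[:, 1]
    omega_new = omega + (g / L) * np.sin(theta) * dt   # nonlinear dynamics
    theta_new = theta + omega_new * dt
    return np.column_stack([theta_new, omega_new])

# Simulate "true" states and noisy angle observations.
truth = np.zeros((n_steps, 2)); truth[0] = [0.05, 0.0]
for t in range(1, n_steps):
    truth[t] = step(truth[t - 1:t])[0]
obs = truth[:, 0] + rng.normal(0, 0.02, n_steps)

particles = rng.normal([0.05, 0.0], 0.05, (n_particles, 2))
est = np.zeros((n_steps, 2))
for t in range(n_steps):
    particles = step(particles) + rng.normal(0, 0.01, particles.shape)
    w = np.exp(-0.5 * ((obs[t] - particles[:, 0]) / 0.02) ** 2)
    w /= w.sum()
    est[t] = w @ particles                             # weighted mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)    # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - truth) ** 2, axis=0))    # [angle, angular velocity]
print(rmse)
```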

Keywords: fall, microwave Doppler sensor, non-linear dynamics model, particle filter

Procedia PDF Downloads 217
3406 Discovering Word-Class Deficits in Persons with Aphasia

Authors: Yashaswini Channabasavegowda, Hema Nagaraj

Abstract:

Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited for the study. The research includes both behavioural and objective measures to assess the word-class deficits in PWA. Objective: The main objective of the research is to identify word-class deficits seen in persons with aphasia, using various speech-eliciting tasks. Method: The study was conducted in the participants' L1, Kannada. The Action Naming Test and the Kannada adaptation of the Boston Naming Test were administered to the participants, and a picture description task was carried out. The picture-word matching task was carried out using E-Prime software (version 2) to measure accuracy and reaction time for the identification of verbs and nouns. The stimuli were presented through auditory and visual modes. Data were analysed to identify errors in the naming of nouns versus verbs on the Boston Naming Test and the Action Naming Test, as well as the usage of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed a significant difference in sentence structure compared to age-matched NTI. PWA also showed impairment in syntactic measures in the picture description task, with fewer grammatically correct sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns compared to verbs. PWA had poorer accuracy and slower reaction times in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were observed irrespective of the cause leading to aphasia.

Keywords: nouns, verbs, aphasia, naming, description

Procedia PDF Downloads 103
3405 Automatic Measurement of Garment Sizes Using Deep Learning

Authors: Maulik Parmar, Sumeet Sandhu

Abstract:

The online fashion industry experiences high product return rates. Many returns are due to size/fit mismatches: the size scale on labels can vary across brands, the size parameters may not capture all fit measurements, or the product may have manufacturing defects. Warehouse quality checking of garment sizes can be semi-automated to improve speed and accuracy. This paper presents an approach for automatically measuring garment sizes from a single image of the garment, using deep learning to learn garment keypoints. The paper focuses on the waist size measurement of jeans and can be easily extended to other garment types and measurements. Experimental results show that this approach can greatly improve the speed and accuracy of today's manual measurement process.
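Once the keypoints are predicted, the measurement itself is simple geometry. A sketch under assumed values: hypothetical waist keypoint coordinates and a pixels-per-cm scale (e.g. from a reference marker in the image).

```python
import numpy as np

# Hypothetical output of a keypoint model: pixel coordinates of the left and
# right waist points on a flat-laid pair of jeans, plus a known scale factor.
left_waist = np.array([212.0, 480.0])    # (x, y) in pixels
right_waist = np.array([788.0, 474.0])
pixels_per_cm = 12.4

waist_width_cm = np.linalg.norm(right_waist - left_waist) / pixels_per_cm
waist_circumference_cm = 2 * waist_width_cm   # flat-laid garment: front + back
print(round(waist_width_cm, 1), round(waist_circumference_cm, 1))
```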

Keywords: convolutional neural networks, deep learning, distortion, garment measurements, image warping, keypoints

Procedia PDF Downloads 313
3404 Wire Localization Procedures in Non-Palpable Breast Cancers: An Audit Report and Review of Literature

Authors: Waqas Ahmad, Eisha Tahir, Shahper Aqeel, Imran Khalid Niazi, Amjad Iqbal

Abstract:

Background: Breast conservation surgery applies a number of techniques for accurate localization of lesions. Wire localization remains the method of choice in non-palpable breast cancers post-neoadjuvant chemotherapy. Objective: The aim of our study was to determine the accuracy of wire localization procedures in our department and compare it with the internationally set protocols of the Royal College of Radiologists. The accuracy of the procedure was assessed by post-wire mammography as well as by the margin status of the postoperative specimen. Methods: We retrospectively reviewed the data of 225 patients who presented to our department from May 2014 to June 2015, post-neoadjuvant chemotherapy, with non-palpable cancers. These patients were candidates for wire-localized lumpectomies under either ultrasound or stereotactic guidance. A metallic marker was placed in all patients at the time of biopsy. A post-wire mammogram was performed for all patients, and the distance of the wire tip from the marker was calculated. The presence or absence of the metallic clip in the postoperative specimen, as well as the margin status of the postoperative specimen, was noted. Results: 157 sonographic and 68 stereotactic wire localization procedures were performed. 95% of the wire tips were within 1 cm of the metallic marker. Margins were negative in 94% of the histopathological specimens. Conclusion: Our audit shows more than 95% accuracy of image-guided wire localization in the successful excision of non-palpable breast lesions.

Keywords: breast, cancer, non-palpable, wire localization

Procedia PDF Downloads 312
3403 Documents Emotions Classification Model Based on TF-IDF Weighting Measure

Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees

Abstract:

Emotion classification of text documents aims to reveal whether a document expresses a particular emotion of its writer. As different supervised methods have previously been used for emotion classification of documents, in this research we present a novel model that supports the classification algorithms for more accurate results through the use of the TF-IDF measure. Different experiments have been applied to demonstrate the applicability of the proposed model. The model succeeds in raising the accuracy percentage, according to the determined metrics (precision, recall, and f-measure), by refining the lexicon, integrating lexicons from different perspectives, and applying the TF-IDF weighting measure over the classifying features. The proposed model has also been compared with other research to prove its competence in raising the results' accuracy.
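A hand-rolled TF-IDF computation on toy documents, to make the weighting applied to the classifying features concrete; this is the common log-IDF variant, and the tokens are placeholders.

```python
import math
from collections import Counter

# Toy corpus of tokenized documents.
docs = [["happy", "joy", "happy"], ["sad", "tears"], ["joy", "smile"]]

def tf_idf(term, doc, docs):
    """TF-IDF of `term` in `doc` relative to the corpus `docs`."""
    tf = Counter(doc)[term] / len(doc)       # term frequency in this document
    df = sum(term in d for d in docs)        # number of documents containing term
    idf = math.log(len(docs) / df)           # common log variant; others add 1
    return tf * idf

# "happy" occurs twice in doc 0 but in only one document overall,
# so it gets a high weight there: (2/3) * ln(3) ~ 0.73.
print(tf_idf("happy", docs[0], docs))
```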

Keywords: emotion detection, TF-IDF, WEKA tool, classification algorithms

Procedia PDF Downloads 484
3402 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System

Authors: John Lorenzo Bautista, Yoon-Joong Kim

Abstract:

This paper presents the development of a Filipino speech recognition tool using the HTK system. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used both in training and in testing the system by estimating the parameters of phonetic HMM-based (Hidden Markov Model) acoustic models. Experiments on different mixture weights were incorporated in the study. The phoneme-level, word-based recognition of a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing a phoneme alignment, and 87.19% for the increased Gaussian-mixture-weight model. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.
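For reference, the accuracy rates quoted here presumably follow the standard HTK convention reported by its HResults tool, where N is the number of reference labels and D, S, and I count deletions, substitutions, and insertions:

```latex
\mathrm{Acc} = \frac{N - D - S - I}{N} \times 100\%
```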

Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition

Procedia PDF Downloads 482
3401 An Observation Approach of Reading Order for Single Column and Two Column Layout Template

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably recover the reading order in portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting reading-order relationships between logical components, such as cross-references. Over 3 years of development, the company Foxit has refined its layout recognition (LR) engine, in revision 20601, to improve the accuracy of the reading order. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the reading-order result is not always correct for single-column and two-column layout formats, due to the table issue, the formula issue, and the issue of footers and multiple small separated bounding boxes. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new direction in reading-order research. Two important parameters are introduced: one is the number of bounding boxes to the right of the present bounding box (NRight), and the other is the number of bounding boxes under the present bounding box (Nunder); a toy computation of both is sketched below. The normalized x-value (x / whole width), the normalized y-value (y / whole height), and the x- and y-positions of each bounding box are also taken into consideration. Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using the proposed method based on the LR structure, compared with the baseline method using the LR structure of revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-bounding-box issue can be solved using the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and randomly occurring multiple small separated bounding boxes. The detection of table positions and the recognition of table structure are out of the scope of this paper and require separate research. Future tasks include detecting the position of tables on the page and extracting their content.
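A toy reading of the two MESH parameters, computed from paragraph bounding boxes; the overlap rules used to decide "to the right of" and "under" are assumptions, since the abstract does not spell them out.

```python
# Boxes are (x, y, w, h) in page coordinates, with y increasing downward.
boxes = [(50, 40, 200, 30), (300, 40, 200, 30), (50, 90, 200, 30)]

def mesh_params(i, boxes):
    """Return (NRight, Nunder) for box i under assumed overlap rules."""
    x, y, w, h = boxes[i]
    n_right = sum(1 for j, (x2, y2, w2, h2) in enumerate(boxes)
                  if j != i and x2 >= x + w            # starts to the right
                  and y2 < y + h and y2 + h2 > y)      # overlaps vertically
    n_under = sum(1 for j, (x2, y2, w2, h2) in enumerate(boxes)
                  if j != i and y2 >= y + h            # starts below
                  and x2 < x + w and x2 + w2 > x)      # overlaps horizontally
    return n_right, n_under

for i in range(len(boxes)):
    print(i, mesh_params(i, boxes))
```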

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 181
3400 Evaluating the Tracking Abilities of Microsoft HoloLens-1 for Small-Scale Industrial Processes

Authors: Kuhelee Chandel, Julia Åhlén, Stefan Seipel

Abstract:

This study evaluates the accuracy of Microsoft HoloLens (Version 1) for small-scale industrial activities, comparing its measurements to ground truth data from a Kuka Robotics arm. Two experiments were conducted to assess its position-tracking capabilities, revealing that the HoloLens device is effective for measuring the position of dynamic objects with small dimensions. However, its precision is affected by the velocity of the trajectory and its position within the device's field of view. While the HoloLens device may be suitable for small-scale tasks, its limitations for more complex and demanding applications requiring high precision and accuracy must be considered. The findings can guide the use of HoloLens devices in industrial applications and contribute to the development of more effective and reliable position-tracking systems.

Keywords: augmented reality (AR), Microsoft HoloLens, object tracking, industrial processes, manufacturing processes

Procedia PDF Downloads 137
3399 Time Efficient Color Coding for Structured-Light 3D Scanner

Authors: Po-Hao Huang, Pei-Ju Chiang

Abstract:

The structured-light 3D scanner is commonly used for measuring the 3D shape of an object. By projecting designed light patterns onto the object, deformed patterns can be captured and used for geometric shape reconstruction. At present, Gray code is the most reliable and commonly used light pattern in structured-light 3D scanners. However, the trade-off between scanning efficiency and accuracy is a long-standing and challenging problem, and the design of the light patterns plays a significant role in both. We therefore propose a novel encoding method integrating color information with Gray code to improve scanning efficiency. We demonstrate that with the proposed method, the scanning time can be reduced to approximately half of that needed by Gray code alone, without loss of precision.
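A sketch of the core idea, with an assumed packing scheme: generate the Gray-code bit-planes for each projector column, then pack two bit-planes per projected frame using the red and green channels, halving the number of frames.

```python
import numpy as np

n_bits, width = 8, 256                 # 8 bits encode 256 projector columns
cols = np.arange(width)
gray = cols ^ (cols >> 1)              # Gray code of each column index

bitplanes = [(gray >> b) & 1 for b in range(n_bits - 1, -1, -1)]

color_patterns = []
for i in range(0, n_bits, 2):
    img = np.zeros((1, width, 3), dtype=np.uint8)
    img[..., 0] = bitplanes[i] * 255             # bit-plane i in the red channel
    img[..., 1] = bitplanes[i + 1] * 255         # bit-plane i+1 in green
    color_patterns.append(np.repeat(img, 768, axis=0))  # stretch to full height

print(len(bitplanes), "monochrome frames ->", len(color_patterns), "color frames")
```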

Keywords: gray-code, structured light scanner, 3D shape acquisition, 3D reconstruction

Procedia PDF Downloads 461
3398 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the type of processing used to derive it, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, so that the sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering," in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the point cloud product (PCpro). The present work describes the data capture on the ground and the postprocessing tasks required to obtain the point cloud used as the reference (PCref) for evaluating the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering scans from 4 different stations. The study area was the Spanish region of Navarra, which covers 10,391 km²; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle of the direct position does not appear. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter was applied to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the postprocessing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
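A minimal cloud-to-cloud comparison using a k-d tree, on synthetic clouds standing in for a PCref patch and the corresponding PCpro points; nearest-neighbor distances are then summarized. The noise level and densities are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-ins: a dense TLS reference patch and a sparser, noisier
# LiDAR product sampled from it.
rng = np.random.default_rng(0)
pc_ref = rng.uniform(0, 50, (200_000, 3))      # dense reference patch (PCref)
sub = pc_ref[::15]
pc_pro = sub + rng.normal(0, 0.08, sub.shape)  # evaluated product (PCpro)

# For every product point, distance to the nearest reference point.
d, _ = cKDTree(pc_ref).query(pc_pro, k=1)
print(d.mean(), np.sqrt(np.mean(d ** 2)), np.percentile(d, 95))
```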

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 101
3397 Enabling Non-invasive Diagnosis of Thyroid Nodules with High Specificity and Sensitivity

Authors: Sai Maniveer Adapa, Sai Guptha Perla, Adithya Reddy P.

Abstract:

Thyroid nodules can often be diagnosed with ultrasound imaging, although differentiating between benign and malignant nodules can be challenging for medical professionals. This work suggests a novel approach to increase the precision of thyroid nodule identification by combining machine learning and deep learning. The approach first extracts information from the ultrasound images using a deep learning method known as a convolutional autoencoder. A support vector machine, a type of machine learning model, is then trained on these features. With an accuracy of 92.52%, the support vector machine can differentiate between benign and malignant nodules. This technique may decrease the need for unnecessary biopsies and increase the accuracy of thyroid nodule detection.
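A sketch of the two-stage pipeline on random stand-in patches: a small convolutional autoencoder is trained to reconstruct the images, its bottleneck activations are taken as features, and an RBF-kernel SVM classifies them. All shapes and layer sizes are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.svm import SVC

inp = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same", strides=2)(inp)
x = layers.Conv2D(8, 3, activation="relu", padding="same", strides=2)(x)
code = layers.Flatten(name="code")(x)                    # learned features
x = layers.Reshape((16, 16, 8))(code)
x = layers.Conv2DTranspose(16, 3, activation="relu", padding="same", strides=2)(x)
out = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same", strides=2)(x)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")

imgs = np.random.rand(64, 64, 64, 1).astype("float32")  # stand-in ultrasound patches
labels = np.random.randint(0, 2, 64)                     # 0 = benign, 1 = malignant
autoencoder.fit(imgs, imgs, epochs=2, verbose=0)         # unsupervised reconstruction

encoder = models.Model(inp, code)                        # bottleneck as features
svm = SVC(kernel="rbf").fit(encoder.predict(imgs, verbose=0), labels)
```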

Keywords: thyroid tumor diagnosis, ultrasound images, deep learning, machine learning, convolutional auto-encoder, support vector machine

Procedia PDF Downloads 59
3396 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment

Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen

Abstract:

Indel analysis of RNA sequencing data from clinical samples is easily affected by sequencing errors and software selection. In order to improve the efficiency and accuracy of the analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. The system includes local sequence assembly and realignment, target-point snapshotting, and image-based recognition processes. We integrated high-confidence indel datasets from several known databases as a training set to improve the accuracy of the image processing, and added a bioinformatics processing module to annotate and filter indel artifacts. The system then automatically generates a report, including data quality levels and image results. Sanger sequencing verification of the reference indel mutations of cell line NA12878 showed that the process achieves 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency improved significantly. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. The result may be useful for future RNA studies of clinical samples with microsatellite instability in immunotherapy.
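For reference, the two reported metrics, reconstructed from hypothetical confusion-matrix counts chosen only to reproduce the quoted percentages:

```python
# Illustrative counts, not the study's actual tallies.
tp, fn = 83, 17   # true indels detected / missed
tn, fp = 96, 4    # artifacts correctly rejected / false-positive calls

sensitivity = tp / (tp + fn)   # 0.83 -> 83%
specificity = tn / (tn + fp)   # 0.96 -> 96%
print(sensitivity, specificity)
```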

Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome

Procedia PDF Downloads 193
3395 A Case Study of Bee Algorithm for Ready Mixed Concrete Problem

Authors: Wuthichai Wongthatsanekorn, Nuntana Matheekrieangkrai

Abstract:

This research proposes the Bee Algorithm (BA) to optimize the Ready Mixed Concrete (RMC) truck scheduling problem from a single batch plant to multiple construction sites. This problem is considered an NP-hard constrained combinatorial optimization problem. This paper provides the details of the RMC dispatching process and its related constraints. BA was then developed to minimize the total waiting time of RMC trucks while satisfying all constraints. The performance of BA was evaluated on two benchmark problems (3 and 5 construction sites) from previous research. The simulation results of BA were compared with the Genetic Algorithm (GA) in terms of efficiency and accuracy, and for both problems the BA approach outperforms GA in obtaining the optimal solution. Hence, the BA approach could be practically implemented to obtain the best schedule.
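A skeletal Bees Algorithm for a truck-dispatch permutation, with a toy waiting-time objective and made-up parameters; the real model's constraints (plant capacity, travel times, pouring rates) are omitted.

```python
import random

random.seed(0)
n_trucks = 10
due = [random.uniform(0, 100) for _ in range(n_trucks)]   # fake site demand times

def waiting_time(perm):
    """Toy objective: mismatch between dispatch times and demand times."""
    t, wait = 0.0, 0.0
    for truck in perm:
        t += 12.0                       # fixed load/travel interval (toy)
        wait += abs(t - due[truck])
    return wait

def neighbor(perm):                     # local search: swap two positions
    a, b = random.sample(range(len(perm)), 2)
    p = perm[:]; p[a], p[b] = p[b], p[a]
    return p

n_scout, n_best, n_recruits, iters = 20, 5, 10, 200
sites = [random.sample(range(n_trucks), n_trucks) for _ in range(n_scout)]
for _ in range(iters):
    sites.sort(key=waiting_time)
    new = [min((neighbor(s) for _ in range(n_recruits)), key=waiting_time)
           for s in sites[:n_best]]                 # recruit bees to best sites
    rest = [random.sample(range(n_trucks), n_trucks)
            for _ in range(n_scout - n_best)]       # scouts explore randomly
    sites = new + rest

best = min(sites, key=waiting_time)
print(best, waiting_time(best))
```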

Keywords: bee colony optimization, ready mixed concrete problem, truck scheduling, multiple construction sites

Procedia PDF Downloads 385
3394 Prediction and Reduction of Cracking Issue in Precision Forging of Engine Valves Using Finite Element Method

Authors: Xi Yang, Bulent Chavdar, Alan Vonseggern, Taylan Altan

Abstract:

Fracture in the hot precision forging of engine valves was investigated in this paper. The entire valve forging procedure is described, and a possible cause of the fracture is proposed. Finite element simulation was conducted for the forging process with the commercial finite element code DEFORM™. The effects of material properties, strain rate, and temperature were considered in the FE simulation. Two fracture criteria were discussed and compared based on the accuracy and reliability of the FE simulation results. The selected criterion predicted the fracture location and showed the trend of increasing damage with good accuracy, matching the experimental observations. An additional modification of the punch shapes was proposed to further reduce the tendency to fracture in forging. The finite element comparison shows great potential for such applications in mass production.
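The abstract does not name the two criteria compared; for orientation, a ductile-fracture criterion commonly used with DEFORM is the normalized Cockcroft-Latham integral, which predicts fracture when the accumulated damage C reaches a critical value:

```latex
C = \int_0^{\bar{\varepsilon}_f} \frac{\sigma^{*}}{\bar{\sigma}}\, d\bar{\varepsilon}
```

Here σ* is the maximum principal (tensile) stress, σ̄ the effective stress, and ε̄ the effective plastic strain.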

Keywords: hotforging, engine valve, fracture, tooling

Procedia PDF Downloads 280
3393 An Ensemble-based Method for Vehicle Color Recognition

Authors: Saeedeh Barzegar Khalilsaraei, Manoocheher Kelarestaghi, Farshad Eshghi

Abstract:

The vehicle color, as a prominent and stable feature, helps to identify a vehicle more accurately. As a result, vehicle color recognition is of great importance in intelligent transportation systems. Unlike conventional methods, which use only a single convolutional neural network (CNN) for feature extraction or classification, in this paper four CNNs with different architectures, each performing well on different classes, are trained to extract various features from the input image. To take advantage of the distinct capability of each network, the multiple outputs are combined using a stack generalization algorithm as an ensemble technique. As a result, the final model performs better in vehicle color identification than each CNN individually. The evaluation results, in terms of overall average accuracy and accuracy variance, show that the proposed method outperforms the state-of-the-art rivals.
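A sketch of the stacking step only: out-of-fold probabilities from several base models become the meta-learner's features. The four "CNNs" are simulated probability matrices with different skill levels, since training four real networks is out of scope here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, n_classes = 500, 8                       # e.g. 8 vehicle colors
y = rng.integers(0, n_classes, n)

def fake_cnn_probs(skill):
    """Stand-in for one CNN's softmax output at a given accuracy level."""
    p = rng.dirichlet(np.ones(n_classes), n)
    hit = rng.random(n) < skill
    p[hit] = 0.05; p[hit, y[hit]] = 0.65    # boost the true class when "correct"
    return p / p.sum(axis=1, keepdims=True)

# Concatenate the four base models' class probabilities as meta-features.
stacked = np.hstack([fake_cnn_probs(s) for s in (0.7, 0.65, 0.6, 0.55)])
meta = LogisticRegression(max_iter=1000)
oof = cross_val_predict(meta, stacked, y, cv=5)   # honest stacked estimate
print("stacked accuracy:", (oof == y).mean())
```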

Keywords: vehicle color recognition, ensemble algorithm, stack generalization, convolutional neural network

Procedia PDF Downloads 85
3392 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length

Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar

Abstract:

An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL), and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an electronic apex locator (EAL; Raypex 5). The teeth were then divided into overextended, non-modified, and underextended groups, and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT (Kodak 9000 3D) units. Root apices were then shaved, and the apical constrictions were viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had wider 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT shows that it is acceptable for estimating the preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
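For reference, the Bland-Altman quantities used in this comparison, computed on made-up working-length measurements:

```python
import numpy as np

# Illustrative paired measurements in mm, not the study's data.
cbct = np.array([19.6, 20.1, 18.9, 21.3, 20.0])
control = np.array([19.8, 20.0, 19.2, 21.0, 20.4])

diff = cbct - control
bias = diff.mean()                              # mean difference (bias)
loa = (bias - 1.96 * diff.std(ddof=1),          # lower 95% limit of agreement
       bias + 1.96 * diff.std(ddof=1))          # upper 95% limit of agreement
print(f"bias {bias:.2f} mm, 95% LoA {loa[0]:.2f} to {loa[1]:.2f} mm")
```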

Keywords: accuracy, CBCT, endodontics, measurement

Procedia PDF Downloads 309
3391 Level Set and Morphological Operation Techniques in Application of Dental Image Segmentation

Authors: Abdolvahab Ehsani Rad, Mohd Shafry Mohd Rahim, Alireza Norouzi

Abstract:

Medical image analysis is one of the major applications of computer image processing, and segmentation is one of its most challenging and important steps. In this paper, a segmentation method is proposed for dental radiograph images. A thresholding method is applied to simplify the images, and a morphological binary-image opening is performed to eliminate unnecessary regions. Furthermore, horizontal and vertical integral projection techniques are used to extract each individual tooth from the radiograph images. Segmentation is then performed by applying the level set method to each extracted image. Experimental results with 90% accuracy demonstrate that the proposed method achieves high accuracy and promising results.
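A toy version of the integral-projection step on a stand-in binary image: row sums locate the gap between jaws, column sums suggest boundaries between teeth. The threshold is a guess.

```python
import numpy as np

# Stand-in for a thresholded, morphologically opened radiograph.
img = (np.random.rand(256, 512) > 0.5).astype(np.uint8)

horizontal_profile = img.sum(axis=1)   # one value per row: finds the jaw gap
vertical_profile = img.sum(axis=0)     # one value per column: separates teeth

gap_row = horizontal_profile.argmin()          # candidate upper/lower jaw split
tooth_gaps = np.where(vertical_profile < vertical_profile.mean() * 0.5)[0]
print(gap_row, tooth_gaps[:10])
```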

Keywords: integral projection, level set method, morphological operation, segmentation

Procedia PDF Downloads 318
3390 Reinforcement Learning for Classification of Low-Resolution Satellite Images

Authors: Khadija Bouzaachane, El Mahdi El Guarmah

Abstract:

The classification of low-resolution satellite images has been a worthwhile and fertile field that attracts plenty of researchers due to its importance in monitoring geographical areas. It can be used for several purposes, such as disaster management, military surveillance, and agricultural monitoring. The main objective of this work is to classify low-resolution satellite images efficiently and accurately by using novel techniques from deep learning and reinforcement learning. The images include roads, residential areas, industrial areas, rivers, sea lakes, and vegetation. To achieve that goal, we carried out experiments on Sentinel-2 images, considering both high accuracy and efficient classification. Our proposed model achieved 91% accuracy on the testing dataset, along with good classification of land cover. In terms of per-class precision, we obtained 93% for river, 92% for residential, 97% for industrial, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway, and 100% for sea lake.

Keywords: classification, deep learning, reinforcement learning, satellite imagery

Procedia PDF Downloads 214
3389 Estimation and Restoration of Ill-Posed Parameters for Underwater Motion Blurred Images

Authors: M. Vimal Raj, S. Sakthivel Murugan

Abstract:

Underwater images suffer quality degradation due to the conditions of the environment. One of the major problems in an underwater image is motion blur caused by the imaging device or the movement of the object. To rectify this in post-processing, the parameters of the blurred image must be estimated; the point spread function is estimated from the properties of the image spectrum. To improve the estimation accuracy of the parameters, an Optimized Polynomial Lagrange Interpolation (OPLI) method is implemented after measuring the angle and length of the motion blur in the images. The data were collected from real-time environments in Chennai and processed. The proposed OPLI method shows better accuracy than the existing classical cepstral, Hough, and Radon transform estimation methods for underwater images.
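Not the OPLI method itself, but a sketch of the classical spectrum-plus-Radon idea it refines: a linear motion blur leaves parallel ridges in the log power spectrum, and the Radon projection angle with the most structure estimates their orientation. The heuristic and all constants are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.transform import radon

rng = np.random.default_rng(0)
img = rng.random((256, 256))
kernel = np.zeros((15, 15)); kernel[7, :] = 1 / 15   # horizontal blur, length 15
blurred = fftconvolve(img, kernel, mode="same")

# Ridges in the log power spectrum are perpendicular to the blur direction.
spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
angles = np.arange(0, 180)
sinogram = radon(spectrum, theta=angles, circle=False)
est_angle = angles[sinogram.std(axis=0).argmax()]    # most "structured" projection
print(est_angle)
```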

Keywords: image restoration, motion blur, parameter estimation, radon transform, underwater

Procedia PDF Downloads 177
3388 A Runge Kutta Discontinuous Galerkin Method for Lagrangian Compressible Euler Equations in Two-Dimensions

Authors: Xijun Yu, Zhenzhen Li, Zupeng Jia

Abstract:

This paper presents a new cell-centered Lagrangian scheme for two-dimensional compressible flow. The new scheme uses a semi-Lagrangian form of the Euler equations. The system of equations is discretized by the Discontinuous Galerkin (DG) method using the Taylor basis in Eulerian space. The vertex velocities and the numerical fluxes through the cell interfaces are computed consistently by a nodal solver. The mesh moves with the fluid flow. The time marching is implemented by a class of Runge-Kutta (RK) methods. A WENO reconstruction is used as a limiter for the RKDG method. The scheme conserves mass, momentum, and total energy. The scheme maintains second-order accuracy and has free parameters. Results of some numerical tests are presented to demonstrate the accuracy and robustness of the scheme.
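For orientation, cell-centered Lagrangian schemes of this kind discretize the integral form of the Euler equations over cells Ω(t) that move with the fluid:

```latex
\frac{d}{dt}\int_{\Omega(t)} \rho \, dV = 0, \qquad
\frac{d}{dt}\int_{\Omega(t)} \rho\,\mathbf{u} \, dV = -\oint_{\partial\Omega(t)} p\,\mathbf{n}\, dS, \qquad
\frac{d}{dt}\int_{\Omega(t)} \rho E \, dV = -\oint_{\partial\Omega(t)} p\,\mathbf{u}\cdot\mathbf{n}\, dS,
```

where ρ is density, u velocity, p pressure, E specific total energy, and n the outward normal of the moving cell boundary.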

Keywords: cell-centered Lagrangian scheme, compressible Euler equations, RKDG method

Procedia PDF Downloads 548
3387 An Investigation of Direct and Indirect Geo-Referencing Techniques on the Accuracy of Points in Photogrammetry

Authors: F. Yildiz, S. Y. Oturanc

Abstract:

Advancing technology in the field of photogrammetry has replaced analog cameras with digital aerial cameras coupled to onboard GPS/IMU systems. In such a system, while the position of the camera is determined with GPS, the camera rotations are determined by the IMU. All around the world, digital aerial cameras have been used for photogrammetric applications over the last ten years. In this way, photogrammetric work can use time effectively, costs can be reduced to a minimum, and results can be produced quickly and accurately. Geo-referencing techniques, the cornerstone of GPS/INS systems, bring flexibility to the photogrammetric triangulation of images required for adjustment (interior and exterior orientation). The geo-referencing process also helps to reduce the number of ground control points needed in photogrammetric applications. In this study, the effect of direct and indirect geo-referencing techniques on the accuracy of points in the production of photogrammetric maps was investigated.

Keywords: photogrammetry, GPS/IMU systems, geo-referencing, digital aerial camera

Procedia PDF Downloads 411
3386 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data, such as electroencephalograms (EEGs), for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and they can apply complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal is received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the selected features to predict emotion in the EEG data with the KNN technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device, like the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand its use by the research and implementation industry.
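A small sketch of the feature-plus-KNN stage on synthetic epochs: FFT-based log band power per channel feeds a K-Nearest Neighbors classifier. The sampling rate, band edges, channel count, and k are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

fs = 250                                   # sampling rate (Hz), illustrative
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_features(epoch):
    """Mean log band power per channel from the FFT of one EEG epoch."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    return np.concatenate([
        np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1))
        for lo, hi in bands.values()])

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, fs * 2))   # 120 two-second, 8-channel epochs
labels = rng.integers(0, 2, 120)                 # e.g. high/low arousal

X = np.array([band_features(e) for e in epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:90], labels[:90])
print(knn.score(X[90:], labels[90:]))
```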

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 107
3385 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface

Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves

Abstract:

In this work, the wideband performance of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method is analyzed under mesh discretization impoverishment. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Defined in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement from their application has not been quantified broadly with respect to wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, making it possible to exploit poorly discretized meshes, which reduce simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to explore both the wideband and the mesh-impoverishment performance of these approaches, to give a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists of modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the edges of the mesh cells. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists of modifying the magnetic field update, taking into account the intersections of the PEC curved surface with the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes presenting different levels of discretization, with polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the wideband performance of the approaches drops along with mesh impoverishment. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of applying the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied in the modeling of curved RF components for wideband and high-speed communication devices in future work.
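For orientation, a sketch of the conformal magnetic-field update underlying approaches of the D-FDTD-PEC type (Dey-Mittra style): Faraday's law is integrated only over the part of the cell face lying outside the conductor,

```latex
H_z^{\,n+1} = H_z^{\,n} - \frac{\Delta t}{\mu_0\, A_{\mathrm{free}}} \sum_{\mathrm{free\ edges}} E\,\ell_{\mathrm{free}},
```

where A_free is the cell-face area outside the PEC and ℓ_free the uncovered length of each edge, with signs following the usual loop orientation. The notation here is assumed, not taken from the paper.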

Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment

Procedia PDF Downloads 135
3384 Beyond the Beep: Optimizing Flight Controller Performance for Reliable Ultrasonic Sensing

Authors: Raunak Munjal, Mohammad Akif Ali, Prithiv Raj

Abstract:

This study investigates the relative effectiveness of various flight controllers for drone obstacle avoidance. To assess the performance of ultrasonic sensors in real-time obstacle detection, they are integrated with ESP32 and Arduino Nano controllers. The study determines which controller is most effective for this particular application by analyzing important parameters such as accuracy (mean absolute error), standard deviation, and mean distance range. Furthermore, the study explores the possibility of incorporating state-driven algorithms into the Arduino Nano configuration to potentially improve obstacle detection performance. The results offer significant perspectives for enhancing sensor integration, choosing the best flight controller for obstacle avoidance, and potentially improving drones' general ability to navigate their environment.
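The three statistics named above are straightforward to reproduce; a sketch on made-up readings against a known distance (e.g. samples from an HC-SR04-class ultrasonic sensor):

```python
import numpy as np

true_distance_cm = 100.0
readings = np.array([99.2, 100.4, 98.7, 101.1, 100.2, 99.6])  # illustrative samples

mae = np.mean(np.abs(readings - true_distance_cm))  # accuracy (mean absolute error)
std = readings.std(ddof=1)                          # consistency
mean_range = readings.mean()                        # mean measured distance
print(f"MAE {mae:.2f} cm, std {std:.2f} cm, mean {mean_range:.2f} cm")
```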

Keywords: ultrasonic distance measurement, accuracy and consistency, flight controller comparisons, ESP32 vs. Arduino Nano

Procedia PDF Downloads 58