Search results for: interpolation accuracy
2772 A Three-modal Authentication Method for Industrial Robots
Authors: Luo Jiaoyang, Yu Hongyang
Abstract:
In this paper, we explore a method that can be used in the working scene of intelligent industrial robots to confirm the identity of operators, ensuring that the robot executes instructions only in a sufficiently safe environment. The approach uses three information modalities: visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally adopted a joint feature learning method, which improves the model's performance under noise compared with the single-modal case; even at the maximum noise level in the experiment, the model maintains an accuracy rate of more than 90%.
Keywords: multimodal, Kinect, machine learning, distance image
Procedia PDF Downloads 78
2771 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms, and early recognition is key to improving patients' living conditions. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, with BERT-like classifiers reporting the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest with 20 acoustic features extracted using the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved a 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations from audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5 and the batch size is 1. Experiments show that our proposed method reaches a 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model can capture acoustic cues of AD and MCI.
Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
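As a concrete illustration of the baseline described above, a minimal sketch assembling the 20 named acoustic features with librosa and a Random Forest might look as follows; the sampling rate, mean-pooled frame statistics, and number of MFCCs are assumptions, not the author's exact pipeline.

```python
# Minimal sketch of the Random Forest baseline (assumed feature pooling;
# not the author's exact configuration).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path, sr=16000):
    y, _ = librosa.load(path, sr=sr)
    feats = [
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.rms(y=y).mean(),
        np.abs(librosa.stft(y)).mean(),          # crude STFT summary
    ]
    feats += librosa.feature.mfcc(y=y, sr=sr, n_mfcc=15).mean(axis=1).tolist()
    return np.array(feats)                       # 20 features in total

# X_train would stack feature vectors from the 10-second segments.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
```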
Procedia PDF Downloads 126
2770 A New Criterion Using Pose and Shape of Objects for Collision Risk Estimation
Authors: DoHyeung Kim, DaeHee Seo, ByungDoo Kim, ByungGil Lee
Abstract:
As much recent research in the aviation and maritime domains has shown, strong doubts have been raised concerning the reliability of collision risk estimation. It has been shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to estimating collision risk using the pose and shape of objects is proposed. Simulation results are presented that validate the accuracy of the new criterion when adapted to a fuzzy-logic-based collision risk algorithm.
Keywords: collision risk, pose, shape, fuzzy logic
Procedia PDF Downloads 528
2769 Satellite Photogrammetry for DEM Generation Using Stereo Pair and Automatic Extraction of Terrain Parameters
Authors: Tridipa Biswas, Kamal Pandey
Abstract:
A Digital Elevation Model (DEM) is a simple representation of a surface in three-dimensional space, with elevation as the third dimension alongside the X (horizontal) and Y (vertical) rectangular coordinates. DEMs have wide applications in fields such as disaster management, hydrology and watershed management, geomorphology, urban development, map creation, and resource management. Cartosat-1, or IRS P5 (Indian Remote Sensing Satellite), is a state-of-the-art remote sensing satellite built by ISRO (launched May 5, 2005) that is mainly intended for cartographic applications. Cartosat-1 is equipped with two panchromatic cameras capable of simultaneously acquiring images at 2.5-meter spatial resolution. One camera looks +26 degrees forward while the other looks –5 degrees backward, acquiring stereoscopic imagery with a base-to-height ratio of 0.62; the time difference between the acquisitions of the stereo-pair images is approximately 52 seconds. The high-resolution stereo data have great potential to produce high-quality DEMs and are expected to have a significant impact on topographic mapping and watershed applications. The objective of the present study is to generate a high-resolution DEM, evaluate its quality in different elevation strata, generate an ortho-rectified image, and assess the associated accuracy from CARTOSAT-1 data using Ground Control Points (GCPs) for the Aglar watershed (Tehri-Garhwal and Dehradun districts, Uttarakhand, India). The present study reveals that the generated DEMs (10 m and 30 m) derived from the CARTOSAT-1 stereo pair are considerably better and more accurate than the existing DEMs (ASTER and CARTO DEM). Terrain parameters such as slope, aspect, drainage, and watershed boundaries derived from the generated DEMs also show better accuracy than those derived from the ASTER and CARTO DEMs.
Keywords: ASTER-DEM, CARTO-DEM, CARTOSAT-1, digital elevation model (DEM), ortho-rectified image, photogrammetry, RPC, stereo pair, terrain parameters
Procedia PDF Downloads 306
2768 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new daily cases, total deaths registered, and daily deaths due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing in order to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study model performance in predicting new COVID-19 cases. The statistical performance of each model is evaluated using metrics such as the R-squared value and mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. The experimental analysis shows that the Random Forest algorithm can predict new COVID cases more effectively and efficiently, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
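A minimal sketch of the train/test protocol above, assuming the 8:2 split is chronological and that n=30 refers to the number of trees (both assumptions); the placeholder feature matrix stands in for lagged case and death counts.

```python
# Hedged sketch of the evaluation loop described above; the WHO-derived
# feature construction is an assumption, not the authors' exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: lagged case/death counts; y: next-day new cases (placeholder data).
X, y = np.random.rand(426, 4), np.random.rand(426)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = RandomForestRegressor(n_estimators=30, random_state=0)  # n=30, assumed
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "MSE:", mean_squared_error(y_te, pred))
```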
Procedia PDF Downloads 119
2767 Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections
Authors: A. Sopharak, B. Uyyanonvara, S. Barman
Abstract:
Diabetic retinopathy is characterized by the development of retinal microaneurysms, and the damage can be prevented if the disease is treated in its early stages. In this paper, we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils. The Nearest Neighbor classifier is used as a baseline for comparison. Detected microaneurysms are validated against expert ophthalmologists’ hand-drawn ground truths, and the sensitivity, specificity, precision, and accuracy of each method are compared.
Keywords: diabetic retinopathy, microaneurysm, naive Bayes classifier, SVM classifier
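For reference, the four reported metrics follow directly from a binary confusion matrix; the sketch below shows the standard definitions (the label vectors are placeholders, not the study's data).

```python
# Standard detection metrics from a binary confusion matrix (sketch).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # placeholder ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # placeholder classifier output
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)         # recall on the microaneurysm class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
```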
Procedia PDF Downloads 327
2766 An Experimental Modeling of Steel Surfaces Wear in Injection of Plastic Materials with SGF
Authors: L. Capitanu, V. Floresci, L. L. Badita
Abstract:
Starting from the idea that the melted composite reaches its greatest pressure and velocity in the die nozzle, an experimental nozzle was built with wear samples whose sizes and weights can be measured with good precision. For greater measurement accuracy, we used an extremely accurate radiometric measuring method. Different nitriding steels and nitriding treatments were studied, as well as some special and alloyed steels. In addition, preliminary attempts were made to describe and verify the corrosive action of thermoplastics on metals.
Keywords: plastics, composites with short glass fibres, moulding, wear, experimental modelling, glass fibres content influence
Procedia PDF Downloads 263
2765 Microstructures of Si Surfaces Fabricated by Electrochemical Anodic Oxidation with Agarose Stamps
Abstract:
This paper investigates the fabrication of microstructures on Si surfaces using electrochemical anodic oxidation with agarose stamps. The fabrication process is based on a selective anodic oxidation reaction that occurs in the contact area between a stamp and a Si substrate. The stamp, previously soaked in electrolyte, acts as a current flow channel. After the oxide patterns are formed as an etching mask, an aqueous KOH solution is used for the wet etching of Si. A complicated microstructure array of 1 cm2 was fabricated by this method with high accuracy.
Keywords: microstructures, anodic oxidation, silicon, agarose stamps
Procedia PDF Downloads 303
2764 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model to address the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the surrounding air quality. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution, and factory emissions. The observations include wind speed, wind direction, relative humidity, temperature, and other variables, and can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped industrial factories. This study completed a causal inference engine that forecasts air pollution for the next 12 hours in relation to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures for generating forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle-tracking and random-walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspension Particulate, TSP) sensors and the forecasted air pollution map, this study also disclosed the conversion mechanism between TSP and PM2.5/PM10 for different regions and industrial characteristics, based on long-term data observation and calibration. These time-series qualitative and quantitative data successfully realize a cloud-based causal inference engine that is practicable for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail operation and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
Procedia PDF Downloads 83
2763 Detecting Covid-19 Fake News Using Deep Learning Technique
Authors: Anjali A. Prasad
Abstract:
Nowadays, social media plays an important role in spreading misinformation and fake news. This study analyzes fake news related to the COVID-19 pandemic spread on social media. The paper aims at evaluating and comparing different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and the BERT algorithm, for classification. To evaluate the models’ performance, we used accuracy, precision, recall, and F1-score as the evaluation metrics, and finally compared which of the four algorithms shows the best results.
Keywords: BERT, CNN, LSTM, RNN
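As an illustration of one of the compared approaches, a minimal LSTM text classifier in Keras might look like the sketch below; the vocabulary size, embedding width, and layer sizes are assumptions rather than the paper's configuration, and F1 can be derived from the logged precision and recall.

```python
# Minimal sketch of an LSTM fake-news classifier (assumed hyperparameters).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),  # token ids -> vectors
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),              # fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```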
Procedia PDF Downloads 203
2762 Affective Transparency in Compound Word Processing
Authors: Jordan Gallant
Abstract:
In the compound word processing literature, much attention has been paid to the relationship between a compound’s denotational meaning and that of its morphological whole-word constituents, referred to as ‘semantic transparency’. However, the parallel relationship between a compound’s connotation and that of its constituents has not been addressed at all. For instance, while a compound like ‘painkiller’ might be semantically transparent, it is not ‘affectively transparent’: both constituents have primarily negative connotations, while the whole compound has a positive one. This paper investigates the role of affective transparency in compound processing using two methodologies commonly employed in this field: a lexical decision task and a typing task. The critical stimuli were 112 English bi-constituent compounds that differed in terms of the affective transparency of their constituents. Of these, 36 stimuli contained constituents with connotations similar to the compound (e.g., ‘dreamland’), 36 contained constituents with more positive connotations (e.g., ‘bedpan’), and 36 contained constituents with more negative connotations (e.g., ‘painkiller’). The connotations of whole-word constituents and compounds were operationalized via valence ratings taken from an offline ratings database. In Experiment 1, compound stimuli and matched non-word controls were presented visually to participants, who were asked to indicate whether each was a real word in English; response times and accuracy were recorded. In Experiment 2, participants typed compound stimuli presented to them visually; individual keystroke response times and typing accuracy were recorded. The results of both experiments provided positive evidence that compound processing is influenced by affective transparency. In Experiment 1, compounds in which both constituents had more negative connotations than the compound itself were responded to significantly more slowly than compounds in which the constituents had similar or more positive connotations. Typed responses from Experiment 2 showed that inter-keystroke intervals at the morphological constituent boundary were significantly longer when the connotation of the head constituent was either more positive or more negative than that of the compound. The interpretation of this finding is discussed in the context of previous compound typing research. Taken together, these findings suggest that affective transparency plays a role in the recognition, storage, and production of English compound words. This study provides a promising first step in a new direction for research on compound words.
Keywords: compound processing, semantic transparency, typed production, valence
Procedia PDF Downloads 125
2761 Efficiency of Geocell Reinforcement for Use in Expanded Polystyrene Embankments via Numerical Analysis
Authors: S. N. Moghaddas Tafreshi, S. M. Amin Ghotbi
Abstract:
This paper presents a numerical study investigating the effectiveness of geocell reinforcement in reducing pressure and settlement over EPS geofoam blocks in road embankments. A 3-D FEM model of the soil and geofoam was created in ABAQUS, and the geocell was modeled realistically using membrane elements. The accuracy of the model was tested by comparing its results with previous works. Sensitivity analyses showed that reinforcing the soil cover with geocell significantly reduces the stresses imposed on the geofoam and, consequently, its deformation.
Keywords: EPS geofoam, geocell, reinforcement, road embankments, lightweight fill
Procedia PDF Downloads 271
2760 Preliminary WRF SFIRE Simulations over Croatia during the Split Wildfire in July 2017
Authors: Ivana Čavlina Tomašević, Višnjica Vučetić, Maja Telišman Prtenjak, Barbara Malečić
Abstract:
The Split wildfire on the mid-Adriatic coast in July 2017 is one of the most severe wildfires in Croatian history, given its size and unexpected fire behavior, and it is used in this research as a case study to run the Weather Research and Forecasting Spread Fire (WRF SFIRE) model. This coupled fire-atmosphere model was successfully run for the first time for a Croatian wildfire case. Verification of the coupled simulations was possible thanks to a detailed reconstruction of the Split wildfire: precise information on ignition time and location, together with mapped fire progressions and spotting within the first 30 hours, was used both to initialize the simulations and to evaluate the model's ability to simulate the fire's propagation and final fire scar. The preliminary simulations were obtained using high-resolution vegetation and topography data for the fire area, additionally interpolated to a fire grid spacing of 33.3 m. The results demonstrate that the WRF SFIRE model can work with real data from Croatia and produce adequate forecasts of fire spread. As the model can include or exclude the energy fluxes between the fire and the atmosphere, this was used to investigate possible fire-atmosphere interactions during the Split wildfire. The successfully coupled simulations provided the first numerical evidence that a wildfire in the Adriatic coastal region can modify the dynamical structure of the surrounding atmosphere, which agrees with observations from fire grounds. This study demonstrates that the WRF SFIRE model has the potential for operational application in Croatia, with more accurate fire predictions achievable in the future by feeding higher-resolution input data into the model without interpolation. Possible uses in fire management include predicting fire spread and intensity under changing weather conditions, available fuels, and topography; planning effective and safe deployment of ground and aerial firefighting forces; preventing wildland-urban interface fires; and planning evacuation routes. In addition, the results demonstrate that the model is valuable for fire weather research and education, helping to better understand this hazardous phenomenon in Croatia.
Keywords: meteorology, agrometeorology, fire weather, wildfires, coupled fire-atmosphere model
Procedia PDF Downloads 88
2759 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a mobility disease, it may greatly benefit from the antenna's near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna, mounted on the dorsal side of the wrist using cloth, to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed to distinguish PD from other conditions, such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data is normalized and converted into 2-D representations using the Gabor wavelet transform (GWT), and data augmentation is used to expand the dataset. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform; it processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
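To make the 1-D-to-2-D conversion concrete, the sketch below builds a time-frequency image from a reflection-coefficient trace using a continuous wavelet transform; PyWavelets' complex Morlet wavelet is used here as a close stand-in for the Gabor wavelet, and the scales and trace length are assumptions.

```python
# Sketch: 1-D reflection-coefficient trace -> 2-D scalogram for the CNN.
# The complex Morlet ("cmor") wavelet approximates a Gabor wavelet here.
import numpy as np
import pywt

signal = np.random.rand(512)                 # placeholder |Gamma| trace
scales = np.arange(1, 65)                    # assumed scale range
coeffs, freqs = pywt.cwt(signal, scales, "cmor1.5-1.0")
image = np.abs(coeffs)                       # 2-D magnitude map
image = (image - image.min()) / (np.ptp(image) + 1e-9)   # normalize to [0, 1]
```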
Procedia PDF Downloads 4
2758 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre
Authors: L. Nathaniel-Wurie
Abstract:
The Sellick manoeuvre, also known as the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, with the intention that posterior force compresses the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the authors' knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, one cannot assume that appropriate, accurate, or precise amounts of force are being used routinely. Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery was tested in 20 trained operating department practitioners (with a mean of 5.3 years of experience performing CF), and outcomes were contrasted against novices in two study arms. Forty novice students were randomised into one of two arms. Arm A had the procedure explained and demonstrated, and were then asked to perform CF, with the force generated measured three times. Arm B followed the same process as Arm A, but before being tested had 10 N and 30 N applied to their own hands to build an intuitive understanding of the required force; they were then asked to apply the equivalent force against a visible force meter and hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Arm B were then asked to perform the manoeuvre, with the force generated measured three times. This study shows a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre shows improved accuracy, precision, and homogeneity within the group when compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.
Keywords: airway, cricoid, medical education, Sellick
Procedia PDF Downloads 78
2757 Scar Removal Strategy for Fingerprint Using Diffusion
Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong
Abstract:
Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS), directly affecting the overall efficiency of the system. Conventional fingerprint enhancement methods such as Gabor and anisotropic filters fill the gaps in ridge lines, but they fail to tackle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys affected by scars so that true minutiae points can be extracted accurately. Our results show improved enhancement performance.
Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion
Procedia PDF Downloads 513
2756 A Case Study Comparing the Effect of Computer Assisted Task-Based Language Teaching and Computer-Assisted Form Focused Language Instruction on Language Production of Students Learning Arabic as a Foreign Language
Authors: Hanan K. Hassanein
Abstract:
Task-based language teaching (TBLT) and focus-on-form instruction (FFI) have both been shown to improve the quality and quantity of immediate language production. However, studies comparing the effectiveness of language production under TBLT versus FFI are scarce, and their results are inconsistent. Moreover, teaching Arabic using TBLT is a new field, with little research investigating its application inside classrooms. Furthermore, to the best knowledge of the researcher, no prior studies have compared teaching Arabic as a foreign language in a classroom setting using computer-assisted task-based language teaching (CATBLT) versus computer-assisted form-focused language instruction (CAFFI). Accordingly, the focus of this presentation is to display CATBLT and CAFFI tools for teaching Arabic as a foreign language, as well as to present an experimental study that aims to identify whether CATBLT is the more effective instruction method. Effectiveness will be determined by comparing CATBLT and CAFFI in terms of the accuracy, lexical complexity, and fluency of the language produced by students. The participants are 20 students enrolled in two intermediate-level Arabic as a foreign language classes, and the experiment will take place over the course of 7 days. Based on a study conducted by Abdurrahman Arslanyilmaz for teaching Turkish as a second language, an in-house computer-assisted tool for TBLT and another for FFI will be designed for the experiment. The experimental group will be instructed using the in-house CATBLT tool, and the control group will be taught using the in-house CAFFI tool. The data to be analyzed are the dialogues produced by students in both groups when completing tasks or communicating in conversational activities; these dialogues will be analyzed to understand the effect of the type of instruction (CATBLT or CAFFI) on accuracy, lexical complexity, and fluency. The study thus aims to determine whether there is an instruction method that more positively affects the language produced by students learning Arabic as a foreign language.
Keywords: computer assisted language teaching, foreign language teaching, form-focused instruction, task based language teaching
Procedia PDF Downloads 249
2755 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased market competition, shorter product development times, and increased product diversity. Short life cycles are normal in the retail industry, the fashion business, entertainment media, and the telecom and semiconductor industries. Accurate demand forecasting for short-life-cycle products is of special interest to many researchers and organizations. Because of the short life cycle, the amount of historical data available for forecasting is minimal or even absent when new or modified products are launched. Companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market without oversupplying. This presents the challenge of developing a forecasting model that can forecast accurately while handling large variations in data and accounting for the complex relationships between its parameters. Many statistical models have been proposed in the literature for forecasting time series data, but traditional time series forecasting models do not work well for short life cycles due to the lack of historical data, and artificial neural network (ANN) models are very time-consuming. We have studied the existing forecasting models and their limitations, and this work proposes an effective and powerful approach for short-life-cycle time series forecasting. The proposed approach takes into consideration different scenarios of data availability for short-life-cycle products and combines statistical analysis with structured judgement; it can also be applied across domains. We then describe a method of creating a profile from analogous products, which can be used to forecast new products using the historical data of those analogous products. We designed an application that combines data, analytics, and domain knowledge using point-and-click technology. The forecasting results are compared using MAPE, MSE, and RMSE error scores, as sketched after this abstract. Conclusion: based on the results, no single approach is sufficient for short-life-cycle forecasting, and two or more approaches must be combined to achieve the desired accuracy.
Keywords: forecast, short life cycle product, structured judgement, time series
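The three comparison metrics named above have simple standard definitions; the sketch below states them in compact form (the actual/forecast arrays are placeholders for a product's demand series).

```python
# Standard error scores used to compare the forecasting approaches (sketch).
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def mse(actual, forecast):
    return np.mean((np.asarray(actual, float) - np.asarray(forecast, float)) ** 2)

def rmse(actual, forecast):
    return np.sqrt(mse(actual, forecast))
```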
Procedia PDF Downloads 358
2754 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy, and on virtually any surface, has been designed and built. One of the major shortcomings of existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time: depending on the technique used, it usually takes at least two hours to print small objects and several hours for larger ones. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer's workable area; these are then glued, bonded, or otherwise attached to create the required object. A further shortcoming is material constraints and the need to fabricate a single part from different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS can 3D print virtually anywhere in six degrees of freedom with very high accuracy; it is equipped with an ABB IRB 120 robot to achieve this. To convert computer-aided design (CAD) files into a digital format acceptable to the robot, Hypertherm Robotic Software Inc.'s state-of-the-art slicing software, 'ADDMAN', is used; it converts any CAD file into RAPID code (the programming language for ABB robots), which the robot uses to perform the 3D printing. To control the entire process, a National Instruments CompactRIO (cRIO 9074) communicates with the robot and with a purpose-built feeding mechanism. The feeding mechanism consists of two major parts, the cold end and the hot end. The cold end is what is conventionally known as an extruder; typically, a stepper motor controls the push on the material, but for optimum control, a DC motor is used instead. The hot end consists of a melt zone, a nozzle, and a heat break. The melt zone ensures thorough melting and consistent output from the nozzle; nozzles are made of brass for thermal conductivity, while the melt zone comprises a heating block and a ceramic heating cartridge that transfers heat to the block. The heat break prevents heat creep, which would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW; it uses adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which issues a heat increase or decrease based on the system output. Since different materials have different melting points, the system allows the temperature to be adjusted as the material is varied.
Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
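A minimal sketch of the discrete PID loop described for the hot end follows; the gains, setpoint, and the thermistor/heater interfaces are assumptions, and an adaptive variant would additionally retune the gains online from the observed response.

```python
# Sketch of a discrete PID temperature loop for the heating cartridge.
# Gains and setpoint are illustrative; read_thermistor() is hypothetical.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=210.0)   # e.g. 210 C for a thermoplastic
# duty = pid.update(read_thermistor(), dt=0.1)       # drive the cartridge PWM
```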
Procedia PDF Downloads 216
2753 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract:
This paper introduces an original method for parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum total error rate has been obtained.
Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier
Procedia PDF Downloads 465
2752 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly deep convolutional neural networks (CNNs). Deep learning holds great promise to fulfill the challenging needs of remote sensing and to solve various problems within different fields and applications. The use of unmanned aerial systems for acquiring aerial photos has become widespread and is preferred by most organizations because of their high resolution and accuracy, which make the identification and detection of very small features much easier than with satellite images. This has opened an extreme era of deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, supporting early detection of these leaks and protecting the company from leak losses and, most importantly, environmental damage. Two different approaches with different deep learning methods are demonstrated. The first focuses on detecting oil leaks from raw (unprocessed) aerial photos using a deep learning model called Single Shot Detector (SSD); the model draws bounding boxes around the leaks, and the results were extremely good. The second focuses on detecting oil leaks from orthomosaicked (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and a PSPNet classifier). Post-processing is then performed to combine the results of these three models, achieving better detection and improved accuracy. Although a relatively small amount of data was available for training, the trained models have shown good results in extracting the extent of the oil leaks and delivering excellent, accurate detection.
Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
Procedia PDF Downloads 31
2751 Recognizing Human Actions by Multi-Layer Growing Grid Architecture
Authors: Z. Gharaee
Abstract:
Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others properly. We perceive an action by observing the kinematics of the motions involved in its performance, and we use our experience and concepts to recognize the action correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments in which subjects observe actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture using growing grid layers is proposed. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. An ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations, which are sent to a second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance was evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, and Pick Up and Throw. The growing grid architecture was trained on several random selections of generalization test data, requiring on average 100 epochs per training of the first-layer growing grid and around 75 epochs per training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior for the action recognition task: the SOM architecture learns the same dataset in around 150 epochs per training of the first-layer SOM but takes 1200 epochs per training of the second-layer SOM, achieving an average recognition accuracy of 90% on generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance
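As a toy illustration of the learning rule underlying both layers, the sketch below finds the best-matching unit (BMU) for an input vector and pulls the BMU's neighborhood toward it; the map size, learning rate, and neighborhood radius are assumptions, and the paper's growth heuristics (inserting new neurons where representational demand is high) are omitted for brevity.

```python
# Toy sketch of the SOM-style update shared by SOMs and growing grids.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((4, 4, 3))              # small 4x4 map over 3-D inputs

def train_step(grid, x, lr=0.1, radius=1):
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)   # BMU coordinates
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            if abs(i - bi) <= radius and abs(j - bj) <= radius:
                grid[i, j] += lr * (x - grid[i, j])    # pull toward input
    return bi, bj                          # elicited activation location

for _ in range(100):
    train_step(grid, rng.random(3))
```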
Procedia PDF Downloads 157
2750 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP
Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang
Abstract:
Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, software developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture; however, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variables generated by ClimateAP for the normal climate period through comparison with currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP against the commonly used gridded data from WorldClim 1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using adjusted R-squared and root mean squared error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and the Topographic Position Index, and univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for climatic niche modeling of Cunninghamia lanceolata, and the prediction accuracies of RF models built with the two climate data sources were compared to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets, while ClimateAP v3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim 1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases, with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of the variance in monthly maximum, minimum, and average temperature and precipitation, respectively. It outperforms WorldClim on 37 biologically relevant variables with lower RMSE values, achieves higher R-squared values for the 12 monthly minimum temperature variables, and shows consistently higher adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yield lower adjusted R-squared values than the gridded data in high-elevation, rugged, and mountainous areas, but higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP are thus validated against weather station observations. Using ClimateAP improves data quality, especially in non-mountainous regions, and the results suggest that the biologically relevant variables it generates can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience than gridded data.
Keywords: climate data validation, data quality, Asia pacific climate, climatic niche modeling, random forest models, tree species
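A minimal sketch of the per-variable comparison follows: regress station observations on each data source's predictions and report adjusted R-squared and RMSE; the toy arrays stand in for one monthly variable at the 2,366 stations.

```python
# Sketch of the univariate evaluation used to compare ClimateAP and
# WorldClim predictions against station observations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def adjusted_r2(r2, n, p=1):
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def evaluate(pred, obs):
    pred = np.asarray(pred).reshape(-1, 1)
    obs = np.asarray(obs)
    r2 = LinearRegression().fit(pred, obs).score(pred, obs)
    rmse = np.sqrt(mean_squared_error(obs, pred.ravel()))
    return adjusted_r2(r2, len(obs)), rmse

obs = np.random.rand(2366)                         # placeholder observations
print(evaluate(obs + 0.05 * np.random.randn(2366), obs))
```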
Procedia PDF Downloads 67
2749 Flicker Detection with Motion Tolerance for Embedded Camera
Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan
Abstract:
CMOS image sensors with a rolling shutter are widely used in the digital cameras embedded in mobile devices. The rolling shutter suffers from flicker artifacts under fluorescent lamps, which can be observed easily. In this paper, the characteristics of illumination flicker under motion are analyzed, and two efficient detection methods based on matching-fragment selection are proposed. According to the experimental results, our methods achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.
Keywords: illumination flicker, embedded camera, rolling shutter, detection
Procedia PDF Downloads 418
2748 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers
Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta
Abstract:
The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. The ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders are often adjacent to each other. One challenge is to split the photon segments along a beam so that they are accurately partitioned and only the true representative water height is extracted for each element. As far as we can establish, there is no automated procedure for making this distinction; earlier studies have relied on human intervention or river masks, both of which are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called 'auto-segmentation'. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared four different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs, and it facilitates the application of ICESAT-2 ATL13 altimetry to rivers much better than previously reported approaches. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as the water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation
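As a toy illustration of the segmentation idea, the sketch below splits photon heights along a beam wherever the along-track spacing jumps, then computes a mean WSE per segment after a simple outlier rule; the gap threshold and the 3-sigma rule are assumptions, not the paper's tuned criteria.

```python
# Toy sketch: split photons into per-waterbody segments by along-track gaps.
import numpy as np

def segment_photons(dist, height, gap_m=200.0):
    """Split (distance, height) arrays wherever along-track spacing
    exceeds gap_m, yielding one candidate virtual station per segment."""
    order = np.argsort(dist)
    dist, height = dist[order], height[order]
    breaks = np.where(np.diff(dist) > gap_m)[0] + 1
    return list(zip(np.split(dist, breaks), np.split(height, breaks)))

def mean_wse(h):
    h = h[np.abs(h - h.mean()) < 3 * h.std()]   # simple 3-sigma outlier rule
    return h.mean()
```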
Procedia PDF Downloads 60
2747 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver
Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin
Abstract:
The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, covering design, manufacture, comprehensive functional testing, and environmental qualification testing. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold start time of at most 90 seconds, and tolerance of high-dynamic scenarios up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite, scheduled for launch in 2019 to execute pre-defined scientific missions. The flight model, manufactured in early 2015, will undergo comprehensive functional and environmental acceptance tests, expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with only 50% of its throughput currently used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port some software algorithms, such as signal acquisition and correlation, which involve reused code and a large computational load, to an FPGA, leaving the processor responsible for operational control, navigation solution, and orbit propagation. Because FPGA technology is developing and evolving quickly, the new system architecture upgraded via an FPGA should achieve the goal of a multi-mode, multi-band, high-precision navigation or scientific receiver. Finally, test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, its development and test results, and the goals of the next development stage of this receiver.
Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band
Procedia PDF Downloads 368
2746 A Framework of Virtualized Software Controller for Smart Manufacturing
Authors: Pin Xiu Chen, Shang Liang Chen
Abstract:
A virtualized software controller is developed in this research to replace traditional hardware control units. The controller transfers motion-interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules and deploying them on a cloud data server. This reduces interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, more efficiently than traditional virtual machine deployment. Furthermore, the virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment and the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, the study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework: a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The site includes one robotic arm, three computer numerical control machine tools, several input/output ports, and an edge computing architecture. All machinery information is uploaded to edge and cloud servers via 5G communication within an Internet of Things framework; after analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are measured using the C++ chrono library to obtain the time difference for each command transmission. The relevant test results are organized and displayed in the full text.
Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing
Procedia PDF Downloads 81
2745 BERT-Based Chinese Coreference Resolution
Authors: Li Xiaoge, Wang Chaodong
Abstract:
We introduce the first Chinese coreference resolution model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to incorporate mention features such as part of speech, span width, and the distance between spans, and we analyze the influence of each feature on the model. The model computes mention embeddings that combine BERT representations with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.
Keywords: BERT, coreference resolution, deep learning, natural language processing
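A conceptual sketch of such a mention representation follows: a BERT span embedding is concatenated with learned feature embeddings (span width, part of speech), as the abstract describes; the embedding sizes, feature buckets, and scoring head are assumptions rather than the paper's architecture.

```python
# Sketch of a mention encoder combining BERT span vectors with features.
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    def __init__(self, bert_dim=768, n_width=30, n_pos=40, feat_dim=20):
        super().__init__()
        self.width_emb = nn.Embedding(n_width, feat_dim)   # span width buckets
        self.pos_emb = nn.Embedding(n_pos, feat_dim)       # POS tag ids
        self.score = nn.Linear(bert_dim + 2 * feat_dim, 1)

    def forward(self, span_repr, width_id, pos_id):
        x = torch.cat([span_repr,
                       self.width_emb(width_id),
                       self.pos_emb(pos_id)], dim=-1)
        return self.score(x)                               # mention score

enc = MentionEncoder()
out = enc(torch.randn(2, 768), torch.tensor([3, 7]), torch.tensor([1, 5]))
```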
Procedia PDF Downloads 215
2744 An Accurate Prediction of Surface Temperature History in a Supersonic Flight
Authors: A. M. Tahsini, S. A. Hosseini
Abstract:
In the present study, the surface temperature history of the adaptor section of a two-stage supersonic launch vehicle is accurately predicted. The full Navier-Stokes equations are used to estimate the aerodynamic heat flux, and one-dimensional heat conduction in the solid phase is used to compute the temperature history. The instantaneous surface temperature is fed back to update the applied heat flux, improving the accuracy of the results.
Keywords: aerodynamic heating, heat conduction, numerical simulation, supersonic flight, launch vehicle
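As a minimal illustration of the solid-phase step, the sketch below marches the 1-D heat conduction equation with an explicit finite-difference scheme and a heat-flux boundary at the heated surface; the material properties, boundary conditions, and discretization are assumptions, not the study's scheme.

```python
# Sketch: explicit FTCS update for 1-D transient heat conduction,
# with an aerodynamic heat flux applied at the surface node.
import numpy as np

def step(T, dt, dx, alpha, q_surface, k):
    """One time step; stable if dt <= dx**2 / (2 * alpha)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_surface * dx / k      # heated surface: -k dT/dx = q
    Tn[-1] = Tn[-2]                         # insulated back face (assumed)
    return Tn

T = np.full(50, 300.0)                      # initial wall temperature, K
T = step(T, dt=1e-3, dx=1e-3, alpha=4e-6, q_surface=5e4, k=16.0)
```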
Procedia PDF Downloads 451
2743 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D Slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D Slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI), and demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D Slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D Slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and the Hausdorff distance, are employed to assess the accuracy of liver segmentation, and the performance of the 3D Slicer in guiding needle aspiration is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D Slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration
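For reference, the two reported overlap metrics can be computed as in the sketch below; the Dice coefficient operates on binary masks, while the symmetric Hausdorff distance here is taken over surface point coordinates using SciPy's directed variant in both directions (the inputs are placeholders).

```python
# Sketch of the segmentation evaluation metrics named above.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity between two binary masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 3) point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```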
Procedia PDF Downloads 46