Search results for: high accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22145

21605 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, the control information entering the adjustment system is treated as having a single observation scale and precision, so the control information cannot be screened and given reasonable, effective weights, which reduces the convergence and reliability of the adjustment results. Drawing on the theory and technology of quotient space, this project researches several subjects. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and based on the normalized scale factor, a strategy is realized for optimizing the weights of control data that are less relevant to the adjustment system. Geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to overcome the limitation of single-scale, single-accuracy control data in the adjustment process and to extend the theory and technology of photogrammetry, so that the processing of multi-source remote sensing data is addressed both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 273
21604 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Spectral and texture features are considered for the SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam earthquake, and the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.
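
As an illustration of the classification step described above, here is a minimal sketch assuming scikit-learn and synthetic stand-ins for the spectral/texture feature vectors; the QuickBird-derived features and Bam imagery are not reproduced.

```python
# Hedged sketch of the SVM damage-classification step; features and labels
# are synthetic stand-ins for the paper's QuickBird-derived data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # 8 spectral/texture features per road segment
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # 1 = debris/damaged, 0 = intact

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
pred = clf.predict(X_te)

# The paper reports overall accuracy and a kappa coefficient; the same
# two metrics are computed here.
print("overall accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
```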

Keywords: SVM classifier, disaster management, road damage detection, QuickBird images

Procedia PDF Downloads 611
21603 Method Development for the Determination of Gamma-Aminobutyric Acid in Rice Products by LC-MS-MS

Authors: Cher Rong Matthew Kong, Edmund Tian, Seng Poon Ong, Chee Sian Gan

Abstract:

Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is a functional constituent of certain rice varieties. When consumed, it decreases blood pressure and reduces the risk of hypertension-related diseases. This has led to more research dedicated to the development of functional food products (e.g., germinated brown rice) with enhanced GABA content, and the development of these functional food products has in turn increased the demand for instrument-based methods that can efficiently and effectively determine GABA content. Current analytical methods require analyte derivatisation and have significant disadvantages: they are labour-intensive and time-consuming, and they are subject to analyte loss due to the increased complexity of the sample preparation process. To address this, an LC-MS-MS method for the determination of GABA in rice products has been developed and validated. The developed method involves a relatively simple sample preparation process before analysis using HILIC LC-MS-MS. It eliminates the need for derivatisation, thereby significantly reducing the labour and time associated with the analysis, and using LC-MS-MS also allows better differentiation of GABA from any potential co-eluting compounds in the sample matrix. Results obtained from the developed method demonstrated high linearity, accuracy, and precision for the determination of GABA (1 ng/L to 8 ng/L) in a variety of brown rice products. The method significantly simplifies the sample preparation steps, improves the accuracy of quantitation, and increases the throughput of analyses, thereby providing a quick but effective alternative to established instrumental methods for GABA analysis in rice.

Keywords: functional food, gamma-aminobutyric acid, germinated brown rice, method development

Procedia PDF Downloads 250
21602 Upon One Smoothing Problem in Project Management

Authors: Dimitri Golenko-Ginzburg

Abstract:

A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resource requirement of the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pregiven time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
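
To make the smoothing objective concrete, the following toy sketch (not the authors' look-over or approximate algorithm) greedily shifts each non-critical activity within its allowable start window, in discrete steps, to the start time that minimizes the peak resource usage; the activity data are invented.

```python
# Toy greedy smoothing heuristic; activities are (duration, demand, (earliest
# start, latest start)) tuples with invented values.
def peak_usage(schedule, horizon):
    profile = [0] * horizon
    for (dur, res, _), start in schedule.items():
        for t in range(start, start + dur):
            profile[t] += res
    return max(profile)

activities = [(3, 4, (0, 0)),   # critical activity: no float
              (2, 3, (0, 4)),   # non-critical: may shift within [0, 4]
              (4, 2, (1, 3))]   # non-critical: may shift within [1, 3]

horizon = 12
schedule = {a: a[2][0] for a in activities}          # start everything as early as possible
for act in activities:
    dur, res, (es, ls) = act
    best = min(range(es, ls + 1),                    # try every allowed discrete start
               key=lambda s: peak_usage({**schedule, act: s}, horizon))
    schedule[act] = best

print("starts:", [schedule[a] for a in activities],
      "peak resource usage:", peak_usage(schedule, horizon))
```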

Keywords: resource smoothing problem, CPM network, look-over algorithm, lexicographical order, approximate algorithm, accuracy estimate

Procedia PDF Downloads 295
21601 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle

Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu

Abstract:

Yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of the yaw angle has been extensively studied in the existing literature, it remains a challenge to achieve a solution that is simultaneously reliable and cost-effective in complex urban environments. This paper proposes a multi-stage learning framework to estimate the yaw angle with a monocular camera, which deals with this challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions; it is particularly suitable for modeling the changing patterns of the yaw angle under different driving maneuvers and inherently enhances the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which further improves the estimation accuracy because changes in the deflection angle are relatively easy to recognize across consecutive frames. Afterward, the yaw angle is obtained by combining the estimated deflection angle with the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework achieves high reliability while maintaining good accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.

Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle

Procedia PDF Downloads 125
21600 High Arousal and Athletic Performance

Authors: Turki Mohammed Al Mohaid

Abstract:

Whether high arousal inhibits athletic performance or high positive arousal enhances it remains controversial. To evaluate and review this issue, 31 athletes (all male) were induced into two motivational states, high arousal with a pre-determined goal and high arousal without a pre-determined goal, and tested on a standard grip strength task. Paced breathing was used to change psychological and physiological arousal. Significant increases in grip strength performance occurred when arousal was high and experienced as delighted, happy, and pleasant excitement in those without a pre-determined goal motivational state. Blood pressure, heart rate, and other indicators of physiological activity were not found to mediate between psychological arousal and performance. In situations where athletic performance requires maximal motor strength over a short period, the performance benefits of high arousal may be enhanced by designing a specific motivational state.

Keywords: high arousal, athletic, performance, physiological

Procedia PDF Downloads 103
21599 A Comparative Study of Black Carbon Emission Characteristics from Marine Diesel Engines Using Light Absorption Method

Authors: Dongguk Im, Gunfeel Moon, Younwoo Nam, Kangwoo Chun

Abstract:

The need to protect the environment is now widely recognized worldwide. In the shipping industry, the International Maritime Organization (IMO) regulates pollutants emitted from ships through MARPOL 73/78. Recently, the Marine Environment Protection Committee (MEPC) of IMO, at its 68th session, approved a definition of Black Carbon (BC) specified by the following physical properties: light absorption, refractory behavior, insolubility, and morphology. The committee also agreed on the need for a protocol for voluntary measurement studies to identify the most appropriate measurement methods. The Filter Smoke Number (FSN), based on light absorption, is categorized as one of the IMO-relevant BC measurement methods. EUROMOT provided FSN measurement data (measured by smoke meter) for 31 different engines (low-, medium- and high-speed marine engines) of member companies at the 3rd International Council on Clean Transportation (ICCT) workshop on marine BC. The comparison of FSN indicated that BC emission from low-speed marine diesel engines ranged from 0.009 to 0.179 FSN, while that from medium- and high-speed marine diesel engines ranged from 0.012 to 3.2 FSN. In view of the low FSN measured from low-speed engines, an experimental study was conducted using both a low-speed marine diesel engine (2-stroke, power of 7,400 kW at 129 rpm) and a high-speed marine diesel engine (4-stroke, power of 403 kW at 1,800 rpm) under the E3 test cycle. The results revealed that FSN ranged from 0.01 to 0.16 and 1.09 to 1.35 for the low- and high-speed engines, respectively. The measurement equipment (smoke meter) ranges from 0 to 10 FSN; considering this measurement range, the FSN values from low-speed engines are near the detection limit (0.002 FSN or ~0.02 mg/m3). These results suggest that the measurement range of the smoke meter should be adapted to enhance the measurement accuracy of marine BC and the evaluation of the performance of BC abatement technologies.

Keywords: black carbon, filter smoke number, international maritime organization, marine diesel engine (two and four stroke), particulate matter

Procedia PDF Downloads 256
21598 Sinhala Sign Language to Grammatically Correct Sentences using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach utilizes a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactic structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired, and it lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices, and efforts will be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
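
A minimal sketch of the stage-1 LSTM sign classifier is given below, assuming Keras; the input shape (30 frames of 63 keypoint features) and vocabulary size are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of the stage-1 LSTM gesture classifier; shapes, vocabulary
# size, and the dummy data are assumptions for illustration only.
import numpy as np
from tensorflow.keras import layers, models

num_signs = 50                        # hypothetical SSL vocabulary size
model = models.Sequential([
    layers.Input(shape=(30, 63)),     # 30 frames x 63 keypoint features (assumed)
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_signs, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(8, 30, 63).astype("float32")    # dummy gesture sequences
y = np.random.randint(0, num_signs, size=8)
model.fit(X, y, epochs=1, verbose=0)               # placeholder training call
```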

Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT

Procedia PDF Downloads 86
21597 K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors

Authors: Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang

Abstract:

Matching high-dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the dimension of the feature can be reduced by simplifying the prior knowledge of homography, matching accuracy may degrade as a tradeoff. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without resorting to a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
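
The core idea can be sketched as follows, assuming SciPy and synthetic SIFT-like 128-D descriptors: one image's descriptors are clustered with k-means, and each query descriptor is then compared only against members of its nearest cluster rather than against the whole set.

```python
# Hedged sketch of k-means-accelerated descriptor matching with synthetic
# 128-D (SIFT-like) descriptors; k and the data are illustrative.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
db = rng.random((1000, 128)).astype(np.float32)      # descriptors of image B
queries = rng.random((50, 128)).astype(np.float32)   # descriptors of image A

centroids, labels = kmeans2(db, k=16, minit="++", seed=1)

matches = []
for qi, q in enumerate(queries):
    c = int(np.argmin(np.linalg.norm(centroids - q, axis=1)))  # nearest cluster
    members = np.where(labels == c)[0]
    if members.size == 0:                                      # guard: empty cluster
        members = np.arange(len(db))
    best = members[np.argmin(np.linalg.norm(db[members] - q, axis=1))]
    matches.append((qi, int(best)))

print(matches[:5])   # candidate correspondences, e.g. for RANSAC verification
```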

Keywords: feature matching, k-means clustering, SIFT, RANSAC

Procedia PDF Downloads 345
21596 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management

Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang

Abstract:

A construction superintendent needs to know not only the quantities of cost items or materials completed each day, to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforced steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforced steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of reporting the quantities completed each day needs improvement in both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process, and virtual reality (VR) makes it possible to view a building from a first-person viewpoint. Thus, this study proposes an innovative system integrating VR (using 'Unity') and BIM (using 'Revit') to extract quantities in support of the above daily construction management tasks. The use of VR allows a system user to be present in a virtual building and assess construction progress more objectively from the office. The VR- and BIM-based system is also supported by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can walk through the BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite, and then specify a percentage of completion (e.g., 20%, 50% or 100%) for each identified object based on the jobsite observation. The system then generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
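
The daily quantity rule described above reduces to a simple multiplication, sketched here with invented object names and quantities (not data from the Taiwan case study):

```python
# Hedged sketch: completed quantity = completion percentage (from the VR
# walkthrough) x full BIM quantity; all names and numbers are invented.
full_quantities = {
    "column_C12": {"concrete_m3": 4.2, "rebar_kg": 310.0},
    "slab_2F":    {"concrete_m3": 55.0, "rebar_kg": 4100.0},
}
observed = {"column_C12": 1.0, "slab_2F": 0.5}   # completion ratios per object

daily_report = {
    obj: {item: round(qty * observed.get(obj, 0.0), 2)
          for item, qty in items.items()}
    for obj, items in full_quantities.items()
}
print(daily_report)   # quantities completed today, per object and cost item
```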

Keywords: building information model, construction management, quantity takeoffs, virtual reality

Procedia PDF Downloads 123
21595 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could propose the best methods for detecting normal signals from abnormal ones. The data are from both genders, the recording time varies from several seconds to several minutes, and all data are labeled normal or abnormal. Because of the limited accuracy and duration of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. A new idea was presented in the processing stage of this paper: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The simulation results in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal signals from patient signals gave better performance. Today, research aims at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some information in this signal is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
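
For concreteness, a minimal sketch of HRV feature extraction from R-peak times follows, assuming NumPy; it computes time-domain (linear) statistics and the SD1/SD2 descriptors of the return (Poincare) map as simple nonlinear features, standing in for the paper's full feature set.

```python
# Hedged sketch of linear and nonlinear HRV features from synthetic R-peak
# times; the paper's full preprocessing (Kalman filtering, Pan-Tompkins
# detection) and classifiers are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
r_peaks = np.cumsum(0.8 + 0.05 * rng.standard_normal(200))  # R-peak times (s)
rr = np.diff(r_peaks) * 1000.0                              # RR intervals (ms)

mean_rr = rr.mean()                                # linear (time-domain)
sdnn = rr.std(ddof=1)                              # linear
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))         # linear

x, y = rr[:-1], rr[1:]                             # return map: RR[n+1] vs RR[n]
sd1 = np.std((y - x) / np.sqrt(2), ddof=1)         # nonlinear (short-term)
sd2 = np.std((y + x) / np.sqrt(2), ddof=1)         # nonlinear (long-term)

features = [mean_rr, sdnn, rmssd, sd1, sd2]        # input vector for MLP/SVM
print([round(f, 2) for f in features])
```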

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 249
21594 An Intelligent Steerable Drill System for Orthopedic Surgery

Authors: Wei Yao

Abstract:

A steerable and flexible drill is needed in orthopaedic surgery. For example, osteoarthritis is a common condition affecting millions of people, for which joint replacement is an effective treatment that improves the quality and duration of life in elderly sufferers. Conventional surgery is not very accurate, and computer navigation and robotics can help increase the accuracy. In Total Hip Arthroplasty (THA), for example, robotic surgery is currently practiced mainly on the acetabular side, assisting cup positioning and orientation, whereas femoral stem positioning mostly uses a hand-rasping method rather than robots for accurate positioning. Another case for using a flexible drill in surgery is Anterior Cruciate Ligament (ACL) reconstruction. The majority of ACL reconstruction failures are caused primarily by technical mistakes and surgical errors in drilling the anatomical bone tunnels required to accommodate the ligament graft. The proposed new steerable drill system performs orthopedic surgery through curved tunneling, leading to better accuracy and patient outcomes. It may reduce intra-operative fractures, dislocations, early failure, and leg length discrepancy by making possible a new level of precision. The technology is based on a robotically assisted, steerable, hand-held flexible drill with a drill-tip tracking device and a multi-modality navigation system. The critical differentiator is that this robotically assisted surgical technology allows the surgeon to prepare patient-specific and more anatomically correct curved bone tunnels during orthopedic surgery, rather than drilling the straight holes that existing surgical tools produce. The flexible, steerable drill and its navigation system for femoral milling in total hip arthroplasty were tested on sawbones to evaluate the accuracy of the position and orientation of the femoral stem relative to the pre-operative plan. The data show that the accuracy of the navigation system is better than that of the traditional hand-rasping method.

Keywords: navigation, robotic orthopedic surgery, steerable drill, tracking

Procedia PDF Downloads 156
21593 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate using the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS accuracy can be degraded at any time, and the signal may even be lost; in addition, there is the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem: if GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. To calculate the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). Thus, if the homologous points can be identified automatically in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors on the order of 0.5 m in positioning and 0.6° in camera attitude were verified, so navigation through imagery can reach accuracies equal to or better than GNSS receivers without differential correction. Navigating through imagery is therefore a good alternative for enabling autonomous navigation.
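
The resection step can be sketched with a generic PnP solver, as below; OpenCV's solvePnP stands in for the photogrammetric spatial resection, and all coordinates, the focal length, and the principal point are invented local-frame values.

```python
# Hedged sketch of camera resection from homologous points: object-space
# coordinates (X, Y from the orthophoto, Z from the DSM, here in a local
# frame) and image-space pixel positions. All values are illustrative.
import numpy as np
import cv2

object_pts = np.array([[ 10.0,  50.0, 12.3],
                       [ 90.0,  20.0, 15.1],
                       [ 40.0, 110.0,  9.7],
                       [120.0, 140.0, 18.4],
                       [ 70.0,  80.0, 11.0],
                       [ 25.0, 135.0, 14.2]], dtype=np.float64)
image_pts = np.array([[1024.5,  768.2],
                      [1850.1,  910.4],
                      [1330.7,  240.9],
                      [2105.3,  180.6],
                      [1602.8,  505.5],
                      [1198.4,  152.3]], dtype=np.float64)

f = 3500.0                                 # focal length in pixels (assumed)
K = np.array([[f, 0.0, 2000.0],
              [0.0, f, 1500.0],
              [0.0, 0.0, 1.0]])            # assumed principal point

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                 # attitude of the camera
camera_position = (-R.T @ tvec).ravel()    # perspective center in object frame
print(ok, camera_position)
```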

Keywords: autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS

Procedia PDF Downloads 174
21592 High-Resolution Computed Tomography Imaging Features during Pandemic 'COVID-19'

Authors: Sahar Heidary, Ramin Ghasemi Shayan

Abstract:

Since the emergence of novel coronavirus (2019-nCoV) pneumonia, chest high-resolution computed tomography (HRCT) has been one of the main investigative tools. To achieve timely and accurate diagnosis, defining the radiological features of the infection is of great value. The purpose of this review was to consider the imaging manifestations of early-stage coronavirus disease 2019 (COVID-19) and to provide an imaging basis for the early detection of suspected cases and stratified intervention. The positive predictive value of HRCT was 85%, and sensitivity was 73% for all patients; total accuracy was 68%. There was no significant change in these values between symptomatic and asymptomatic persons, and these results were also independent of the time elapsed from the onset of symptoms or from exposure. We therefore suggest that HRCT is an excellent adjunct for the early identification of COVID-19 pneumonia in both symptomatic and asymptomatic individuals, in addition to its role as a predictive gauge of COVID-19 pneumonia. Patients underwent non-contrast HRCT chest examinations, and images were reconstructed in a thin 1.25 mm lung window. Images were evaluated for the presence of lung lesions, and a CT severity score was assigned to each patient based on the number of lung lobes involved.

Keywords: COVID-19, radiology, respiratory diseases, HRCT

Procedia PDF Downloads 134
21591 Audio-Visual Recognition Based on Effective Model and Distillation

Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin

Abstract:

In recent years, audio-visual recognition has shown great potential in strong-noise environments. Existing audio-visual recognition methods have explored approaches based on ResNet and feature fusion. However, on the one hand, ResNet occupies a large amount of memory, restricting application in engineering; on the other hand, feature-level fusion introduces interference in high-noise environments. To solve these problems, we propose an effective framework with bidirectional distillation. First, in view of its good feature-extraction performance, we chose the lightweight EfficientNet model as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation in decision-level fusion. Our experimental results are based on a multimodal dataset from 24 volunteers. The lipreading accuracy of our framework increased by 2.3% compared with existing systems, and the framework improves audio-visual fusion in high-noise environments compared with audio-only recognition.
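
The decision-level bidirectional distillation can be sketched as a symmetric pair of softened KL terms added to the usual cross-entropy, as below in PyTorch; the temperature, weighting, and branch architectures are assumptions, not the paper's exact loss.

```python
# Hedged sketch of a bidirectional distillation loss: each branch's softened
# logits act as the teacher signal for the other branch.
import torch
import torch.nn.functional as F

def bidirectional_distill_loss(audio_logits, visual_logits, labels, T=2.0, alpha=0.5):
    # Supervised terms for both branches
    ce = F.cross_entropy(audio_logits, labels) + F.cross_entropy(visual_logits, labels)
    # Audio learns from visual ...
    kl_av = F.kl_div(F.log_softmax(audio_logits / T, dim=1),
                     F.softmax(visual_logits / T, dim=1).detach(),
                     reduction="batchmean") * T * T
    # ... and visual learns from audio
    kl_va = F.kl_div(F.log_softmax(visual_logits / T, dim=1),
                     F.softmax(audio_logits / T, dim=1).detach(),
                     reduction="batchmean") * T * T
    return ce + alpha * (kl_av + kl_va)

a = torch.randn(8, 10, requires_grad=True)   # dummy audio-branch logits
v = torch.randn(8, 10, requires_grad=True)   # dummy visual-branch logits
y = torch.randint(0, 10, (8,))
print(bidirectional_distill_loss(a, v, y).item())
```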

Keywords: lipreading, audio-visual, EfficientNet, distillation

Procedia PDF Downloads 121
21590 A Method to Enhance the Accuracy of Digital Forensic in the Absence of Sufficient Evidence in Saudi Arabia

Authors: Fahad Alanazi, Andrew Jones

Abstract:

Digital forensics seeks to achieve the successful investigation of digital crimes by obtaining acceptable evidence from digital devices that can be presented in a court of law. The digital forensic investigation is therefore normally performed through a number of phases in order to achieve the required level of accuracy in the investigation processes. Since 1984, a number of models and frameworks have been developed to support the digital investigation processes. In this paper, we review a number of the investigation processes that have been produced over the years and introduce a proposed digital forensic model based on the scope of the Saudi Arabian investigation process. The proposed model has been integrated with existing models of the investigation processes and adds a new phase to deal with situations where the evidence is initially insufficient.

Keywords: digital forensics, process, metadata, traceback, Saudi Arabia

Procedia PDF Downloads 345
21589 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tunable false-positive and false-negative rates, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite supported the prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 95
21588 Automatic Number Plate Recognition System Based on Deep Learning

Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used for safety, security, and commercial purposes. Accordingly, several methods and techniques have been developed to achieve better accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS). In addition, it proposes an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. To recognize the number of the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolutional layers, two max-pooling layers, and six fully connected layers. The model was trained on a number-image database on the Jetson TX2 NVIDIA target. The achieved accuracy is 95.84%.
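
The stated architecture (four convolutional layers, two max-pooling layers, six fully connected layers) can be sketched in Keras as follows; the filter counts, input size, and class count are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the described CNN for character recognition; only the
# layer counts follow the abstract, all sizes are assumed.
from tensorflow.keras import layers, models

num_classes = 36  # assumed: digits 0-9 plus letters A-Z
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),                        # assumed character crop size
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),        # 6th fully connected layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```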

Keywords: ANPR, CS, CNN, deep learning, NPL

Procedia PDF Downloads 292
21587 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer

Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın

Abstract:

We present a new viscometer based on a microfluidic chip with elastic high-aspect-ratio micropillar arrays. The displacement of pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC-based rapid prototyping is used to fabricate molds for the microfluidic chips. The chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips enabling real-time observation and control of viscosity changes in biological or chemical reactions.

Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer

Procedia PDF Downloads 236
21586 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different multi-criteria decision-making methods, such as ANN, GA, fuzzy logic, and AHP, have previously been used to predict supplier performance, but the 'black-box' characteristic of these methods remains a major concern. Therefore, the primary objective of this paper is to implement an artificial-intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is first applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method in terms of root mean square error (RMSE) and the coefficient of determination (R²). The results of a case study conducted in Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with the ANN, gene expression programming predicts supplier performance significantly better with respect to the RMSE and R² values. Moreover, using GEP, a mathematical function was derived, addressing the black-box issue of ANN in modeling performance prediction.
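
The two comparison metrics named above are straightforward to compute; a sketch with placeholder values (not SAPCO data) follows.

```python
# Hedged sketch of the RMSE and R-squared comparison between observed
# supplier performance and each model's test-set predictions.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_obs = [0.82, 0.64, 0.91, 0.55, 0.73]   # observed performance (placeholder)
y_gep = [0.80, 0.66, 0.89, 0.57, 0.75]   # GEP predictions (placeholder)
y_ann = [0.77, 0.70, 0.84, 0.60, 0.69]   # ANN predictions (placeholder)

for name, pred in [("GEP", y_gep), ("ANN", y_ann)]:
    print(name, "RMSE:", round(rmse(y_obs, pred), 4),
          "R2:", round(r_squared(y_obs, pred), 4))
```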

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 408
21585 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in the time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 43
21584 Evaluate the Possibility of Using ArcGIS Basemaps as GCP for Large Scale Maps

Authors: Jali Octariady, Ida Herliningsih, Ade K. Mulyana, Annisa Fitria, Diaz C. K. Yuwana

Abstract:

Awareness of the importance of large-scale maps for the development of a country is growing in all walks of life, especially among governments in Indonesia. Various parties, especially local governments throughout Indonesia, have demanded the immediate availability of 1:5000 large-scale maps for regional development. In fact, however, 1:5000 maps are available for less than 5% of the entire territory of Indonesia. The unavailability of precise ground control points (GCPs) across the entire territory of Indonesia is one of the causes of this slow availability. This research was conducted to find an alternative solution to the problem, assessing the accuracy of ArcGIS basemap coordinates when used as GCPs for creating maps at a scale of 1:5000. The study was conducted by comparing GCP coordinates from field surveys using geodetic GPS with coordinates from ArcGIS basemaps at various locations in Indonesia. The study areas are Lombok Island, Kupang City, Surabaya City, and Kediri District. The differences in coordinate values serve as the basis for assessing the accuracy of the ArcGIS basemap coordinates. The results across the study areas show that the coordinate differences vary from millimeters (mm) to meters (m), which demonstrates the inconsistent quality of ArcGIS basemap coordinates. This inconsistency indicates that ArcGIS basemap coordinates cannot be used as GCPs for large-scale mapping of the entire territory of Indonesia.
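
The comparison reduces to per-point horizontal offsets between the surveyed and basemap coordinates, as in this sketch with invented UTM-style values:

```python
# Hedged sketch of the GCP comparison: horizontal offsets between geodetic
# GPS survey coordinates and ArcGIS basemap coordinates, plus an overall
# RMSE. Coordinates are invented, not the survey data.
import math

gps     = [(412034.512, 9098321.774), (415877.203, 9101240.118)]
basemap = [(412034.905, 9098322.030), (415876.410, 9101239.300)]

offsets = [math.hypot(xg - xb, yg - yb)
           for (xg, yg), (xb, yb) in zip(gps, basemap)]
rmse = math.sqrt(sum(d * d for d in offsets) / len(offsets))

for i, d in enumerate(offsets, 1):
    print(f"GCP {i}: horizontal offset = {d:.3f} m")
print(f"RMSE = {rmse:.3f} m")
```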

Keywords: accuracy, ArcGIS base maps, GCP, large scale maps

Procedia PDF Downloads 360
21583 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution; therefore, monitoring and predicting PM₂.₅ concentrations in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement from using AOD as an input was tested; and the models were evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while that of the CNN-LSTM model slightly decreased. We thus propose two novel deep learning models for PM₂.₅ concentration prediction in UB, pioneering the use of AOD data from H8 and demonstrating that including AOD input data improves the performance of both proposed models.
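
A minimal Keras sketch of a CNN-LSTM regressor of the kind described is shown below; the window length, feature count, and layer sizes are assumptions, not the paper's configuration.

```python
# Hedged sketch of a CNN-LSTM for hourly PM2.5 regression: 1-D convolutions
# over a window of hourly inputs (AOD, meteorology, past PM2.5), an LSTM,
# and a regression head. All shapes are assumed.
from tensorflow.keras import layers, models

window, n_features = 24, 6   # e.g., 24 h of AOD + meteorological + PM2.5 inputs
model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # next-hour PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # MAE/RMSE used for evaluation
model.summary()
```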

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 37
21582 Comparison of Various Classification Techniques Using WEKA for Colon Cancer Detection

Authors: Beema Akbar, Varun P. Gopi, V. Suresh Babu

Abstract:

Colon cancer causes the deaths of about half a million people every year. The common method of detection is histopathological tissue analysis, which imposes fatigue and a heavy workload on the pathologist. A novel method is proposed that combines structural and statistical pattern recognition for the detection of colon cancer. This paper presents a comparison among different classifiers, namely Multilayer Perceptron (MLP), Sequential Minimal Optimization (SMO), Bayesian Logistic Regression (BLR), and K-star, using classification accuracy and error rate based on the percentage-split method. The results show that the best algorithm in WEKA is the MLP classifier, with an accuracy of 83.333% and a kappa statistic of 0.625. The MLP classifier, which has a lower error rate, is preferred for its more powerful classification capability.

Keywords: colon cancer, histopathological image, structural and statistical pattern recognition, multilayer perceptron

Procedia PDF Downloads 565
21581 Particle Filter Implementation of a Non-Linear Dynamic Fall Model

Authors: T. Kobayashi, K. Shiba, T. Kaburagi, Y. Kurihara

Abstract:

For the elderly living alone, falls can be a serious problem in daily life. Some elderly people are unable to stand up without the assistance of a caregiver, and they may become unconscious after a fall, which can lead to serious aftereffects such as hypothermia, dehydration, and sometimes even death. We treat the subject as an inverted pendulum and model its angle from the equilibrium position and its angular velocity. As the model is non-linear, we implement the filtering with a particle filter, which can estimate the true states of a non-linear model. To evaluate the accuracy of the particle filter estimates, we calculate the root mean square error (RMSE) between the estimated angle/angular velocity and the true values generated by simulation. The experimental results give best-case RMSEs of 0.0141 rad and 0.1311 rad/s for the angle and angular velocity, respectively.
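
A compact sketch of the approach follows: the state is (angle, angular velocity), the nonlinear inverted-pendulum dynamics propagate a particle cloud, and noisy angle measurements reweight and resample it. All parameters and noise levels are illustrative, not the paper's microwave-Doppler setup.

```python
# Hedged sketch of a bootstrap particle filter for the inverted-pendulum
# fall model; dynamics: omega' = (g/L) sin(theta), theta' = omega.
import numpy as np

g, L, dt, N = 9.81, 1.0, 0.01, 1000
rng = np.random.default_rng(0)

def step(state, noise_scale=0.0):
    th, om = state[..., 0], state[..., 1]
    om = om + (g / L) * np.sin(th) * dt + noise_scale * rng.normal(size=om.shape)
    th = th + om * dt
    return np.stack([th, om], axis=-1)

true = np.array([0.05, 0.0])                          # slight lean: a fall begins
particles = rng.normal([0.0, 0.0], [0.1, 0.1], size=(N, 2))

for _ in range(100):
    true = step(true)
    z = true[0] + rng.normal(0.0, 0.02)               # noisy angle measurement
    particles = step(particles, noise_scale=0.1)      # propagate with process noise
    w = np.exp(-0.5 * ((particles[:, 0] - z) / 0.02) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)] # resample

print("true:", true, "estimate:", particles.mean(axis=0))
```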

Keywords: fall, microwave Doppler sensor, non-linear dynamics model, particle filter

Procedia PDF Downloads 200
21580 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector; this design is suboptimal from the standpoint of coping with GNSS outliers/outages. The tightly coupled GNSS/INS navigation filter mixes GNSS pseudoranges and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely Cubature and High-Degree Cubature Kalman Filtering methods. On the basis of previous results for state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity than the Gauss-Hermite Quadrature Kalman Filter (GHQKF); the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed to be available through correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, the high-order CKF based on spherical-radial cubature rules developed here at the fifth degree. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the state estimation results are observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High-Degree Cubature Kalman Filter.
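
For reference, the third-degree spherical-radial cubature rule at the core of the CKF can be sketched as below; the toy dynamics and covariances are invented, and the higher-degree rules used in the paper add further points beyond these 2n.

```python
# Hedged sketch of the CKF time update with the third-degree
# spherical-radial rule: 2n cubature points at +/- sqrt(n) along the
# columns of the Cholesky factor, propagated through the dynamics f.
import numpy as np

def cubature_predict(x, P, f, Q):
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])      # 2n unit points
    pts = x[:, None] + S @ xi                                 # cubature points
    prop = np.column_stack([f(pts[:, j]) for j in range(2 * n)])
    x_pred = prop.mean(axis=1)                                # equal weights 1/(2n)
    d = prop - x_pred[:, None]
    P_pred = d @ d.T / (2 * n) + Q
    return x_pred, P_pred

f = lambda s: np.array([s[0] + 0.1 * s[1], 0.99 * s[1]])      # toy dynamics
x0 = np.array([1.0, 0.5])
P0 = np.diag([0.1, 0.2])
Q = 0.01 * np.eye(2)
print(cubature_predict(x0, P0, f, Q))
```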

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 270
21579 Market Index Trend Prediction using Deep Learning and Risk Analysis

Authors: Shervin Alaei, Reza Moradi

Abstract:

Trading in financial markets is subject to risk due to their high volatility. Here, using an LSTM neural network and some risk-based feature engineering, we developed a method that can accurately predict trends of the Tehran Stock Exchange market index a few days in advance. Our test results show that the proposed method, with an average prediction accuracy of more than 94%, is superior to other common machine learning algorithms. To the best of our knowledge, this is the first work incorporating deep learning and risk factors to accurately predict market trends.

Keywords: deep learning, LSTM, trend prediction, risk management, artificial neural networks

Procedia PDF Downloads 138
21578 Discovering Word-Class Deficits in Persons with Aphasia

Authors: Yashaswini Channabasavegowda, Hema Nagaraj

Abstract:

Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited for the study. The research includes both behavioural and objective measures to assess word-class deficits in PWA. Objective: The main objective of the research is to identify the word-class deficits seen in persons with aphasia using various speech-eliciting tasks. Method: The study was conducted in the L1 of the participants, Kannada. The Action Naming Test and the Boston Naming Test, adapted to Kannada, were administered to the participants, and a picture description task was carried out. The picture-word matching task was carried out using E-Prime software (version 2) to measure the accuracy and reaction time of the identification of verbs and nouns. The stimuli were presented through auditory and visual modes. Data were analysed to identify errors in the naming of nouns versus verbs on the Boston Naming Test and the Action Naming Test, as well as in the use of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed significantly different sentence structure compared to age-matched NTI. PWA also showed impairment in syntactic measures in the picture description task, with fewer grammatically correct sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns than verbs. PWA had poorer accuracy and slower reaction times in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were noticed irrespective of the cause of the aphasia.

Keywords: nouns, verbs, aphasia, naming, description

Procedia PDF Downloads 94
21577 Application of Adaptive Neuro Fuzzy Inference Systems Technique for Modeling of Postweld Heat Treatment Process of Pressure Vessel Steel ASTM A516 Grade 70

Authors: Omar Al Denali, Abdelaziz Badi

Abstract:

ASTM A516 Grade 70 steel is a suitable material for the fabrication of boiler pressure vessels working in moderate- and lower-temperature services, and it has good weldability and excellent notch toughness. Post-weld heat treatment (PWHT), or stress-relieving heat treatment, has a significant effect in avoiding the martensite transformation, which results in high hardness and can lead to cracking in the heat-affected zone (HAZ). An adaptive neuro-fuzzy inference system (ANFIS) was implemented to predict the material tensile strength in post-weld heat treatment (PWHT) experiments. The ANFIS models gave excellent predictions, and the comparison was carried out based on the mean absolute percentage error between the predicted values and the experimental values. The ANFIS model gave a mean absolute percentage error of 0.556%, which confirms the high accuracy of the model.
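
The quoted validation metric is the mean absolute percentage error; a sketch with placeholder strength values (not the experimental data) follows.

```python
# Hedged sketch of the MAPE computation between measured tensile strengths
# and ANFIS predictions; values are placeholders.
def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

measured  = [512.0, 498.5, 505.2, 520.8]   # tensile strength, MPa (placeholder)
predicted = [509.6, 501.1, 502.4, 523.0]   # ANFIS output, MPa (placeholder)
print(f"MAPE = {mape(measured, predicted):.3f} %")
```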

Keywords: prediction, post-weld heat treatment, adaptive neuro-fuzzy inference system, mean absolute percentage error

Procedia PDF Downloads 144
21576 The Simple Two-Step Polydimethylsiloxane (PDMS) Transferring Process for High Aspect Ratio Microstructures

Authors: Shaoxi Wang, Pouya Rezai

Abstract:

High-aspect-ratio features are necessary parts of complex microstructures. Some available methods for achieving high aspect ratios require expensive materials or complex processes; with others, it is difficult to realize even simple high-aspect-ratio structures. This paper presents a simple and cheap two-step polydimethylsiloxane (PDMS) transfer process to obtain high-aspect-ratio single pillars, which only requires covering the PDMS mold with a Brij 52 surfactant solution. The experimental results demonstrate that the method is efficient and effective.

Keywords: high aspect ratio, microstructure, PDMS, Brij

Procedia PDF Downloads 249