Search results for: measuring accuracy

4831 System Response of a Variable-Rate Aerial Application System

Authors: Daniel E. Martin, Chenghai Yang

Abstract:

Variable-rate aerial application systems are becoming more readily available; however, aerial applicators typically use them only for constant-rate application of materials, letting the systems compensate for upwind and downwind ground speed variations. Much of the resistance to variable-rate aerial application system adoption in the U.S. pertains to applicators' trust in the systems to turn on and off automatically as desired. The objectives of this study were to evaluate a commercially available variable-rate aerial application system under field conditions and to demonstrate both the response and the accuracy of the system against desired application rate inputs. Oats were planted in a 35-acre fallow field during the winter months to establish a uniform green backdrop in early spring. A binary (on/off) prescription application map was generated, and a variable-rate aerial application of glyphosate was made to the field. Airborne multispectral imagery taken before and two weeks after the application documented the actual field deposition and efficacy of the glyphosate. Compared against the prescription application map, these data provided application system response and accuracy information. The results will be useful for quantifying and documenting the response and accuracy of a commercially available variable-rate aerial application system, so that aerial applicators can be more confident in these systems, their use can increase, and full advantage can be taken of what aerial variable-rate technologies have to offer.
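
In practice, the accuracy assessment described above reduces to a cell-by-cell comparison of the binary prescription map with the deposition map classified from the imagery. A minimal Python sketch of that comparison; the rasters, their alignment, and the error definitions are illustrative assumptions, not the study's published data:

```python
import numpy as np

# Hypothetical rasters aligned cell by cell: 1 = spray on, 0 = spray off.
prescription = np.array([[1, 1, 0, 0],
                         [1, 0, 0, 1]])    # binary prescription map
observed = np.array([[1, 1, 0, 1],
                     [1, 0, 0, 1]])        # deposition classified from imagery

overall_accuracy = (prescription == observed).mean()  # fraction of matching cells

# Commission: sprayed where the map said off; omission: missed where it said on.
commission = ((observed == 1) & (prescription == 0)).sum() / (prescription == 0).sum()
omission = ((observed == 0) & (prescription == 1)).sum() / (prescription == 1).sum()
print(f"accuracy={overall_accuracy:.2%} commission={commission:.2%} omission={omission:.2%}")
```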

Keywords: variable-rate, aerial application, remote sensing, precision application

Procedia PDF Downloads 469
4830 Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach

Authors: Abe Degale D., Cheng Jian

Abstract:

When creating automated video surveillance systems, violent action recognition is crucial. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity, and researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. However, the Hockey dataset's videos, with their surveillance camera motion, make it challenging for such algorithms to learn discriminating features. Deep representation-based methods have shown success in image recognition and human activity detection. For detecting violent images and identifying aggressive human behaviours, this research proposes a deep representation-based model built on transfer learning. The results show that the suggested approach surpasses state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy on the Hockey and Movies datasets, respectively.
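
A minimal PyTorch sketch of the transfer-learning idea: an ImageNet-pretrained backbone is reused and only a new two-class (violent / non-violent) head is trained. The paper pairs this idea with Faster R-CNN; the backbone choice, hyperparameters, and random stand-in frames below are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse pretrained features; train only a new classification head.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                            # freeze pretrained layers
backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(8, 3, 224, 224)                   # stand-in for video frames
labels = torch.randint(0, 2, (8,))                     # 1 = violent, 0 = not
loss = criterion(backbone(frames), labels)
loss.backward()
optimizer.step()
```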

Keywords: violence detection, Faster R-CNN, transfer learning, surveillance video

Procedia PDF Downloads 95
4829 Some Accuracy Related Aspects in Two-Fluid Hydrodynamic Sub-Grid Modeling of Gas-Solid Riser Flows

Authors: Joseph Mouallem, Seyed Reza Amini Niaki, Norman Chavez-Cussy, Christian Costa Milioli, Fernando Eduardo Milioli

Abstract:

Sub-grid closures for filtered two-fluid models (fTFM), useful in large scale simulations (LSS) of riser flows, can be derived from highly resolved simulations (HRS) with microscopic two-fluid modeling (mTFM). Accurate sub-grid closures require accurate mTFM formulations as well as accurate correlation of the relevant filtered parameters to suitable independent variables. This article deals with both of those issues. The accuracy of mTFM is addressed by assessing the impact of gas sub-grid turbulence on HRS filtered predictions. A gas-turbulence-like effect is artificially inserted by means of a stochastic forcing procedure implemented in physical space over the momentum conservation equation of the gas phase. The correlation issue is addressed by introducing a three-filtered-variable correlation analysis (three-marker analysis) performed under a variety of macro-scale conditions typical of risers. While the more elaborate correlation procedure clearly improved accuracy, accounting for gas sub-grid turbulence had no significant impact on the predictions.

Keywords: fluidization, gas-particle flow, two-fluid model, sub-grid models, filtered closures

Procedia PDF Downloads 119
4828 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity of work and saves time. Data and images are captured at regular intervals by satellite remote sensing systems; the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agriculture and forests are all part of satellite image classification. One of the biggest challenges data scientists face while classifying satellite images is finding, among the available options, the classification algorithm able to classify images with the utmost accuracy. To categorize satellite images, which is difficult due to the sheer volume of data, many researchers are turning to deep learning algorithms. Because the CNN algorithm gives high accuracy in image recognition problems and automatically detects important features without human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. The algorithms ANN and CNN are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
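
A minimal Keras sketch of the two models being compared, assuming SAT-4-style inputs (28x28 patches, 4 spectral bands, 4 land-cover classes); the layer sizes are illustrative, not the project's exact architectures:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# CNN: convolutional features are learned from the 4-band patches directly.
cnn = models.Sequential([
    layers.Input(shape=(28, 28, 4)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),   # one probability per class
])

# ANN baseline: dense layers over the flattened pixel values.
ann = models.Sequential([
    layers.Input(shape=(28 * 28 * 4,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),
])

# Both models are compiled with the same metrics used for the comparison.
for model in (cnn, ann):
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
```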

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 152
4827 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for determining petroleum properties. The TBP curve provides information about the performance of petroleum in terms of its cuts, but the experiment takes several days to perform. Simulation software can determine these properties faster by calculating the distillation curve from limited information about the crude oil. In order to evaluate the accuracy of distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (software vs. ASTM) and 0.2%-5.1% (software vs. Spaltrohr).

Keywords: distillation curve, petroleum distillation, simulation, true boiling point curve

Procedia PDF Downloads 433
4826 The Synergistic Effects of Blockchain and AI on Enhancing Data Integrity and Decision-Making Accuracy in Smart Contracts

Authors: Sayor Ajfar Aaron, Sajjat Hossain Abir, Ashif Newaz, Mushfiqur Rahman

Abstract:

Investigating the convergence of blockchain technology and artificial intelligence, this paper examines their synergistic effects on data integrity and decision-making within smart contracts. By implementing AI-driven analytics on blockchain-based platforms, the research identifies improvements in automated contract enforcement and decision accuracy. The paper presents a framework that leverages AI to enhance transparency and trust while blockchain ensures immutable record-keeping, culminating in significantly optimized operational efficiencies in various industries.

Keywords: artificial intelligence, blockchain, data integrity, smart contracts

Procedia PDF Downloads 45
4825 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision

Authors: Lianzhong Zhang, Chao Huang

Abstract:

Sea-land segmentation is a basic step in many tasks, such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor accuracy, and their parameter adjustments are cumbersome and difficult to match to actual needs. Current sea-land segmentation also relies on traditional deep learning models based on Convolutional Neural Networks (CNN). The transformer architecture has achieved great success in the field of natural images, but its application to radar images has been less studied. This paper therefore proposes a sea-land segmentation method based on the transformer architecture with strengthened edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias, and it introduces an additional edge supervision branch. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. On the Gaofen-3 satellite image dataset, the experimental results show that the proposed method effectively improves the accuracy of sea-land segmentation, especially at sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 score reach 96.36%, 84.54%, 99.74%, and 98.05%, respectively, which are superior to those of mainstream segmentation models and of high practical value.
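
A minimal PyTorch sketch of the attention mechanism described above: single-head self-attention over a square window with a learned relative position bias modulated by a gating parameter. The window size, single head, and gating form are assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class GatedRelPosAttention(nn.Module):
    """Self-attention over a window x window patch grid with a gated,
    learned relative position bias (a sketch, not the paper's model)."""
    def __init__(self, dim, window=7):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.scale = dim ** -0.5
        # one learnable bias per relative (dy, dx) offset inside the window
        self.rel_bias = nn.Parameter(torch.zeros((2 * window - 1) ** 2))
        coords = torch.stack(torch.meshgrid(
            torch.arange(window), torch.arange(window), indexing="ij")).flatten(1)
        rel = coords[:, :, None] - coords[:, None, :]               # (2, N, N)
        rel = (rel[0] + window - 1) * (2 * window - 1) + (rel[1] + window - 1)
        self.register_buffer("rel_index", rel)                      # (N, N)
        self.gate = nn.Parameter(torch.ones(1))                     # gating scalar

    def forward(self, x):                       # x: (batch, window*window, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.gate * self.rel_bias[self.rel_index]     # gated bias
        return attn.softmax(dim=-1) @ v

x = torch.randn(2, 49, 64)                  # two 7x7 windows of 64-dim tokens
print(GatedRelPosAttention(64)(x).shape)    # torch.Size([2, 49, 64])
```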

Keywords: SAR, sea-land segmentation, deep learning, transformer

Procedia PDF Downloads 169
4824 Role of Pulp Volume Method in Assessment of Age and Gender in Lucknow, India, an Observational Study

Authors: Anurag Tripathi, Sanad Khandelwal

Abstract:

Age and gender determination are required in forensics for victim identification. Secondary dentine is deposited throughout life, resulting in decreased pulp volume and size, and evaluating pulp volume using Cone Beam Computed Tomography (CBCT) is a noninvasive method to estimate the age and gender of an individual. Aims/Objectives: The study was conducted to estimate age and determine sex by measuring tooth pulp volume with the help of CBCT. An observational study of one year's duration on CBCT data of individuals was conducted in Lucknow. Maxillary central incisors (CI) and maxillary canines (C) of randomly selected samples were assessed for pulp volume using software. Statistical analysis: chi-square test, arithmetic mean, standard deviation, Pearson's correlation, and linear and logistic regression analysis. Results: The CBCT data of ninety individuals with an age range of 18-70 years were evaluated for pulp volume of the central incisor and canine (CI & C). The Pearson correlation coefficients between tooth pulp volume (CI & C) and chronological age indicated that pulp volume decreases with age. Validation of the equations for sex determination showed higher prediction accuracy for CI (56.70%) and lower for C (53.30%). Conclusion: Pulp volume obtained from CBCT is a reliable indicator for age estimation and gender prediction.
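
A minimal sketch of the two statistical models named above: linear regression of age on pulp volume and logistic regression for gender. The volumes, ages, and genders below are invented placeholders, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

pulp_vol = np.array([[32.1], [28.4], [24.9], [20.2], [15.8], [11.3]])  # mm^3
age = np.array([19, 26, 34, 45, 57, 68])
gender = np.array([0, 1, 0, 1, 1, 0])                 # 0 = female, 1 = male

age_model = LinearRegression().fit(pulp_vol, age)     # age estimation
print("slope (negative: volume falls with age):", age_model.coef_[0])
print("estimated age for a 22 mm^3 pulp:", age_model.predict([[22.0]])[0])

sex_model = LogisticRegression().fit(pulp_vol, gender)  # gender prediction
print("gender prediction accuracy:", sex_model.score(pulp_vol, gender))
```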

Keywords: forensic, dental age, pulp volume, cone beam computed tomography

Procedia PDF Downloads 94
4823 Multilayer Ceramic Capacitors-Based Force Sensor Array for Occlusal Force Measurement

Authors: Sheng-Che Chen, Keng-Ren Lin, Che-Hsin Lin, Hao-Yuan Tseng, Chih-Han Chang

Abstract:

Teeth play an important role in obtaining essential nutrients. The force loading of chewing on the crown is an important condition for evaluating the long-term success of many dental treatments. However, how forces are distributed over the dental crown is still not well characterized. This study presents an industrial-grade piezoelectric force sensor based on multilayer ceramic capacitors (MLCCs) for measuring the force distribution over the first molar. The developed sensor array is based on a flexible polyimide electrode and barium titanate-based MLCCs. MLCCs are commonly used in the electronics industry as a typical electric component composed of BaTiO₃, which serves as the capacitive material. Importantly, BaTiO₃ can also act as a force-sensing component through its piezoelectric property. In this study, a treatment process is utilized to increase the sensitivity as well as to reduce the variation between different MLCCs. The MLCC force sensors are able to measure large forces (above 500 N), making them suitable for measuring bite forces on the tooth crown. Moreover, the sensors show good force response and good repeatability.

Keywords: force sensor array, multilayer ceramic capacitors, occlusal force, piezoelectric

Procedia PDF Downloads 408
4822 Electroencephalogram Based Approach for Mental Stress Detection during Gameplay with Level Prediction

Authors: Priyadarsini Samal, Rajesh Singla

Abstract:

Many mobile games provide entertainment while also introducing stress to the human brain. The brain-computer interface (BCI) plays an important role in recognizing this mental stress, offering various neuroimaging approaches for analyzing brain signals. Electroencephalography (EEG) is the most commonly used method among them, as it is non-invasive, portable, and economical. This paper investigates the pattern in brain signals when mental stress is introduced. Two healthy volunteers played a game whose aim was to find hidden words in a grid, with levels chosen randomly. The EEG signals during gameplay were recorded to investigate the impact of stress as the levels changed from easy to medium to hard. A total of 16 EEG features were analyzed for this experiment, including power band features with relative powers and event-related desynchronization, along with statistical features. A support vector machine was used as the classifier, which resulted in an accuracy of 93.9% for three-level stress analysis; for two levels, accuracies of 92% and 98% were achieved. In addition, a second game similar in nature was played by the volunteers. A suitable regression model was designed for prediction, where the feature sets of the first and second games were used for testing and training, respectively, and an accuracy of 73% was found.
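
A minimal sketch of the feature-and-classifier pipeline described above: relative band powers from a Welch power spectral density, fed to an RBF support vector machine. The sampling rate, band limits, and synthetic epochs are assumptions, not the study's recording setup:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # assumed sampling rate in Hz

def band_powers(epoch):
    """Relative power in three classic EEG bands for one epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    total = trapezoid(psd, freqs)
    powers = []
    for lo, hi in [(4, 8), (8, 13), (13, 30)]:        # theta, alpha, beta
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(trapezoid(psd[mask], freqs[mask]) / total)
    return powers

# Synthetic stand-in epochs; labels 0/1/2 stand for easy/medium/hard levels.
rng = np.random.default_rng(0)
X = np.array([band_powers(rng.standard_normal(FS * 4)) for _ in range(60)])
y = rng.integers(0, 3, size=60)

clf = SVC(kernel="rbf").fit(X[:45], y[:45])
print("held-out accuracy:", clf.score(X[45:], y[45:]))
```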

Keywords: brain computer interface, electroencephalogram, regression model, stress, word search

Procedia PDF Downloads 183
4821 Inter Laboratory Comparison with Coordinate Measuring Machine and Uncertainty Analysis

Authors: Tugrul Torun, Ihsan A. Yuksel, Sinem On Aktan, Taha K. Veziroglu

Abstract:

In the quality control processes of some industries, the usage of coordinate measuring machines (CMMs) has increased in recent years. Consequently, CMMs play an important role in the acceptance or rejection of manufactured parts, and it is important to be able to make these decisions with fast measurements. Measurement uncertainty should also be considered when assessing a part against its technical drawing and tolerances. Since uncertainty calculation is difficult and time-consuming, most companies ignore the uncertainty value in their routine inspection. Although studies on CMM measurement uncertainty have been carried out in recent years, there is still no generally applicable method for analyzing task-specific measurement uncertainty. A standard series for calculating measurement uncertainty exists (ISO 15530), but it is not practical for routine industrial measurement. In this study, an inter-laboratory comparison test was carried out at ROKETSAN A.Ş. across all dimensional inspection units. The reference part used is traceable to the national metrology institute TÜBİTAK UME. Each unit measured the reference part according to the related technical drawings, and the task-specific measurement uncertainty was calculated with the related parameters. From the measurement results and uncertainty values, the En values were calculated.
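
The En number itself is a short computation: each laboratory's deviation from the reference value, normalized by the combined expanded uncertainties, with |En| <= 1 conventionally taken as satisfactory. A sketch with hypothetical values, not the paper's results:

```python
import math

def en_value(x_lab, u_lab, x_ref, u_ref):
    """En number: deviation from the reference value normalized by the
    combined expanded (k=2) uncertainties; |En| <= 1 is satisfactory."""
    return (x_lab - x_ref) / math.sqrt(u_lab**2 + u_ref**2)

# Hypothetical CMM results for one length feature, in mm.
x_ref, u_ref = 25.0012, 0.0015            # reference value and uncertainty
units = {"unit A": (25.0021, 0.0020),
         "unit B": (25.0049, 0.0018)}

for name, (x, u) in units.items():
    en = en_value(x, u, x_ref, u_ref)
    print(f"{name}: En = {en:+.2f} -> {'OK' if abs(en) <= 1 else 'investigate'}")
```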

Keywords: coordinate measurement, CMM, comparison, uncertainty

Procedia PDF Downloads 204
4820 Efficient Schemes of Classifiers for Remote Sensing Satellite Imageries of Land Use Pattern Classifications

Authors: S. S. Patil, Sachidanand Kini

Abstract:

Classification of land use patterns is challenging due to the complexity and variability of remote sensing imagery. This research in remote sensing applications mines significant spatially variable factors, such as land cover and land use, from satellite images of remote arid areas in Karnataka State, India. Diverse classification techniques, unsupervised and supervised, consisting of maximum likelihood, Mahalanobis distance, and minimum distance, are applied to the raw satellite images of Bellary District in Karnataka State, India. The accuracy of the results is evaluated by visual comparison with standard maps and ground truth. The maximum likelihood technique gave the finest results, while both the minimum distance and Mahalanobis distance methods overvalued agricultural land areas. Despite a few features being lost due to the low resolution of the satellite images, high-quality agreement between the parameters extracted automatically from the developed maps and field observations was found.
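
A minimal sketch of the two distance-based supervised classifiers compared above: each assigns a pixel to the class with the nearest mean, and Mahalanobis distance additionally weights the difference by the class covariance. The per-class statistics are hypothetical:

```python
import numpy as np

def classify(pixel, class_stats, method="mahalanobis"):
    """Assign a spectral vector to the nearest class."""
    best, best_d = None, np.inf
    for name, (mean, cov) in class_stats.items():
        diff = pixel - mean
        if method == "mahalanobis":
            d = diff @ np.linalg.inv(cov) @ diff   # covariance-weighted distance
        else:
            d = diff @ diff                        # squared Euclidean distance
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical class statistics from training polygons (two spectral bands).
stats = {"agriculture": (np.array([0.30, 0.60]), np.eye(2) * 0.01),
         "barren": (np.array([0.55, 0.25]), np.eye(2) * 0.02)}
print(classify(np.array([0.33, 0.55]), stats))
print(classify(np.array([0.33, 0.55]), stats, method="minimum_distance"))
```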

Keywords: Mahalanobis distance, minimum distance, supervised, unsupervised, user classification accuracy, producer's classification accuracy, maximum likelihood, kappa coefficient

Procedia PDF Downloads 177
4819 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as KPI in the rest of the paper), the approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like Customer Satisfaction or Net Promoter Score but also in other measurements like recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll metrics up to hierarchies and then measure customer experience, as sketched below. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources that contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to find linkages between different data points like geographies, business groups, products, and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom reports. We have created a framework that allows us to measure customer experience using the above logic.
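
A toy pandas sketch of "rolling metrics up to hierarchies": per-interaction metrics are aggregated along a geography-product hierarchy and blended into one experience score per node. The data, hierarchy, and blend weights are illustrative assumptions:

```python
import pandas as pd

df = pd.DataFrame({
    "geo":     ["EMEA", "EMEA", "APAC", "APAC"],
    "product": ["A", "B", "A", "A"],
    "csat":    [4.2, 3.8, 4.6, 4.1],     # 1-5 scale
    "nps":     [30, 10, 45, 25],         # -100..100
    "renewed": [1, 0, 1, 1],             # renewal flag
})

# Roll metrics up to each node of the geo -> product hierarchy.
rollup = df.groupby(["geo", "product"]).agg(
    csat=("csat", "mean"), nps=("nps", "mean"), renewal_rate=("renewed", "mean"))

# Blend normalized metrics into a single experience score per node.
rollup["cx_score"] = (0.4 * rollup["csat"] / 5
                      + 0.3 * (rollup["nps"] + 100) / 200
                      + 0.3 * rollup["renewal_rate"])
print(rollup)
```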

Keywords: analytics, customer experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 68
4818 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine the reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve fitting. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insight into the reliability of individual sensors with regard to location accuracy over the required period of time but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insight into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy, contributing to enhanced accuracy in satellite-based applications.
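
A minimal sketch of the two statistical steps named above: the Laplace trend test on failure times (U well below 0 suggests a still-active infant mortality phase; U near 0 a stable operational phase) and a Weibull fit whose shape parameter beta distinguishes the bathtub phases. The failure times are invented placeholders, and fitting them directly as i.i.d. lifetimes is a simplification:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical times (days since launch) at which location accuracy exceeded
# its RMS error threshold, observed over a window of T days.
t = np.array([12, 30, 41, 95, 160, 290, 420, 600, 810, 1100.0])
T = 1200.0

# Laplace trend test on the event times.
n = len(t)
U = np.sqrt(12 * n) * (t.mean() / T - 0.5)
print(f"Laplace U = {U:.2f} (negative: failure rate decreasing over time)")

# Weibull fit: beta < 1 suggests infant mortality, beta ~ 1 the flat bathtub
# bottom, beta > 1 wear-out.
beta, loc, eta = weibull_min.fit(t, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} days")
print("reliability at 500 days:", weibull_min.sf(500, beta, loc, eta))
```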

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 64
4817 Estimation of Train Operation Using an Exponential Smoothing Method

Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono

Abstract:

The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents resulting from forcible entry into level crossings, by providing level crossing users and passengers with information that tells them when the next train will pass through or arrive. In this paper, we propose methods for estimating train operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which has low accuracy but for which operation schedules are distributed in real time. We then examine the accuracy of the estimations. The results show that the application of an exponential smoothing method is valid.
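
A minimal sketch of the exponential smoothing update at the core of the proposed method, s_t = alpha * x_t + (1 - alpha) * s_(t-1), applied to successive delay observations; the smoothing constant and delay series are illustrative:

```python
def exponential_smoothing(observed_delays, alpha=0.3):
    """Smooth successive delay observations; the final smoothed value is the
    estimate used to predict when the next train passes or arrives."""
    s = observed_delays[0]
    for x in observed_delays[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# Delays (minutes) at recent timetable points taken from low-accuracy open
# data; the smoothed value estimates the current delay at the next crossing.
delays = [1.0, 2.5, 2.0, 3.5, 3.0]
print("estimated current delay:", round(exponential_smoothing(delays), 2))
```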

Keywords: exponential smoothing method, open data, operation estimation, train schedule

Procedia PDF Downloads 383
4816 Fast and Accurate Finite-Difference Method Solving Multicomponent Smoluchowski Coagulation Equation

Authors: Alexander P. Smirnov, Sergey A. Matveev, Dmitry A. Zheltkov, Eugene E. Tyrtyshnikov

Abstract:

We propose a new computational technique for the multidimensional (multicomponent) Smoluchowski coagulation equation. Using low-rank approximations in Tensor Train format for both the solution and the coagulation kernel, we accelerate the classical finite-difference Runge-Kutta scheme while keeping its level of accuracy. The complexity of the finite-difference scheme is reduced from O(N^(2d)) to O(d^2 N log N), where N is the number of grid nodes and d is the dimensionality of the problem. The efficiency and accuracy of the new method are demonstrated on a concrete problem with a known analytical solution.

Keywords: tensor train decomposition, multicomponent Smoluchowski equation, Runge-Kutta scheme, convolution

Procedia PDF Downloads 426
4815 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods for forecasting the tourism market pay close attention to the accuracy of the forecasts while ignoring the feasibility and operability of the forecasting process, which makes the results difficult to test scientifically. Applying a linear regression model, this paper attempts to construct a scientific evaluation system for forecast values, one that ensures both the accuracy and stability of the predicted values and the feasibility and operability of the forecasting process. The findings show that such a scientific evaluation system can implement the scientific concept of development and support the harmonious, coordinated development of man and nature.
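
A minimal sketch of a linear regression forecast evaluated for both accuracy and stability on held-out years, in the spirit of the evaluation system described above; the arrival figures are invented placeholders:

```python
import numpy as np

years = np.arange(2005, 2013)
arrivals = np.array([3.1, 3.4, 3.9, 4.1, 4.6, 5.0, 5.3, 5.9])   # millions

# Fit on the earlier years, hold out the last two for evaluation.
slope, intercept = np.polyfit(years[:-2], arrivals[:-2], 1)
pred = slope * years[-2:] + intercept

mape = np.mean(np.abs((arrivals[-2:] - pred) / arrivals[-2:]))   # accuracy
print(f"forecast for {list(years[-2:])}: {pred.round(2)} (MAPE {mape:.1%})")
print(f"trend: {slope:.2f} million additional arrivals per year")
```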

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 328
4814 MIM and Experimental Studies of the Thermal Drift in an Ultra-High Precision Instrument for Dimensional Metrology

Authors: Kamélia Bouderbala, Hichem Nouira, Etienne Videcoq, Manuel Girault, Daniel Petit

Abstract:

Thermal drifts caused by the power dissipated by the mechanical guiding systems constitute the main limit to enhancing the accuracy of an ultra-high precision cylindricity measuring machine. For this reason, a high-precision compact prototype has been designed to simulate the behaviour of the instrument. It ensures in situ calibration of four capacitive displacement probes by comparison with four laser interferometers. The set-up includes three heating wires to simulate the power dissipated by the mechanical guiding systems, four additional heating wires located between each laser interferometer head and its respective holder, 19 platinum resistance thermometers (Pt100) to observe the temperature evolution inside the set-up, and four Pt100 sensors to monitor the ambient temperature. A Reduced Model (RM) based on the Modal Identification Method (MIM) was developed and optimized by comparison with the experimental results. Thereafter, time-dependent tests were performed under several conditions to measure the temperature variation at 19 fixed positions in the system, and the measurements were compared to the calculated RM results. The RM results agree well with experiment and reproduce the temperature variations faithfully, demonstrating the value of the proposed RM for evaluating the thermal behaviour of the system.

Keywords: modal identification method (MIM), thermal behavior and drift, dimensional metrology, measurement

Procedia PDF Downloads 392
4813 MB-SLAM: A SLAM Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Registering SLAM to BIM in real time can also boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's images and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM calculates a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in the images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later improved by a real-time gradient-descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation demonstrated promising results: MB-SLAM accurately registered SLAM to BIM, significantly improved the SLAM localization accuracy, and achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate the workflows of past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework, for both research and commercial usage, that monitors construction progress and performance in a unified manner. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment, further advancing SLAM toward practical use.
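
At its core, the perspective-detection step intersects the supporting lines of detected straight edges in homogeneous coordinates to obtain vanishing points. A minimal sketch (real pipelines detect many edges and vote or average; the pixel endpoints below are invented):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([*p, 1.0], [*q, 1.0])

def vanishing_point(seg1, seg2):
    """Intersection of two segments' supporting lines: the vanishing point
    of their shared 3D direction."""
    vp = np.cross(line_through(*seg1), line_through(*seg2))
    return vp[:2] / vp[2]      # back to pixel coordinates (vp[2] == 0 means
                               # the image lines are parallel: no finite VP)

# Two image edges (pixel endpoints) from parallel building edges.
edge_a = ((100, 400), (300, 380))
edge_b = ((120, 600), (320, 560))
print("vanishing point:", vanishing_point(edge_a, edge_b))
```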

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 219
4812 Development of Colorimetric Based Microfluidic Platform for Quantification of Fluid Contaminants

Authors: Sangeeta Palekar, Mahima Rana, Jayu Kalambe

Abstract:

In this paper, a microfluidic-based platform for the quantification of contaminants in water is proposed. The proposed system uses microfluidic channels with an embedded environment for contaminant detection in water. Microfluidic platforms represent a clear avenue of innovation for fluid analysis, with applications that offer low cost and simplicity of fabrication. The polydimethylsiloxane (PDMS)-based microfluidic channel is fabricated using a soft lithography technique, and vertical and horizontal connections for fluid dispensing with the microfluidic channel are explored. The principle of colorimetry, which incorporates the use of Griess reagent for the detection of nitrite, has been adopted. Nitrite has high water solubility and retention, giving it a greater potential to stay in groundwater and endanger aquatic life along with human health; it is therefore taken as the case study in this work. The developed platform also compares detection methodologies, using photodetectors to measure absorbance and image sensors to measure color change for the quantification of contaminants like nitrite in water. The use of image processing techniques offers operational flexibility, as the same system can identify other contaminants present in water with minor software changes.
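
Quantification in either detection mode comes down to a calibration curve: the Griess reaction product's absorbance (or color intensity) is measured on standards of known nitrite concentration, and the fitted line is inverted for an unknown sample. A sketch with illustrative values:

```python
import numpy as np

conc_std = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # mg/L nitrite standards
absorbance = np.array([0.01, 0.11, 0.22, 0.45, 0.88])   # illustrative readings

# Linear (Beer-Lambert) calibration: absorbance = slope * conc + intercept.
slope, intercept = np.polyfit(conc_std, absorbance, 1)

def nitrite_mg_per_l(measured_absorbance):
    """Invert the calibration line to estimate concentration."""
    return (measured_absorbance - intercept) / slope

print(f"sample with A = 0.30 -> {nitrite_mg_per_l(0.30):.2f} mg/L")
```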

Keywords: colorimetric, fluid contaminants, nitrite detection, microfluidics

Procedia PDF Downloads 193
4811 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method

Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis

Abstract:

Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical, and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works in the literature address techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure the residual stress profile in the composite very precisely. Residual stresses are also important data to consider when sizing composite parts and predicting their behavior. Incremental hole drilling is very effective for measuring the gradient of residual stresses in composite laminates. This semi-destructive method consists of drilling a hole incrementally through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of incremental hole drilling depends on the accuracy with which the calibration coefficients are determined. The coefficients are calculated using a finite element model, and the samples' features and the experimental conditions must be considered in the simulation; any mismatch can lead to inadequate calibration coefficients and thus introduce errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device for reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
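
At each increment the conversion described above is a small linear inversion: the three relaxed gauge strains are mapped to the in-plane stress components through the matrix of calibration coefficients. A sketch with placeholder coefficients (real ones come from the finite element model):

```python
import numpy as np

# Placeholder calibration matrix for one increment: strain per unit stress
# (1/MPa) seen by each of the three gauges for (sigma_x, sigma_y, tau_xy).
C = np.array([[-1.9e-6, -0.4e-6, -0.1e-6],   # gauge 1
              [-0.6e-6, -1.7e-6, -0.5e-6],   # gauge 2
              [-0.2e-6, -0.5e-6, -1.8e-6]])  # gauge 3

relaxed_strain = np.array([-95e-6, -60e-6, -30e-6])  # measured at this depth

sigma = np.linalg.solve(C, relaxed_strain)           # stresses in MPa
print("increment stresses (sigma_x, sigma_y, tau_xy) [MPa]:", sigma.round(1))
# Any error in C (hole eccentricity, gauge misplacement, depth error)
# propagates through this inversion -- the sensitivity the parametric
# study quantifies.
```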

Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses

Procedia PDF Downloads 126
4810 Facial Emotion Recognition Using Deep Learning

Authors: Ashutosh Mishra, Nikhil Goyal

Abstract:

A 3D facial emotion recognition model based on deep learning is proposed in this paper. The architecture employs two convolution layers followed by a pooling layer, and the probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of deep learning-based systems, a set of faces from a Kaggle dataset is used to test the accuracy of the face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the significant gains in representation precision afforded by the nonlinearity of deep image representations.

Keywords: facial recognition, computational intelligence, convolutional neural network, depth map

Procedia PDF Downloads 221
4809 High-Accuracy Satellite Image Analysis and Rapid DSM Extraction for Urban Environment Evaluations (Tripoli-Libya)

Authors: Abdunaser Abduelmula, Maria Luisa M. Bastos, José A. Gonçalves

Abstract:

The modeling of the earth's surface and the evaluation of urban environments with 3D models is an important research topic. New stereo capabilities of high-resolution optical satellite images, such as the tri-stereo mode of Pleiades, combined with new image matching algorithms, are now available and can be applied to urban area analysis. In addition, photogrammetry software packages have gained more efficient matching algorithms, such as SGM, as well as improved filters to deal with shadow areas, and can achieve denser and more precise results. This paper describes a comparison between 3D data extracted from tri-stereo and dual-stereo satellite images, combined with pixel-based matching and the Wallis filter. The aim was to improve the accuracy of 3D models, especially in urban areas, in order to assess whether satellite images are appropriate for a rapid evaluation of urban environments. The results showed that the 3D models achieved by Pleiades tri-stereo outperformed, both in accuracy and in detail, the result obtained from a GeoEye pair. The assessment was made against reference digital surface models derived from high-resolution aerial photography. This suggests that tri-stereo images can be successfully used for the proposed urban change analyses.
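
A minimal sketch of semi-global matching on an epipolar-rectified pair using OpenCV's StereoSGBM, the algorithm family credited above for denser, more precise matching; file names and parameter values are placeholders:

```python
import cv2
import numpy as np

left = cv2.imread("epipolar_left.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("epipolar_right.tif", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,     # search range; must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,           # penalty for small disparity changes
    P2=32 * 5 * 5,          # larger penalty for big jumps (smoothness)
    uniquenessRatio=10,
)

# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
# With the sensor model, the disparity map is then triangulated into a DSM.
```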

Keywords: 3D models, environment, matching, Pleiades

Procedia PDF Downloads 324
4808 The Development of a Residual Stress Measurement Method for Roll Formed Products

Authors: Yong Sun, Vladimir Luzin, Zhen Qian, William J. T. Daniel, Mingxing Zhang, Shichao Ding

Abstract:

The residual stresses in roll formed products are generally very high and unpredictable. This is due to the redundant plastic deformation that occurs in the roll forming process, and it can cause various product defects. Although the residual stresses of a roll formed product consist of longitudinal and transverse components, the longitudinal residual stresses play the key role in product defects, and therefore only the longitudinal residual stresses concern roll forming scholars and engineers. However, how to inspect the residual stresses of a product quickly and economically as a routine operation is still a challenge. This paper introduces a residual stress measurement method, called the slope cutting method, to study the longitudinal residual stresses layer by layer in roll formed products, or in products of similar processes such as rolled sheet. The detailed measuring procedure is given and discussed. The residual stress variation through the layers can be derived from the variation of curvature across the different layers and steps. The slope cutting method has been explored and validated by an experimental study on a roll-formed square tube, with the neutron diffraction method applied to validate the accuracy of the newly proposed layer removal results. The two sets of results agree with each other very well; the method is therefore expected to become a routine test for monitoring the quality of formed products, with great impact on the roll forming industry.

Keywords: roll forming, residual stress, measurement method, neutron diffraction

Procedia PDF Downloads 360
4807 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models

Authors: Jay L. Fu

Abstract:

Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbor, support vector machine, random forest, and neural network, were developed. Data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess prediction accuracy. Key risk factors were identified, and the models were compared to arrive at the best prediction model. Among these models, the random forest model performed best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important contributing factors to the detection of Alzheimer's. Across all the models, the percentage of testing inputs for which at least 4 of the 5 models shared the same diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately enables earlier treatment of these patients.
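
A minimal sketch of the best-performing setup: a random forest over MRI-derived and demographic features with a train/test split to estimate accuracy. The columns mirror the factors highlighted above (MMSE, nWBV, gender), but the rows are synthetic, not the study's sessions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(15, 31, 373),        # MMSE score
    rng.uniform(0.65, 0.85, 373),     # nWBV (normalized whole-brain volume)
    rng.integers(0, 2, 373),          # gender
])
y = rng.integers(0, 2, 373)           # 1 = demented, 0 = nondemented

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("test accuracy:", rf.score(X_te, y_te))
print("importances (MMSE, nWBV, gender):", rf.feature_importances_.round(2))
```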

Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction

Procedia PDF Downloads 139
4806 On Phase Based Stereo Matching and Its Related Issues

Authors: András Rövid, Takeshi Hashimoto

Abstract:

The paper focuses on the problem of point correspondence matching in stereo images. The proposed matching algorithm combines simpler methods, such as the normalized sum of squared differences (NSSD), with a more complex phase correlation based approach, while also considering noise and other factors. The speed of NSSD and the preciseness of phase correlation together yield an efficient approach to finding the best candidate point with sub-pixel accuracy in stereo image pairs. The task of the NSSD in this case is to locate the candidate pixel roughly; afterwards, the location of the candidate is refined by an enhanced phase correlation based method, which, in contrast to the NSSD, has to run only once for each selected pixel.
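
A minimal NumPy sketch of the two stages: NSSD as the cheap similarity score for scanning candidates, then phase correlation run once on the selected candidate. This version recovers integer shifts only; the sub-pixel refinement (e.g., interpolating around the correlation peak) is omitted, and the patches are synthetic:

```python
import numpy as np

def nssd(a, b):
    """Normalized sum of squared differences; lower means more similar."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return np.sum((a - b) ** 2)

def phase_shift(a, b):
    """Shift between two equal-size patches via phase correlation."""
    cross_power = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of the range to negative shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(0)
ref = rng.standard_normal((32, 32))            # patch around the left pixel
candidate = np.roll(ref, (2, 3), axis=(0, 1))  # right patch, displaced content

# Stage 1: NSSD ranks candidate windows (0 would be a perfect overlap).
print("NSSD of displaced candidate:", round(nssd(ref, candidate), 2))
# Stage 2: phase correlation, run once, recovers the displacement.
print("recovered shift:", phase_shift(candidate, ref))   # -> [2, 3]
```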

Keywords: stereo matching, sub-pixel accuracy, phase correlation, SVD, NSSD

Procedia PDF Downloads 462
4805 A Unified Fitting Method for the Set of Unified Constitutive Equations for Modelling Microstructure Evolution in Hot Deformation

Authors: Chi Zhang, Jun Jiang

Abstract:

Constitutive equations are very important in finite element (FE) modeling, and the accuracy of the material constants in the equations has a significant effect on the accuracy of the FE models. A wide range of constitutive equations is available; however, fitting the material constants can be complex and time-consuming due to the strong non-linearity of, and the relationships between, the constants. This work focuses on the development of a set of unified MATLAB programs for fitting the material constants in the constitutive equations efficiently. Users only need to supply experimental data in the required format and run the program; without modifying functions, precisely guessing initial values, or looking up parameters from previous works, they will be able to fit the material constants efficiently.
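
The authors' tool is a set of MATLAB programs; as an analogous illustration in Python, the sketch below fits the constants of an Arrhenius-type (hyperbolic sine) constitutive law to flow-stress data with a bounded nonlinear least-squares fit. The functional form, bounds, initial guess, and data are illustrative assumptions, not the authors' exact equations:

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def flow_stress(X, A, alpha, n, Q):
    """sigma = (1/alpha) * asinh((Z/A)^(1/n)), with Z = rate * exp(Q/(R*T))."""
    strain_rate, T = X
    Z = strain_rate * np.exp(Q / (R * T))      # Zener-Hollomon parameter
    return (1 / alpha) * np.arcsinh((Z / A) ** (1 / n))

# Illustrative flow-stress data: (strain rate 1/s, temperature K) -> MPa.
rates = np.array([0.01, 0.1, 1.0, 0.01, 0.1, 1.0])
temps = np.array([1123, 1123, 1123, 1273, 1273, 1273.0])
sigma = np.array([62, 85, 110, 30, 45, 63.0])

popt, _ = curve_fit(flow_stress, (rates, temps), sigma,
                    p0=(1e10, 0.01, 5.0, 3e5),
                    bounds=([1e5, 1e-4, 1.0, 1e4], [1e20, 1.0, 10.0, 1e6]))
print("fitted A, alpha, n, Q:", popt)
```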

Keywords: constitutive equations, FE modelling, MATLAB program, non-linear curve fitting

Procedia PDF Downloads 93
4804 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources, and the hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure with groundwater injection and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to time-series groundwater level observations and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the above analytical tools and optimizes the best estimation of the hydrogeological structure. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an excellent tool for identifying hydrogeological structures.
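
A minimal sketch of two of the chained steps above: FFT-based extraction of the daily-frequency amplitude of a groundwater level series (a pumping signature that differs between aquifer types) and a decision tree trained on such features. The series, labels, and single-feature setup are synthetic simplifications:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

HOURS = 24 * 60                 # 60 days of hourly groundwater levels
t = np.arange(HOURS)

def daily_amplitude(levels):
    """Amplitude of the one-cycle-per-day component of an hourly series."""
    spectrum = np.abs(np.fft.rfft(levels - levels.mean())) * 2 / len(levels)
    freqs = np.fft.rfftfreq(len(levels), d=1.0)    # cycles per hour
    return spectrum[np.argmin(np.abs(freqs - 1 / 24))]

# Synthetic wells: stronger daily pumping response for label 1 than label 0.
rng = np.random.default_rng(0)
features, labels = [], []
for label in (0, 1) * 20:
    pumping = (0.3 if label else 0.05) * np.sin(2 * np.pi * t / 24)
    noise = rng.normal(0, 0.02, HOURS)
    features.append([daily_amplitude(pumping + noise)])
    labels.append(label)

tree = DecisionTreeClassifier(max_depth=2).fit(features, labels)
print("training accuracy:", tree.score(features, labels))
```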

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 153
4803 Adaptation and Validation of the Program Sustainability Assessment Tool

Authors: Henok Metaferia Gebremariam

Abstract:

Worldwide, considerable resources are spent implementing public health interventions that are interrupted soon after the initial funding ends. However, ambiguity remains as to how health programs can be effectively sustained over time, because of the diversity of perspectives, definitions, study methods, outcome measures, and timeframes. Among these research challenges, the lack of standardized measures of sustainability stands out as a key issue. To address it, the objective of this study was to adapt a tool for measuring a program's capacity for sustainability and to evaluate its reliability and validity. To adapt and validate the tool, a cross-sectional and cohort study was conducted at 26 programs in Addis Ababa between September 2014 and May 2015. The version of the tool adapted after the pilot test was administered to 220 staff, and the tool was analyzed for reliability and validity. The results show that the 40-item PSAT was adapted into an Amharic version with good internal consistency (Cronbach's alpha = 0.80), test-retest reliability (r = 0.916), and construct validity. Factor analysis resulted in 7 components explaining 56.67% of the variance. In conclusion, the Amharic version of the PSAT was found to be a reliable and valid tool for measuring a program's capacity for sustainability.
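
The internal-consistency figure reported above is Cronbach's alpha, computable directly from the item-score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A sketch with toy responses (the real analysis used the 220 staff responses over the 40 PSAT items):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

scores = [[4, 5, 4, 4],     # toy Likert responses, 5 respondents x 4 items
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```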

Keywords: program sustainability, public health interventions, reliability, validity

Procedia PDF Downloads 40
4802 Autogenous Diabetic Retinopathy Censor for Ophthalmologists - AKSHI

Authors: Asiri Wijesinghe, N. D. Kodikara, Damitha Sandaruwan

Abstract:

Diabetic Retinopathy (DR) is a rapidly growing problem around the world, characterized by impaired glucose metabolism that causes long-term damage to the human retina. It is one of the primary reasons for visual impairment and blindness in adults. Retinal pathological changes can be recognized using ocular fundus images. This research focuses on building an automated diagnosis system to detect DR anomalies: severity level classification of DR patients (the Non-proliferative Diabetic Retinopathy approach) and vessel tortuosity measurement of untwisted vessels to assess vessel anomalies (the Proliferative Diabetic Retinopathy approach). The severity classification method obtained good results, with precision, recall, F-measure, and accuracy all exceeding 94% in all forms of cross-validation. ROC (Receiver Operating Characteristic) curves also showed a high AUC (Area Under Curve), exceeding 95%. User-level evaluation of severity capture achieved high accuracy (85%) and fairly good values for each evaluation measure. Untwisted vessel detection for tortuosity measurement also achieved good results, with sensitivity of 85%, specificity of 89%, and accuracy of 87%.
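
A common tortuosity index of the kind described above is the distance metric: the arc length of a traced vessel centerline divided by the straight chord between its endpoints (1.0 for a perfectly straight vessel). A minimal sketch; the paper's exact metric and the coordinates below are assumptions:

```python
import numpy as np

def tortuosity(centerline):
    """Arc-to-chord ratio of a traced vessel centerline."""
    pts = np.asarray(centerline, dtype=float)
    arc_length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc_length / chord

# Pixel coordinates of two traced vessels (illustrative, not real output
# of the segmentation stage).
straight = [(0, 0), (10, 1), (20, 2), (30, 3)]
wiggly = [(0, 0), (8, 6), (16, -5), (24, 7), (30, 3)]

print("nearly straight vessel:", round(tortuosity(straight), 3))
print("tortuous vessel:", round(tortuosity(wiggly), 3))
```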

Keywords: fundus image, exudates, microaneurysms, hemorrhages, tortuosity, diabetic retinopathy, optic disc, fovea

Procedia PDF Downloads 338