Search results for: dimensional accuracy (DA)
Paper Count: 5661

5121 Diesel Engine Performance Optimization to Reduce Fuel Consumption and Emissions Issues

Authors: Hadi Kargar, Bahador Shabani

Abstract:

In this article, combustion in a 16-cylinder diesel engine with a cylinder bore of 165 mm and a stroke of 195 mm is modeled with CFD using the FIRE software in order to optimize its operation. A three-dimensional model of the in-cylinder processes was built, covering fuel-air mixing, ignition, and spraying. This three-dimensional model includes all chemical species, the air-fuel density, and the full spray profile, so that detailed results on fuel-air mixing, ignition advance, spray development, and the mixture state at different times are obtained while accounting for the piston motion. The piston shape and the fuel-spraying specifications (including the spraying management, the number of nozzle holes, the start time of spraying, and the spray angle) were selected to achieve the best fuel consumption and minimal pollution. Six- and seven-hole nozzles in three different configurations with five spray angles were simulated to compare geometries and performance. Finally, the six-hole nozzle with a spray angle of 72.5 degrees, in two of the spraying configurations, gave better performance than the other cases.

Keywords: spray, FIRE, CFD, optimize, diesel engine

Procedia PDF Downloads 419
5120 CFD Modeling of Insect Flight at Low Reynolds Numbers

Authors: Wu Di, Yeo Khoon Seng, Lim Tee Tai

Abstract:

Typical insects employ a flapping-wing mode of flight. Numerical simulations of the free flight of a model fruit fly (Re = 143), including hovering, are presented in this paper. The unsteady aerodynamics around a flapping insect is studied by solving the three-dimensional Newtonian dynamics of the flyer coupled with the Navier-Stokes equations. A hybrid-grid scheme (the Generalized Finite Difference Method), which combines great geometric flexibility with accurate moving-boundary definition, is employed for obtaining the flow dynamics. The results show good agreement and consistency with the outcomes and analyses of other researchers, which validates the computational model and demonstrates the feasibility of this computational approach for analyzing fluid phenomena in insect flight. The present modeling approach also offers a promising route of investigation that could complement, as well as overcome some of the limitations of, physical experiments in the study of the free-flight aerodynamics of insects. The results are potentially useful for the design of biomimetic flapping-wing flyers.

Keywords: free hovering flight, flapping wings, fruit fly, insect aerodynamics, leading edge vortex (LEV), computational fluid dynamics (CFD), Navier-Stokes equations (N-S), fluid structure interaction (FSI), generalized finite-difference method (GFD)

Procedia PDF Downloads 410
5119 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics

Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn–Sham equations yet requires significant computational resources, restricting the size of systems and time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1 M LiPF6 in EC/EMC (3:7 v/v) electrolyte for Li-battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. These data were then used to train the NequIP model, which efficiently learns from the provided data. NequIP achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while improving the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP’s applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
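
To make the workflow concrete, here is a minimal sketch of the production-MD step, assuming a NequIP potential already trained on the VASP data and deployed, and driven through ASE rather than LAMMPS; the file names, species, and thermostat settings are illustrative, and the exact nequip calculator API may differ between versions.

```python
# Sketch only: run MD with a deployed NequIP potential via ASE (assumed interface).
from ase.io import read
from ase.md.langevin import Langevin
from ase import units
from nequip.ase import NequIPCalculator  # ML potential trained on the AIMD data

# An electrolyte configuration, e.g., a 1 M LiPF6 in EC/EMC snapshot (assumed file)
atoms = read("electrolyte_snapshot.xyz")

# Attach the deployed NequIP model as the energy/force calculator
atoms.calc = NequIPCalculator.from_deployed_model(
    model_path="deployed_nequip_model.pth", device="cpu")

# Langevin thermostat at 300 K with a 1 fs timestep; 10 ps here is already far
# beyond what AIMD of the same system could afford at comparable accuracy
dyn = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=300.0, friction=0.002)
dyn.run(10000)
```

The same deployed model can instead be loaded through the LAMMPS NequIP plugin to simulate larger systems, as the study describes.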

Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network

Procedia PDF Downloads 22
5118 From Faces to Feelings: Exploring Emotional Contagion and Empathic Accuracy through the Enfacement Illusion

Authors: Ilenia Lanni, Claudia Del Gatto, Allegra Indraccolo, Riccardo Brunetti

Abstract:

Empathy represents a multifaceted construct encompassing affective and cognitive components. Among these, empathic accuracy—defined as the ability to accurately infer another person’s emotions or mental state—plays a pivotal role in fostering empathetic understanding. Emotional contagion, the automatic process through which individuals mimic and synchronize facial expressions, vocalizations, and postures, is considered a foundational mechanism for empathy. This embodied simulation enables shared emotional experiences and facilitates the recognition of others’ emotional states, forming the basis of empathic accuracy. Facial mimicry, an integral part of emotional contagion, creates a physical and emotional resonance with others, underscoring its potential role in enhancing empathic understanding. Building on these findings, the present study explores how manipulating emotional contagion through the enfacement illusion impacts empathic accuracy, particularly in the recognition of complex emotional expressions. The enfacement illusion was implemented as a visuo-tactile multisensory manipulation, during which participants experienced synchronous and spatially congruent tactile stimulation on their own face while observing the same stimulation being applied to another person’s face. This manipulation enhances facial mimicry, which is hypothesized to play a key role in improving empathic accuracy. Following the enfacement illusion, participants completed a modified version of the Diagnostic Analysis of Nonverbal Accuracy–Form 2 (DANVA2-AF). The task included 48 images of adult faces expressing happiness, sadness, or morphed emotions blending neutral with happiness or sadness to increase recognition difficulty. These images featured both familiar and unfamiliar faces, with familiar faces belonging to the actors involved in the prior visuo-tactile stimulation. Participants were required to identify the target’s emotional state as either "happy" or "sad," with response accuracy and reaction times recorded. Results from this study indicate that emotional contagion, as manipulated through the enfacement illusion, significantly enhances empathic accuracy, particularly for the recognition of happiness. Participants demonstrated greater accuracy and faster response times in identifying happiness when viewing familiar faces compared to unfamiliar ones. These findings suggest that the enfacement illusion strengthens emotional resonance and facilitates the processing of positive emotions, which are inherently more likely to be shared and mimicked. Conversely, for the recognition of sadness, an opposite but non-significant trend was observed. Specifically, participants were slightly faster at recognizing sadness in unfamiliar faces compared to familiar ones. This pattern suggests potential differences in how positive and negative emotions are processed within the context of facial mimicry and emotional contagion, warranting further investigation. These results provide insights into the role of facial mimicry in emotional contagion and its selective impact on empathic accuracy. This study highlights how the enfacement illusion can precisely modulate the recognition of specific emotions, offering a deeper understanding of the mechanisms underlying empathy.

Keywords: empathy, emotional contagion, enfacement illusion, emotion recognition

Procedia PDF Downloads 3
5117 Two-Dimensional Nanostack-Based On-Chip Wiring

Authors: Nikhil Jain, Bin Yu

Abstract:

The material behavior of graphene, a single layer of carbon lattice, is extremely sensitive to its dielectric environment. We demonstrate improvement in the electronic performance of graphene nanowire interconnects with full encapsulation by the lattice-matching, chemically inert, 2D layered insulator hexagonal boron nitride (h-BN). A novel layer-based transfer technique is developed to construct the h-BN/MLG/h-BN heterostructures. The encapsulated graphene wires are characterized and compared with those on SiO2 or h-BN substrates without a passivating h-BN layer. Significant improvements in maximum current-carrying density, breakdown threshold, and power density in encapsulated graphene wires are observed. These critical improvements are achieved without compromising the carrier transport characteristics in graphene. Furthermore, the encapsulated graphene wires exhibit electrical behavior less sensitive to ambient conditions, as compared with the non-passivated ones. Overall, the h-BN/graphene/h-BN heterostructure presents a robust material platform towards the implementation of high-speed carbon-based interconnects.

Keywords: two-dimensional nanosheet, graphene, hexagonal boron nitride, heterostructure, interconnects

Procedia PDF Downloads 454
5116 Implementation in Python of a Method to Transform One-Dimensional Signals in Graphs

Authors: Luis Andrey Fajardo Fajardo

Abstract:

We are immersed in complex systems. The human brain, galaxies, and snowflakes are examples of complex systems. An area of interest within complex systems is chaos theory. This revolutionary field of science presents different ways of study than determinism and reductionism. Here, in conjunction with nonlinear DSP, chaos theory offers valuable techniques that establish a link between time series and complexity theory in terms of complex networks, so that the study of signals can be approached from graph theory. Recently, methods have been proposed to transform time series into graphs, but no suitable Python implementation had been developed for signals extracted from chaotic or complex systems. That is why we propose a Python implementation of an existing method to transform one-dimensional chaotic signals from the time domain to the graph domain, together with some measures that may reveal information not extracted in the time domain.
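
The abstract does not name the specific time-series-to-graph mapping; one widely used choice is the natural visibility graph of Lacasa et al., sketched below purely as an illustration of the kind of transformation meant (the signal is a logistic map in its chaotic regime, and networkx supplies the graph-domain measures).

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(x):
    """Each sample becomes a node; samples i and j are connected if every
    intermediate sample lies strictly below the straight line joining them."""
    n = len(x)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

# Chaotic test signal: logistic map with r = 4
x = np.empty(300)
x[0] = 0.4
for t in range(299):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

g = natural_visibility_graph(x)
print("mean degree:", np.mean([d for _, d in g.degree()]))  # graph-domain measures
print("density:", nx.density(g))
```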

Keywords: Python, complex systems, graph theory, dynamical systems

Procedia PDF Downloads 509
5115 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
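
As a concrete illustration of the Random Forest part of the methodology, the sketch below fits a regressor on activity-level records and prints feature importances; the file name and feature columns (e.g., scope_change, material_delay_days) are hypothetical placeholders rather than fields from the study's dataset.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("activity_costs.csv")             # hypothetical activity-level file
features = ["planned_cost", "planned_duration", "scope_change",
            "material_delay_days", "crew_size"]    # hypothetical feature names
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["cost_overrun"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

# Feature importances highlight candidate cost drivers (e.g., scope changes)
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```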

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 58
5114 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high-Gaussian-noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, which enables the matching pursuit greedy heuristic of selecting the highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no significant statistical differences. The algorithm achieves 70% accuracy at a 0 dB SNR, 90% accuracy at a 5 dB SNR, and 98% accuracy at a 20 dB SNR, using a 30 dB SNR as a reference for voice activity.
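
A minimal sketch of the matching-pursuit step described above is given below: atoms are selected greedily by largest inner product and subtracted from the residual. The single-scale Gabor dictionary, test signal, and energy threshold are simplified assumptions, not the paper's Gabor/gammatone dictionaries or decision rule.

```python
import numpy as np

def gabor_atom(n, f, sigma, fs):
    """Unit-norm Gabor atom: Gaussian-windowed cosine at frequency f."""
    t = (np.arange(n) - n / 2) / fs
    atom = np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * f * t)
    return atom / np.linalg.norm(atom)

def matching_pursuit(signal, dictionary, n_iter=50):
    """Greedy decomposition: pick the most active atom, subtract, repeat."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        scores = dictionary @ residual            # inner products with all atoms
        k = np.argmax(np.abs(scores))             # highest inner product
        approx += scores[k] * dictionary[k]
        residual -= scores[k] * dictionary[k]
    return approx, residual

fs, n = 8000, 512
freqs = np.geomspace(100, 3500, 32)               # log-spaced center frequencies
D = np.stack([gabor_atom(n, f, 0.01, fs) for f in freqs])
x = gabor_atom(n, 440, 0.01, fs) + 0.5 * np.random.randn(n)   # tone + heavy noise
recon, res = matching_pursuit(x, D, n_iter=10)
# A simple VAD decision could threshold the reconstructed-signal energy fraction:
print("active" if np.sum(recon ** 2) / np.sum(x ** 2) > 0.2 else "silence")
```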

Keywords: atomic decomposition, gabor, gammatone, matching pursuit, voice activity detection

Procedia PDF Downloads 290
5113 Coastal Hydraulic Modelling to Ascertain Stability of Rubble Mound Breakwater

Authors: Safari Mat Desa, Othman A. Karim, Mohd Kamarulhuda Samion, Saiful Bahri Hamzah

Abstract:

The rubble mound breakwater is one of the most popular designs in Malaysia, constructed at river mouths to dissipate the incoming wave energy from the sea. The geometric characteristics of the trapezoidal section, the crest width, and the bottom width determine the stability of the sloping face, whilst the structural height is designed against wave overtopping. Physical hydraulic modelling was carried out in a two-dimensional flume facility to test the stability as well as the overtopping rate, complying with the method of similarity, namely kinematic, dynamic, and geometric similarity. Scaling of the wave characteristics was carried out in order to capture the significant interaction of wave height, wave period, and water depth. The results showed that two-dimensional physical modelling has a proven, reliable capability to ascertain breakwater stability.
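
The abstract does not state the scaling criterion explicitly; wave-flume models of this kind are normally run under Froude similitude, in which case the model-to-prototype relations below apply (given here as an assumed reference, with λ the geometric length-scale ratio, H the wave height, T the wave period, and q the overtopping discharge per unit width).

```latex
\mathrm{Fr}_{m}=\mathrm{Fr}_{p}\;\Longrightarrow\;
\frac{H_{p}}{H_{m}}=\lambda,\qquad
\frac{T_{p}}{T_{m}}=\sqrt{\lambda},\qquad
\frac{q_{p}}{q_{m}}=\lambda^{3/2},
\qquad \lambda=\frac{L_{p}}{L_{m}}
```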

Keywords: breakwater, geometrical characteristic, wave overtopping, physical hydraulic modelling, method of similarity, wave characteristic

Procedia PDF Downloads 117
5112 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area

Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos

Abstract:

We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of the city of Seaside, Oregon, with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand how successfully reduced models, including a 2D shallow-water model and reduced 1D models, predict water heights, flow velocity, and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside, Oregon experiment.

Keywords: computational fluid dynamics, extreme events, loading, tsunami

Procedia PDF Downloads 115
5111 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 40
5110 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method to make use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched-filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D-point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
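
The per-voxel accumulation described above can be sketched in a few lines; the version below is a serial, heavily simplified NumPy illustration (nearest-bin range lookup, assumed X-band wavelength, synthetic geometry) of the sum that the GPU implementation parallelizes with one thread per 3D point.

```python
import numpy as np

WAVELENGTH = 0.03   # carrier wavelength in meters (X-band assumed)

def backproject(range_profiles, platform_positions, voxels, range_axis):
    """range_profiles: (n_pulses, n_range) complex range-compressed pulses
    platform_positions: (n_pulses, 3) antenna position for each pulse
    voxels: (n_voxels, 3) 3D reference points (e.g., from a DEM or point cloud)
    range_axis: (n_range,) range-bin centers in meters."""
    image = np.zeros(len(voxels), dtype=complex)
    for profile, pos in zip(range_profiles, platform_positions):
        r = np.linalg.norm(voxels - pos, axis=1)          # range to every voxel
        idx = np.clip(np.searchsorted(range_axis, r), 0, len(range_axis) - 1)
        # Nearest-bin lookup plus two-way phase compensation, summed over pulses;
        # every voxel is independent, which is what makes the GPU mapping trivial.
        image += profile[idx] * np.exp(4j * np.pi * r / WAVELENGTH)
    return np.abs(image)

# Toy usage with random data, just to show the shapes involved
n_pulses, n_range, n_vox = 128, 512, 1000
img = backproject(
    np.random.randn(n_pulses, n_range) + 1j * np.random.randn(n_pulses, n_range),
    np.column_stack([np.linspace(-500, 500, n_pulses),
                     np.full(n_pulses, -3000.0), np.full(n_pulses, 3000.0)]),
    np.random.uniform(-50, 50, size=(n_vox, 3)),
    np.linspace(2500.0, 5500.0, n_range))
```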

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 86
5109 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network

Authors: Sajjad Baghernezhad

Abstract:

Meta-heuristic and hybrid algorithms have shown high capability in modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. The target population consisted of 550 high-risk and low-risk patients from the Kerman University of Medical Sciences medical center. The memetic algorithm, which is a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. The initial study showed that the accuracy of the neural network was 88%; after updating the weights with the memetic algorithm, it increased to 93%. For the proposed model, the sensitivity, specificity, positive predictive value, and accuracy were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, they were 87.05, 9.20 7, 89.45, 97.30, and 0.967; and for the logistic regression model, 87.40, 95.20, 93.79, 0.87, and 0.916. Based on the findings of this study, neural network models have a lower error rate in the diagnosis of patients based on individual variables and vital signs compared to the regression model. The findings of this study can help planners and health care providers in designing programs and the early diagnosis of COVID-19.
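
To illustrate the memetic idea (genetic search over the network weights plus a local refinement step), here is a minimal sketch on synthetic data; the tiny network, the stand-in features and labels, and all GA settings are assumptions for illustration only, not the study's model of 550 patients.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # stand-in for patient features
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # stand-in high/low-risk label
n_in, n_hid = 8, 6
n_w = n_in * n_hid + n_hid                         # weights of a 1-hidden-layer net

def accuracy(w):
    """Classification accuracy of the network encoded by weight vector w."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:]
    pred = (np.tanh(X @ W1) @ w2 > 0).astype(int)
    return np.mean(pred == y)

def local_search(w, steps=20, step_size=0.1):
    """Simple hill climbing: keep random perturbations that improve fitness."""
    best, best_fit = w, accuracy(w)
    for _ in range(steps):
        cand = best + rng.normal(scale=step_size, size=n_w)
        fit = accuracy(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best

pop = rng.normal(size=(30, n_w))
for gen in range(40):
    fits = np.array([accuracy(ind) for ind in pop])
    parents = pop[np.argsort(fits)[-10:]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(n_w) < 0.5               # uniform crossover
        child = np.where(mask, a, b) + rng.normal(scale=0.05, size=n_w)
        children.append(local_search(child))       # memetic step: refine offspring
    pop = np.array(children)

print("best accuracy:", max(accuracy(ind) for ind in pop))
```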

Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm

Procedia PDF Downloads 67
5108 The Evaluation of Current Pile Driving Prediction Methods for Driven Monopile Foundations in London Clay

Authors: John Davidson, Matteo Castelletti, Ismael Torres, Victor Terente, Jamie Irvine, Sylvie Raymackers

Abstract:

The current industry approach to pile driving predictions consists of developing a model of the hammer-pile-soil system which simulates the relationship between soil resistance to driving (SRD) and blow counts (or pile penetration per blow). The SRD methods traditionally used are broadly based on static pile capacity calculations. The SRD is used in combination with the one-dimensional wave equation model to indicate the anticipated blowcounts with depth for specific hammer energy settings. This approach has predominantly been calibrated on relatively long slender piles used in the oil and gas industry but is now being extended to allow calculations to be undertaken for relatively short rigid large diameter monopile foundations. This paper evaluates the accuracy of current industry practice when applied to a site where large diameter monopiles were installed in predominantly stiff fissured clay. Actual geotechnical and pile installation data, including pile driving records and signal matching analysis (based upon pile driving monitoring techniques), were used for the assessment on the case study site.
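
For reference, the one-dimensional wave equation that such drivability models solve along the pile can be written in the standard form below (a textbook form given here as an assumption, not reproduced from the paper), where u(x,t) is the axial pile displacement, EA the axial stiffness, ρA the mass per unit length, and R the distributed soil resistance, i.e., the SRD along the shaft plus the toe reaction.

```latex
E A\,\frac{\partial^{2} u}{\partial x^{2}}
  = \rho A\,\frac{\partial^{2} u}{\partial t^{2}}
  + R\!\left(x,\,u,\,\frac{\partial u}{\partial t}\right)
```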

Keywords: driven piles, fissured clay, London clay, monopiles, offshore foundations

Procedia PDF Downloads 225
5107 Optimization of Process Parameters in Wire Electrical Discharge Machining of Inconel X-750 for Dimensional Deviation Using Taguchi Technique

Authors: Mandeep Kumar, Hari Singh

Abstract:

The effective optimization of machining process parameters affects dramatically the cost and production time of machined components as well as the quality of the final products. This paper presents the optimization aspects of a Wire Electrical Discharge Machining operation using Inconel X-750 as work material. The objective considered in this study is minimization of the dimensional deviation. Six input process parameters of WEDM namely spark gap voltage, pulse-on time, pulse-off time, wire feed rate, peak current and wire tension, were chosen as variables to study the process performance. Taguchi's design of experiments methodology has been used for planning and designing the experiments. The analysis of variance was carried out for raw data as well as for signal to noise ratio. Four input parameters and one two-factor interaction have been found to be statistically significant for their effects on the response of interest. The confirmation experiments were also performed for validating the predicted results.
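
Since the dimensional deviation is to be minimized, the Taguchi smaller-the-better signal-to-noise ratio applies; the sketch below computes it for one trial of the orthogonal array, with hypothetical replicate values rather than the study's measurements.

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 * log10( mean(y^2) ) for the replicate responses y of one trial."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

trial_deviation_mm = [0.021, 0.024, 0.019]        # hypothetical replicates
print(f"S/N = {sn_smaller_the_better(trial_deviation_mm):.2f} dB")
# Repeating this for every trial of the orthogonal array gives the S/N column
# on which the ANOVA of S/N ratios operates; a larger S/N means a smaller deviation.
```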

Keywords: ANOVA, DOE, inconel, machining, optimization

Procedia PDF Downloads 205
5106 System Response of a Variable-Rate Aerial Application System

Authors: Daniel E. Martin, Chenghai Yang

Abstract:

Variable-rate aerial application systems are becoming more readily available; however, aerial applicators typically only use the systems for constant-rate application of materials, allowing the systems to compensate for upwind and downwind ground speed variations. Much of the resistance to variable-rate aerial application system adoption in the U.S. pertains to applicator’s trust in the systems to turn on and off automatically as desired. The objectives of this study were to evaluate a commercially available variable-rate aerial application system under field conditions to demonstrate both the response and accuracy of the system to desired application rate inputs. This study involved planting oats in a 35-acre fallow field during the winter months to establish a uniform green backdrop in early spring. A binary (on/off) prescription application map was generated and a variable-rate aerial application of glyphosate was made to the field. Airborne multispectral imagery taken before and two weeks after the application documented actual field deposition and efficacy of the glyphosate. When compared to the prescription application map, these data provided application system response and accuracy information. The results of this study will be useful for quantifying and documenting the response and accuracy of a commercially available variable-rate aerial application system so that aerial applicators can be more confident in their capabilities and the use of these systems can increase, taking advantage of all that aerial variable-rate technologies have to offer.

Keywords: variable-rate, aerial application, remote sensing, precision application

Procedia PDF Downloads 475
5105 3D Scanning Documentation and X-Ray Radiography Examination for Ancient Egyptian Canopic Jar

Authors: Abdelrahman Mohamed Abdelrahman

Abstract:

Canopic jars are funerary vessels used by the ancient Egyptians in the mummification process to store the viscera of the mummified body after they were extracted and treated. Canopic jars were made of several types of materials, such as limestone, alabaster, and pottery. The studied canopic jar dates back to the Late Period and is located in the Grand Egyptian Museum (GEM), Giza, Egypt. The jar is carved from limestone with carved hieroglyphic inscriptions, and it is filled and closed with mortar from the inside. Some aspects of damage appear on the jar, such as dust, dirt, a wide crack, and weakness of the limestone. In this study, we used modern documentation and investigation techniques to document and examine the jar: 3D scanning and X-ray radiography imaging. X-ray imaging showed that the mortar does not reach the base of the jar's interior, indicating that it was placed at a time when the jar probably still contained viscera. Through three-dimensional scanning, the jar was documented and a 3D model was obtained, so that any part of the jar can now be viewed on the computer in full detail. Conservation procedures were then applied with high accuracy to conserve the jar, including mechanical, wet, and chemical cleaning; filling the wide crack in the body of the jar with a mortar consisting of calcium carbonate powder mixed with Primal E330 S; and consolidation with Paraloid B72 at a 2% concentration, which strengthened the limestone.

Keywords: vessel, limestone, canopic jar, mortar, 3D scanning, X-ray radiography

Procedia PDF Downloads 78
5104 Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach

Authors: Abe Degale D., Cheng Jian

Abstract:

When creating automated video surveillance systems, violent action recognition is crucial. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity. Researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. The Hockey dataset's videos with surveillance camera motion present challenges for these algorithms for learning discriminating features. Image recognition and human activity detection challenges have shown success with deep representation-based methods. For the purpose of detecting violent images and identifying aggressive human behaviours, this research suggested a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy levels on the Hockey and Movies datasets, respectively.
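
In the same spirit as the transfer-learning idea described above, the sketch below fine-tunes only the final layer of an ImageNet-pretrained backbone for a violent/non-violent frame decision; the ResNet-18 backbone, folder layout, and hyperparameters are assumptions, and the paper's actual detector (a Faster R-CNN-style model) is more elaborate.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained backbone; only the new final layer is trained
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # violent vs. non-violent
model = model.to(device)

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Frames extracted from the clips, one sub-folder per class (assumed layout)
train_set = datasets.ImageFolder("frames/train", transform=tf)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```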

Keywords: violence detection, faster RCNN, transfer learning, surveillance video

Procedia PDF Downloads 108
5103 Mechanical Simulation with Electrical and Dimensional Tests for AISHa Containment Chamber

Authors: F. Noto, G. Costa, L. Celona, F. Chines, G. Ciavola, G. Cuttone, S. Gammino, O. Leonardi, S. Marletta, G. Torrisi

Abstract:

At Istituto Nazionale di Fisica Nucleare – Laboratorio Nazionale del Sud (INFN-LNS), broad experience in the design, construction, and commissioning of ECR and microwave ion sources is available. The AISHa ion source has been designed by taking into account the typical requirements of hospital-based facilities, where the minimization of the mean time between failures (MTBF) is a key point together with the maintenance operations, which should be fast and easy. It is intended to be a multipurpose device, operating at 18 GHz, in order to achieve higher plasma densities. It should provide enough versatility for future needs of hadron therapy, including the ability to run at larger microwave power to produce different species and highly charged ion beams. The source is potentially interesting for any hadron therapy facility using heavy ions. In this paper, we analyze the dimensional and electrical tests of an innovative solution for the containment chamber that allows us to solve our insulation and structural problems.

Keywords: FEM analysis, electron cyclotron resonance ion source, dielectrical measurement, hadron therapy

Procedia PDF Downloads 293
5102 Some Accuracy Related Aspects in Two-Fluid Hydrodynamic Sub-Grid Modeling of Gas-Solid Riser Flows

Authors: Joseph Mouallem, Seyed Reza Amini Niaki, Norman Chavez-Cussy, Christian Costa Milioli, Fernando Eduardo Milioli

Abstract:

Sub-grid closures for filtered two-fluid models (fTFM) useful in large scale simulations (LSS) of riser flows can be derived from highly resolved simulations (HRS) with microscopic two-fluid modeling (mTFM). Accurate sub-grid closures require accurate mTFM formulations as well as accurate correlation of relevant filtered parameters to suitable independent variables. This article deals with both of those issues. The accuracy of mTFM is addressed by assessing the impact of gas sub-grid turbulence over HRS filtered predictions. A gas-turbulence-like effect is artificially inserted by means of a stochastic forcing procedure implemented in the physical space over the momentum conservation equation of the gas phase. The correlation issue is addressed by introducing a three-filtered-variable correlation analysis (three-marker analysis) performed under a variety of different macro-scale conditions typical of risers. While the more elaborate correlation procedure clearly improved accuracy, accounting for gas sub-grid turbulence had no significant impact on predictions.

Keywords: fluidization, gas-particle flow, two-fluid model, sub-grid models, filtered closures

Procedia PDF Downloads 124
5101 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic studies. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity and time of the work. Data/images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm among those available that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information on the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and the performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
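
A minimal sketch of the CNN branch is shown below for SAT-4-style patches (28x28 pixels, 4 spectral bands, 4 land-cover classes); the random placeholder arrays, layer sizes, and training settings are illustrative assumptions rather than the project's exact architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder arrays standing in for the SAT-4 training split
x_train = np.random.rand(1000, 28, 28, 4).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(0, 4, 1000), 4)

model = keras.Sequential([
    layers.Input(shape=(28, 28, 4)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])                  # accuracy and loss tracked
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_split=0.2)
```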

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 159
5100 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for the determination of petroleum properties. This curve provides information about the performance of the petroleum in terms of its cuts. The experiment takes a few days to perform. Simulation techniques are therefore used to determine the properties faster, with software that calculates the distillation curve when only a little information about the crude oil is known. In order to evaluate the accuracy of the distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (software vs. ASTM) and 0.2%-5.1% (software vs. Spaltrohr).

Keywords: distillation curve, petroleum distillation, simulation, true boiling point curve

Procedia PDF Downloads 442
5099 The Synergistic Effects of Blockchain and AI on Enhancing Data Integrity and Decision-Making Accuracy in Smart Contracts

Authors: Sayor Ajfar Aaron, Sajjat Hossain Abir, Ashif Newaz, Mushfiqur Rahman

Abstract:

Investigating the convergence of blockchain technology and artificial intelligence, this paper examines their synergistic effects on data integrity and decision-making within smart contracts. By implementing AI-driven analytics on blockchain-based platforms, the research identifies improvements in automated contract enforcement and decision accuracy. The paper presents a framework that leverages AI to enhance transparency and trust while blockchain ensures immutable record-keeping, culminating in significantly optimized operational efficiencies in various industries.

Keywords: artificial intelligence, blockchain, data integrity, smart contracts

Procedia PDF Downloads 55
5098 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision

Authors: Lianzhong Zhang, Chao Huang

Abstract:

Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. The existing sea-land segmentation algorithms have poor segmentation accuracy, and the parameter adjustments are cumbersome and difficult to meet actual needs. Also, the current sea-land segmentation adopts traditional deep learning models that use Convolutional Neural Networks (CNN). At present, the transformer architecture has achieved great success in the field of natural images, but its application in the field of radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture to strengthen edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced. The decoder stage allows the feature information of the two branches to interact, thereby improving the edge precision of the sea-land segmentation. Based on the Gaofen-3 satellite image dataset, the experimental results show that the method proposed in this paper can effectively improve the accuracy of sea-land segmentation, especially the accuracy of sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 scores respectively reach 96.36%, 84.54%, 99.74%, and 98.05%, which are superior to those of the mainstream segmentation models and have high practical application values.
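
As an illustration of the attention mechanism referred to above, the sketch below implements a single-head self-attention block whose learned relative position bias is modulated by a sigmoid gate; the dimensions and the exact gating form are assumptions, and the paper's full architecture (including the extra edge-supervision branch and decoder interaction) is not reproduced.

```python
import torch
import torch.nn as nn

class GatedRelPosAttention(nn.Module):
    def __init__(self, dim, seq_len):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.rel_bias = nn.Parameter(torch.zeros(seq_len, seq_len))  # relative position bias
        self.gate = nn.Parameter(torch.zeros(1))                     # learnable gate
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, seq_len, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + torch.sigmoid(self.gate) * self.rel_bias       # gated bias
        attn = attn.softmax(dim=-1)
        return self.proj(attn @ v)

tokens = torch.randn(2, 64, 32)              # e.g., 8x8 patch tokens of a SAR tile
out = GatedRelPosAttention(dim=32, seq_len=64)(tokens)
print(out.shape)                             # torch.Size([2, 64, 32])
```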

Keywords: SAR, sea-land segmentation, deep learning, transformer

Procedia PDF Downloads 181
5097 Enhancement of Thermal Performance of Latent Heat Solar Storage System

Authors: Rishindra M. Sarviya, Ashish Agrawal

Abstract:

Solar energy is available abundantly in the world, but it is not continuous, and its intensity also varies with time. For this reason, the acceptability and reliability of solar-based thermal systems are lower than those of conventional systems. A properly designed heat storage system increases the reliability of solar thermal systems by bridging the gap between energy demand and availability. In the present work, a two-dimensional numerical simulation of the melting of the heat storage material in the horizontal annulus of a double pipe latent heat storage system is presented. Longitudinal fins were used as a thermal conductivity enhancement. Paraffin wax was used as the heat-storage or phase change material (PCM). A constant wall temperature is applied to the heat transfer tube. The presented two-dimensional numerical analysis shows the movement of the melting front in the finned cylindrical annulus, allowing the thermal behavior of the system during melting to be analyzed.

Keywords: latent heat, numerical study, phase change material, solar energy

Procedia PDF Downloads 311
5096 Analytical Solution for End Depth Ratio in Rectangular Channels

Authors: Abdulrahman Abdulrahman, Abir Abdulrahman

Abstract:

The free overfall is an instrument for measuring discharge in open channels by measuring the end depth. Researchers have comprehensively investigated the brink phenomenon, theoretically and experimentally, with various approaches for different cross-sectional shapes. Anderson's method, based on the Boussinesq approximation and an energy approach, was used to derive a pressure distribution factor at the end depth. Applying the one-dimensional momentum equation and the principles of limit slope analysis, a relevant analytical solution may be derived for the brink depth ratio (EDR) in prismatic rectangular channels. Relationships between the end depth ratio and the slope ratio for a given non-dimensional normal or critical depth with an upstream supercritical flow regime are also presented. A simple indirect procedure is used to estimate the end depth discharge ratio (EDD) for subcritical and supercritical flow using the measured end depth. Comparison of this analysis with all previous theoretical and experimental studies showed excellent agreement.

Keywords: analytical solution, brink depth, end depth, flow measurement, free over fall, hydraulics, rectangular channel

Procedia PDF Downloads 182
5095 Electroencephalogram Based Approach for Mental Stress Detection during Gameplay with Level Prediction

Authors: Priyadarsini Samal, Rajesh Singla

Abstract:

Many mobile games provide entertainment while also introducing stress to the human brain. In recognizing this mental stress, the brain-computer interface (BCI) plays an important role. It offers various neuroimaging approaches which help in analyzing brain signals. Electroencephalography (EEG) is the most commonly used method among them, as it is non-invasive, portable, and economical. This paper investigates the patterns in brain signals when mental stress is introduced. Two healthy volunteers played a game whose aim was to search for hidden words in a grid, and the levels were chosen randomly. The EEG signals during gameplay were recorded to investigate the impact of stress as the levels changed from easy to medium to hard. A total of 16 EEG features were analyzed for this experiment, which included band power features with relative powers and event-related desynchronization, along with statistical features. A support vector machine was used as the classifier, which resulted in an accuracy of 93.9% for three-level stress analysis; for two levels, accuracies of 92% and 98% were achieved. In addition, the volunteers played another game that was similar in nature. A suitable regression model was designed for prediction, in which the feature sets of the first and second games were used for testing and training purposes, respectively, and an accuracy of 73% was found.
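
A minimal sketch of this band-power-plus-SVM pipeline is given below; the sampling rate, channel count, epoch length, and random placeholder data are assumptions standing in for the recorded gameplay EEG, and only three of the sixteen features are illustrated.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 256                                              # sampling rate (assumed)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """epoch: (n_channels, n_samples) -> relative power per band per channel."""
    f, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    total = np.trapz(psd, f, axis=-1)
    feats = []
    for lo, hi in bands.values():
        idx = (f >= lo) & (f < hi)
        feats.append(np.trapz(psd[..., idx], f[idx], axis=-1) / total)
    return np.concatenate(feats)

# Placeholder epochs: 60 two-second trials, 8 channels, labels easy/medium/hard
epochs = np.random.randn(60, 8, fs * 2)
levels = np.random.randint(0, 3, 60)
X = np.array([band_power_features(e) for e in epochs])
scores = cross_val_score(SVC(kernel="rbf"), X, levels, cv=5)
print("cross-validated accuracy:", scores.mean())
```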

Keywords: brain computer interface, electroencephalogram, regression model, stress, word search

Procedia PDF Downloads 187
5094 Efficient Schemes of Classifiers for Remote Sensing Satellite Imageries of Land Use Pattern Classifications

Authors: S. S. Patil, Sachidanand Kini

Abstract:

Classification of land use patterns is challenging because of the complexity and variability of remote sensing imagery data. An important research application of remote sensing is to mine significant spatially variable factors, such as land cover and land use, from satellite images of remote arid areas in Karnataka State, India. Diverse classification techniques, unsupervised and supervised, consisting of maximum likelihood, Mahalanobis distance, and minimum distance, were applied in Bellary District, Karnataka State, India, for the classification of the raw satellite images. The accuracy of the results was evaluated by visual comparison with standard maps and ground truth. We started with the maximum likelihood technique, which gave the finest results, while both the minimum distance and Mahalanobis distance methods overvalued agricultural land areas. Despite missing a few irrelevant features due to the low resolution of the satellite images, high-quality agreement was found between the parameters extracted automatically from the developed maps and the field observations.
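
To make the distance-based classifiers concrete, the per-pixel sketch below contrasts the minimum distance and Mahalanobis distance rules; the class names, band values, and training statistics are synthetic placeholders, not the Bellary District imagery.

```python
import numpy as np

rng = np.random.default_rng(1)
# Training samples per class: dict of (n_samples, n_bands) arrays
train = {"agriculture": rng.normal([0.3, 0.6, 0.4], 0.05, (100, 3)),
         "barren":      rng.normal([0.5, 0.4, 0.3], 0.05, (100, 3)),
         "water":       rng.normal([0.1, 0.1, 0.2], 0.05, (100, 3))}
means = {c: s.mean(axis=0) for c, s in train.items()}
inv_covs = {c: np.linalg.inv(np.cov(s, rowvar=False)) for c, s in train.items()}

def classify(pixel, method="mahalanobis"):
    """Assign the pixel to the class with the smallest distance to its mean."""
    dists = {}
    for c in train:
        d = pixel - means[c]
        if method == "minimum":
            dists[c] = d @ d                          # squared Euclidean distance
        else:
            dists[c] = d @ inv_covs[c] @ d            # squared Mahalanobis distance
    return min(dists, key=dists.get)

pixel = np.array([0.28, 0.55, 0.42])                  # one multispectral pixel
print(classify(pixel, "minimum"), classify(pixel, "mahalanobis"))
```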

Keywords: Mahalanobis distance, minimum distance, supervised, unsupervised, user classification accuracy, producer's classification accuracy, maximum likelihood, kappa coefficient

Procedia PDF Downloads 183
5093 Parametric Study of 3D Micro-Fin Tubes on Heat Transfer and Friction Factor

Authors: Shima Soleimani, Steven Eckels

Abstract:

One area of special importance for the surface-level study of heat exchangers is tubes with internal micro-fins (< 0.5 mm tall). Micro-finned surfaces are a kind of extended solid surface in which energy is exchanged with water that acts as the source or sink of energy. Significant performance gains are possible for either shell, tube, or double pipe heat exchangers if the best surfaces are identified. The parametric studies of micro-finned tubes that have appeared in the literature left some key parameters unexplored. Specifically, they ignored three-dimensional (3D) micro-fin configurations, conduction heat transfer in the fins, and conduction in the solid surface below the micro-fins. Thus, this study aimed at implementing a parametric study of 3D micro-finned tubes that considered micro-fin height and discontinuity features. A 3D conductive and convective heat-transfer simulation through coupled solid and periodic fluid domains is applied in a commercial package, ANSYS Fluent 19.1. The simulation is steady-state, with turbulent water flow cooling the inner wall of a tube with micro-fins. The simulation utilizes a constant and uniform temperature on the tube outer wall. Performance is mapped for 18 different simulation cases, including a smooth tube, using a realizable k-ε turbulence model at a Reynolds number of 48,928. The results compare the performance of the 3D tubes with that of similar two-dimensional (2D) ones. The results showed that the micro-fin height has a greater impact on the performance factor than discontinuity features in 3D micro-fin tubes. A transformed 3D micro-fin tube can enhance heat transfer and pressure drop by up to 21% and 56% compared to a 2D one, respectively.

Keywords: three-dimensional micro-finned tube, heat transfer, friction factor, heat exchanger

Procedia PDF Downloads 115
5092 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine the reliability and, in turn, to understand the improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine if the data exhibits an infant stage or if it has transitioned into the operational phase. The shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, as it can significantly increase the total failure rate. To address this, an approach utilizing the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of different phases in the lifetime and the time required for stabilization. This approach also helps in understanding if the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data and whether the thresholds for the infant period and wear-out phase are accurately estimated by validating the data in individual phases with Weibull distribution curve fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
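
A minimal sketch of the two statistical steps described above (a Laplace trend test on the failure times, followed by a two-parameter Weibull fit) is given below; the failure times, observation window, and the 500-day reliability query are hypothetical values for illustration, not the sensors' field data.

```python
import numpy as np
from scipy.stats import weibull_min

failure_times = np.array([40., 95., 180., 260., 400., 560., 700., 910.])  # days, hypothetical
T = 1000.0                                     # observation window (days)

# Laplace test statistic: values near 0 suggest a constant failure rate,
# negative values a decreasing rate (reliability growth / end of infant stage)
n = len(failure_times)
u = (failure_times.sum() / n - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace statistic U = {u:.2f}")

# Two-parameter Weibull fit on the operational-phase data (location fixed at 0)
shape, loc, scale = weibull_min.fit(failure_times, floc=0)
print(f"beta (shape) = {shape:.2f}, eta (scale) = {scale:.1f} days")
# beta < 1: infant mortality; beta ~ 1: random failures; beta > 1: wear-out
reliability_at_500 = np.exp(-(500.0 / scale) ** shape)
print(f"R(500 days) = {reliability_at_500:.3f}")
```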

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 65