Search results for: computational accuracy
2610 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing
Authors: Deepak Kumar, Debasish Deb, Reena Mamgain
Abstract:
Radar signal processing requires high number-crunching capability, most often achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability to meet real-time computational challenges, its architecture, along with the mapping of the algorithm onto that architecture, plays a vital role in using the platform efficiently. Towards this, along with standard performance metrics, a few additional metrics are defined which help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements. Depending on the system requirements and the type of algorithms used, the most suitable architecture for the given problem is decided. In this paper, we study different architectures and quantify the different performance metrics, which enables comparison of the architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited over the algorithm, the data, or both.
Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)
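The abstract does not spell out the additional metrics it defines; for orientation, the standard parallel-performance quantities such metrics typically extend can be written as follows (textbook definitions, not the paper's own formulas), where T1 and Tp are single- and p-processor execution times and ti is the busy time of processor i:

```latex
S_p = \frac{T_1}{T_p}, \qquad
E_p = \frac{S_p}{p}, \qquad
\text{load imbalance} = \frac{\max_i t_i}{\tfrac{1}{p}\sum_{i=1}^{p} t_i} - 1
```

An efficiency Ep close to 1 and a load imbalance close to 0 indicate that the algorithm mapping uses the processors well.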
Procedia PDF Downloads 412
2609 [Keynote Speech]: Simulation Studies of Pulsed Voltage Effects on Cells
Authors: Jiahui Song
Abstract:
In order to predict or explain a complicated biological process, it is important first to construct mathematical models that can be used to yield analytical solutions. Through numerical simulation, mathematical model results can be used to test scenarios that might not be easily attained in a laboratory experiment, or to predict parameters or phenomena. High-intensity, nanosecond pulse electroporation has been a recent development in bioelectrics. A dynamic pore model can be obtained by including a dynamic aspect and a dependence on the pore population density in the pore formation energy equation, in order to analyze and predict such electroporation effects. For greater accuracy, with the inclusion of atomistic details, molecular dynamics (MD) simulations were also carried out during this study. Besides inducing pores in cells, external voltages could in principle also be used to modulate action potential generation in nerves. This could have an application in electrically controlled 'pain management'. A simple model-based rate equation treatment of the various cellular biochemical processes has also been used to predict the pulse-number-dependent cell survival trends.
Keywords: model, high-intensity, nanosecond, bioelectrics
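As a point of reference for the dynamic pore model mentioned above, a minimal sketch of the commonly used asymptotic pore-density rate equation is given below, integrated for a hypothetical nanosecond pulse. The parameter values are representative literature figures and the pulse shape is invented for illustration; this is not the study's own model or data.

```python
import numpy as np

# Asymptotic pore-density rate equation often used in dynamic electroporation models:
# dN/dt = alpha*exp(x^2)*(1 - (N/N0)*exp(-q*x^2)),  x = Vm/Vep
# (parameter values are representative literature figures, not the study's own).
alpha, N0, Vep, q = 1.0e9, 1.5e9, 0.258, 2.46   # 1/(m^2 s), 1/m^2, V, dimensionless

def pore_density(vm_of_t, t_end, dt=1e-10):
    """Integrate the pore density N(t) with forward Euler for a given Vm(t)."""
    N = N0
    history = []
    for k in range(int(t_end / dt)):
        x2 = (vm_of_t(k * dt) / Vep) ** 2
        dNdt = alpha * np.exp(x2) * (1.0 - (N / N0) * np.exp(-q * x2))
        N += dNdt * dt
        history.append(N)
    return np.array(history)

# Hypothetical 60-ns rectangular pulse raising the transmembrane voltage to 1.0 V
pulse = lambda t: 1.0 if t < 60e-9 else 0.0
N_t = pore_density(pulse, t_end=100e-9)
print(f"pore density after pulse: {N_t[-1]:.3e} 1/m^2")
```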
Procedia PDF Downloads 226
2608 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates
Authors: Abdelaziz Fellah, Allaoua Maamir
Abstract:
We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm, extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery
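For readers unfamiliar with the base similarity the MCF filter builds on, the following is a minimal sketch of the Monge-Elkan scheme, using a simple normalized edit-distance token similarity rather than the affine Smith-Waterman variant described in the abstract; the example records are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Plain edit distance between two tokens."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def token_sim(a: str, b: str) -> float:
    """Normalized token similarity in [0, 1] derived from edit distance."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def monge_elkan(rec_a: str, rec_b: str) -> float:
    """Monge-Elkan: average, over tokens of A, of the best-matching token in B."""
    tokens_a, tokens_b = rec_a.lower().split(), rec_b.lower().split()
    if not tokens_a or not tokens_b:
        return 0.0
    return sum(max(token_sim(ta, tb) for tb in tokens_b) for ta in tokens_a) / len(tokens_a)

# Hypothetical near-duplicate records
print(monge_elkan("Jon A. Smith ACME Corp", "John Smith ACME Corporation"))
```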
Procedia PDF Downloads 387
2607 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms such as AWS Live Streaming and Artificial Intelligence (AI) to detect CHD symptoms early and support prevention through web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model Development: A machine learning model was trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare
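A compact sketch of the kind of training-and-evaluation step described above (ensemble model, accuracy/precision/recall on held-out data) is shown below. The feature names and the synthetic labels are illustrative assumptions standing in for the study's real and simulated datasets; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated CHD screening data: heart-rate variability,
# systolic blood pressure, LDL level, and an ECG-derived index (all hypothetical).
n = 2000
X = np.column_stack([
    rng.normal(50, 15, n),    # HRV (ms)
    rng.normal(130, 20, n),   # systolic BP (mmHg)
    rng.normal(120, 30, n),   # LDL (mg/dL)
    rng.normal(0, 1, n),      # ECG index (a.u.)
])
risk = 0.02 * (X[:, 1] - 130) + 0.015 * (X[:, 2] - 120) - 0.03 * (X[:, 0] - 50) + X[:, 3]
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)   # 1 = early CHD symptoms

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy : {accuracy_score(y_te, pred):.2f}")
print(f"precision: {precision_score(y_te, pred):.2f}")
print(f"recall   : {recall_score(y_te, pred):.2f}")
```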
Procedia PDF Downloads 64
2606 Improving the Performance of Gas Turbine Power Plant by Modified Axial Turbine
Authors: Hakim T. Kadhim, Faris A. Jabbar, Aldo Rona, Audrius Bagdanaviciu
Abstract:
Computer-based optimization techniques can be employed to improve the efficiency of energy conversion processes, including reducing the aerodynamic loss in a thermal power plant turbomachine. In this paper, towards mitigating secondary flow losses, a design optimization workflow is implemented for the casing geometry of a 1.5-stage axial flow turbine that improves the turbine isentropic efficiency. The improved turbine is used in an open thermodynamic gas cycle with regeneration and cogeneration. Performance estimates are obtained with the commercial software Cycle-Tempo. Design and off-design conditions are considered, as well as variations in inlet air temperature. Reductions in both the natural gas specific fuel consumption and the CO2 emissions are predicted for the gas turbine cycle fitted with the new casing design. These gains are attractive towards enhancing the competitiveness and reducing the environmental impact of thermal power plants.
Keywords: axial flow turbine, computational fluid dynamics, gas turbine power plant, optimization
Procedia PDF Downloads 161
2605 Dynamic Log Parsing and Intelligent Anomaly Detection Method Combining Retrieval Augmented Generation and Prompt Engineering
Authors: Liu Linxin
Abstract:
As system complexity increases, log parsing and anomaly detection become more and more important in ensuring system stability. However, traditional methods often face insufficient adaptability and decreasing accuracy when dealing with rapidly changing log contents and unknown domains. To this end, this paper proposes LogRAG, an approach that combines Retrieval Augmented Generation (RAG) with prompt engineering for Large Language Models, applied to log analysis tasks to achieve dynamic parsing of logs and intelligent anomaly detection. By combining real-time information retrieval and prompt optimisation, this study significantly improves the adaptive capability of log analysis and the interpretability of results. Experimental results show that the method performs well on several public datasets, especially in the absence of training data, and significantly outperforms traditional methods. This paper provides a technical path for log parsing and anomaly detection, demonstrating significant theoretical value and application potential.
Keywords: log parsing, anomaly detection, retrieval-augmented generation, prompt engineering, LLMs
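A minimal sketch of the retrieval-plus-prompt pattern the abstract describes is given below: retrieve the most similar historical log templates and splice them into the prompt sent to an LLM. The TF-IDF retrieval, the template knowledge base, and the `call_llm` placeholder are illustrative assumptions, not the paper's implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base of previously parsed log templates.
templates = [
    "Connection to <IP> closed by remote host",
    "Disk usage on <PATH> exceeded <NUM>% threshold",
    "User <USER> login failed: invalid password",
]

vectorizer = TfidfVectorizer().fit(templates)
kb_matrix = vectorizer.transform(templates)

def build_prompt(raw_log: str, top_k: int = 2) -> str:
    """Retrieve the top-k most similar templates and embed them in the prompt."""
    sims = cosine_similarity(vectorizer.transform([raw_log]), kb_matrix)[0]
    retrieved = [templates[i] for i in sims.argsort()[::-1][:top_k]]
    context = "\n".join(f"- {t}" for t in retrieved)
    return (
        "You are a log analysis assistant.\n"
        f"Known log templates:\n{context}\n\n"
        f"New log line: {raw_log}\n"
        "Task: extract the template of this line and state whether it is anomalous."
    )

def call_llm(prompt: str) -> str:
    # Placeholder only: no real model endpoint is assumed here.
    return "<LLM response would go here>"

print(call_llm(build_prompt("User alice login failed: invalid password")))
```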
Procedia PDF Downloads 29
2604 Anomaly Detection with ANN and SVM for Telemedicine Networks
Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos
Abstract:
In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested over network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to the behavior of users' connections. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze the performance and accuracy over static detection. The vulnerabilities are based on previous work on telemedicine apps developed by the research group. This paper presents the differences in detection results between network scenarios when traditional detectors built with artificial neural networks and support vector machines are applied.
Keywords: anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines
Procedia PDF Downloads 357
2603 A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System
Authors: Uswah Khairuddin, Rubiyah Yusof, Nenny Ruthfalydia Rosli
Abstract:
This paper presents a comparative study between the k-Nearest Neighbour (k-NN) and Multi-Layer Perceptron Neural Network (MLP-NN) classifiers, using a Genetic Algorithm (GA) as the feature selector, for a wood recognition system. The features have been extracted from the images using the Grey Level Co-Occurrence Matrix (GLCM). GA-based feature selection is used mainly to ensure that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process is aimed at selecting only the most discriminating features of the wood species to reduce confusion for the pattern classifier. This feature selection approach retains the 'good' features that minimize the intra-class distance and maximize the inter-class distance. A wrapper GA is used with the k-NN classifier as the fitness evaluator (GA-kNN). The results show that k-NN is the best choice of classifier because it uses a very simple distance calculation algorithm and classification tasks can be done in a short time with good classification accuracy.
Keywords: feature selection, genetic algorithm, optimization, wood recognition system
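A compact sketch of wrapper-style GA feature selection with a k-NN fitness function, in the spirit of the GA-kNN scheme described, is shown below. The synthetic data (standing in for GLCM features), GA settings, and operator choices are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Stand-in for GLCM texture features extracted from wood images.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_redundant=4, random_state=1)

def fitness(mask):
    """Fitness = cross-validated k-NN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.integers(0, 2, size=(pop_size, n_feat))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[scores.argsort()[::-1][: pop_size // 2]]    # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                          # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05                       # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "fitness:", round(fitness(best), 3))
```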
Procedia PDF Downloads 545
2602 Experimental and Numerical Analyses of Tehran Research Reactor
Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya
Abstract:
In this paper, a numerical model is presented. The model is used to analyze steady-state thermal-hydraulics and a reactivity insertion transient in the TRR reference cores. The model predictions are compared with experiments and with PARET code results. The model uses the piecewise constant method and the lumped parameter method for the coupled point kinetics and thermal-hydraulics modules, respectively. The advantages of the piecewise constant method are simplicity, efficiency, and accuracy. A main criterion for the applicability range of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated values of power and coolant temperature, in the steady-state and positive reactivity insertion scenarios, are in good agreement with the experimental values. The model is therefore a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is to use simple calculation methods and benchmark them against experimental data. This model can also be used for training purposes.
Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling
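A minimal sketch of the coupled point-kinetics / lumped-parameter idea is given below, using one delayed-neutron group and a single lumped coolant node with temperature feedback, integrated explicitly rather than with the paper's piecewise constant scheme. All numerical constants are generic illustrative values, not TRR data.

```python
import numpy as np

# One delayed-neutron group point kinetics coupled to a lumped coolant node.
beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
alpha_T = -5e-4                          # illustrative temperature reactivity coefficient (dk/k per K)
P0, T0 = 1.0, 300.0                      # normalized power, initial coolant temperature (K)
heat_cap, h_loss = 20.0, 1.0             # lumped heat capacity and heat-removal coefficient (arbitrary)

def simulate(rho_ext, t_end=10.0, dt=1e-4):
    n, C, T, t = P0, beta * P0 / (lam * Lambda), T0, 0.0   # start from equilibrium precursor level
    out = []
    while t < t_end:
        rho = rho_ext + alpha_T * (T - T0)                 # net reactivity with thermal feedback
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        dT = (n - h_loss * (T - T0)) / heat_cap            # lumped energy balance
        n, C, T, t = n + dn * dt, C + dC * dt, T + dT * dt, t + dt
        out.append((t, n, T))
    return np.array(out)

hist = simulate(rho_ext=0.2 * beta)                        # small positive reactivity insertion
print(f"power {hist[-1, 1]:.3f} P0, coolant temperature {hist[-1, 2]:.1f} K at t = {hist[-1, 0]:.1f} s")
```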
Procedia PDF Downloads 401
2601 Human Identification and Detection of Suspicious Incidents Based on Outfit Colors: Image Processing Approach in CCTV Videos
Authors: Thilini M. Yatanwala
Abstract:
CCTV (Closed-Circuit Television) surveillance systems have been used in public places for decades, and a large variety of data is produced every moment. However, most CCTV data is stored in isolation, without integration. As a result, identifying the behavior of suspicious people, along with their location, has become strenuous. This research was conducted to acquire more accurate and reliable timely information from CCTV video records. The implemented system can identify human objects in public places based on outfit colors. Inter-process communication technologies were used to implement the CCTV camera network to track people on the premises. The research was conducted in three stages: in the first stage, human objects were filtered from other movable objects present in public places; in the second stage, people were uniquely identified based on their outfit colors; and in the third stage, an individual was continuously tracked across the CCTV network. A face detection algorithm was implemented using a cascade classifier based on the training model to detect human objects. A Haar-feature-based two-dimensional convolution operator was introduced to identify features of the human face, such as the eye regions, the nose region, and the bridge of the nose, based on the darkness and lightness of the facial area. In the second stage, the outfit colors of human objects were analyzed by dividing the body area into upper left, upper right, lower left, and lower right regions. The mean color, mode color, and standard deviation of each region were extracted as crucial factors to uniquely identify a human object using a histogram-based approach. Color-based measurements were written into XML files, and separate directories were maintained to store the XML files related to each camera according to time stamp. As the third stage of the approach, inter-process communication techniques were used to implement an acknowledgement-based CCTV camera network to continuously track individuals across a network of cameras. Real-time analysis of the XML files generated by each camera can determine the path of an individual and monitor the full activity sequence. Higher efficiency was achieved by sending and receiving acknowledgments only among adjacent cameras. Suspicious incidents, such as a person staying in a sensitive area for a long period or a person disappearing from camera coverage, can be detected with this approach. The system was tested on 150 people with an accuracy level of 82%. However, the approach was unable to produce the expected results in the presence of groups of people wearing similar outfits. This approach can be applied to any existing camera network without changing the physical arrangement of the CCTV cameras. The study of human identification and suspicious incident detection using outfit color analysis can achieve a higher level of accuracy, and the project will be continued by integrating motion and gait feature analysis techniques to derive more information from CCTV videos.
Keywords: CCTV surveillance, human detection and identification, image processing, inter-process communication, security, suspicious detection
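A small sketch of the quadrant color-statistics step described above (mean, mode via a per-channel histogram peak, and standard deviation for the upper/lower, left/right body regions) is shown below. It runs on a synthetic image array so no CCTV footage is needed, and the simple half-and-half region split is an assumption for illustration.

```python
import numpy as np

def quadrant_color_stats(body_bgr: np.ndarray) -> dict:
    """Mean, mode (per-channel histogram peak), and std for each body quadrant."""
    h, w, _ = body_bgr.shape
    quadrants = {
        "upper_left":  body_bgr[: h // 2, : w // 2],
        "upper_right": body_bgr[: h // 2, w // 2:],
        "lower_left":  body_bgr[h // 2:, : w // 2],
        "lower_right": body_bgr[h // 2:, w // 2:],
    }
    stats = {}
    for name, q in quadrants.items():
        pixels = q.reshape(-1, 3)
        mode = [int(np.bincount(pixels[:, c], minlength=256).argmax()) for c in range(3)]
        stats[name] = {
            "mean": pixels.mean(axis=0).round(1).tolist(),
            "mode": mode,
            "std": pixels.std(axis=0).round(1).tolist(),
        }
    return stats

# Synthetic 'person' region: bluish shirt on top, dark trousers below.
person = np.zeros((200, 100, 3), dtype=np.uint8)
person[:100] = (180, 60, 40)     # upper body colour (B, G, R)
person[100:] = (30, 30, 30)      # lower body colour
print(quadrant_color_stats(person))
```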
Procedia PDF Downloads 183
2600 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal; it reflects how well the information in the speech signal can be understood. Background noise in the environment can deteriorate the intelligibility of a recorded speech signal. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement, and it reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation
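A minimal per-frame sketch of variable-level wavelet thresholding in the spirit of the proposed method is given below. The level-selection rule (based on frame variance) and the universal soft-threshold value are illustrative assumptions, not the authors' exact variance-subtraction criteria.

```python
import numpy as np
import pywt

def denoise_frame(frame: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    """Choose a decomposition level per frame, then soft-threshold the detail coefficients."""
    max_level = pywt.dwt_max_level(len(frame), pywt.Wavelet(wavelet).dec_len)
    # Illustrative rule: higher-variance frames get a deeper decomposition.
    level = min(max_level, 2 if np.var(frame) < 0.01 else 4)
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(frame)))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

# Synthetic noisy speech-like frame (sum of tones plus white noise).
t = np.arange(0, 0.032, 1 / 8000)
clean = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
snr = lambda x: 10 * np.log10(np.sum(clean**2) / np.sum((x - clean)**2))
print(f"frame SNR before: {snr(noisy):.1f} dB, after: {snr(denoise_frame(noisy)):.1f} dB")
```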
Procedia PDF Downloads 148
2599 Modeling the Effect of Scale Deposition on Heat Transfer in Desalination Multi-Effect Distillation Evaporators
Authors: K. Bourouni, M. Chacha, T. Jaber, A. Tchantchane
Abstract:
In Multi-Effect Distillation (MED) desalination evaporators, the scale deposited on the outside of the tubes presents a barrier to heat transfer, reducing the global heat transfer coefficient and causing a decrease in water production; hence a loss of efficiency and an increase in operating and maintenance costs. Scale removal (by acid cleaning) is the main maintenance operation and constitutes the major reason for periodic plant shutdowns. A better understanding of scale deposition mechanisms will lead to an accurate determination of the variation of scale thickness around the tubes and an improved accuracy of the overall heat transfer coefficient calculation. In this paper, a coupled heat transfer-calcium carbonate scale deposition model on a horizontal tube bundle is presented. The developed tool is used to determine precisely the heat transfer area, leading to a significant cost reduction for a given water production capacity. Simulations are carried out to investigate the influence of different parameters, such as water salinity and temperature, on the heat transfer.
Keywords: multi-effect-evaporator, scale deposition, water desalination, heat transfer coefficient
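For reference, the usual resistance-in-series form shows how a scale layer of thickness δs and conductivity ks on the outside of a tube of inner/outer diameters di, do degrades the overall coefficient Uo (a textbook expression referred to the outer area, not the coupled deposition model developed in the paper):

```latex
\frac{1}{U_o} \;=\; \frac{d_o}{h_i\,d_i} \;+\; \frac{d_o \ln(d_o/d_i)}{2\,k_w} \;+\; \frac{\delta_s}{k_s} \;+\; \frac{1}{h_o}
```

Here hi and ho are the inside and outside film coefficients and kw is the tube wall conductivity; as δs grows around the tube, the scale term increasingly dominates 1/Uo.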
Procedia PDF Downloads 151
2598 Evaluating Forecasts Through Stochastic Loss Order
Authors: Wilmer Osvaldo Martinez, Manuel Dario Hernandez, Juan Manuel Julio
Abstract:
We propose to assess the performance of k forecast procedures by exploring the distributions of forecast errors and error losses. We argue that non-systematic forecast errors are minimized when their distributions are symmetric and unimodal, and that forecast accuracy should be assessed through stochastic loss order rather than expected loss order, which is the way it is customarily done in previous work. Moreover, since forecast performance evaluation can be understood as a one-way analysis of variance, we propose to explore loss distributions under two circumstances: when a strict (but unknown) joint stochastic order exists among the losses of all forecast alternatives, and when such an order holds only among subsets of the alternative procedures. In spite of the fact that loss stochastic order is stronger than loss moment order, our proposals are at least as powerful as competing tests, and they are robust to the correlation, autocorrelation, and heteroskedasticity settings those tests consider. In addition, since our proposals do not require samples of the same size, their scope is wider, and because they test the whole loss distribution instead of just loss moments, they can also be used to study forecast distributions. We illustrate the usefulness of our proposals by evaluating a set of real-world forecasts.
Keywords: forecast evaluation, stochastic order, multiple comparison, non-parametric test
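An illustrative sketch of the underlying idea (not the authors' test) is shown below: compare two forecast procedures through the empirical CDFs of their loss distributions and check first-order stochastic dominance of the losses, rather than only comparing mean losses. The simulated loss samples are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical absolute-error losses from two forecast procedures on the same target series.
loss_a = np.abs(rng.normal(0.0, 1.0, size=500))          # method A
loss_b = np.abs(rng.normal(0.2, 1.1, size=500))          # method B: biased and noisier

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated on `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / sample.size

grid = np.linspace(0, max(loss_a.max(), loss_b.max()), 400)
Fa, Fb = ecdf(loss_a, grid), ecdf(loss_b, grid)

# First-order stochastic dominance of losses: A dominates B if Fa >= Fb everywhere
# (A's losses are stochastically smaller), a stronger statement than E[loss_A] <= E[loss_B].
print("mean losses:", loss_a.mean().round(3), loss_b.mean().round(3))
print("A stochastically dominates B:", bool(np.all(Fa >= Fb - 1e-12)))
```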
Procedia PDF Downloads 89
2597 Labview-Based System for Fiber Links Events Detection
Authors: Bo Liu, Qingshan Kong, Weiqing Huang
Abstract:
With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then a method based on the Gabor representation is used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average, compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy. The maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
Keywords: empirical mode decomposition, events detection, Gabor transform, optical time domain reflectometer, wavelet threshold denoising
Procedia PDF Downloads 123
2596 DGA Data Interpretation Using Extension Theory for Power Transformer Diagnostics
Authors: O. P. Rahi, Manoj Kumar
Abstract:
Power transformers are essential and expensive equipment in an electrical power system. Dissolved gas analysis (DGA) is one of the most useful techniques to detect incipient faults in power transformers. However, identification of the fault location by conventional methods is not always an easy task due to the variability of gas data and operational variables. In this paper, an extension theory based power transformer fault diagnosis method is presented. Extension theory tries to solve contradiction and incompatibility problems. The paper first briefly introduces the basic concept of matter-element theory, establishes the matter-element models for the three-ratio method, and then briefly discusses extension set theory. A detailed analysis is carried out on the extended relation function (ERF) adopted in this paper for transformer fault diagnosis, and the detailed diagnosis steps are given. Simulation proves that the proposed method can overcome the drawbacks of the conventional three-ratio method, such as no matching and failure to diagnose multiple faults. It enhances diagnostic accuracy.
Keywords: DGA, extension theory, ERF, power transformers, fault diagnosis, fuzzy logic
Procedia PDF Downloads 412
2595 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo
Authors: Li Minghui, Min Shaorong, Zhang Jun
Abstract:
This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. A trajectory model, a self-guided (homing) detection model, a vessel evasion model, and an anti-torpedo error model are developed in three-dimensional space, to make up for the deficiency of previous research that analyzed the confrontation with two-dimensional models. Then, using the Monte Carlo method, the simulation of the evasion confrontation process is carried out in MATLAB. Finally, the main factors that determine the vessel's survival probability are quantitatively analyzed. The results show that the evasion relative bearing and speed affect the vessel's survival probability significantly. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo alarm range and alarm relative bearing, improving the alarm range and positioning accuracy, and reducing the response time against the torpedo will improve the vessel's survival probability significantly.
Keywords: acoustic homing torpedo, vessel evasion, Monte Carlo method, torpedo defense, vessel's survival probability
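A toy Monte Carlo sketch of this kind of confrontation simulation is given below (in Python rather than the MATLAB environment the paper uses): the vessel turns onto a course offset from the torpedo alarm bearing by a chosen relative bearing, the torpedo homes in pure pursuit with a limited run length, and survival probability is estimated over many random trials. The 2-D geometry, speeds, ranges, and hit radius are invented illustrative numbers, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(42)
KT = 0.5144                               # knots -> m/s

def survival_probability(rel_bearing_deg, vessel_kt=24.0, n_trials=500):
    """Estimate survival probability for a given evasion course relative to the torpedo bearing."""
    torp_speed, torp_run, hit_radius, dt = 45.0 * KT, 9000.0, 50.0, 2.0
    survived = 0
    for _ in range(n_trials):
        bearing = rng.uniform(0.0, 2.0 * np.pi)               # torpedo alarm bearing from the vessel
        torp = rng.uniform(3000.0, 6000.0) * np.array([np.cos(bearing), np.sin(bearing)])
        vessel = np.zeros(2)
        course = bearing + np.pi + np.deg2rad(rel_bearing_deg)  # run away, offset by the relative bearing
        v_vel = vessel_kt * KT * np.array([np.cos(course), np.sin(course)])
        run, hit = 0.0, False
        while run < torp_run:                                  # torpedo stops when its run is exhausted
            to_vessel = vessel - torp
            d = np.linalg.norm(to_vessel)
            if d < hit_radius:
                hit = True
                break
            torp += torp_speed * dt * to_vessel / d            # pure-pursuit homing step
            vessel += v_vel * dt
            run += torp_speed * dt
        survived += not hit
    return survived / n_trials

for rb in (0.0, 45.0, 90.0):
    print(f"relative bearing {rb:>5.1f} deg -> survival ~ {survival_probability(rb):.2f}")
```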
Procedia PDF Downloads 455
2594 A Dynamic Approach for Evaluating the Climate Change Risks on Building Performance
Authors: X. Lu, T. Lu, S. Javadi
Abstract:
A simple dynamic approach is presented for analyzing the thermal and moisture dynamics of buildings, which is of particular relevance to understanding climate change impacts on buildings, including the assessment of risks and the application of resilience strategies. With the goals of demonstrating the proposed modeling methodology, verifying the model, and showing that wooden materials provide a mechanism that can facilitate the reduction of moisture risks and be more resilient to global warming, a wooden church equipped with high-precision measurement systems was taken as the test building for full-scale time-series measurements. Sensitivity analyses indicate a high degree of accuracy in the model prediction of the indoor environment. The model is then applied to a future projection of the indoor climate, aiming to identify the significant environmental factors, the changing temperature and humidity, and effective responses to the climate change impacts. The paper suggests that wooden building materials offer an effective and resilient response to anticipated future climate changes.
Keywords: dynamic model, forecast, climate change impact, wooden structure, buildings
Procedia PDF Downloads 151
2593 Information System Development for Online Journal System Using Online Journal System for Journal Management of Suan Sunandha Rajabhat University
Authors: Anuphan Suttimarn, Natcha Wattanaprapa, Suwaree Yordchim
Abstract:
The aim of this study is to develop an online journal system using a web application to manage the journal service of Suan Sunandha Rajabhat University, in order to improve the university's journal management. The main structures of the system process consist of 1) a journal content management system, 2) a journal membership system, and 3) an online submission and review process. The investigators developed the system as a web application using the open source OJS software and phpMyAdmin to manage the research database. The system test showed that this online system, the Online Journal System (OJS), could shorten the time from article submission to journal publication and helped in managing the journal procedure efficiently and accurately. The quality evaluation of the Suan Sunandha Rajabhat online journal system (SSRUOJS), undertaken by experts and researchers on 5 aspects (design, usability, security, time reduction, and accuracy), showed the highest average value (X=4.30) on the aspect of time reduction. Meanwhile, the system efficiency evaluation was at an excellent level (X=4.13).
Keywords: online journal system, journal management, information system development, OJS
Procedia PDF Downloads 175
2592 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town
Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid
Abstract:
Water is of great importance in life. In order to deliver water from sources to users, many procedures must be undertaken by water engineers. One of the main procedures for delivering water to the community is designing pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute the water resources across the town with the smallest losses. The literature has been reviewed to cover the main points related to water distribution. Moreover, the methodology introduces two approaches to solve the research problem: one by the iterative Hardy-Cross method and the other by the water software Pipe Flow. The results present two designs that satisfy the same research requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
Keywords: looping pipe networks, Hardy-Cross network accuracy, relative error of Hardy-Cross method
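A minimal sketch of the Hardy-Cross loop-correction iteration for a single closed loop is given below, using the head-loss relation h = k·Q·|Q|^(n-1) with the Hazen-Williams exponent n = 1.85. The pipe resistance coefficients and the initial flow guesses are invented for illustration, not the town network designed in the paper.

```python
# Hardy-Cross correction for one closed loop: flows Q are signed (clockwise positive),
# head loss h = k * Q * |Q|**(n-1), loop correction dQ = -sum(h) / (n * sum(|h/Q|)).
n = 1.85                                    # Hazen-Williams exponent
k = [4.0, 2.0, 5.0, 3.0]                    # illustrative pipe resistance coefficients
Q = [0.30, 0.10, -0.20, -0.05]              # initial flow guesses (m^3/s), signed around the loop

for it in range(20):
    h = [ki * qi * abs(qi) ** (n - 1) for ki, qi in zip(k, Q)]
    dQ = -sum(h) / (n * sum(abs(hi / qi) for hi, qi in zip(h, Q) if qi != 0))
    Q = [qi + dQ for qi in Q]               # the same correction is applied to every pipe in the loop
    if abs(dQ) < 1e-6:
        break

print(f"converged after {it + 1} iterations, loop flows:", [round(q, 4) for q in Q])
print("residual head loss around loop:",
      round(sum(ki * qi * abs(qi) ** (n - 1) for ki, qi in zip(k, Q)), 6))
```

In a real network the same correction step is repeated over every loop (with shared pipes corrected by both loops) until all loop head-loss sums vanish, which is the balance condition the software solution is compared against.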
Procedia PDF Downloads 165
2591 Prediction of the Transmittance of Various Bended Angles Lightpipe by Using Neural Network under Different Sky Clearness Condition
Authors: Li Zhang, Yuehong Su
Abstract:
The lightpipe, as a mature solar light tube technique, has been employed worldwide. Accurately assessing the performance of a lightpipe and evaluating the available daylighting has been a challenging topic. Previous research has used regression models and computational simulation methods to estimate the performance of lightpipes. However, due to the nonlinear nature of solar light transfer in a lightpipe, the methods mentioned above suffer from inaccuracy and high time cost. In the present study, a neural network model is investigated as an alternative method to predict the transmittance of a lightpipe. Four types of commercial lightpipe with bend angles of 0°, 30°, 45°, and 60° are discussed under clear, intermediate, and overcast sky conditions, respectively. The neural network is generated in MATLAB by using the outcomes of simulations in the optical software Photopia as targets for network training and testing. The coefficient of determination (R²) for each model is higher than 0.98, and the mean square error (MSE) is less than 0.0019, which indicate the neural network's strong predictive ability and that the neural network method could be an efficient technique for determining the performance of lightpipes.
Keywords: neural network, bended lightpipe, transmittance, Photopia
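The following is a small sketch of the same train-and-score workflow (in Python rather than the MATLAB environment used in the study): fit a small MLP to predict transmittance from bend angle and sky condition, then report R² and MSE. The synthetic target function and the exact input features are assumptions standing in for the Photopia simulation outcomes.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Inputs: bend angle (deg), sky clearness index, solar altitude (deg) -- illustrative stand-ins
# for the simulated cases; target is lightpipe transmittance in [0, 1].
n = 1200
angle = rng.choice([0.0, 30.0, 45.0, 60.0], n)
clearness = rng.uniform(0.0, 1.0, n)
altitude = rng.uniform(10.0, 80.0, n)
tau = (0.75 - 0.004 * angle + 0.15 * clearness * np.sin(np.deg2rad(altitude))
       + rng.normal(0.0, 0.01, n))                     # synthetic 'simulation' target

X = np.column_stack([angle, clearness, altitude])
X_tr, X_te, y_tr, y_te = train_test_split(X, tau, test_size=0.25, random_state=0)

net = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(16, 16),
                                                   max_iter=3000, random_state=0))
net.fit(X_tr, y_tr)
pred = net.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, MSE = {mean_squared_error(y_te, pred):.5f}")
```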
Procedia PDF Downloads 152
2590 Cost-Effective Indoor-Air Quality (IAQ) Monitoring via Cavity Enhanced Photoacoustic Technology
Authors: Jifang Tao, Fei Gao, Hong Cai, Yuan Jin Zheng, Yuan Dong Gu
Abstract:
Photoacoustic technology measures the effective absorption of light by means of acoustic detection, which provides a highly sensitive, low-cross-response, cost-effective solution for gas molecule detection. In this paper, we propose an integrated photoacoustic sensor for indoor air quality (IAQ) monitoring. The sensor consists of an acoustically resonant cavity, a high-sensitivity silicon acoustic transducer chip, and a low-cost light source. The light is modulated at the resonant frequency of the cavity to create enhanced periodic heating, resulting in an amplified acoustic pressure wave. The pressure is read out by a novel acoustic transducer with low noise. Based on this photoacoustic sensor, typical indoor gases, including CO2, CO, O2, and H2O, have been successfully detected, and their concentrations evaluated with very high accuracy. The sensor has wide potential applications in IAQ monitoring for agriculture, the food industry, and ventilation control systems used in public places such as schools, hospitals, and airports.
Keywords: indoor-air quality (IAQ) monitoring, photoacoustic gas sensor, cavity enhancement, integrated gas sensor
Procedia PDF Downloads 658
2589 An Innovative Green Cooling Approach Using Peltier Chip in Milling Operation for Surface Roughness Improvement
Authors: Md. Anayet U. Patwari, Mohammad Ahsan Habib, Md. Tanzib Ehsan, Md Golam Ahnaf, Md. S. I. Chowdhury
Abstract:
Surface roughness is one of the key quality parameters of a finished product. During any machining operation, high temperatures are generated at the tool-chip interface, impairing the surface quality and dimensional accuracy of products. Cutting fluids are generally applied during machining to reduce the temperature at the tool-chip interface. However, the use of cutting fluids gives rise to problems such as waste disposal, pollution, high cost, and human health hazards. Researchers nowadays are opting for dry machining and other cooling techniques to minimize the use of coolants during machining while keeping the surface roughness of products within desirable limits. In this paper, the concept of using the Peltier cooling effect during an aluminium milling operation has been presented and adopted with the aim of improving the surface roughness of the machined surface. Experimental evidence shows that the Peltier cooling effect provides better surface roughness of the machined surface compared to dry machining.
Keywords: aluminium, milling operation, Peltier cooling effect, surface roughness
Procedia PDF Downloads 337
2588 Hydrodynamic Behavior Study of Fast Mono Hull and Catamaran Vessels in Calm Waters Using Free Surface Flow Analysis
Authors: Mohammad Ali Badri, Pouya Molana, Amin Rezvanpour
Abstract:
In this paper, the resistance and trim of planing catamaran and mono-hull vessels in calm water were considered. The hydrodynamic analysis of a fast mono-hull planing vessel was also investigated. For hull form geometry optimization, numerical methods over different parameters were used for this type of vessel. The hull material selected was carbon fiber composite. Exact architectural aspects were specified, and stability calculations were performed as well. Hydrodynamic calculations to extract the resistance force were carried out using semi-analytical methods and numerical modeling. Free-surface numerical analysis of the vessel at the design draft, using a two-phase finite volume method, was evaluated and verified by experimental tests.
Keywords: fast vessel, hydrostatic and hydrodynamic optimization, free surface flow, computational fluid dynamics
Procedia PDF Downloads 516
2587 Neural Network Approach for Solving Integral Equations
Authors: Bhavini Pandya
Abstract:
This paper considers Hη: T² → T², the perturbed Cerbelli-Giona map. This is a family of two-dimensional nonlinear area-preserving transformations on the torus T² = [0,1]×[0,1] = ℝ²/ℤ². A single parameter η varies between 0 and 1, taking the transformation from a hyperbolic toral automorphism to the 'Cerbelli-Giona' map, a system known to exhibit multifractal properties. Here we study the multifractal properties of this family of maps. We apply a box-counting method, defining a grid of boxes Bi(δ), where i is the index and δ is the size of the boxes, to quantify the distribution of the stable and unstable manifolds of the map. When the parameter is in the ranges 0.51 < η < 0.58 and 0.68 < η < 1, the map is ergodic; i.e., the unstable and stable manifolds eventually cover the whole torus, although not with a uniform distribution. Accurate numerical results require correspondingly accurate construction of the stable and unstable manifolds. Here we use the piecewise linearity of the map to achieve this, by computing the endpoints of the line segments which define the global stable and unstable manifolds. This allows the generalized fractal dimension Dq and the spectrum of dimensions f(α) to be computed accurately. Finally, the intersection of the unstable and stable manifolds of the map is investigated and compared with the distribution of periodic points of the system.
Keywords: feed forward, gradient descent, neural network, integral equation
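A small sketch of the box-counting step is given below: cover the unit torus with grids of boxes of size δ = 1/m and fit the slope of log N(δ) against log(1/δ) to estimate the dimension D0. The point set here is the orbit of a simple hyperbolic toral automorphism (the Arnold cat map), used purely as a stand-in, not the Cerbelli-Giona manifolds constructed in the paper.

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate D0 of a point set in [0,1)^2 from the box-count scaling N(delta) ~ delta^-D0."""
    counts = []
    for m in sizes:                                    # m boxes per side, box size delta = 1/m
        idx = np.clip(np.floor(points * m).astype(int), 0, m - 1)
        counts.append(len({(i, j) for i, j in idx}))   # number of occupied boxes N(delta)
    slope, _ = np.polyfit(np.log(np.array(sizes, dtype=float)), np.log(counts), 1)
    return slope

# Illustrative orbit on the torus: the cat map (x, y) -> (2x + y, x + y) mod 1,
# a hyperbolic toral automorphism whose typical orbit fills the torus (D0 -> 2).
rng = np.random.default_rng(0)
x = rng.random(2)
orbit = []
for _ in range(20000):
    x = np.array([(2 * x[0] + x[1]) % 1.0, (x[0] + x[1]) % 1.0])
    orbit.append(x)
print(f"estimated box-counting dimension: {box_counting_dimension(np.array(orbit)):.2f}")
```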
Procedia PDF Downloads 189
2586 Unsteady Simulation of Burning Off Carbon Deposition in a Coke Oven
Authors: Uzu-Kuei Hsu, Keh-Chin Chang, Joo-Guan Hang, Chang-Hsien Tai
Abstract:
Carbon deposits often occur inside industrial coke ovens during the coking process. The accumulation of carbon deposits may cause serious issues for the coking operation. The carbon is burned off by injecting fresh air through pipes into the coke oven, an efficient approach widely used in industry. A Computational Fluid Dynamics (CFD) study of burning off the carbon deposits in a coke oven provides an evaluation of the feasibility of this approach. A three-dimensional, transient, turbulent reacting flow simulation has been performed with three different injection air flow rates and an alternative injection configuration. The results show that injecting a higher air flow rate effectively reduces the carbon deposits. In the meantime, the opened charging holes draw extra oxygen from the atmosphere to participate in the reactions. In terms of coke oven operating limits, the wall temperatures are monitored to prevent over-heating of the adiabatic walls during the burn-off process.
Keywords: coke oven, burning off, carbon deposits, carbon combustion, CFD
Procedia PDF Downloads 693
2585 Numerical Investigation for External Strengthening of Dapped-End Beams
Authors: A. Abdel-Moniem, H. Madkour, K. Farah, A. Abdullah
Abstract:
The reduction in the depth of dapped-end beams near the supports tends to produce stress concentrations and hence results in shear cracks if adequate reinforcement detailing is not provided. This study numerically investigates the efficiency of applying different external strengthening techniques to the dapped ends of such beams. A two-dimensional finite element model was built to predict the structural behavior of dapped ends strengthened with different techniques. The techniques included external bonding of a steel angle at the re-entrant corner, un-bonded bolt anchoring, external steel plate jacketing, exterior carbon fiber wrapping and/or stripping, and external inclined steel plates. The FE analysis results are then presented in terms of the ultimate load capacities, load-deflection curves, and crack patterns at failure. The results showed that the FE model was comparable to the available test data at the various stages. Moreover, it enabled the capture of the failure progress, with acceptable accuracy, which is very difficult in a laboratory test.
Keywords: dapped-end beams, finite element, shear failure, strengthening techniques, reinforced concrete, numerical investigation
Procedia PDF Downloads 117
2584 A Family of Second Derivative Methods for Numerical Integration of Stiff Initial Value Problems in Ordinary Differential Equations
Authors: Luke Ukpebor, C. E. Abhulimen
Abstract:
Stiff initial value problems in ordinary differential equations are problems for which a typical solution decays rapidly (exponentially), and their numerical investigation is very tedious. Conventional numerical integration solvers cannot cope effectively with stiff problems as they lack adequate stability characteristics. In this article, we developed a new family of four-step, second derivative, exponentially fitted methods of order six for the numerical integration of stiff initial value problems of general first-order differential equations. In deriving our method, we employed the idea of breaking down the general multi-derivative multistep method into predictor and corrector schemes which possess free parameters that allow for automatic fitting to exponential functions. The stability analysis of the method is discussed, and the method is implemented on numerical examples. The results show that the method is A-stable and competes favorably with existing methods in terms of efficiency and accuracy.
Keywords: A-stable, exponentially fitted, four step, predictor-corrector, second derivative, stiff initial value problems
Procedia PDF Downloads 258
2583 Effect of Process Variables of Wire Electrical Discharge Machining on Surface Roughness for AA-6063 by Response Surface Methodology
Authors: Deepak
Abstract:
WEDM is a highly capable wire-based process for machining hard metal alloys and metal matrix composites without making contact. Wire electrical discharge machining is a developing non-conventional machining process for machining hard-to-machine materials that are electrically conductive. It is an exceptionally exact and precise process, and one of the most popular forms of nontraditional machining. WEDM has become a fundamental part of many manufacturing process industries that require precision, variety, and accuracy. In the present study, AA-6063 is used as the workpiece, and a performance investigation is carried out to identify the critical control factors. The impact of different parameters, such as pulse-on time, pulse-off time, servo voltage, peak current, water pressure, wire tension, and wire feed, on surface roughness has been investigated while machining AA-6063. RSM has been utilized to optimize the output variable. The variation of the performance measures with the input factors was modeled by using response surface methodology.
Keywords: AA-6063, response surface methodology, WEDM, surface roughness
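For reference, the second-order model that RSM typically fits when relating surface roughness Ra to k coded process variables xi (pulse-on time, pulse-off time, servo voltage, and so on) has the standard form below, with the coefficients estimated from the experimental design; this is the generic RSM form, not the specific fitted equation from the study.

```latex
Ra \;=\; \beta_0 \;+\; \sum_{i=1}^{k}\beta_i x_i \;+\; \sum_{i=1}^{k}\beta_{ii} x_i^2 \;+\; \sum_{i<j}\beta_{ij} x_i x_j \;+\; \varepsilon
```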
Procedia PDF Downloads 116
2582 Computer-Aided Exudate Diagnosis for the Screening of Diabetic Retinopathy
Authors: Shu-Min Tsao, Chung-Ming Lo, Shao-Chun Chen
Abstract:
Most diabetes patients tend to suffer from the complication of retinal diseases. Therefore, early detection and early treatment are important. In clinical examinations, the color fundus image is the most convenient and widely available examination method. According to the exudates that appear in the retinal image, the status of the retina can be confirmed. However, the routine screening of diabetic retinopathy from color fundus images would impose time-consuming tasks on physicians. This study therefore proposes computer-aided exudate diagnosis for the screening of diabetic retinopathy. After removing the vessels and the optic disc from the retinal image, six quantitative features, including region number, region area, and gray-scale values, were extracted from the remaining regions for classification. As a result, all six features were evaluated to be statistically significant (p-value < 0.001). The accuracy of classifying the retinal images into normal and diabetic retinopathy reached 82%. Based on this system, the clinical workload could be reduced, and the examination procedure could be made more efficient.
Keywords: computer-aided diagnosis, diabetic retinopathy, exudate, image processing
Procedia PDF Downloads 270
2581 Exploring the Applications of Modular Forms in Cryptography
Authors: Berhane Tewelday Weldhiwot
Abstract:
This research investigates the pivotal role of modular forms in modern cryptographic systems, particularly focusing on their applications in secure communications and data integrity. Modular forms, which are complex analytic functions with rich arithmetic properties, have gained prominence due to their connections to number theory and algebraic geometry. This study begins by outlining the fundamental concepts of modular forms and their historical development, followed by a detailed examination of their applications in cryptographic protocols such as elliptic curve cryptography and zero-knowledge proofs. By employing techniques from analytic number theory, the research delves into how modular forms can enhance the efficiency and security of cryptographic algorithms. The findings suggest that leveraging modular forms not only improves computational performance but also fortifies security measures against emerging threats in digital communication. This work aims to contribute to the ongoing discourse on integrating advanced mathematical theories into practical applications, ultimately fostering innovation in cryptographic methodologies.
Keywords: modular forms, cryptography, elliptic curves, applications, mathematical theory
Procedia PDF Downloads 16