Search results for: forecasting accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4142

2642 Engineering Optimization of Flexible Energy Absorbers

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Elastic energy absorbers consisting of a ring-like plate and springs can be a good choice for increasing the impact duration during an accident. In the current project, an energy absorber system is optimized using four optimization methods: Kuhn-Tucker, Sequential Linear Programming (SLP), Concurrent Subspace Design (CSD), and Pshenichny-Lim-Belegundu-Arora (PLBA). Solution time, convergence, programming length, and accuracy of the results were considered to find the best solution algorithm. The results showed the superiority of PLBA over the other algorithms.

Keywords: Concurrent Subspace Design (CSD), Kuhn-Tucker, Pshenichny-Lim-Belegundu-Arora (PLBA), Sequential Linear Programming (SLP)

Procedia PDF Downloads 399
2641 Detection of Chaos in General Parametric Model of Infectious Disease

Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari

Abstract:

Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions.
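
For readers unfamiliar with the 0-1 test, the following is a minimal Python sketch of a simplified, single-frequency variant of the Gottwald-Melbourne procedure, demonstrated on the logistic map in its chaotic regime; the constant c, the cutoff, and the demonstration series are illustrative choices, not the paper's settings.

```python
import numpy as np

def zero_one_test(phi, c=1.7):
    """Simplified 0-1 test: K near 1 suggests chaos, K near 0 regular dynamics."""
    n = len(phi)
    j = np.arange(1, n + 1)
    # Translation variables driven by the observed time series
    p = np.cumsum(phi * np.cos(j * c))
    q = np.cumsum(phi * np.sin(j * c))
    # Mean square displacement, truncated at n // 10 to keep the estimate stable
    ncut = n // 10
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in range(1, ncut)])
    # K is the correlation of M with linear growth in time
    return np.corrcoef(np.arange(1, ncut), M)[0, 1]

# Demonstration: logistic map x -> 4x(1 - x), a well-known chaotic system
x, series = 0.1, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
print(zero_one_test(np.array(series)))  # expected to be close to 1
```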

Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test

Procedia PDF Downloads 326
2640 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9

Authors: Ulrich Wake, Eniman Syamsuddin

Abstract:

The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone and is divided into 1280 photos for training, 320 for validation, and 400 for testing. The model is trained using the One Cycle Policy method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epochs using regularization techniques such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.
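
As a rough illustration of the training recipe described above (pretrained ResNet50, One Cycle Policy, dropout and early stopping, Test-Time Augmentation), here is a hedged sketch using the fastai library; the folder layout, image size, and patience value are assumptions rather than details from the paper.

```python
from fastai.vision.all import *

# Hypothetical layout: one subfolder of photos per phone model
path = Path('phone_photos')  # assumed path, not from the paper
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

# Pretrained ResNet50; fine-tuning slightly updates the pretrained weights,
# and the default fastai head already includes dropout
learn = vision_learner(dls, resnet50, metrics=accuracy)
learn.fit_one_cycle(50, cbs=EarlyStoppingCallback(monitor='valid_loss', patience=5))

# Test-Time Augmentation: average predictions over several augmented views
preds, targs = learn.tta()
print(accuracy(preds, targs))
```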

Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation

Procedia PDF Downloads 208
2639 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive, so a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity, and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in Wenner’s array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It was also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
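
A minimal sketch of the GLCM step using scikit-image is given below; the probe distances, angles, and descriptor set are common defaults and only assumptions about what the authors computed.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """GLCM textural parameters from an 8-bit grayscale soil image."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Haralick-style descriptors, averaged over the four angles, that can
    # serve as inputs to the ANN classifier
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'dissimilarity', 'homogeneity',
                         'energy', 'correlation')}

# Example with a random 8-bit image standing in for a soil photograph
img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print(glcm_features(img))
```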

Keywords: Computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 219
2638 Modeling and Validation of Microspheres Generation in the Modified T-Junction Device

Authors: Lei Lei, Hongbo Zhang, Donald J. Bergstrom, Bing Zhang, K. Y. Song, W. J. Zhang

Abstract:

This paper presents a model of a modified T-junction device for microsphere generation. The numerical model is developed using a commercial software package, COMSOL Multiphysics. In order to test the accuracy of the numerical model, multiple variables, such as the flow rate of the cross-flow, the fluid properties, and the structure and geometry of the microdevice, are applied. The results from the model are compared with the experimental results in terms of the diameter of the generated microspheres, and the comparison shows good agreement. Therefore, the model is useful for further optimization of the device and, if needed, feedback control of microsphere generation.

Keywords: CFD modeling, validation, microsphere generation, modified T-junction

Procedia PDF Downloads 707
2637 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to the busyness of daily life, the consumption of fast food is increasing; therefore, the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, in order to determine all three of these parameters, an imaging method must be used, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the cancer risk and the absorbed dose for the patient are high, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next steps were feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. To validate the proposed method, 10-fold cross-validation was used: the data was randomly divided into three parts, training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among the study's advantages.
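
A hedged Keras sketch of the described VGG-16 pipeline follows: the 13 convolution layers of VGG-16 act as the feature extractor, and three fully connected layers ending in softmax perform the three-class TNM classification; the dense-layer widths, input size, and optimizer are illustrative assumptions.

```python
import tensorflow as tf

# VGG-16 convolutional base (13 conv layers) as the feature extractor
base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(224, 224, 3), pooling='avg')

# Three fully connected layers; the last uses softmax over the classes
# T0N0, T3N1, and T3N2 (layer widths are assumptions)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```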

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 59
2636 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
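
The experimental setup lends itself to a compact sketch: train the same CNN on CIFAR-10 once per optimizer and compare validation accuracy. The toy architecture and epoch count below are assumptions, not the paper's configuration.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    # Small ReLU CNN with a softmax output; a stand-in for the paper's model
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

optimizers = {'Adam': tf.keras.optimizers.Adam(),
              'RMSprop': tf.keras.optimizers.RMSprop(),
              'Adadelta': tf.keras.optimizers.Adadelta(),
              'Adagrad': tf.keras.optimizers.Adagrad(),
              'SGD': tf.keras.optimizers.SGD()}

for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    hist = model.fit(x_train, y_train, epochs=10, verbose=0,
                     validation_data=(x_test, y_test))
    print(name, max(hist.history['val_accuracy']))
```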

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 125
2635 A Comparative Study of Deep Learning Methods for COVID-19 Detection

Authors: Aishrith Rao

Abstract:

COVID-19 is a pandemic which has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a major issue, as test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images to detect the coronavirus is extremely economical and time-saving, since these images contain information about the spread of the virus in the lungs, and the approach can be used in areas that lack testing facilities. This paper focuses on binary classification and multi-class classification of COVID-19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods such as VGG-19, COVID-Net, ResNet + SVM, Deep CNN, DarkCovidNet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset.

Keywords: deep learning, computer vision, radiology, COVID-19, ResNet, VGG-19, deep neural networks

Procedia PDF Downloads 160
2634 Element-Independent Implementation for Method of Lagrange Multipliers

Authors: Gil-Eon Jeong, Sung-Kie Youn, K. C. Park

Abstract:

Treatment of non-matching interfaces is an important computational issue. To handle this problem, the method of Lagrange multipliers, in both its classical and localized versions, is the most popular technique. It essentially imposes the interface compatibility conditions by introducing Lagrange multipliers. However, the numerical system becomes unstable and inefficient due to the Lagrange multipliers. An interface element-independent formulation that does not include the Lagrange multipliers can be obtained by modifying the independent variables mathematically. Through this modification, a more efficient and stable system can be achieved while retaining accuracy equivalent to that of the conventional method. A numerical example is conducted to verify the validity of the presented method.
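
For context, the classical method couples the subdomains through a saddle-point system; a standard textbook form (not the paper's exact notation) is

```latex
\begin{bmatrix} K & B^{\mathsf{T}} \\ B & 0 \end{bmatrix}
\begin{bmatrix} u \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f \\ g \end{bmatrix}
```

where K is the stiffness matrix, B encodes the interface compatibility conditions, and λ collects the Lagrange multipliers. The zero diagonal block is what makes the system indefinite and potentially unstable, which motivates eliminating the multipliers as the abstract describes.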

Keywords: element-independent formulation, interface coupling, methods of Lagrange multipliers, non-matching interface

Procedia PDF Downloads 403
2633 Retrieval of Aerosol Optical Depth and Correlation Analysis of PM2.5 Based on GF-1 Wide Field of View Images

Authors: Bo Wang

Abstract:

This paper proposes a method that can estimate PM2.5 from Wide Field of View (WFOV) images acquired by the GF-1 satellite. AOD (Aerosol Optical Depth) over land surfaces was retrieved in the Shanghai area based on the DDV (Dark Dense Vegetation) method. PM2.5 data, gathered hourly from ground monitoring stations, was fitted against AOD using polynomials of different orders, and the correlation coefficient between them was then calculated. The results showed that GF-1 WFOV images can meet the requirements of AOD retrieval and that the correlation coefficient between the retrieved AOD and PM2.5 was high. If more detailed and comprehensive data were provided, the accuracy could be improved and the parameters made more precise in the future.
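
The fitting step reduces to ordinary polynomial regression; a minimal numpy sketch with invented placeholder values (not the Shanghai measurements) is shown below.

```python
import numpy as np

# Hypothetical hourly station PM2.5 paired with AOD retrieved at overpass time
aod = np.array([0.21, 0.35, 0.48, 0.60, 0.72, 0.88])     # assumed values
pm25 = np.array([35.0, 52.0, 70.0, 88.0, 101.0, 124.0])  # assumed values, ug/m3

for degree in (1, 2, 3):
    coeffs = np.polyfit(aod, pm25, degree)       # polynomial coefficients
    fitted = np.polyval(coeffs, aod)
    r = np.corrcoef(pm25, fitted)[0, 1]          # correlation coefficient
    print(f"degree {degree}: r = {r:.3f}")
```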

Keywords: remote sensing retrieval, PM2.5, GF-1, aerosol optical depth

Procedia PDF Downloads 244
2632 Auditory Rehabilitation via a VR Serious Game for Children with Cochlear Implants: Bio-Behavioral Outcomes

Authors: Areti Okalidou, Paul D. Hatzigiannakoglou, Aikaterini Vatou, George Kyriafinis

Abstract:

Young children are nowadays adept at using technology. Hence, computer-based auditory training programs (CBATPs) have become increasingly popular in aural rehabilitation for children with hearing loss and/or with cochlear implants (CI). Yet, their clinical utility for prognostic, diagnostic, and monitoring purposes has not been explored. The purposes of the study were: a) to develop an updated version of the auditory rehabilitation tool for Greek-speaking children with cochlear implants, b) to develop a database for behavioral responses, and c) to compare accuracy rates and reaction times in children differing in hearing status and other medical and demographic characteristics, in order to assess the tool’s clinical utility in prognosis, diagnosis, and progress monitoring. The updated version of the auditory rehabilitation tool was developed on a tablet, retaining the User-Centered Design approach and the elements of the Virtual Reality (VR) serious game. The visual stimuli were farm animals acting in simple game scenarios designed to trigger children’s responses to animal sounds, names, and relevant sentences. Based on an extended version of Erber’s auditory development model, the VR game consisted of six stages, i.e., sound detection, sound discrimination, word discrimination, identification, comprehension of words in a carrier phrase, and comprehension of sentences. A familiarization (learning) stage was set prior to the game. Children’s tactile responses were recorded as correct, false, or impulsive, following a child-dependent setup of a valid delay time after stimulus offset for valid responses. Reaction times were also recorded, and the database was in Excel format. The tablet version of the auditory rehabilitation tool was piloted with 22 preschool children with Normal Hearing (NH), which led to improvements. The study took place in clinical settings or at children’s homes. Fifteen children with CI, aged 5;7-12;3 years and 0;11-5;1 years post-implantation, used the auditory rehabilitation tool. Eight children with CI were monolingual, two were bilingual, and five had additional disabilities. The control group consisted of 13 children with NH, aged 2;6-9;11 years. A comparison of both accuracy rates (as percent correct) and reaction times (in seconds) was made at each stage, across hearing status and age, and also, within the CI group, based on the presence of additional disability and bilingualism. Both monolingual Greek-speaking children with CI with no additional disabilities and hearing peers showed high accuracy rates at all stages, with performances falling above the 3rd quartile. However, children with normal hearing scored higher than the children with CI, especially in the detection and word discrimination tasks. The reaction time differences between the two groups decreased in language-based tasks. Results for children with CI with an additional disability or bilingualism varied. Finally, older children scored higher than younger ones in both groups (CI, NH), but larger differences occurred in children with CI. The interactions between familiarization with the software, age, hearing status, and demographic characteristics are discussed. Overall, the VR game is a promising tool for tracking the development of auditory skills, as it provides multi-level longitudinal empirical data. Acknowledgment: This work is part of a project that has received funding from the Research Committee of the University of Macedonia under the Basic Research 2020-21 funding programme.

Keywords: VR serious games, auditory rehabilitation, auditory training, children with cochlear implants

Procedia PDF Downloads 89
2631 Single-Element Simulations of Wood Material in LS-DYNA

Authors: Ren Zuo Wang

Abstract:

In this paper, in order to investigate the behavior of wood structures, the nonlinear wood material model in LS-DYNA is adopted. It is difficult and inefficient to conduct experiments on ancient wood structures; hence, the LS-DYNA software can be used to simulate the nonlinear responses of ancient wood structures. In LS-DYNA, there is a material model called *MAT_WOOD (*MAT_143). This model simulates the single-element response of wood subjected to tension and compression in the parallel and perpendicular material directions. Comparison of the exact solution with the numerical simulation results from LS-DYNA demonstrates the accuracy and efficiency of the proposed simulation method.

Keywords: LS-DYNA, wood structure, single-element simulations, MAT_143

Procedia PDF Downloads 654
2630 Highly Accurate Tennis Ball Throwing Machine with Intelligent Control

Authors: Ferenc Kovács, Gábor Hosszú

Abstract:

The paper presents an advanced control system for tennis ball throwing machines that improves their accuracy with respect to the ball impact points. A further advantage of the system is the much easier calibration process, involving the intelligent solution of the automatic adjustment of the stroking parameters according to the ball elasticity, self-calibration, the use of a safety margin for very flat strokes, and the possibility of placing the machine at any position on the half court. The system applies mathematical methods to determine the exact ball trajectories and special approximation processes to reach all points on the targeted half court.
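
As a minimal sketch of the trajectory computation, the snippet below integrates a point-mass ball with quadratic air drag until it lands; the drag constant, launch height, and stroke parameters are illustrative assumptions, not the machine's calibrated values.

```python
import numpy as np

def impact_point(v0, elev_deg, azim_deg=0.0, h0=1.0, k_drag=0.015, dt=1e-3):
    """Integrate a ball trajectory with quadratic drag until it lands (y = 0);
    returns the horizontal impact coordinates on the court."""
    g = 9.81
    elev, azim = np.radians(elev_deg), np.radians(azim_deg)
    pos = np.array([0.0, h0, 0.0])
    vel = v0 * np.array([np.cos(elev) * np.cos(azim),
                         np.sin(elev),
                         np.cos(elev) * np.sin(azim)])
    while pos[1] > 0.0:
        speed = np.linalg.norm(vel)
        acc = np.array([0.0, -g, 0.0]) - k_drag * speed * vel  # gravity + drag
        vel = vel + acc * dt
        pos = pos + vel * dt
    return pos[0], pos[2]

print(impact_point(v0=20.0, elev_deg=10.0))  # downrange and lateral landing point
```

Inverting such a model, i.e., finding the stroke parameters that produce a desired impact point, is then a two-variable root-finding problem, which is where the approximation processes mentioned above come in.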

Keywords: control system, robot programming, robot control, sports equipment, throwing machine

Procedia PDF Downloads 397
2629 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach that requires the solution of one additional problem per derivative.
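
As background, for a discretized forward problem the generic adjoint-state identity on which such formulations build can be written as

```latex
A(p)\,u = f(p), \qquad
A(p)^{\mathsf{T}}\lambda = \Big(\frac{\partial J}{\partial u}\Big)^{\mathsf{T}}, \qquad
\frac{\mathrm{d}J}{\mathrm{d}p} = \lambda^{\mathsf{T}}\Big(\frac{\partial f}{\partial p} - \frac{\partial A}{\partial p}\,u\Big)
```

where p is an inversion variable (here, a bed boundary position) and J a recorded measurement, so all derivatives cost a single additional adjoint solve rather than one perturbed forward solve per parameter. The paper's specific contribution, the separate treatment of the tangential and normal field components needed when p moves a material interface, goes beyond this generic sketch.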

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 178
2628 Optimal Control of Volterra Integro-Differential Systems Based on Legendre Wavelets and Collocation Method

Authors: Khosrow Maleknejad, Asyieh Ebrahimzadeh

Abstract:

In this paper, the numerical solution of the optimal control problem (OCP) for systems governed by a Volterra integro-differential (VID) equation is considered. The method is developed by means of Legendre wavelet approximation and the collocation method. The properties of Legendre wavelets, together with the Gaussian integration method, are utilized to reduce the problem to a nonlinear programming problem. Some numerical examples are given to confirm the accuracy and ease of implementation of the method.
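
A minimal numpy sketch of the Gaussian integration building block is shown below, applied to a Volterra-type integral term; the kernel, integrand, and node count are illustrative and not taken from the paper.

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1], mapped to [0, 1]
n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# Approximate the Volterra term I(x) = integral_0^x k(x, s) y(s) ds
# via the substitution s = x * t
k = lambda x, s: np.exp(-(x - s))   # assumed kernel
y = lambda s: s ** 2                # assumed state trajectory
x = 0.7
I = x * np.sum(w * k(x, x * t) * y(x * t))
print(I)
```

In the full scheme, the Legendre wavelet expansion coefficients of the state and control enter such quadratures, and collocating at the wavelet nodes yields the nonlinear programming problem mentioned above.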

Keywords: collocation method, Legendre wavelet, optimal control, Volterra integro-differential equation

Procedia PDF Downloads 388
2627 Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning

Authors: Jennifer Leach, Umashanger Thayasivam

Abstract:

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, though, as society's knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people. This has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help decrease this advancement. This research explores how various machine learning techniques can aid in detecting fraudulent activity across two different types of fraud data; the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique would lead to the most optimal results.
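
A hedged sklearn sketch of this evaluation loop follows, with a synthetic imbalanced dataset standing in for the fraud data and two example classifiers; the split sizes reflect the idea of five train/test splits but are otherwise assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fraud dataset (~5% positive class)
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)

for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):  # five train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(type(model).__name__, test_size,
              round(accuracy_score(y_te, pred), 3),
              round(precision_score(y_te, pred), 3),
              round(recall_score(y_te, pred), 3),
              round(f1_score(y_te, pred), 3))
```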

Keywords: data science, fraud detection, machine learning, supervised learning

Procedia PDF Downloads 196
2626 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model

Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf

Abstract:

Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high-precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Acquisition of multispectral images from an Unmanned Aerial Vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance when compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, the RMSEs were 2.8, 2.0, and 2.0 m, respectively. Advanced analysis was done for all of these cases, and it was noted that RMSE is reduced when the canopy cover is higher, when shadows and slopes are lower, and when clouds are distant from the analyzed site.

Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV

Procedia PDF Downloads 127
2625 Prediction of Compressive Strength in Geopolymer Composites by Adaptive Neuro Fuzzy Inference System

Authors: Mehrzad Mohabbi Yadollahi, Ramazan Demirboğa, Majid Atashafrazeh

Abstract:

Geopolymers are highly complex materials involving many variables, which makes modeling their properties very difficult. There is no systematic approach to mix design for geopolymers. Since the silica modulus, Na2O content, w/b ratio, and curing time have a great influence on the compressive strength, an ANFIS (adaptive neuro-fuzzy inference system) method has been established for predicting the compressive strength of ground-pumice-based geopolymers, and the possibilities of ANFIS for predicting the compressive strength have been studied. Consequently, ANFIS can be used for geopolymer compressive strength prediction with acceptable accuracy.

Keywords: geopolymer, ANFIS, compressive strength, mix design

Procedia PDF Downloads 853
2624 Measuring of the Volume Ratio of Two Immiscible Liquids Using Electrical Impedance Tomography

Authors: Jiri Primas, Michal Malik, Darina Jasikova, Michal Kotek, Vaclav Kopecky

Abstract:

The authors of this paper discuss the measurement of the volume ratio of two immiscible liquids in a homogeneous mixture using the industrial Electrical Impedance Tomography (EIT) system ITS p2+. In the first part of the paper, the principle of EIT and the basic theory of the conductivity of a mixture of two components are stated. In the next part, an experiment with water and olive oil mixed by a Rushton turbine is described, and the measured results are used to verify the theory. In the conclusion, the results are discussed in detail, and the accuracy of the measuring method and its advantages are also mentioned.
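
Although the paper's exact conductivity relation is not reproduced here, the classical Maxwell mixture equation is one common model for a dispersion of spheres:

```latex
\sigma_{\text{mix}} = \sigma_c \,
\frac{\sigma_d + 2\sigma_c + 2\varphi\,(\sigma_d - \sigma_c)}
     {\sigma_d + 2\sigma_c - \varphi\,(\sigma_d - \sigma_c)}
```

where σ_c and σ_d are the conductivities of the continuous phase (water) and the dispersed phase (oil droplets), and φ is the dispersed-phase volume fraction. For an essentially non-conducting oil (σ_d ≈ 0), this reduces to σ_mix = 2σ_c(1 − φ)/(2 + φ), so φ can be read off from the conductivity measured by the EIT system.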

Keywords: conductivity, electrical impedance tomography, homogenous mixture, mixing process

Procedia PDF Downloads 403
2623 A Class of Third Derivative Four-Step Exponential Fitting Numerical Integrator for Stiff Differential Equations

Authors: Cletus Abhulimen, L. A. Ukpebor

Abstract:

In this paper, we construct a class of four-step third-derivative exponentially fitted integrators of order six for the numerical integration of stiff initial-value problems of the type y′ = f(x, y), y(x₀) = y₀. The implicit method has free parameters which allow it to be fitted automatically to exponential functions. For effective implementation of the proposed method, we adopted the technique of splitting the method into predictor and corrector schemes. The numerical stability of the new method was analyzed; the results show that the method is A-stable. Finally, numerical examples are presented to show the efficiency and accuracy of the new method.

Keywords: third derivative four-step, exponentially fitted, A-stable, stiff differential equations

Procedia PDF Downloads 265
2622 Free Vibration of Functionally Graded Smart Beams Based on the First Order Shear Deformation Theory

Authors: A. R. Nezamabadi, M. Veiskarami

Abstract:

This paper studies the free vibration of simply supported functionally graded beams with piezoelectric layers based on the first-order shear deformation theory. The Young's modulus of the beam is assumed to vary continuously across the beam thickness. The governing equation is established, and the resulting equation is solved using Euler's equation. The effects of the constituent volume fractions and the influence of the applied voltage on the vibration frequency are presented. To investigate the accuracy of the present analysis, a comparison study is carried out against known data.

Keywords: mechanical buckling, functionally graded beam, first order shear deformation theory, free vibration

Procedia PDF Downloads 476
2621 Application of Wavelet Based Approximation for the Solution of Partial Integro-Differential Equation Arising from Viscoelasticity

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

This work contributes a numerical method based on Legendre wavelet approximation for the treatment of partial integro-differential equations (PIDE). Operational matrices of Legendre wavelets reduce the solution of the PIDE to a system of algebraic equations. Some useful results concerning the computational order of convergence and the error estimates associated with the suggested scheme are presented. Illustrative examples are provided to show the effectiveness and accuracy of the proposed numerical method.

Keywords: Legendre wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 448
2620 Zeros Elimination from the National Currency

Authors: Zahra Karimi

Abstract:

The purpose of this paper is to investigate the role and importance of accounting in the implementation of the VAT system in the country. For this purpose, after an evaluation of the specifications and important advantages of VAT and the experience of other countries, the important role of accounting in the precise determination of taxes, in strategies to prevent tax evasion, and in the realization of government tax revenues, as well as the controls necessary to increase the efficiency and accuracy of the calculations, are discussed. The government's high dependence on borrowing from the banking system and on the inflation tax, together with a low overall ratio of tax revenues to GDP, indicates the inadequacy of the country's tax system. It can be said that a proper accounting system is a prerequisite for the successful implementation of VAT in the country, so it is crucial for the accounting profession to demonstrate its full fitness to meet the requirements for the successful implementation of VAT as a multi-stage sales tax levied on price.

Keywords: accounting, tax reform in Iran, Value Added Tax (VAT), economic

Procedia PDF Downloads 386
2619 Price Compensation Mechanism with Unmet Demand for Public-Private Partnership Projects

Authors: Zhuo Feng, Ying Gao

Abstract:

Public-private partnership (PPP), as an innovative way to provide infrastructure by the private sector, is being widely used throughout the world. Compared with the traditional mode, PPP has emerged largely for its merits of relieving public budget constraints and improving infrastructure supply efficiency by involving private funds. However, PPP projects are characterized by large scale, high investment, long payback periods, and long concession periods. These characteristics make PPP projects full of risks. One of the most important risks faced by the private sector is demand risk, because many factors affect the real demand. If the real demand is far lower than the forecast demand, the private sector will run into serious trouble, because operating revenue is the main means for the private sector to recoup the investment and obtain profit. Therefore, it is important to study how the government compensates the private sector when the demand risk occurs, in order to achieve Pareto improvement. This research focuses on the price compensation mechanism, an ex-post compensation mechanism, and analyzes, by mathematical modeling, the impact of the price compensation mechanism on the payoff of the private sector and on consumer surplus for PPP toll road projects. This research first investigates whether or not the price compensation mechanism can achieve Pareto improvement and, if so, then explores the boundary conditions for this mechanism. The results show that the price compensation mechanism can realize Pareto improvement under certain conditions. In particular, for the price compensation mechanism to accomplish Pareto improvement, the renegotiation costs of the government and the private sector should be lower than a certain threshold, which is determined by the marginal operating cost and the distortionary cost of the tax. In addition, the compensation percentage should match the price cut of the private investor when demand drops. This research aims to provide theoretical support for the government when determining the compensation scope under the price compensation mechanism. Moreover, some policy implications can also be drawn from the analysis for better risk-sharing and sustainability of PPP projects.

Keywords: infrastructure, price compensation mechanism, public-private partnership, renegotiation

Procedia PDF Downloads 179
2618 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need improvement to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmark datasets. First, the acquired raw data is filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, autoregressive model, and intrinsic time-scale decomposition features are adopted for feature extraction from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while the HARTH and KU-HAR datasets achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines, respectively. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
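
A minimal sketch of the greedy mRMR selection step is given below, using mutual information for relevance and absolute feature correlation as a stand-in for redundancy; the redundancy measure actually used by the authors may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def mrmr(X, y, n_select):
    """Greedy mRMR: maximize relevance to y while minimizing mean
    redundancy (approximated here by absolute feature correlation)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        scores = [relevance[f] - corr[f, selected].mean() for f in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# Example on synthetic data standing in for the extracted MEMS features
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
print(mrmr(X, y, 5))
```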

Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)

Procedia PDF Downloads 22
2617 Bimodal Biometrics System Using Fusion of Iris and Fingerprint

Authors: Attallah Bilal, Hendel Fatiha

Abstract:

This paper proposes a bimodal biometric system for identity verification based on iris and fingerprint, using a weighted sum-of-scores technique in a matching-score-level architecture. The features are extracted from the preprocessed iris and fingerprint images. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of similarity scores, and fusion of the weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04% with an FAR of 2.58% and an FRR of 8.34%.
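
The three fusion steps map onto very little code; a hedged numpy sketch follows, with the weight and decision threshold as assumptions to be tuned on a development set.

```python
import numpy as np

def normalize(scores):
    """Min-max normalization of raw matcher scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(iris_scores, finger_scores, w_iris=0.5):
    """Weighted sum-of-scores fusion; the weight is an assumed value."""
    return w_iris * normalize(iris_scores) + (1 - w_iris) * normalize(finger_scores)

# A claim is accepted as genuine when the fused score clears a threshold
# chosen to trade off FAR against FRR on a development set
fused = fuse([0.62, 0.30, 0.81], [0.55, 0.20, 0.90])
print(fused > 0.5)  # True -> genuine, False -> impostor
```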

Keywords: iris, fingerprint, sum rule, fusion

Procedia PDF Downloads 369
2616 Model Predictive Control of Three Phase Inverter for PV Systems

Authors: Irtaza M. Syed, Kaamran Raahemifar

Abstract:

This paper presents model predictive control (MPC) of a utility-interactive three-phase inverter (TPI) for a photovoltaic (PV) system at the commercial level. The proposed model uses a phase-locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC in a dq reference frame. The TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). An operational model of the VSI is used to synthesize sinusoidal currents and track the reference. The model is validated using a 35.7 kW PV system in Matlab/Simulink. The implementation and results show the simplicity and accuracy, as well as the reliability, of the model.
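
The dq-frame control relies on the Park transform to map the measured three-phase currents onto rotating d and q axes using the PLL angle; a minimal sketch of one common (amplitude-invariant) convention is shown below.

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transform; theta is the PLL grid angle."""
    d = (2.0 / 3.0) * (ia * np.cos(theta)
                       + ib * np.cos(theta - 2.0 * np.pi / 3.0)
                       + ic * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (ia * np.sin(theta)
                        + ib * np.sin(theta - 2.0 * np.pi / 3.0)
                        + ic * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

# Sanity check: a balanced 50 Hz set maps to constant d ~ 1, q ~ 0
t = np.linspace(0.0, 0.04, 5)
theta = 2.0 * np.pi * 50.0 * t
ia = np.cos(theta)
ib = np.cos(theta - 2.0 * np.pi / 3.0)
ic = np.cos(theta + 2.0 * np.pi / 3.0)
print(abc_to_dq(ia, ib, ic, theta))
```

In this frame, the MPC cost can penalize the deviation of the predicted d and q currents from their references over the prediction horizon.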

Keywords: model predictive control, three phase voltage source inverter, PV system, Matlab/Simulink

Procedia PDF Downloads 596
2615 Compensatory Neuro-Fuzzy Inference (CNFI) Controller for Bilateral Teleoperation

Authors: R. Mellah, R. Toumi

Abstract:

This paper presents a new adaptive neuro-fuzzy controller equipped with compensatory fuzzy control (CNFI), in order not only to adjust membership functions but also to optimize the adaptive reasoning by using a compensatory learning algorithm. The proposed control structure includes two CNFI controllers: one is used for force control of the master robot, and the second for position control of the slave robot. The experimental results obtained show fairly high accuracy in terms of position and force tracking under free-space motion and hard-contact motion, which highlights the effectiveness of the proposed controllers.

Keywords: compensatory fuzzy, neuro-fuzzy, adaptive control, teleoperation

Procedia PDF Downloads 324
2614 Real-time Rate and Rhythms Feedback Control System in Patients with Atrial Fibrillation

Authors: Mohammad A. Obeidat, Ayman M. Mansour

Abstract:

Capturing the dynamic behavior of the heart to improve control performance, enhance robustness, and support diagnosis is very important in establishing real-time models of the heart. Control techniques and strategies have been utilized to improve system cost, reliability, and estimation accuracy for different types of systems, such as biomedical, industrial, and other systems that require tuning of the input/output relation and/or monitoring. Simulations are performed to illustrate potential applications of the technology. In this research, a new control technology scheme is used to enhance the performance of the AF system and meet the design specifications.

Keywords: atrial fibrillation, dynamic behavior, closed loop, signal, filter

Procedia PDF Downloads 421
2613 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK Biobank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorders, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
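
A hedged sketch of the described feature-selection step, keeping the top 10% (50) of features by XGBoost importance and refitting, is shown below; the synthetic indicator matrix merely stands in for the UK Biobank comorbidity codes.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic binary comorbidity indicators standing in for the real cohort
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(18000, 500)).astype(float)
y = rng.integers(0, 2, size=18000)  # IBS vs non-IBS labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
full = XGBClassifier(n_estimators=200, eval_metric='logloss').fit(X_tr, y_tr)

# Keep the 50 most important features, then refit on the reduced set
top = np.argsort(full.feature_importances_)[::-1][:50]
reduced = XGBClassifier(n_estimators=200, eval_metric='logloss')
reduced.fit(X_tr[:, top], y_tr)
print(accuracy_score(y_te, reduced.predict(X_te[:, top])))
```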

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 119