Search results for: imputation method of missing data

36909 Bayesian Estimation Using Markov Chain Monte Carlo and Lindley's Approximation Based on Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

This paper describes Bayesian estimation using Markov Chain Monte Carlo and Lindley's approximation, together with maximum likelihood estimation, of the Weibull distribution with Type-I censored data. The maximum likelihood method cannot estimate the shape parameter in closed form, although it can be solved by numerical methods. Moreover, the Bayesian estimates of the parameters and of the survival and hazard functions cannot be obtained analytically. Hence the Markov Chain Monte Carlo method and Lindley's approximation are used: the full conditional distributions for the parameters of the Weibull distribution are obtained via Gibbs sampling and the Metropolis-Hastings (MH) algorithm, followed by estimation of the survival and hazard functions. The methods are compared to their maximum likelihood counterparts with respect to the Mean Square Error (MSE) and absolute bias to determine which method better estimates the scale and shape parameters and the survival and hazard functions.
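
As a rough illustration of the sampling step (a sketch, not the author's code), the following runs a random-walk Metropolis-Hastings chain for a Weibull model with Type-I censoring on simulated data; the data, proposal scale, and chain length are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Type-I censored Weibull data (shape=1.5, scale=2.0, censoring time c=3.0)
beta_true, lam_true, c = 1.5, 2.0, 3.0
t = lam_true * rng.weibull(beta_true, size=200)
obs = np.minimum(t, c)            # observed times
delta = (t <= c).astype(float)    # 1 = event observed, 0 = censored at c

def log_post(theta):
    """Log posterior on (log beta, log lambda) under a flat prior on the log scale."""
    beta, lam = np.exp(theta)     # log scale keeps both parameters positive
    z = (obs / lam) ** beta       # censored points contribute only -z (survival term)
    return np.sum(delta * (np.log(beta) - beta * np.log(lam)
                           + (beta - 1) * np.log(obs)) - z)

# Random-walk Metropolis-Hastings
theta, samples = np.log([1.0, 1.0]), []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))

samples = np.array(samples[5000:])          # drop burn-in
beta_hat, lam_hat = samples.mean(axis=0)
surv_at_2 = np.mean(np.exp(-(2.0 / samples[:, 1]) ** samples[:, 0]))  # posterior S(2)
print(f"posterior means: shape={beta_hat:.3f}, scale={lam_hat:.3f}, S(2)={surv_at_2:.3f}")
```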

Keywords: weibull distribution, bayesian method, markov chain monte carlo, survival and hazard functions

Procedia PDF Downloads 455
36908 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance condition introduced to a healthy gear. Vibration data were acquired using a Hantek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Wiener features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy on the two-class training and validation datasets; for the three-class scenario, it demonstrated 100% accuracy on the training dataset and 95.3% on the validation dataset. The Wiener-KNN method yielded 96.3% accuracy on the two-class training dataset and 94.5% on the validation dataset; in the three-class scenario, it achieved 85.3% on the training dataset and 77.2% on the validation dataset. The Wiener-Random Forest method achieved 100% accuracy on the two-class training dataset and 85% on the validation dataset, while on the three-class training dataset it attained 100% accuracy, with 90.8% on the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system utilizes vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
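
The exact Wiener feature pipeline is not spelled out in the abstract; the sketch below illustrates the general workflow under the assumption that Wiener filtering (scipy.signal.wiener) precedes simple time-domain feature extraction, with random placeholder windows standing in for the recorded vibration data, and compares three of the classifiers named above.

```python
import numpy as np
from scipy.signal import wiener
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def features(window):
    """Simple time-domain features from a Wiener-filtered vibration window."""
    w = wiener(window, mysize=15)              # suppress measurement noise first
    return [w.mean(), w.std(), np.abs(w).max(),
            np.sqrt(np.mean(w ** 2)),          # RMS
            ((w[:-1] * w[1:]) < 0).mean()]     # zero-crossing rate

# Placeholder data: vibration windows X_raw (n_windows x n_samples) and
# labels y (0 = healthy, 1 = faulty, 2 = imbalanced)
rng = np.random.default_rng(1)
X_raw = rng.standard_normal((300, 1024))
y = rng.integers(0, 3, 300)

X = np.array([features(w) for w in X_raw])
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(5)),
                  ("RF", RandomForestClassifier(200, random_state=0)),
                  ("MLP", MLPClassifier((64, 32), max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "validation accuracy:", clf.score(X_va, y_va))
```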

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 53
36907 Prediction of Fluid Properties of an Iranian Oil Field Using a Radial Basis Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method is used to estimate the viscosity of crude oil. The method measures the crude oil's viscosity for three states: saturated oil viscosity, viscosity above the bubble point, and viscosity under the saturation pressure. The viscosity is first estimated using the KHAN model and the roller ball method. These data, which include the conditions affecting the viscosity measurements and the viscosities estimated by the presented method, are then used to train a radial basis neural network: a two-layer artificial neural network whose hidden-layer activation function is the Gaussian function, trained with standard learning algorithms. After training the radial basis network, the results of the experimental method and of the artificial intelligence model are compared. With the trained network, crude oil viscosity can be estimated with acceptable accuracy without using the KHAN model or the experimental conditions, and under any other conditions. Results show that the radial basis network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
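
A minimal numpy sketch of the kind of two-layer radial basis network described, with a Gaussian hidden layer and a least-squares output fit; the inputs and the viscosity target below are synthetic placeholders, not the field data.

```python
import numpy as np

class RBFNetwork:
    """Minimal radial basis function network: Gaussian hidden layer,
    linear output layer fitted by least squares."""
    def __init__(self, n_centers=20, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))   # Gaussian activations

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), self.n_centers, replace=False)
        self.centers = X[idx]                        # centers from training points
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Hypothetical inputs: [pressure, temperature, API gravity, solution GOR]
X = np.random.default_rng(2).uniform(0, 1, (200, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2         # stand-in for measured viscosity
model = RBFNetwork(n_centers=30, width=0.5).fit(X, y)
print("fit RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```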

Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model

Procedia PDF Downloads 471
36906 Bayesian Reliability of Weibull Regression with Type-I Censored Data

Authors: Al Omari Moahmmed Ahmed

Abstract:

In the Bayesian approach, we developed an estimator using a non-informative prior with covariates, obtained via the Gauss quadrature method, for the parameters of the covariates and the reliability function of the Weibull regression distribution with Type-I censored data. Under maximum likelihood, the estimators are not available in closed form, although they can be obtained numerically using Newton-Raphson methods. The comparison criterion is the MSE, and the performance of these estimates is assessed by simulation considering various sample sizes and several specific values of the shape parameter. The results show that the Bayesian estimator with a non-informative prior is better than the Maximum Likelihood Estimator.
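
To illustrate the quadrature route (a sketch, not the paper's implementation), the snippet below computes a posterior mean and reliability for a one-parameter Weibull model with Type-I censoring by Gauss-Legendre quadrature under a flat prior; the Jacobian of the interval map cancels in the ratios.

```python
import numpy as np

rng = np.random.default_rng(3)
# Type-I censored data from Weibull(shape=1.5, scale=1), censoring at c=2
t = rng.weibull(1.5, 150); c = 2.0
obs, delta = np.minimum(t, c), (t <= c)

def loglik(beta):
    """Weibull log-likelihood in the shape beta (scale fixed at 1 for brevity)."""
    z = obs ** beta
    return np.sum(delta * (np.log(beta) + (beta - 1) * np.log(obs)) - z)

# Gauss-Legendre nodes on a plausible range for beta, mapped from [-1, 1]
a, b = 0.5, 3.0
x, w = np.polynomial.legendre.leggauss(60)
beta = 0.5 * (b - a) * x + 0.5 * (b + a)
logpost = np.array([loglik(bb) for bb in beta])      # flat (non-informative) prior
post = np.exp(logpost - logpost.max())               # rescale to avoid underflow
norm = np.sum(w * post)
beta_mean = np.sum(w * beta * post) / norm           # posterior mean of the shape
rel_at_15 = np.sum(w * np.exp(-(1.5 ** beta)) * post) / norm  # posterior S(1.5)
print(f"posterior mean shape: {beta_mean:.3f}, reliability at t=1.5: {rel_at_15:.3f}")
```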

Keywords: non-informative prior, Bayesian method, type-I censoring, Gauss quadrature

Procedia PDF Downloads 474
36905 Ontology for a Voice Transcription of OpenStreetMap Data: The Case of Space Apprehension by Visually Impaired Persons

Authors: Said Boularouk, Didier Josselin, Eitan Altman

Abstract:

In this paper, we present a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers the freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also called folksonomy, complicates subsequent data searches. We address this issue with a simple but usable method for extracting data from OSM databases and delivering them to visually impaired people using Text-To-Speech technology. We focus on how to help people with visual disabilities plan their itinerary and comprehend a map by querying a computer and getting information about the surrounding environment in a mono-modal human-computer dialogue.
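
A toy sketch of the idea, assuming the public Overpass API for OSM data and the pyttsx3 TTS engine (neither is named in the abstract): query named amenities around a point and speak a one-sentence description.

```python
import requests
import pyttsx3

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def describe_surroundings(lat, lon, radius=150):
    """Query OpenStreetMap for named amenities near a point and read them
    aloud; a sketch of the mono-modal human-computer dialogue idea."""
    query = f"""
    [out:json];
    node(around:{radius},{lat},{lon})[amenity][name];
    out body;
    """
    elements = requests.post(OVERPASS_URL, data={"data": query}).json()["elements"]
    if not elements:
        sentence = "No named amenities found nearby."
    else:
        names = [f'{e["tags"]["name"]}, a {e["tags"]["amenity"]}' for e in elements[:5]]
        sentence = "Around you there are: " + "; ".join(names) + "."
    engine = pyttsx3.init()
    engine.say(sentence)
    engine.runAndWait()

describe_surroundings(43.9493, 4.8055)   # example coordinates (Avignon, France)
```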

Keywords: TTS, ontology, OpenStreetMap, visually impaired

Procedia PDF Downloads 273
36904 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals with 100% confirmation. In addition, it has been successfully applied to the Kepler Object of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
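
The GPU Fast Folding algorithm reuses partial sums across trial periods; the sketch below shows only the underlying operation, a brute-force phase fold and bin over a grid of trial periods on a simulated low-SNR light curve.

```python
import numpy as np

def folded_min(time, flux, period, n_bins=200):
    """Phase-fold a light curve at a trial period, bin it, and return the
    deepest bin; transit searches score many such trial periods."""
    phase = (time % period) / period
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=flux, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    with np.errstate(invalid="ignore"):
        profile = sums / counts                 # empty bins become NaN
    return np.nanmin(profile)

# Simulated 4-year Kepler-like light curve with a weak 100-day transit
rng = np.random.default_rng(4)
t = np.arange(0, 1460, 0.0204)                  # ~30-minute cadence, in days
flux = 1 + 2e-4 * rng.standard_normal(t.size)   # white noise
flux[(t % 100.0) < 0.3] -= 3e-4                 # 0.3-day transit, depth ~ noise level

trial_periods = np.arange(90.0, 110.0, 0.05)
depths = [folded_min(t, flux, p) for p in trial_periods]
print("best trial period:", trial_periods[int(np.argmin(depths))], "days")
```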

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 195
36903 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

This paper proposes a method of modelling the topography used in riverbed simulations that removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, they require specialised tools; the proposed method overcomes this by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent modelling bed topography. The method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors that could affect the topography of the ground, by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over bed topography generated by the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow automatic generation of topography for a given situation in future research, removing the need for bed data to be specified.
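
One standard way to generate such a bed (a sketch assuming spectral synthesis, which the abstract does not specify) is to shape random Fourier phases with the fBm power law S(f) ~ f^-(2H+1):

```python
import numpy as np

def fbm_bed(n=512, hurst=0.8, amplitude=0.1, seed=0):
    """Generate a 1-D riverbed elevation profile by spectral synthesis of
    fractional Brownian motion: amplitude spectrum ~ f^-((2H+1)/2)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    mag = np.zeros_like(freqs)
    mag[1:] = freqs[1:] ** (-(2 * hurst + 1) / 2)
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    bed = np.fft.irfft(mag * np.exp(1j * phases), n)
    return amplitude * bed / np.abs(bed).max()       # scale to a physical height

bed = fbm_bed()
# "Erosion" update: regenerate with new parameters/seed to evolve the bed in time
bed_later = fbm_bed(seed=1, hurst=0.75)
print("bed elevation range [m]:", bed.min(), bed.max())
```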

Keywords: bed topography, FBM, LBM, shallow water, simulations

Procedia PDF Downloads 73
36902 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms expressed as formal formulas, which can identify data that are inconsistent with respect to all target columns with conditional attribute dependencies, whether the data are structured (SQL) or unstructured (NoSQL), and it gives six data cleaning methods based on these algorithms.
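
For the structured (SQL-like) case, a functional dependency X -> Y is violated whenever one X value maps to several Y values; a minimal pandas sketch of such a violation-data discovery step:

```python
import pandas as pd

def fd_violations(df, lhs, rhs):
    """Rows violating the functional dependency lhs -> rhs: groups sharing
    the same lhs values but mapping to more than one rhs value."""
    counts = df.groupby(list(lhs))[rhs].nunique()
    bad_keys = counts[counts > 1].index
    mask = df.set_index(list(lhs)).index.isin(bad_keys)
    return df[mask]

df = pd.DataFrame({
    "zip":  ["10001", "10001", "20002", "20002"],
    "city": ["New York", "Newark", "Washington", "Washington"],
})
# Dependency rule: zip -> city; the first two rows are mutually inconsistent
print(fd_violations(df, ["zip"], "city"))
```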

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 537
36901 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing

Authors: H. Samadiyeh, R. Khajavi

Abstract:

A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is carried out in the modal space. The method employs the eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since the MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, and considerable simplicity in the parallel algorithm is achieved as well. There is no requirement for any domain-decomposition procedure or inter-core data exchange. More importantly, it is possible to skip the processing of less significant modes, which enables one to adjust the procedure to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated by numerical examples.
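
Since the modal runs are independent, they map directly onto a process pool with no inter-core exchange; a schematic sketch with a toy decoupled single-mode oscillator update (not the paper's stencils):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def evolve_mode(args):
    """Advance one decoupled mode q'' + (2*zeta*w) q' + w^2 q = f(t) by
    central differences; each mode needs no data from the others."""
    w, zeta, f, dt = args
    q = np.zeros(f.size)
    for n in range(1, f.size - 1):
        q[n + 1] = (dt**2 * (f[n] - w**2 * q[n])
                    + 2 * q[n] - q[n - 1]
                    + zeta * w * dt * q[n - 1]) / (1 + zeta * w * dt)
    return q

if __name__ == "__main__":
    dt, nt = 1e-3, 5000
    f = np.sin(2 * np.pi * 5 * np.arange(nt) * dt)       # common forcing
    # Keep only the most significant (lowest) modes to trade accuracy for cost
    modes = [(w, 0.05, f, dt) for w in (10.0, 25.0, 60.0, 140.0)]
    with ProcessPoolExecutor() as pool:
        responses = list(pool.map(evolve_mode, modes))
    total = np.sum(responses, axis=0)                    # superpose modal responses
    print("peak response:", np.abs(total).max())
```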

Keywords: Finite Difference Method, Graphics Processing Unit (GPU), Message Passing Interface (MPI), Modal, Wave propagation

Procedia PDF Downloads 272
36900 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist industry, is becoming impossible by our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification for recovery efforts in cases where a vehicle is missing or attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face captured using an installed mobile phone device. The location identification function uses a Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The installed mobile app was implemented using Python, as it is undoubtedly the best choice for machine learning: it allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed using Java, as it is better suited to mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the developed system. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor a vehicle at any given place, even when it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate, and report missing vehicles to different security agencies.
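
A condensed, hypothetical sketch of the detect-locate-report loop (the trained model file, camera crop, GPS fix, and phone number are all placeholders; AT+CMGF/AT+CMGS are the standard GSM text-mode commands):

```python
import serial                      # pyserial, for the GSM modem
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("driver_face_cnn.h5")      # hypothetical trained CNN file

def is_owner(face_img):
    """CNN returns P(owner) for a normalized 64x64 RGB face crop."""
    x = face_img.astype("float32")[None] / 255.0
    return model.predict(x, verbose=0)[0, 0] > 0.9

def send_sms(port, number, text):
    """Report via a GSM modem using standard AT commands."""
    with serial.Serial(port, 115200, timeout=5) as gsm:
        gsm.write(b"AT+CMGF=1\r")                       # text mode
        gsm.write(f'AT+CMGS="{number}"\r'.encode())
        gsm.write(text.encode() + b"\x1a")              # Ctrl+Z terminates message

face = np.zeros((64, 64, 3))       # placeholder for a camera frame crop
lat, lon = -25.7479, 28.2293       # placeholder GPS fix
if not is_owner(face):
    send_sms("/dev/ttyUSB0", "+27XXXXXXXXX", f"Possible theft at {lat},{lon}")
```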

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 132
36899 A Comparative Study of the Athlete Health Records' Minimum Data Set in Selected Countries and Presenting a Model for Iran

Authors: Robab Abdolkhani, Farzin Halabchi, Reza Safdari, Goli Arji

Abstract:

Background and purpose: The quality of a health record depends on the quality of its content and proper documentation. A minimum data set provides a standard method for collecting key data elements, making them easy to understand and enabling comparison. The aim of this study was to determine the minimum data set for Iranian athletes' health records. Methods: This study is applied research of a descriptive, comparative type, carried out in 2013. Using internal and external documentation forms, a checklist was created that included the data elements of athletes' health records; it was debated by the Delphi method among experts in sports medicine and health information management. Results: Of the 97 elements put to discussion, 85 were agreed upon by more than 75 percent of the participants (as main elements) and 12 by 50 to 75 percent of the participants (as proposed elements). Across the 97 elements, there was no significant difference between the responses of the sports pathology and sports medicine specialists and those of the medical records, medical informatics, and information management professionals. Conclusion: A minimum data set for Iranian athletes' health records was presented, with four information categories: demographic information, health history, assessment, and treatment plan. The proposed model is applicable to both manual and electronic medical records.

Keywords: Documentation, Health record, Minimum data set, Sports medicine

Procedia PDF Downloads 445
36898 A Proposed Framework for Software Redocumentation Using Distributed Data Processing Techniques and Ontology

Authors: Laila Khaled Almawaldi, Hiew Khai Hang, Sugumaran A. l. Nallusamy

Abstract:

Legacy systems are crucial for organizations, but their intricacy and lack of documentation pose challenges for maintenance and enhancement. Redocumentation of legacy systems is vital for automatically or semi-automatically creating documentation for software lacking sufficient records. It aims to enhance system understandability, maintainability, and knowledge transfer. However, existing redocumentation methods need improvement in data processing performance and document generation efficiency. This stems from the necessity to efficiently handle the extensive and complex code of legacy systems. This paper proposes a method for semi-automatic legacy system re-documentation using semantic parallel processing and ontology. Leveraging parallel processing and ontology addresses current challenges by distributing the workload and creating documentation with logically interconnected data. The paper outlines challenges in legacy system redocumentation and suggests a method of redocumentation using parallel processing and ontology for improved efficiency and effectiveness.
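
As a toy illustration of the two ingredients, the sketch below parses source files in parallel (using Python's own ast module as a stand-in parser for a legacy codebase) and links the extracted facts in an rdflib graph under a hypothetical documentation namespace:

```python
import ast
from concurrent.futures import ProcessPoolExecutor
from rdflib import Graph, Namespace, Literal

DOC = Namespace("http://example.org/redoc#")   # hypothetical documentation ontology

def extract_facts(path):
    """Parse one source file and return (file, function, docstring) facts;
    files are independent, so the work distributes cleanly across workers."""
    tree = ast.parse(open(path).read())
    return [(path, node.name, ast.get_docstring(node) or "")
            for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]

if __name__ == "__main__":
    sources = ["module_a.py", "module_b.py"]           # placeholder file list
    with ProcessPoolExecutor() as pool:
        facts = [f for group in pool.map(extract_facts, sources) for f in group]

    g = Graph()
    for path, func, doc in facts:                      # interlink facts via ontology
        fn = DOC[f"{path}/{func}"]
        g.add((fn, DOC.definedIn, Literal(path)))
        g.add((fn, DOC.describedBy, Literal(doc or "UNDOCUMENTED")))
    g.serialize("redoc.ttl", format="turtle")          # generated documentation graph
```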

Keywords: legacy systems, redocumentation, big data analysis, parallel processing

Procedia PDF Downloads 18
36897 Development of New Technology Evaluation Model by Using Patent Information and Customers' Review Data

Authors: Kisik Song, Kyuwoong Kim, Sungjoo Lee

Abstract:

Many global firms and corporations identify new technologies and opportunities by detecting vacant technologies through patent analysis. However, previous studies failed to focus on technologies that promised continuous growth in industrial fields, and most studies that derive new technology opportunities do not test their practical effectiveness. Because previous studies depended on expert judgment, evaluating new technologies based on patent analysis was costly and time-consuming. Therefore, this research suggests a quantitative and systematic approach to technology evaluation indicators using patent data and data from customer communities. The first step involves collecting these two types of data, which are then used to construct evaluation indicators and to apply those indicators to the evaluation of new technologies. This type of data mining enables a new method of technology evaluation and a better prediction of how new technologies will be adopted.

Keywords: data mining, evaluating new technology, technology opportunity, patent analysis

Procedia PDF Downloads 346
36896 Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach

Authors: Seyed Habib A. Rahmati, Mohsen Sadegh Amalnick

Abstract:

Different notions of statistical process control (SPC) have been sketched in the fuzzy environment. However, measurement system analysis (MSA), a main branch of SPC, has rarely been investigated in the fuzzy setting. This procedure assesses the suitability of the data to be used in later stages or decisions of the SPC. Therefore, this research focuses on some important measures of MSA and introduces them into the fuzzy environment through a new method. In this method, which works based on the Buckley approach, the imprecision and vagueness inherent in real-world measurement are considered simultaneously. To do so, fuzzy versions of the gauge capability indices (Cg and Cgk) are introduced. The method is also clearly explained through an example.
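
A sketch of the Buckley-style computation, assuming triangular fuzzy inputs propagated through alpha-cut interval arithmetic and one common crisp definition of the indices, Cg = 0.2T/(6s) and Cgk = (0.1T - |xbar - xm|)/(3s); the paper's exact definitions may differ.

```python
import numpy as np

def fuzzy_cg_cgk(xbar, s, xm, T, alphas=np.linspace(0, 1, 5)):
    """Buckley-style fuzzification: each measured quantity q is a triangular
    fuzzy number (0.95q, q, 1.05q); alpha-cut intervals are pushed through the
    crisp formulas, respecting the monotonicity of each formula in s."""
    cuts = []
    for a in alphas:
        lo = lambda q: q * (0.95 + 0.05 * a)     # left end of the alpha-cut
        hi = lambda q: q * (1.05 - 0.05 * a)     # right end of the alpha-cut
        # Cg decreases in s: interval ends come from opposite ends of s
        cg = (0.2 * T / (6 * hi(s)), 0.2 * T / (6 * lo(s)))
        bias_max = max(abs(lo(xbar) - xm), abs(hi(xbar) - xm))
        bias_min = (0 if lo(xbar) <= xm <= hi(xbar)
                    else min(abs(lo(xbar) - xm), abs(hi(xbar) - xm)))
        cgk = ((0.1 * T - bias_max) / (3 * hi(s)),
               (0.1 * T - bias_min) / (3 * lo(s)))
        cuts.append((a, cg, cgk))
    return cuts

for a, cg, cgk in fuzzy_cg_cgk(xbar=10.02, s=0.01, xm=10.0, T=0.5):
    print(f"alpha={a:.2f}  Cg in [{cg[0]:.2f}, {cg[1]:.2f}]"
          f"  Cgk in [{cgk[0]:.2f}, {cgk[1]:.2f}]")
```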

Keywords: measurement, SPC, MSA, gauge capability (Cg and Cgk)

Procedia PDF Downloads 613
36895 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone Generalized Exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code is provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data; emphasis is placed on the modelling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete, or 'censored', data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among various optimization algorithms, the trust region method is found to be the best. The TPGE model is then used to analyze the lifetime data in the Bayesian paradigm, with results evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using software packages. It is clear from our findings that the simulation tools provide better results than those obtained by asymptotic approximation.
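
The analytic-approximation route can be sketched in a few lines; the example below applies trust-region optimization plus Laplace's method to a simpler exponential survival model (not the TPGE distribution) purely to show the mechanics.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = rng.exponential(2.0, 100)      # stand-in survival times (exponential model)

def neg_log_post(theta):
    """Negative log posterior for the rate on the log scale, flat prior."""
    lam = np.exp(theta[0])
    return -(t.size * np.log(lam) - lam * t.sum())

# Analytic-approximation route: trust-region optimization plus Laplace's method
res = minimize(neg_log_post, x0=[0.0], method="trust-constr")
mode = res.x[0]
h = 1e-4                                              # numerical second derivative
hess = (neg_log_post([mode + h]) - 2 * neg_log_post([mode])
        + neg_log_post([mode - h])) / h**2
post_sd = 1 / np.sqrt(hess)                           # Laplace posterior std. dev.
print(f"Laplace: log-rate ~ Normal({mode:.3f}, {post_sd:.3f}^2)")
print(f"posterior mean rate ~ {np.exp(mode + 0.5 * post_sd**2):.3f}")
```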

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 504
36894 A New Computational Package for Using in CFD and Other Problems (Third Edition)

Authors: Mohammad Reza Akhavan Khaleghi

Abstract:

This paper shows changes made to the Reduced Finite Element Method (RFEM); the result is claimed to be the most powerful numerical method proposed so far (some forms of this method can approximate the most complex equations as simply as the Laplace equation). The Finite Element Method (FEM) is a powerful numerical method that has been used successfully for the solution of existing problems in various scientific and engineering fields, including CFD. Many algorithms have been formulated based on FEM, but none have been used in popular CFD software; in this field the Finite Volume Method (FVM) holds a full monopoly, owing to its better efficiency and adaptability to the physics of the problems in comparison with FEM. It does not seem that FEM could compete with FVM unless it were fundamentally changed. This paper shows those changes, and the result is a powerful method with much better performance on all subjects in comparison with FVM and other computational methods. This method is not intended to compete with the finite volume method but to replace it.

Keywords: reduced finite element method, new computational package, new finite element formulation, new higher-order form, new isogeometric analysis

Procedia PDF Downloads 86
36893 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs in a structure, the velocity of wave propagation through it changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and of the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) were initially detected to decrease by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in the wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is shown to be ready for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
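
The travel-time estimate behind the wave method can be sketched as the lag of the peak cross-correlation between impulse response functions at two levels; the building height and IRFs below are synthetic placeholders.

```python
import numpy as np

def travel_time(irf_base, irf_roof, dt):
    """Lag (s) that best aligns the roof IRF with the base IRF; the shift of
    the peak cross-correlation estimates the wave travel time."""
    xc = np.correlate(irf_roof, irf_base, mode="full")
    lag = np.argmax(xc) - (len(irf_base) - 1)
    return lag * dt

# Synthetic check: a pulse that reaches the roof 0.06 s after the base
dt, n = 0.005, 400
irf_base = np.zeros(n); irf_base[40] = 1.0
irf_roof = np.zeros(n); irf_roof[52] = 0.8          # 12 samples = 0.06 s later
tau = travel_time(irf_base, irf_roof, dt)
height = 27.0                                       # building height in m (placeholder)
print(f"travel time {tau:.3f} s -> shear wave velocity {height / tau:.0f} m/s")
```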

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, torre central building, wave method

Procedia PDF Downloads 347
36892 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed on continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km section of a state road. The data were first used to determine the maximum operating speeds on tangents and to compare them with the speeds in the middle of the tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. Operating speed models for tangent sections were then developed. There was no significant difference between models developed using speed data in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have higher coefficients of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
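
A model of the reported kind can be sketched as an ordinary least-squares fit of maximum tangent operating speed on geometric predictors; the variables and coefficients below are placeholders, not the paper's calibration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder per-tangent data: tangent length (m), preceding curve radius (m),
# and the observed maximum operating speed (km/h) from continuous GPS traces
rng = np.random.default_rng(6)
length = rng.uniform(100, 900, 60)
radius = rng.uniform(100, 600, 60)
v_max = 60 + 0.03 * length + 0.02 * radius + rng.normal(0, 3, 60)

X = np.column_stack([length, radius])
model = LinearRegression().fit(X, v_max)
r2 = model.score(X, v_max)     # coefficient of determination of the fitted model
print(f"V = {model.intercept_:.1f} + {model.coef_[0]:.3f}*L "
      f"+ {model.coef_[1]:.3f}*R,  R^2 = {r2:.2f}")
```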

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 427
36891 Analysis and Rule Extraction of Coronary Artery Disease Data Using Data Mining

Authors: Rezaei Hachesu Peyman, Oliyaee Azadeh, Salahzadeh Zahra, Alizadeh Somayyeh, Safaei Naser

Abstract:

Coronary Artery Disease (CAD) is a major cause of disability in adults and a main cause of death in developed countries. In this study, data mining techniques including Decision Trees, Artificial Neural Networks (ANNs), and Support Vector Machines (SVM) were used to analyze CAD data. Data from 4948 patients who had suffered from heart disease were included in the analysis. CAD is the target variable, and 24 input (predictor) variables are used for the classification. The performance of these techniques is compared in terms of sensitivity, specificity, and accuracy. The most significant factor influencing CAD is chest pain. Elderly males (age > 53) have a high probability of being diagnosed with CAD. The SVM algorithm is the most useful for distinguishing CAD patients from non-CAD ones. Applying data mining techniques to coronary artery disease data is a good method for investigating the existing relationships between variables.
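
A schematic version of the SVM evaluation, with random placeholder data standing in for the 4948-patient records:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder feature matrix: 24 predictors (chest pain, age, sex, ...) per patient
rng = np.random.default_rng(7)
X = rng.standard_normal((4948, 24))
y = (X[:, 0] + 0.03 * X[:, 1] + rng.standard_normal(4948) > 0).astype(int)  # CAD label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Report the three comparison criteria used in the study
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp/(tp+fn):.3f}  specificity={tn/(tn+fp):.3f}  "
      f"accuracy={(tp+tn)/len(y_te):.3f}")
```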

Keywords: classification, coronary artery disease, data-mining, knowledge discovery, extract

Procedia PDF Downloads 633
36890 A 3-Year Evaluation Study on Fine Needle Aspiration Cytology and Corresponding Histology

Authors: Amjad Al Shammari, Ashraf Ibrahim, Laila Seada

Abstract:

Background and Objectives: The incidence of thyroid carcinoma has been increasing worldwide. In the present study, we evaluated the diagnostic accuracy of fine needle aspiration (FNA) and its efficiency in the early detection of neoplastic lesions of the thyroid gland over a 3-year period. Methods: Data were retrieved from the pathology files of King Khalid Hospital. For each patient, age, gender, FNA result, site and size of the nodule, and final histopathologic diagnosis were recorded. Results: The study included 490 cases, of which 419 were female and 71 male, a male-to-female ratio of 1:6. The mean age was 43 years for males and 38 for females. There were 131 cases with confirmed histopathology. In 101/131 (77.1%), concordance was found between FNA and histology; in 30/131 (22.9%), there was a discrepancy in diagnosis. Total malignant cases were 43, of which 14 (32.5%) were true positives and 29 (67.4%) were false negatives. No false positive cases were found in our series. Conclusion: FNA could diagnose benign nodules in all cases; in malignant cases, however, ultrasound findings have to be taken into consideration to avoid missing a microcarcinoma in the contralateral lobe.

Keywords: FNA, hail, histopathology, thyroid

Procedia PDF Downloads 308
36889 Effectiveness of Earthing System in Vertical Configurations

Authors: S. Yunus, A. Suratman, N. Mohamad Nor, M. Othman

Abstract:

This paper presents measurement and simulation results obtained by the Finite Element Method (FEM) for the earth resistance (RDC) of interconnected vertical ground rod configurations. The soil resistivity was measured using the Wenner four-pin method, and RDC was measured using the Fall of Potential (FOP) method, as outlined in the standard. A Genetic Algorithm (GA) is employed to interpret the measured soil resistivity in terms of a 2-layer soil model. The same soil resistivity data obtained by the Wenner four-pin method were used in the FEM simulation. This paper compares the RDC obtained by FEM simulation with the real measurements at the field site. Good agreement was seen between the RDC obtained by measurement and by FEM, which shows that FEM is a reliable tool for the design of earthing systems. It is also found that the parallel rod system performs better than a similar setup using a grid layout.
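
The two field quantities reduce to simple formulas; a sketch assuming the standard Wenner relation rho = 2*pi*a*R and the classic Dwight/Sunde single-rod estimate as an analytic cross-check of the FEM result:

```python
import numpy as np

def wenner_resistivity(a_m, R_ohm):
    """Apparent soil resistivity from a Wenner four-pin measurement:
    rho = 2 * pi * a * R, with pin spacing a (m) and reading R (ohm)."""
    return 2 * np.pi * a_m * R_ohm

def rod_resistance(rho, L, d):
    """Classic single-rod estimate (Dwight/Sunde): R = rho/(2*pi*L)*(ln(8L/d)-1);
    a quick analytic cross-check of the FEM value for one vertical rod."""
    return rho / (2 * np.pi * L) * (np.log(8 * L / d) - 1)

rho = wenner_resistivity(a_m=3.0, R_ohm=5.3)        # placeholder field reading
print(f"apparent resistivity: {rho:.1f} ohm-m")
R1 = rod_resistance(rho, L=3.0, d=0.016)            # one 3 m rod, 16 mm diameter
print(f"single-rod RDC ~ {R1:.1f} ohm; n parallel rods approach R1/n only "
      f"if well separated")
```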

Keywords: earthing system, earth electrodes, finite element method, genetic algorithm, earth resistances

Procedia PDF Downloads 91
36888 Shield Tunnel Excavation Simulation of a Case Study Using a So-Called 'Stress Relaxation' Method

Authors: Shengwei Zhu, Alireza Afshani, Hirokazu Akagi

Abstract:

Ground surface settlement induced by shield tunneling is attracting increasing attention as shield tunneling becomes a popular construction technique for tunnels in urban areas. This paper discusses a 2D longitudinal FEM simulation of a tunneling case study in Japan (Tokyo Metro Yurakucho Line). Tunneling-induced field data had already been collected and are used here for comparison and evaluation. The model considers earth pressure, face pressure, backfill grouting, an elastic tunnel lining, and the Mohr-Coulomb failure criterion for the soil elements. A method called 'stress relaxation' is exploited to simulate the gradual tunneling excavation. Ground surface settlements obtained from the numerical results using the introduced method are then compared with the measured data.
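
The essence of 'stress relaxation' is to release the excavation-equivalent boundary forces in increments rather than all at once; a conceptual two-DOF sketch follows (in the real nonlinear model the stiffness would be updated between steps).

```python
import numpy as np

# Conceptual sketch: the excavation boundary force f is released gradually,
# scaled by a relaxation factor, instead of being applied in one step.
K = np.array([[4.0, -1.0], [-1.0, 2.0]])   # stand-in stiffness matrix (2 DOFs)
f = np.array([0.0, -10.0])                 # equivalent excavation release force

for lam in np.linspace(0.2, 1.0, 5):       # 20%, 40%, ... 100% stress release
    u = np.linalg.solve(K, lam * f)        # equilibrium at this relaxation level
    # In the actual model, soil yielding (Mohr-Coulomb) and lining activation
    # would modify K between steps, which is why intermediate states matter.
    print(f"release {lam:.0%}: settlement-like DOF u2 = {u[1]:.3f}")
```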

Keywords: 2D longitudinal FEM model, tunneling case study, stress relaxation, shield tunneling excavation

Procedia PDF Downloads 309
36887 Analysis of Expression Data Using Unsupervised Techniques

Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe

Abstract:

This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying subtypes of cancer helps improve the efficacy and reduce the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important, since genomic data are high dimensional, with a large number of features relative to the number of samples. Hierarchical clustering and K-means are often used in the analysis of gene expression data. Several cluster validation techniques are used to validate the clusters. Heatmaps are an effective external validation method that allows the identified classes to be compared with clinical variables and supports visual analysis of the classes.
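
The three steps can be sketched with standard tooling (placeholder data; the variance filter stands in for feature selection):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder expression matrix: 100 samples x 2000 genes
rng = np.random.default_rng(8)
X = rng.standard_normal((100, 2000))

# Preprocessing / feature selection: keep the most variable genes
top = np.argsort(X.var(axis=0))[-200:]
Xs = X[:, top]

# Clustering: hierarchical (Ward) and K-means, both common on expression data
hc = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xs)

# Internal validation; external validation would compare the clusters against
# clinical variables (e.g., as heatmap annotations)
print("hierarchical silhouette:", silhouette_score(Xs, hc))
print("k-means silhouette:     ", silhouette_score(Xs, km))
```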

Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation

Procedia PDF Downloads 121
36886 Numerical Calculation of Dynamic Response of Catamaran Vessels Based on 3D Green Function Method

Authors: Md. Moinul Islam, N. M. Golam Zakaria

Abstract:

Seakeeping analysis of catamaran vessels in the earlier stages of design has become an important issue, as it dictates the seakeeping characteristics and ensures safe navigation during the voyage. In the present paper, a 3D numerical method for seakeeping prediction of catamaran vessels is presented using the 3D Green function method. Both the steady and unsteady potential flow problems are dealt with. Using 3D linearized potential theory, the dynamic wave loads and the subsequent response of the vessel are computed. For validation of the numerical procedure, a catamaran composed of twin Wigley-form demi-hulls is used. The results of the present calculation are compared with available experimental data and with other calculations. The numerical procedure is also carried out for an NPL-based round bilge catamaran, and hydrodynamic coefficients along with heave and pitch motion responses are presented for various Froude numbers. The results obtained by the present numerical method are found to be in fairly good agreement with the available data, so it can serve as a design tool for predicting the seakeeping behavior of catamaran ships in waves.

Keywords: catamaran, hydrodynamic coefficients, motion response, 3D green function

Procedia PDF Downloads 190
36885 A Study on the Solutions of 2-Dimensional, Fourth-Order Partial Differential Equations

Authors: O. Acan, Y. Keskin

Abstract:

In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method, and the homotopy analysis method, all of which are used in many fields of engineering. This is achieved by handling a kind of 2-dimensional, fourth-order partial differential equation called the Kuramoto-Sivashinsky equation. Three numerical examples have also been carried out to validate and demonstrate the efficiency of the four methods. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods: it is very effective and simple, and it can be applied to the nonlinear problems that arise in engineering.

Keywords: reduced differential transform method, adomian decomposition method, variational iteration method, homotopy analysis method

Procedia PDF Downloads 409
36884 Sentiment Classification Using Enhanced Contextual Valence Shifters

Authors: Vo Ngoc Phu, Phan Thi Tuoi

Abstract:

We have explored different methods of improving the accuracy of sentiment classification. The sentiment orientation of a document can be positive (+), negative (-), or neutral (0). We combine five dictionaries from [2, 3, 4, 5, 6] into a new one with 21,137 entries. The new dictionary has many verbs, adverbs, phrases, and idioms that were not in the five original ones. The paper shows that our proposed method, based on the combination of the Term-Counting method and the Enhanced Contextual Valence Shifters method, improves the accuracy of sentiment classification. The combined method achieves an accuracy of 68.984% on the testing dataset and 69.224% on the training dataset. All of these methods are implemented to classify reviews based on our new dictionary and the Internet Movie data set.
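
A toy version of term counting with contextual valence shifters (the dictionary entries and the two-token shifter window are illustrative choices, not the paper's 21,137-entry lexicon):

```python
# Negators flip polarity; intensifiers scale it.
LEXICON = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "boring": -1}
NEGATORS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0, "slightly": 0.5}

def orientation(review):
    words, total = review.lower().split(), 0.0
    for i, w in enumerate(words):
        if w not in LEXICON:
            continue
        value = LEXICON[w]
        for prev in words[max(0, i - 2):i]:      # look back two tokens for shifters
            if prev in NEGATORS:
                value = -value                   # contextual polarity flip
            elif prev in INTENSIFIERS:
                value *= INTENSIFIERS[prev]
        total += value
    return "+" if total > 0 else "-" if total < 0 else "0"

print(orientation("This movie is not very good"))   # -
print(orientation("An extremely great film"))       # +
```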

Keywords: sentiment classification, sentiment orientation, valence shifters, contextual valence shifters, term counting

Procedia PDF Downloads 481
36883 Lineup Optimization Model of Basketball Players Based on the Prediction of Recurrent Neural Networks

Authors: Wang Yichen, Haruka Yamashita

Abstract:

In recent years in the field of sports, decision making, such as selecting game lineups and game strategy based on analysis of accumulated sports data, has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze per-play game data, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated; an analysis method for real-time game play data is therefore needed. In this research, we propose an analytical model for 'determining the optimal lineup composition' using real-time play data, a decision considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are 'whether or not the lineup should be changed' and 'whether or not a Small Ball lineup should be adopted'. We therefore propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data can be accumulated for each play, indicating a player's contribution to the game, and these scoring data can be treated as time series data. In order to compare the importance of players across different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a model that predicts the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected all accumulated NBA data from the 2019-2020 season and applied the method to actual basketball play data to verify the reliability of the proposed model.
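
A schematic of the combined RNN + NN architecture, assuming a Keras implementation with placeholder tensors standing in for the play-by-play sequences and game-situation features:

```python
import numpy as np
from tensorflow.keras import layers, Model

# Two-branch sketch: an LSTM reads recent per-play scoring contributions as a
# time series; a dense branch reads the current game situation; the merged
# features predict the score margin for a candidate (e.g., Small Ball) lineup.
seq_in = layers.Input(shape=(20, 5))        # last 20 plays x 5 per-play features
ctx_in = layers.Input(shape=(8,))           # situation: score gap, time left, fouls...

h = layers.LSTM(32)(seq_in)                 # RNN branch for time-series play data
c = layers.Dense(16, activation="relu")(ctx_in)
out = layers.Dense(1)(layers.concatenate([h, c]))   # predicted scoring margin

model = Model([seq_in, ctx_in], out)
model.compile(optimizer="adam", loss="mse")

# Placeholder tensors standing in for 2019-2020 play-by-play data
X_seq = np.random.rand(256, 20, 5).astype("float32")
X_ctx = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit([X_seq, X_ctx], y, epochs=2, batch_size=32, verbose=0)
print(model.predict([X_seq[:1], X_ctx[:1]], verbose=0))  # score a candidate lineup
```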

Keywords: recurrent neural network, players lineup, basketball data, decision making model

Procedia PDF Downloads 107
36882 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan , V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using 'basis vectors' that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone 'seed atoms' that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using 'envelope samples' of windowed segments of the audio data. The envelope samples are extracted by identifying audio data segments that are locally coherent with the Gabor or gammatone seed atoms, found by matching pursuit, and are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability at data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
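
The matching pursuit step that finds locally coherent segments can be sketched as a greedy projection loop over a toy Gabor-like dictionary (the window widths and frequencies are illustrative):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy atomic decomposition: repeatedly pick the (unit-norm) atom most
    correlated with the residual and subtract its projection."""
    residual, decomposition = signal.copy(), []
    for _ in range(n_atoms):
        corr = dictionary @ residual                 # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))
        coef = corr[k]
        residual -= coef * dictionary[k]
        decomposition.append((k, coef))
    return decomposition, residual

# Toy Gabor-like dictionary: Gaussian-windowed cosines at several frequencies/centers
n = 256
t = np.arange(n)
atoms = []
for f in (4, 8, 16, 32):
    for c in range(0, n, 32):
        g = np.exp(-0.5 * ((t - c) / 16.0) ** 2) * np.cos(2 * np.pi * f * t / n)
        atoms.append(g / np.linalg.norm(g))
D = np.array(atoms)

x = D[5] * 3.0 + D[20] * 1.5 + 0.05 * np.random.default_rng(9).standard_normal(n)
decomp, res = matching_pursuit(x, D, n_atoms=5)
print("selected atoms:", decomp[:2], " residual energy:", float(res @ res))
```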

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 230
36881 Pleomorphic Dermal Sarcoma: A Management Challenge

Authors: Mona Nada, Fahmy Fahmy

Abstract:

Background: Pleomorphic dermal sarcoma is a rare form of skin cancer affecting the cutaneous layer and, in some cases, associated with recurrence and metastasis. It is most commonly seen in elderly patients, affects the head and neck, and arises in areas exposed to ultraviolet light. The symptoms and severity of this kind of skin cancer vary according to histological factors. The diagnosis of pleomorphic dermal sarcoma requires extensive immunohistochemistry, as it depends mainly on excluding other malignancies such as poorly differentiated squamous cell carcinoma, melanoma, angiosarcoma, and leiomyosarcoma. Objective: To assess the management of pleomorphic dermal sarcoma in our unit against the updated guidelines. Design: A retrospective study. Patient data were collected from medical records at the Countess of Chester plastic surgery unit over the last five years; all histologically confirmed cases of pleomorphic dermal sarcoma (2017-2023) were included. The data collected comprise the clinical description of the lesions at first presentation, operation time, multidisciplinary team discussion, plan, and referral, as well as any second operation and investigations done, together with the histological examination, immunohistochemistry staining, excision, and rate of recurrence. Results: The data (N = 19, 2017-2023) showed the disease predominantly affecting males, with lesions mainly in the head and neck; the diagnosis needed extensive immunohistochemistry to differentiate it from other malignancies. Recurrence was present in a number of cases, which were managed after multidisciplinary team discussion either by excision or by radiotherapy. Conclusion: Pleomorphic dermal sarcoma is a rare malignancy that needs better understanding and must not be missed, as it is an aggressive form of skin cancer; the risk of metastasis and recurrence makes it very important to understand the process of development of the cancer and to review the management guidelines frequently.

Keywords: pleomorphic dermal sarcoma, recurrence, radiotherapy, surgical

Procedia PDF Downloads 49
36880 Contribution of Community-Based House to House (H2H) Active Tuberculosis (TB) Case Finding (ACF) to Increase in TB Notification in Nigeria: Kano State Experience 2012 to 2022

Authors: Ibrahim Umar, S Chindo, A Rajab

Abstract:

Background: TB remains a disease of public health concern in Nigeria, with an estimated incidence rate of 219/100,000. Kano has the second highest TB burden in Nigeria and has consistently led the states in yearly TB notification. House-to-house (H2H) active case search in the community was found to make a major contribution to total TB notification in the state. Aims and Objectives: To show the impact of H2H community active TB case finding (ACF) on yearly TB notification in Kano State, Northern Nigeria, from 2012 to 2022. Methodology: This is a retrospective descriptive study based on the analysis of routine quarterly and yearly TB data collected in the state. Data were analyzed using Power BI with a statistical alpha level of significance of <0.05. Results: In 2012 and 2013, there was no house-to-house active TB case search in Nigeria, and Kano had zero community contribution to TB notification in those years. However, in 2014, with the introduction of the H2H active TB case search, Kano notified 6,014 TB cases, of which 113 came from community ACF, translating to a 2% contribution to total TB notification. From 2014 to 2022, there was a progressive increase in community contribution to TB case notification, from 113 of 6,014 notified TB patients (2014) to 11,799 of 26,371 notified TB patients (2022) in Kano State. This translated to a 45% community contribution to total TB case notification. Discussion: A remarkable increase in community contribution to total TB case notification in Kano State was achieved in 2022, with 11,799 TB cases notified from community active TB case search out of the 26,371 total TB cases notified in Kano State, Nigeria. Conclusion: This research has shown that community-based H2H active TB case search through Community TB Workers (CTWs) is an excellent strategy for finding the missing TB cases towards ending TB in the world.

Keywords: tuberculosis (TB), active case finding (ACF), house-to-house (H2H), community TB workers (CTWs)

Procedia PDF Downloads 45