Search results for: datasets
512 Decision Trees Constructing Based on K-Means Clustering Algorithm
Authors: Loai Abdallah, Malik Yousef
Abstract:
A domain space for the data should reflect the actual similarity between objects, since objects belonging to the same cluster usually share common traits even though their geometric distance might be relatively large. In general, the Euclidean distance between data points represented by a large number of features does not capture the actual relation between those points. In this study, we propose a new method to construct a different space, based on clustering, to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of a clustering algorithm. Over this distance, we train a decision tree classifier (DT-EC). The results obtained by applying DT-EC to 10 datasets confirm our hypothesis that embedding the EC space as a distance metric improves performance.
Keywords: ensemble clustering, decision trees, classification, K nearest neighbors
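A rough sketch of the idea (not the authors' exact EC metric): each point is represented by its cluster memberships over repeated k-means runs, and a decision tree is trained on that ensemble-clustering representation. The dataset, run count, and cluster count below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

def ec_representation(X, n_runs=20, k=8, seed=0):
    """One-hot cluster-membership indicators from repeated k-means runs.
    For simplicity the clustering is fit on all points at once."""
    rng = np.random.RandomState(seed)
    blocks = []
    for _ in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=1,
                        random_state=rng.randint(1 << 30)).fit_predict(X)
        blocks.append(np.eye(k)[labels])      # (n_samples, k) indicator block
    return np.hstack(blocks)                  # (n_samples, n_runs * k)

X_ec = ec_representation(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_ec, y, test_size=0.3, random_state=42)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy in EC space:", clf.score(X_te, y_te))
```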
Procedia PDF Downloads 191
511 A Comparison of YOLO Family for Apple Detection and Counting in Orchards
Authors: Yuanqing Li, Changyi Lei, Zhaopeng Xue, Zhuo Zheng, Yanbo Long
Abstract:
In agricultural production and breeding, implementing automatic picking robots in orchard farming to reduce human labour and error is challenging. Their core function is automatic identification based on machine vision. This paper focuses on apple detection and counting in orchards and implements several deep learning methods. Extensive datasets are used, and a semi-automatic annotation method is proposed. The proposed deep learning models belong to the state-of-the-art YOLO family. In view of the nature of the models with various backbones, a detailed multi-dimensional comparison is made in terms of counting accuracy, mAP, and model memory, laying the foundation for realising automatic precision agriculture.
Keywords: agricultural object detection, deep learning, machine vision, YOLO family
Procedia PDF Downloads 200
510 Monitoring Land Productivity Dynamics of Gombe State, Nigeria
Authors: Ishiyaku Abdulkadir, Satish Kumar J
Abstract:
Land productivity is a measure of the greenness, health, and potential gain of above-ground biomass; it is not related to agricultural productivity. Monitoring land productivity dynamics is essential to identify when and where the trend is characterised as degraded so that mitigation measures can be taken. This research aims to monitor the land productivity trend of Gombe State between 2001 and 2015. QGIS was used to compute NDVI from AVHRR/MODIS datasets in a cloud-based method. The results show that land area with improving productivity accounts for 773 sq. km (4.31%), stable productivity for 4,195.6 sq. km (23.40%), stable but stressed productivity for 18.7 sq. km (0.10%), early signs of declining productivity for 5,203.1 sq. km (29%), declining productivity for 7,019.7 sq. km (39.2%), and water bodies for 718.7 sq. km (4%) of the state’s area.
Keywords: above-ground biomass, dynamics, land productivity, man-environment relationship
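An illustrative sketch (not the study's QGIS workflow): compute NDVI from red and near-infrared bands and fit a per-pixel linear trend over a stack of annual composites, a common first step when mapping improving versus declining productivity. The band arrays here are random placeholders.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

# Hypothetical annual composites, shape (years, rows, cols)
years = np.arange(2001, 2016)
red_stack = np.random.rand(len(years), 100, 100)
nir_stack = np.random.rand(len(years), 100, 100) + 0.2

ndvi_stack = ndvi(red_stack, nir_stack)

# Per-pixel slope of NDVI against time: positive = improving, negative = declining.
t = years - years.mean()
slope = np.tensordot(t, ndvi_stack - ndvi_stack.mean(axis=0), axes=(0, 0)) / (t ** 2).sum()
print("share of pixels with improving greenness:", (slope > 0).mean())
```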
Procedia PDF Downloads 145
509 Robust Variable Selection Based on Schwarz Information Criterion for Linear Regression Models
Authors: Shokrya Saleh A. Alshqaq, Abdullah Ali H. Ahmadini
Abstract:
The Schwarz information criterion (SIC) is a popular tool for selecting the best variables in regression datasets. However, SIC is defined using an unbounded estimator, namely the least-squares (LS) estimator, which is highly sensitive to outlying observations, especially bad leverage points. A method for robust variable selection based on SIC for linear regression models is thus needed. This study investigates the robustness properties of SIC by deriving its influence function and proposes a robust SIC based on the MM-estimation scale. The aim of this study is to produce a criterion that can effectively select accurate models in the presence of vertical outliers and high leverage points. The advantages of the proposed robust SIC are demonstrated through a simulation study and an analysis of a real dataset.
Keywords: influence function, robust variable selection, robust regression, Schwarz information criterion
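A minimal sketch of the underlying idea: replace the least-squares residual scale in SIC = n·log(σ²) + p·log(n) with a robust scale estimate. The Huber regressor and MAD scale below stand in for the paper's MM-estimator; the data are simulated with a few vertical outliers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 0.0, -1.0]) + rng.normal(scale=1.0, size=n)
y[:10] += 15.0  # a few vertical outliers

def sic(scale, n_params, n_obs):
    """SIC = n*log(sigma^2) + p*log(n), with sigma given by some scale estimate."""
    return n_obs * np.log(scale ** 2) + n_params * np.log(n_obs)

ols = LinearRegression().fit(X, y)
res_ols = y - ols.predict(X)
sic_ls = sic(res_ols.std(), p + 1, n)

rob = HuberRegressor().fit(X, y)
res_rob = y - rob.predict(X)
mad_scale = 1.4826 * np.median(np.abs(res_rob - np.median(res_rob)))
sic_robust = sic(mad_scale, p + 1, n)

print("classical SIC:", round(sic_ls, 1), " robust-scale SIC:", round(sic_robust, 1))
```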
Procedia PDF Downloads 142
508 Combining the Dynamic Conditional Correlation and Range-GARCH Models to Improve Covariance Forecasts
Authors: Piotr Fiszeder, Marcin Fałdziński, Peter Molnár
Abstract:
The dynamic conditional correlation model of Engle (2002) is one of the most popular multivariate volatility models. However, this model is based solely on closing prices. It has been documented in the literature that the high and low prices of the day can be used for efficient volatility estimation. We therefore suggest a model which incorporates high and low prices into the dynamic conditional correlation framework. Empirical evaluation of this model is conducted on three datasets: currencies, stocks, and commodity exchange-traded funds. The utilisation of realized variances and covariances as proxies for true variances and covariances allows us to reach a strong conclusion that our model outperforms not only the standard dynamic conditional correlation model but also a competing range-based dynamic conditional correlation model.
Keywords: volatility, DCC model, high and low prices, range-based models, covariance forecasting
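For illustration only: the daily high-low range carries volatility information beyond closing prices. The Parkinson estimator below is one classic range-based variance proxy; the paper embeds such range information in a full DCC framework, which is not reproduced here. Prices are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))
high = close * np.exp(np.abs(rng.normal(0, 0.006, n)))
low = close * np.exp(-np.abs(rng.normal(0, 0.006, n)))

# Close-to-close variance proxy from log returns
ret = np.diff(np.log(close))
var_cc = ret.var()

# Parkinson (1980) range-based variance: E[(ln(H/L))^2] / (4 ln 2)
var_park = np.mean(np.log(high / low) ** 2) / (4 * np.log(2))

print("close-to-close variance:", var_cc, " Parkinson range-based variance:", var_park)
```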
Procedia PDF Downloads 184
507 Distorted Document Images Dataset for Text Detection and Recognition
Authors: Ilia Zharikov, Philipp Nikitin, Ilia Vasiliev, Vladimir Dokholyan
Abstract:
With the increasing popularity of document analysis and recognition systems, text detection (TD) and optical character recognition (OCR) in document images have become challenging tasks. However, to the best of our knowledge, no publicly available datasets for these particular problems exist. In this paper, we introduce the Distorted Document Images dataset (DDI-100) and provide a detailed analysis of DDI-100 in its current state. To create the dataset, we collected 7,000 unique document pages and extended them by applying different types of distortions and geometric transformations. In total, DDI-100 contains more than 100,000 document images together with binary text masks and text and character locations in terms of bounding boxes. We also present an analysis of several state-of-the-art TD and OCR approaches on the presented dataset. Lastly, we demonstrate the usefulness of DDI-100 for improving the accuracy and stability of the considered TD and OCR models.
Keywords: document analysis, open dataset, optical character recognition, text detection
Procedia PDF Downloads 175
506 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems
Authors: Semih Demir, Anil Celebi
Abstract:
Circuit switched networks, widely used until the end of the 20th century, have been transformed into packet switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate with the same clock frequency. In this study, we aimed to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with a software model. Field tests are also performed using a real-time TDMoIP system.
Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization
Procedia PDF Downloads 279
505 Person Re-Identification using Siamese Convolutional Neural Network
Authors: Sello Mokwena, Monyepao Thabang
Abstract:
In this study, we propose a comprehensive approach to address the challenges in person re-identification models. By combining a centroid tracking algorithm with a Siamese convolutional neural network model, our method excels in detecting, tracking, and capturing robust person features across non-overlapping camera views. The algorithm efficiently identifies individuals in the camera network, while the neural network extracts fine-grained global features for precise cross-image comparisons. The approach's effectiveness is further accentuated by leveraging the camera network topology for guidance. Our empirical analysis on benchmark datasets highlights its competitive performance, particularly evident when background subtraction techniques are selectively applied, underscoring its potential in advancing person re-identification techniques.
Keywords: camera network, convolutional neural network topology, person tracking, person re-identification, siamese
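A minimal PyTorch sketch of the Siamese idea, not the authors' architecture or their centroid tracker: two crops share one CNN encoder, and the distance between their embeddings decides whether they show the same person. The layer sizes, input shape, and threshold are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embedding_dim),
        )

    def forward(self, x):
        # Unit-length embeddings make the distance threshold easier to set
        return F.normalize(self.backbone(x), dim=1)

model = SiameseEncoder()
img_a = torch.randn(4, 3, 128, 64)   # person crops from camera A
img_b = torch.randn(4, 3, 128, 64)   # person crops from camera B
dist = F.pairwise_distance(model(img_a), model(img_b))
same_person = dist < 1.0             # illustrative threshold
print(dist, same_person)
```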
Procedia PDF Downloads 73
504 Healthcare Data Mining Innovations
Authors: Eugenia Jilinguirian
Abstract:
In the healthcare industry, data mining is essential since it transforms the field by extracting useful knowledge from large datasets. Data mining is the process of applying advanced analytical methods to large patient records and medical histories in order to identify patterns, correlations, and trends. Healthcare professionals can improve diagnostic accuracy, uncover hidden linkages, and predict disease outcomes by carefully examining these statistics. Additionally, data mining supports personalized medicine by tailoring treatment to the unique attributes of each patient. This proactive strategy helps allocate resources more efficiently, enhances patient care, and streamlines operations. However, to apply data mining effectively and ensure the appropriate use of private healthcare information, issues like data privacy and security must be carefully considered. Data mining continues to be vital in the search for more effective, efficient, and individualized healthcare solutions as technology evolves.
Keywords: data mining, healthcare, big data, individualised healthcare, healthcare solutions, database
Procedia PDF Downloads 68
503 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods using datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes, and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed higher and lower estimations when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, that is, different capture rates between individuals. In these examples, homogeneity of capture rates seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 144
502 A Machine Learning Approach to Detecting Evasive PDF Malware
Authors: Vareesha Masood, Ammara Gul, Nabeeha Areej, Muhammad Asif Masood, Hamna Imran
Abstract:
The universal use of PDF files has prompted hackers to use them for malicious intent by hiding malicious code in victims' PDF files. Machine learning has proven to be highly efficient in identifying benign files and detecting files containing PDF malware. This paper proposes an approach using a decision tree classifier. A modern, inclusive dataset, CIC-Evasive-PDFMal2022, produced by Lockheed Martin’s Cyber Security wing, is used; it is one of the most reliable datasets in this field. We designed a PDF malware detection system that achieved an accuracy of 99.2%. Compared to other cutting-edge models in the same field of study, the suggested model performs very well in detecting PDF malware. Accordingly, we provide the fastest, most reliable, and most efficient PDF malware detection approach in this paper.
Keywords: PDF, PDF malware, decision tree classifier, random forest classifier
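A sketch of the classification step only; extracting numeric features from PDFs and the CIC-Evasive-PDFMal2022 data are not reproduced, so the feature matrix and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
# Hypothetical per-PDF features, e.g. object count, stream count, JavaScript flag, ...
X = rng.normal(size=(1000, 12))
y = (X[:, 2] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, min_samples_leaf=5, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```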
Procedia PDF Downloads 92
501 Improvement of Ground Truth Data for Eye Location on Infrared Driver Recordings
Authors: Sorin Valcan, Mihail Gaianu
Abstract:
Labeling is a very costly and time-consuming process which aims to generate datasets for training neural networks for several functionalities and projects. For driver monitoring system projects, the need for labeled images has a significant impact on the budget and the distribution of effort. This paper presents the modifications made to an algorithm used for generating ground truth data for 2D eye locations on infrared images of drivers, in order to improve the quality of the data and the performance of the trained neural networks. The algorithm's restrictions are made stricter, which makes it more accurate but also less consistent. The resulting dataset becomes smaller and shall not be altered by any kind of manual label adjustment before being used in the neural network training process. These changes resulted in a much better performance of the trained neural networks.
Keywords: labeling automation, infrared camera, driver monitoring, eye detection, convolutional neural networks
Procedia PDF Downloads 119
500 Designing Emergency Response Network for Rail Hazmat Shipments
Authors: Ali Vaezi, Jyotirmoy Dalal, Manish Verma
Abstract:
The railroad is one of the primary transportation modes for hazardous materials (hazmat) shipments in North America. Installing an emergency response network capable of providing a commensurate response is one of the primary levers to contain (or mitigate) the adverse consequences of rail hazmat incidents. To this end, we propose a two-stage stochastic program to determine the location of, and equipment packages to be stockpiled at, each response facility. The raw input data collected from publicly available reports were processed, fed into the proposed optimization program, and then tested on a realistic railroad network in Ontario (Canada). From the resulting analyses, we conclude that decisions based only on empirical datasets would undermine the effectiveness of the resulting network; coverage can be improved by redistributing equipment in the network, purchasing equipment with higher containment capacity, and making use of a disutility multiplier factor.
Keywords: hazmat, rail network, stochastic programming, emergency response
Procedia PDF Downloads 182
499 Understanding and Improving Neural Network Weight Initialization
Authors: Diego Aguirre, Olac Fuentes
Abstract:
In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures, and faster convergence rates are achieved on the MNIST, CIFAR-10/100, and ImageNet datasets when compared to state-of-the-art initialization techniques.
Keywords: deep learning, image classification, supervised learning, weight initialization
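A toy numpy sketch of the data-dependent flavour of this idea, covering only properties 1 and 3 loosely: start from an orthogonal weight matrix, run an initialization mini-batch through the layer, and rescale columns so each output unit has unit variance (the rescaling relaxes exact orthonormality, and the ReLU-count and last-layer steps of the authors' method are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(x, fan_out):
    """Orthogonal directions via QR, then per-unit rescaling from mini-batch statistics."""
    fan_in = x.shape[1]
    a = rng.normal(size=(fan_in, fan_out))
    q, _ = np.linalg.qr(a)                        # orthonormal columns
    pre = x @ q                                   # pre-activations on the init mini-batch
    q = q / pre.std(axis=0, keepdims=True)        # unit output variance per unit
    return q

batch = rng.normal(size=(256, 784))               # initialization mini-batch
W1 = init_layer(batch, 300)
h1 = np.maximum(batch @ W1, 0.0)                  # ReLU activations of the first layer
print("per-unit output std (should be ~1):", (batch @ W1).std(axis=0)[:5])
print("hidden activation shape:", h1.shape)
```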
Procedia PDF Downloads 136
498 Sentiment Analysis of Consumers’ Perceptions on Social Media about the Main Mobile Providers in Jamaica
Authors: Sherrene Bogle, Verlia Bogle, Tyrone Anderson
Abstract:
In recent years, organizations have become increasingly interested in the possibility of analyzing social media as a means of gaining meaningful feedback about their products and services. An aspect-based sentiment analysis approach is used to predict the sentiment of Twitter datasets for Digicel and Lime, the main mobile companies in Jamaica, using supervised learning classification techniques. The results indicate an average accuracy of 82.2 percent in classifying tweets when comparing three separate classification algorithms against the purported baseline of 70 percent, and an average root mean squared error of 0.31. These results indicate that analyzing sentiment on social media in order to gain customer feedback can be a viable solution for mobile companies looking to improve business performance.
Keywords: machine learning, sentiment analysis, social media, supervised learning
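A generic supervised tweet-sentiment pipeline as a sketch; the study's aspect-based approach and its Digicel/Lime Twitter datasets are not reproduced, so the example tweets and labels are invented.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "love the new data plan from my provider",
    "signal keeps dropping, terrible service today",
    "top up was quick and easy",
    "still waiting on customer care, very slow",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features feeding a simple supervised classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["the service was quick and friendly"]))
```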
Procedia PDF Downloads 445
497 Opening up Government Datasets for Big Data Analysis to Support Policy Decisions
Authors: K. Hardy, A. Maurushat
Abstract:
Policy makers are increasingly looking to make evidence-based decisions. Evidence-based decisions have historically relied on the rigorous methodologies of empirical studies by research institutes, as well as on less reliable immediate surveys and polls, often with limited sample sizes. As we move into the era of Big Data analytics, policy makers are looking to different methodologies to deliver reliable empirics in real time. The question is no longer why people have behaved this way for the last 10 years, but why they are behaving this way now, whether this is undesirable, and how we can have an immediate impact to promote change. Big data analytics rely heavily on government data that has been released into the public domain. The open data movement promises greater productivity and more efficient delivery of services; however, Australian government agencies remain reluctant to release their data to the general public. This paper considers the barriers to releasing government data as open data, and how these barriers might be overcome.
Keywords: big data, open data, productivity, data governance
Procedia PDF Downloads 372
496 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
Authors: Svitov David, Alyamkin Sergey
Abstract:
The use of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30, and MegaFace datasets. The idea of the proposed method is to use class centers from the teacher network for the student network. The student network is then trained to reproduce the same angles between the class centers and the face embeddings predicted by the teacher network.
Keywords: ArcFace, distillation, face recognition, margin-based softmax
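A schematic PyTorch loss, not the authors' training code: the student is pushed to reproduce the cosine similarities (angles) between the teacher's class centers and the teacher's face embeddings. Matching embedding dimensions and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def margin_distillation_loss(student_emb, teacher_emb, teacher_centers):
    """student_emb, teacher_emb: (N, d); teacher_centers: (C, d)."""
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    c = F.normalize(teacher_centers, dim=1)
    cos_student = s @ c.t()          # angles of student embeddings to teacher class centers
    cos_teacher = t @ c.t()          # target angles produced by the teacher
    return F.mse_loss(cos_student, cos_teacher)

student_emb = torch.randn(8, 128, requires_grad=True)   # from the lightweight student
teacher_emb = torch.randn(8, 128)                       # from the teacher (same dim assumed)
teacher_centers = torch.randn(1000, 128)                # one center per identity
loss = margin_distillation_loss(student_emb, teacher_emb, teacher_centers)
loss.backward()
print(loss.item())
```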
Procedia PDF Downloads 147
495 Fusion of Shape and Texture for Unconstrained Periocular Authentication
Authors: D. R. Ambika, K. R. Radhika, D. Seshachalam
Abstract:
Unconstrained authentication is an important component of personal automated systems and human-computer interfaces. Existing solutions mostly use the face as the primary object of analysis. The performance of face-based systems is largely determined by the extent of deformation in the facial region and the amount of useful information available in occluded face images. The periocular region is a useful portion of the face, offering discriminative ability coupled with resistance to deformation, and a reliable portion of the periocular area remains available in occluded images. The present work demonstrates that a joint representation of periocular texture and periocular structure provides an effective expression- and pose-invariant representation. The proposed methodology provides an effective and compact description of periocular texture and shape. The method is tested on four benchmark datasets exhibiting varied acquisition conditions.
Keywords: periocular authentication, Zernike moments, LBP variance, shape and texture fusion
Procedia PDF Downloads 279
494 Investigating the Factors Affecting Generalization of Deep Learning Models for Plant Disease Detection
Authors: Praveen S. Muthukumarana, Achala C. Aponso
Abstract:
A large percentage of the global crop harvest is lost to crop diseases. Timely identification and treatment of crop diseases is difficult in many developing nations due to an insufficient number of trained professionals in the field of agriculture. Many crop diseases can be accurately diagnosed from visual symptoms. In the past decade, deep learning has been successfully utilized in domains such as healthcare, but its adoption in agriculture for plant disease detection is rare. The literature shows that models trained on popular datasets such as PlantVillage do not generalize well to real-world images. This paper attempts to find out how to build plant disease identification models that generalize well to real-world images.
Keywords: agriculture, convolutional neural network, deep learning, plant disease classification, plant disease detection, plant disease diagnosis
Procedia PDF Downloads 146
493 Analysis of Formation Methods of Range Profiles for an X-Band Coastal Surveillance Radar
Authors: Nguyen Van Loi, Le Thanh Son, Tran Trung Kien
Abstract:
The paper deals with the problem of the formation of range profiles (RPs) for an X-band coastal surveillance radar. Two popular methods, the difference operator method and the window-based method, are reviewed and analyzed via two tests with different datasets. The test results show that although the original window-based method achieves better performance than the difference operator method, it has three main drawbacks: the use of 3 or 4 peaks of an RP for creating the windows; the extension of the window size using the power sum of three adjacent cells on the left and right sides of the windows; and the application of the same threshold to all types of vessels to finish the formation process of RPs. These drawbacks lead to inaccurate RPs when the signal-to-clutter ratio is low. Therefore, some suggestions are proposed to improve the original window-based method.
Keywords: range profile, difference operator method, window-based method, automatic target recognition
Procedia PDF Downloads 127
492 Institutional Capacity and Corruption: Evidence from Brazil
Authors: Dalson Figueiredo, Enivaldo Rocha, Ranulfo Paranhos, José Alexandre
Abstract:
This paper analyzes the effects of institutional capacity on corruption. Methodologically, the research design combines descriptive and multivariate statistics to examine two original datasets based on secondary data. In particular, we employ a principal component model to estimate an indicator of institutional capacity for both state audit institutions and subnational judiciary courts. We then estimate the effect of institutional capacity on two dependent variables: (1) the incidence of administrative irregularities and (2) the time elapsed to judge corruption cases. The preliminary results using ordinary least squares, negative binomial, and Tobit models suggest the same conclusions: the higher the institutional audit capacity, the higher the probability of detecting a corruption case. On the other hand, the higher the institutional capacity of the state judiciary, the shorter the time to judge corruption cases.
Keywords: institutional capacity, corruption, state level institutions, evidence from Brazil
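A sketch of building a composite capacity indicator with a principal component model; the variable names and random values below are invented stand-ins for the audit-institution data used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Hypothetical indicators for 27 state audit institutions
df = pd.DataFrame({
    "budget_per_capita": rng.normal(size=27),
    "auditors_per_100k": rng.normal(size=27),
    "cases_opened": rng.normal(size=27),
    "it_systems_score": rng.normal(size=27),
})

z = StandardScaler().fit_transform(df)            # standardize before PCA
pca = PCA(n_components=1)
df["capacity_index"] = pca.fit_transform(z).ravel()   # first principal component as the index
print(pca.explained_variance_ratio_)
print(df["capacity_index"].head())
```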
Procedia PDF Downloads 373
491 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data
Authors: Fan Gao, Lior Pachter
Abstract:
The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.
Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome
Procedia PDF Downloads 156
490 Liver Tumor Detection by Classification through FD Enhancement of CT Image
Authors: N. Ghatwary, A. Ahmed, H. Jalab
Abstract:
In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differential (FD) is applied to enhance the liver CT images, with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce a variety of feature improvements, which increases the accuracy of classification. Each image is divided into NxN non-overlapping blocks to extract the desired features. A support vector machine (SVM) classifier is trained on a supplied dataset different from the tested one. Finally, each block is classified as tumor or non-tumor. Our approach is validated on a group of patients' CT liver tumor datasets. The experimental results demonstrate the efficiency of detection of the proposed technique.
Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features
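A sketch of the block-wise classification stage only: split a (previously enhanced) CT slice into NxN non-overlapping blocks, extract simple texture statistics per block, and classify each block with an SVM. The fractional-differential enhancement, the fusion step, and the real patient data are not reproduced, so images and labels are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def block_features(image, n=8):
    """Mean and standard deviation per n x n block (toy texture features)."""
    h, w = (image.shape[0] // n) * n, (image.shape[1] // n) * n
    blocks = image[:h, :w].reshape(h // n, n, w // n, n).swapaxes(1, 2).reshape(-1, n * n)
    return np.column_stack([blocks.mean(axis=1), blocks.std(axis=1)])

rng = np.random.default_rng(5)
train_img = rng.random((256, 256))                          # stands in for an enhanced CT slice
train_feats = block_features(train_img)
train_labels = rng.integers(0, 2, size=len(train_feats))    # placeholder tumor/non-tumor labels

clf = SVC(kernel="rbf").fit(train_feats, train_labels)
test_feats = block_features(rng.random((256, 256)))
print("blocks flagged as tumor:", int(clf.predict(test_feats).sum()))
```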
Procedia PDF Downloads 359
489 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes in RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, checking whether a birthdate is assigned to xsd:datetime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
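A small pyshacl sketch of the kind of datatype validation described above, analogous to the birthdate check; ex:Person and ex:birthDate are placeholder terms, not actual CIDOC-CRM classes or properties, and the graphs are tiny inline examples.

```python
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .
ex:BirthDateShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:birthDate ; sh:datatype xsd:dateTime ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Person ; ex:birthDate "not-a-date" .
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")
conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)        # False: the literal is not typed as xsd:dateTime
print(report_text)
```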
Procedia PDF Downloads 253
488 Protein Remote Homology Detection and Fold Recognition by Combining Profiles with Kernel Methods
Authors: Bin Liu
Abstract:
Protein remote homology detection and fold recognition are two of the most important tasks in protein sequence analysis, which is critical for protein structure and function studies. In this study, we combined profile-based features with various string kernels and constructed several computational predictors for protein remote homology detection and fold recognition. Experimental results on two widely used benchmark datasets showed that these methods outperformed the competing methods, indicating that these predictors are useful computational tools for protein sequence analysis. By analyzing the discriminative features of the training models, some interesting patterns were discovered, reflecting the characteristics of protein superfamilies and folds, which are important for researchers interested in finding the patterns of protein folds.
Keywords: protein remote homology detection, protein fold recognition, profile-based features, Support Vector Machines (SVMs)
Procedia PDF Downloads 163
487 Identification of Impact Load and Partial System Parameters Using 1D-CNN
Authors: Xuewen Yu, Danhui Dan
Abstract:
The identification of impact loads and some hard-to-obtain system parameters is crucial for analysis, validation, and evaluation activities in the engineering field. This paper proposes a method that utilizes neural networks based on a 1D-CNN to identify the impact load and partial system parameters from measured responses. To this end, forward computations are conducted to provide datasets consisting of the triples (parameter θ, input u, output y). Neural networks are then trained to learn the mapping from output back to input, f_{u|θ}: y → u, as well as from input and output to parameter, f_θ: (u, y) → θ. Afterward, by feeding the trained neural networks the measured output response, the input impact load and the system parameters can be calculated, respectively. The method is tested on two simulated examples and shows sound accuracy in estimating the impact load (waveform and location) and the system parameters.
Keywords: convolutional neural network, impact load identification, system parameter identification, inverse problem
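A schematic 1D-CNN (not the paper's network) mapping a measured response signal y(t) to a small parameter vector, illustrating the f_θ branch in simplified form (only y is used here, and the layer sizes and parameter count are assumptions).

```python
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    def __init__(self, n_params=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, y):            # y: (batch, 1, time_steps)
        return self.head(self.features(y))

net = ParamNet()
responses = torch.randn(8, 1, 1024)   # simulated measured response signals
theta_hat = net(responses)            # e.g. stiffness and damping estimates
print(theta_hat.shape)                # torch.Size([8, 2])
```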
Procedia PDF Downloads 127
486 Investigation of Regional Differences in Strong Ground Motions for the Iranian Plateau
Authors: Farhad Sedaghati, Shahram Pezeshk
Abstract:
Regional variations in strong ground motions for the Iranian Plateau have been investigated by using a simple statistical method called Analysis of Variance (ANOVA). In this respect, a large database consisting of 1157 records occurring within the Iranian Plateau with moment magnitudes of greater than or equal to 5 and Joyner-Boore distances up to 200 km has been considered. Geometric averages of horizontal peak ground accelerations (PGA) as well as 5% damped linear elastic response spectral accelerations (SA) at periods of 0.2, 0.5, 1.0, and 2.0 sec are used as strong motion parameters. The initial database is divided into two different datasets, for Northern Iran (NI) and Central and Southern Iran (CSI). The comparison between strong ground motions of these two regions reveals that there is no evidence for significant differences; therefore, data from these two regions may be combined to estimate the unknown coefficients of attenuation relationships.
Keywords: ANOVA, attenuation relationships, Iranian Plateau, PGA, regional variation, SA, strong ground motion
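A sketch of the one-way ANOVA test on regional groups: compare log-PGA samples from two regions and check whether the group means differ. The samples below are simulated, not the study's records; the sample sizes loosely echo the two regional subsets.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(11)
log_pga_ni = rng.normal(loc=-1.20, scale=0.5, size=400)    # Northern Iran (simulated)
log_pga_csi = rng.normal(loc=-1.25, scale=0.5, size=757)   # Central/Southern Iran (simulated)

f_stat, p_value = f_oneway(log_pga_ni, log_pga_csi)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("no evidence of a significant regional difference; datasets may be combined")
```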
Procedia PDF Downloads 315
485 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.
Keywords: earthquake, out-of-sequence thrust, disaster, human life
Procedia PDF Downloads 78
484 Incremental Learning of Independent Topic Analysis
Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda
Abstract:
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of document data. The amount of document data has been increasing since the spread of the Internet, and ITA was presented as one method to analyze such data. ITA is a method for extracting independent topics from document data by using Independent Component Analysis (ICA). ICA is a technique from signal processing; however, it is difficult to apply ITA to a growing number of documents because ITA must use all the document data, so the temporal and spatial cost is very high. Therefore, we present Incremental ITA, which extracts independent topics from a growing collection of documents. Incremental ITA updates the independent topics when new document data is added, starting from the independent topics extracted from the previous data. We show the results of applying Incremental ITA to benchmark datasets.
Keywords: text mining, topic extraction, independent, incremental, independent component analysis
Procedia PDF Downloads 309
483 A Quantitative Evaluation of Text Feature Selection Methods
Authors: B. S. Harish, M. B. Revanasiddappa
Abstract:
Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume, and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency, and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods for classifying text documents, viz. Term Frequency-Inverse Document Frequency (tfidf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ2), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM), and Symbolic Feature Selection (SFS). We evaluated all the feature selection methods on standard datasets such as 20 Newsgroups, the 4 University dataset, and Reuters-21578.
Keywords: classifiers, feature selection, text classification
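An illustration of two of the listed criteria (chi-square and mutual information) using scikit-learn's feature selectors; the tiny invented corpus stands in for 20 Newsgroups or Reuters-21578, and the number of selected features is arbitrary.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

docs = [
    "the goalkeeper saved a late penalty in the cup final",
    "the striker scored twice in the league match",
    "the coach praised the defence after the away win",
    "the new gpu accelerates deep learning training",
    "the compiler optimises the generated machine code",
    "the database index speeds up the query plan",
]
labels = [0, 0, 0, 1, 1, 1]   # 0 = sports, 1 = computing

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
vocab = vec.get_feature_names_out()

for name, score_fn in [("chi2", chi2), ("mutual information", mutual_info_classif)]:
    selector = SelectKBest(score_fn, k=5).fit(X, labels)
    picked = vocab[selector.get_support()]
    print(name, "->", list(picked))
```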
Procedia PDF Downloads 460