Search results for: elemental graph data model (EGDM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 34969

34609 A Unified Model for Orotidine Monophosphate Synthesis: Target for Inhibition of Growth of Mycobacterium tuberculosis

Authors: N. Naga Subrahmanyeswara Rao, Parag Arvind Deshpande

Abstract:

Understanding the nucleotide synthesis reactions of an organism helps in understanding its growth, which in the case of Mycobacterium tuberculosis is relevant to the design of anti-TB drugs. One reaction of the de novo pathway, which takes place in all organisms, was considered. The reaction between phosphoribosyl pyrophosphate and orotate, catalyzed by orotate phosphoribosyl transferase and a divalent metal ion, gives orotidine monophosphate, a nucleotide. All the reaction steps of the three experimentally proposed mechanisms for this reaction were considered to develop a kinetic rate expression. The model was validated using data for four organisms and successfully described the kinetics of the reported data. The developed model can therefore serve as a reliable, organism-independent description of the kinetics in new organisms without the need for mechanistic determination.
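
Where the mechanistic rate expression itself is not reproduced in the abstract, the fitting step it describes can be illustrated with a generic bi-substrate Michaelis-Menten form as a stand-in; the concentrations and rates below are hypothetical and the sketch only shows how such an expression would be calibrated against data.

```python
# A minimal sketch of fitting a kinetic rate expression to initial-rate data with scipy.
# The bi-substrate Michaelis-Menten form and the numbers are illustrative stand-ins,
# not the mechanistic rate expression derived in the paper.
import numpy as np
from scipy.optimize import curve_fit

def rate(X, vmax, ka, kb):
    """Illustrative two-substrate rate law: v = Vmax*A*B / ((Ka + A)*(Kb + B))."""
    a, b = X
    return vmax * a * b / ((ka + a) * (kb + b))

# Hypothetical initial-rate measurements: [PRPP], [orotate], observed rate.
prpp    = np.array([0.05, 0.10, 0.25, 0.50, 1.00, 2.00])
orotate = np.array([0.10, 0.10, 0.25, 0.50, 0.50, 1.00])
v_obs   = np.array([0.8, 1.3, 2.6, 3.8, 4.4, 5.1])

params, _ = curve_fit(rate, (prpp, orotate), v_obs, p0=[6.0, 0.2, 0.2])
print("Vmax, Ka, Kb =", params)
```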

Keywords: mechanism, nucleotide, organism, tuberculosis

Procedia PDF Downloads 311
34608 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the spatial econometric approaches available for modelling data collected with reference to location in space, from classical spatial econometrics to recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
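
As one concrete illustration of the kind of specification reviewed here (a standard formulation, not a model taken from the paper), a Bayesian hierarchical model for spatially referenced counts combines a Poisson likelihood with a latent spatial random effect, for example with a conditional autoregressive (CAR) prior:

```latex
y_i \mid \mu_i \sim \mathrm{Poisson}(\mu_i), \qquad
\log \mu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \phi_i, \qquad
\boldsymbol{\phi} \sim \mathrm{CAR}(\tau^2, \rho), \qquad
\boldsymbol{\beta} \sim \mathcal{N}(\mathbf{0}, \sigma_\beta^2 \mathbf{I}).
```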

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 563
34607 Verification and Validation of a MapReduce Program Model for the Parallel K-Medoid Algorithm on a Hadoop Cluster

Authors: Trapti Sharma, Devesh Kumar Srivastava

Abstract:

This paper is an analysis of a MapReduce implementation and verifies and validates the MapReduce solution model for the parallel K-Medoid algorithm on a Hadoop cluster. MapReduce is a programming model that enables the processing of huge amounts of data in parallel on a large number of machines. It is especially well suited to static or slowly changing datasets, since the cost of setting up each job is usually high. MapReduce has gradually become the framework of choice for "big data". The MapReduce model allows for systematic and rapid processing of large-scale data with a cluster of compute nodes. One of the primary concerns in Hadoop is how to minimize the completion time (i.e., makespan) of a set of MapReduce jobs. In this paper, we have verified and validated various MapReduce applications such as wordcount, grep, terasort and the parallel K-Medoid clustering algorithm. We found that as the number of nodes increases, the completion time decreases.
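
As a minimal illustration of the programming model (not the code evaluated in the paper), a word count can be expressed as a mapper and a reducer suitable for Hadoop Streaming:

```python
# mapper.py -- emits (word, 1) pairs for every word read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
# reducer.py -- sums the counts per word; Hadoop Streaming sorts mapper output by key.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```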

Keywords: hadoop, mapreduce, k-medoid, validation, verification

Procedia PDF Downloads 344
34606 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in the bill amounts.
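
A minimal sketch of this kind of churn classifier, using a feed-forward neural network in scikit-learn; the features and data below are hypothetical placeholders, not the operator data used in the study.

```python
# Illustrative churn classification from expenditure-style features with an MLP.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical features: three monthly bill amounts plus the change in bill amount.
X = rng.normal(size=(1000, 4))
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print("churn prediction accuracy:", model.score(X_test, y_test))
```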

Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks

Procedia PDF Downloads 124
34605 Positioning a Southern Inclusive Framework Embedded in the Social Model of Disability Theory Contextualised for Guyana

Authors: Lidon Lashley

Abstract:

This paper presents how the social model of disability can be used to reshape inclusive education practices in Guyana. Inclusive education in Guyana is undergoing a metamorphosis but is still firmly held in the tenets of the Medical Model of Disability, which shapes the experiences of children with Special Education Needs and/or Disabilities (SEN/D). An ethnographic approach to data gathering was employed in this study. Qualitative data were gathered from the voices of children with and without SEN/D, as well as their mainstream teachers, to present the interplay of discourses and subjectivities in the situation. The data were analysed using Adele Clarke's postmodern approach to grounded theory analysis, called situational analysis. The data suggest that it is possible, though challenging, to fully contextualise and adopt Loreman's synthesis and Booth and Ainscow's Index in the two mainstream schools studied. In addition, the data paved the way for the presentation of a social model framework specific to Guyana, called the 'Southern Inclusive Education Framework for Guyana', and its support tool, 'The Inclusive Checker', created for Southern mainstream primary classrooms.

Keywords: social model of disability, medical model of disability, subjectivities, metamorphosis, special education needs, postcolonial Guyana, inclusion, culture, mainstream primary schools, Loreman's synthesis, Booth and Ainscow's index

Procedia PDF Downloads 130
34604 Integrated Model for Enhancing Data Security Performance in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing has been an important and promising field in the recent decade. It allows the sharing of resources, services and information among people across the whole world. Although the advantages of using clouds are great, there are also many risks; data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used, and the password is protected by SHA-2 one-way hashing. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
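
A minimal sketch of the two primitives named in the abstract, assuming the PyCryptodome library for Blowfish and Python's hashlib for the SHA-2 family; key handling and protocol details here are illustrative only, not the paper's protocol.

```python
# Illustrative only: Blowfish encryption of a payload plus a SHA-256 digest for
# integrity, assuming the PyCryptodome package (pip install pycryptodome).
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                      # hypothetical shared key
iv = get_random_bytes(Blowfish.block_size)      # 8-byte IV for CBC mode
payload = b"file contents to upload to the cloud"

# Encrypt with Blowfish in CBC mode.
cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(payload, Blowfish.block_size))

# Integrity check with SHA-256 (a member of the SHA-2 family).
digest = hashlib.sha256(payload).hexdigest()

# Receiver side: decrypt and verify the digest.
plain = unpad(Blowfish.new(key, Blowfish.MODE_CBC, iv).decrypt(ciphertext), Blowfish.block_size)
assert hashlib.sha256(plain).hexdigest() == digest
```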

Keywords: cloud Ccomputing, data security, SAAS, PAAS, IAAS, Blowfish

Procedia PDF Downloads 453
34603 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model: the Bilinear Autoregressive (BAR) and Bilinear Moving Average (BMA) models. From the general bilinear model, the BAR and BMA models are shown to exist for q = Q = 0 (so that j = 0) and p = P = 0 (so that i = 0), respectively. These models are found useful in modelling most economic and financial data.
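
For reference, the general bilinear BL(p, q, P, Q) model from which these subclasses are obtained is commonly written as follows (standard notation, not reproduced from the paper); setting the orders indicated in the abstract to zero eliminates the corresponding sums and yields the BAR and BMA subclasses.

```latex
X_t \;=\; \sum_{i=1}^{p} a_i X_{t-i}
       \;+\; \sum_{j=1}^{q} c_j e_{t-j}
       \;+\; \sum_{i=1}^{P} \sum_{j=1}^{Q} b_{ij}\, X_{t-i}\, e_{t-j}
       \;+\; e_t .
```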

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 379
34602 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

Data grid technology has brought with it a number of new challenges, such as the heterogeneity, availability and geographical distribution of resources, fast data access, minimizing latency and fault tolerance. Researchers interested in this technology address problems common to such systems, such as task scheduling, load balancing and replication. Replication is an effective way to achieve good performance in terms of data access and grid resource usage, and better data availability at lower cost. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas that minimizes the cost of responding to read or write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model that depends on several factors, such as bandwidth, data size and storage nodes.
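
The cost model itself is not reproduced in the abstract; the sketch below assumes a simple illustrative form in which the response cost of serving a request from a candidate node is the transfer time over the available bandwidth, and a replica is placed on the node minimizing the total cost of pending requests.

```python
# Hypothetical replica-placement cost sketch: pick the storage node that minimizes
# the total response cost (transfer time) of the pending read/write requests.
# The cost form, node names and numbers are illustrative, not the paper's model.

def response_cost(data_size_mb, bandwidth_mbps):
    """Transfer time in seconds for one request served over a given link."""
    return data_size_mb * 8.0 / bandwidth_mbps

def best_placement(nodes, requests):
    """nodes: {name: {"bandwidth_to": {client: Mbps}, "free_mb": float}};
    requests: list of (client, data_size_mb)."""
    costs = {}
    largest_request = max(size for _, size in requests)
    for name, node in nodes.items():
        if node["free_mb"] < largest_request:
            continue  # not enough storage capacity for the replica
        costs[name] = sum(response_cost(size, node["bandwidth_to"][client])
                          for client, size in requests)
    return min(costs, key=costs.get), costs

nodes = {
    "site_A": {"bandwidth_to": {"c1": 100, "c2": 20}, "free_mb": 5000},
    "site_B": {"bandwidth_to": {"c1": 40, "c2": 80}, "free_mb": 5000},
}
requests = [("c1", 700), ("c2", 700)]
print(best_placement(nodes, requests))
```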

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 249
34601 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking

Authors: Handie Pramana Putra, Ani Dijah Rahajoe

Abstract:

The proliferation of smart devices and advancements in mobile communication technologies have permeated various facets of life with the widespread influence of e-commerce. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in the exploration of process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identify abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, encompassing user login to the e-commerce platform and concluding with the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.

Keywords: database, data analysis, DPNE, extended data flow, e-commerce

Procedia PDF Downloads 33
34600 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

An intelligent transportation system is essential for building smarter cities. Machine-learning-based transportation prediction is a promising approach because it makes invisible aspects of the system visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that the existing headway model cannot respond to dynamic transportation conditions, so bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay by machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyses the results. The prototype model is built on real-time bus data. The data are gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations with traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model was designed with a machine learning tool (RapidMiner Studio), and tests for bus delay prediction were conducted. This research presents experiments to increase the prediction accuracy of bus headway by analysing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, this research therefore represents an effective use of machine learning and urban big data to understand urban dynamics.
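
As an illustration of the kind of model such a pipeline can produce (the study itself uses RapidMiner Studio), the sketch below fits a regression on hypothetical traffic, weather, and bus-status features to predict delay in minutes.

```python
# Hypothetical bus-delay regression sketch; the features and data are invented
# stand-ins for the traffic/weather/bus-status variables described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 2000
road_speed = rng.uniform(10, 60, n)        # km/h on the route segment
rainfall = rng.exponential(1.0, n)         # mm/h
passengers = rng.integers(0, 80, n)        # boardings at the stop
delay = 12 - 0.15 * road_speed + 1.5 * rainfall + 0.05 * passengers + rng.normal(0, 1, n)

X = np.column_stack([road_speed, rainfall, passengers])
X_tr, X_te, y_tr, y_te = train_test_split(X, delay, test_size=0.2, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("MAE (minutes):", mean_absolute_error(y_te, model.predict(X_te)))
```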

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 236
34599 Altered Network Organization in Mild Alzheimer's Disease Compared to Mild Cognitive Impairment Using Resting-State EEG

Authors: Chia-Feng Lu, Yuh-Jen Wang, Shin Teng, Yu-Te Wu, Sui-Hing Yan

Abstract:

Brain functional networks based on resting-state EEG data were compared between patients with mild Alzheimer's disease (mAD) and matched patients with the amnestic subtype of mild cognitive impairment (aMCI). We combined the time-frequency cross mutual information (TFCMI) method, used to estimate the EEG functional connectivity between cortical regions, with network analysis based on graph theory to investigate the alterations of functional networks in the mAD group compared with the aMCI group. We aimed at investigating the changes in network integrity, local clustering, information processing efficiency, and fault tolerance in mAD brain networks for different frequency bands, based on several topological properties including degree, strength, clustering coefficient, shortest path length, and efficiency. Results showed that disruptions of network integrity and reductions of network efficiency in mAD were evident, characterized by lower degree, decreased clustering coefficient, longer shortest path length, and reduced global and local efficiencies in the delta, theta, beta2, and gamma bands. These significant changes in network organization can assist in discriminating mAD from aMCI in clinical practice.
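
A minimal sketch of the graph-theoretical measures named in the abstract, computed with NetworkX on a hypothetical connectivity matrix (e.g., thresholded TFCMI values); the data and threshold are illustrative, not the study's pipeline.

```python
# Compute degree, strength, clustering coefficient, characteristic path length and
# efficiency on a hypothetical functional-connectivity graph using NetworkX.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels = 19                               # hypothetical EEG montage size
W = rng.uniform(0, 1, (n_channels, n_channels))
W = (W + W.T) / 2                             # symmetric connectivity (e.g., TFCMI)
np.fill_diagonal(W, 0)
A = (W > 0.6).astype(float)                   # illustrative threshold into a binary graph

G = nx.from_numpy_array(A)
degree = dict(G.degree())
strength = dict(nx.from_numpy_array(W).degree(weight="weight"))
clustering = nx.average_clustering(G)
global_eff = nx.global_efficiency(G)
local_eff = nx.local_efficiency(G)
# Characteristic path length is defined on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
path_len = nx.average_shortest_path_length(giant)

print(clustering, path_len, global_eff, local_eff)
```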

Keywords: EEG, functional connectivity, graph theory, TFCMI

Procedia PDF Downloads 404
34598 Sequential Data Assimilation with High-Frequency (HF) Radar Surface Current

Authors: Lei Ren, Michael Hartnett, Stephen Nash

Abstract:

Abundant surface-current measurements from an HF radar system in a coastal area are assimilated into a model to improve its forecasting ability. A simple sequential data assimilation scheme, Direct Insertion (DI), is applied to update the model forecast states. The influence of Direct Insertion data assimilation over time is analysed at one reference point. Vector maps of surface current from the models are compared with the HF radar measurements. The Root-Mean-Squared Error (RMSE) between the modelling results and the HF radar measurements is calculated over the last four days with no data assimilation.
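
Direct Insertion simply overwrites the model state with the observations wherever and whenever they are available; a minimal sketch with hypothetical arrays:

```python
# Minimal Direct Insertion sketch: at each assimilation step, the forecast surface
# current is replaced by HF radar observations at the observed grid points.
# Arrays and the observation mask are hypothetical placeholders.
import numpy as np

def direct_insertion(forecast, observations, observed_mask):
    """Return the analysis field: observations where available, forecast elsewhere."""
    analysis = forecast.copy()
    analysis[observed_mask] = observations[observed_mask]
    return analysis

forecast_u = np.random.rand(50, 60)          # model eastward surface current (m/s)
radar_u = np.random.rand(50, 60)             # HF radar eastward component (m/s)
mask = np.random.rand(50, 60) > 0.4          # grid points covered by the radar

analysis_u = direct_insertion(forecast_u, radar_u, mask)
```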

Keywords: data assimilation, CODAR, HF radar, surface current, direct insertion

Procedia PDF Downloads 549
34597 Finite Element Simulation of the Combined Process of Asymmetric Rolling and Plastic Bending

Authors: A. Pesin, D. Pustovoytov, M. Sverdlik

Abstract:

Traditionally, the need for items representing large bodies of rotation (e.g., shrouds of various process units: a converter, a mixer, a scrubber, a steel ladle, etc.) is met by manufacturing them at engineering enterprises. At these enterprises, large parts of bodies of rotation are made on stamping units or bending and forming machines. Nosov Magnitogorsk State Technical University, in alliance with JSC "Magnitogorsk Metal and Steel Works", suggested and implemented a technology for producing such items based on a combination of asymmetric rolling and plastic bending under the conditions of a plate mill. In this paper, the technology of the combined process of asymmetric rolling and plastic bending has been improved on the basis of finite element simulation. It is shown that, to obtain the same curvature along the entire length of the metal sheet, additional speed asymmetry must be introduced when rolling the front and tail ends of the strip. Production of large bodies of rotation at mill 4500 of JSC "Magnitogorsk Metal and Steel Works" showed good agreement between the theoretical and experimental values of the curvature of the metal. The economic effect obtained exceeded 1.0 million dollars.

Keywords: asymmetric rolling, plastic bending, combined process, FEM

Procedia PDF Downloads 295
34596 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behaviour of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework, which establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model satisfies the consistency condition in formulating the constitutive equations of the EVP model. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 467
34595 Application of Model Tree in the Prediction of TBM Rate of Penetration with Synthetic Minority Oversampling Technique

Authors: Ehsan Mehryaar

Abstract:

The rate of penetration (RoP) is one of the vital factors in the cost and time of tunnel boring projects; therefore, predicting it can lead to a substantial increase in the efficiency of the project. RoP is heavily dependent on the geological properties of the project site and on TBM properties. In this study, 151 data points from the Queens water tunnel were collected, including unconfined compressive strength, peak slope index, angle with weak planes, and distance between planes of weakness. Since the dataset is small, it was observed to be imbalanced. To solve that problem, the synthetic minority oversampling technique (SMOTE) is utilized. A model based on a model tree is proposed, in which each leaf consists of a support vector machine model. The performance of the proposed model is then compared to existing empirical equations in the literature.
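
A model tree with support-vector models in its leaves can be sketched by partitioning the feature space with a shallow regression tree and fitting an SVR on the samples routed to each leaf; the code below is an illustrative scikit-learn stand-in, not the authors' implementation, and omits the SMOTE step.

```python
# Illustrative "model tree with SVM leaves": a shallow decision tree partitions the
# data and a support vector regressor is fitted inside each leaf.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

class ModelTreeSVR:
    def __init__(self, max_depth=2):
        self.tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)                      # leaf index of every sample
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            self.leaf_models[leaf] = SVR(kernel="rbf").fit(X[idx], y[idx])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        preds = np.empty(len(X))
        for leaf, model in self.leaf_models.items():
            idx = leaves == leaf
            if idx.any():
                preds[idx] = model.predict(X[idx])
        return preds

# Hypothetical features: UCS, peak slope index, angle with weak planes, spacing.
rng = np.random.default_rng(0)
X = rng.uniform(size=(151, 4))
rop = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.2, 151)
model = ModelTreeSVR().fit(X, rop)
print(model.predict(X[:5]))
```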

Keywords: model tree, SMOTE, rate of penetration, TBM (tunnel boring machine), SVM

Procedia PDF Downloads 157
34594 Application of Data Mining Techniques for Tourism Knowledge Discovery

Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee

Abstract:

Five implementations of three data mining classification techniques were tested for extracting important insights from tourism data. The aim was to find the best-performing algorithm among those compared for tourism knowledge discovery. The knowledge discovery from data process was used as a process model, and 10-fold cross-validation was used for testing. Various data preprocessing activities were performed to produce the final dataset for model building. Classification models for the selected algorithms were built under different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before applying information-gain-based attribute selection, and J48 (C4.5) (75%) after selecting the attributes most relevant to the class (target) attribute. In terms of model building time, attribute selection improved the efficiency of all algorithms; the Artificial Neural Network (multilayer perceptron) showed the highest improvement (90%). The rules extracted from the decision tree model are presented; they reveal intricate, non-trivial knowledge and insight that would otherwise not be discovered by simple statistical analysis, despite the mediocre accuracy achieved by the classification algorithms.
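
A minimal sketch of the evaluation protocol described above, 10-fold cross-validation over several classifiers in scikit-learn; the synthetic data stand in for the preprocessed tourism dataset.

```python
# Illustrative 10-fold cross-validation comparison of classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "decision tree (C4.5-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "multilayer perceptron": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```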

Keywords: classification algorithms, data mining, knowledge discovery, tourism

Procedia PDF Downloads 272
34593 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
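
A minimal sketch of the dimensionality-reduction half of the approach, a diffusion map obtained from the eigendecomposition of a row-normalized Gaussian kernel matrix, on synthetic data; the generative (DPM) half is omitted here.

```python
# Minimal diffusion-map sketch: Gaussian kernel -> row-normalized transition matrix
# -> leading non-trivial eigenvectors give low-dimensional diffusion coordinates.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                      # hypothetical high-dimensional data

# Pairwise squared distances and Gaussian affinity kernel.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(sq_dists)                           # simple bandwidth heuristic
K = np.exp(-sq_dists / eps)

# Markov normalization and eigendecomposition of the diffusion operator.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]

# Diffusion coordinates: skip the trivial first eigenvector, scale by eigenvalues.
t = 1
embedding = eigvecs[:, 1:4] * (eigvals[1:4] ** t)
print(embedding.shape)                              # (300, 3) low-dimensional coordinates
```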

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 77
34592 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary option given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics derived from the lattice (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
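
A generic sketch of the self-training loop described in steps (a)-(c), with placeholder functions for training, decoding, and lattice-based confidence scoring; none of these names or thresholds come from the paper.

```python
# Generic semi-supervised self-training loop (sketch). train_asr, decode, and
# confidence are hypothetical placeholders for the DNN training, lattice decoding,
# and lattice-based confidence scoring described in the abstract.
def self_training(labeled, unlabeled, train_asr, decode, confidence,
                  rounds=3, threshold=0.9):
    model = train_asr(labeled)                              # (a) seed model on transcribed audio
    for _ in range(rounds):
        pseudo, still_unlabeled = [], []
        for utterance in unlabeled:
            hypothesis, lattice = decode(model, utterance)  # (b) decode unlabeled audio
            if confidence(lattice) >= threshold:            # keep only confident transcripts
                pseudo.append((utterance, hypothesis))
            else:
                still_unlabeled.append(utterance)
        if not pseudo:
            break
        model = train_asr(labeled + pseudo)                 # (c) retrain on the enlarged set
        unlabeled = still_unlabeled
    return model
```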

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 322
34591 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The mortality rate of AKI patients in the ICU is high, and AKI is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine, among others, two numeric models using the MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNNs) were used, VGG and ResNet for the numeric data and ResNet for the image data, and they were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output through a softmax layer. The hybrid model successfully predicted AKI; its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient's admission, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
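
A minimal sketch of the fusion step, assuming PyTorch: features from an image branch and a patient-record branch are concatenated and classified. The branches and layer sizes are tiny placeholders (the paper uses VGG/ResNet backbones and a further CNN block before the fully connected layers).

```python
# Illustrative hybrid fusion sketch (PyTorch): concatenate feature vectors from an
# image branch and a patient-record branch, then classify AKI vs. no AKI.
import torch
import torch.nn as nn

class HybridAKI(nn.Module):
    def __init__(self, n_record_features=20):
        super().__init__()
        self.image_branch = nn.Sequential(            # stand-in for a ResNet backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, 64))
        self.record_branch = nn.Sequential(           # stand-in for the numeric model
            nn.Linear(n_record_features, 64), nn.ReLU())
        self.head = nn.Sequential(                     # fused features -> classifier
            nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, ultrasound, record):
        fused = torch.cat([self.image_branch(ultrasound), self.record_branch(record)], dim=1)
        return self.head(fused)                        # logits; apply softmax for probabilities

model = HybridAKI()
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 20))
print(logits.shape)                                    # torch.Size([8, 2])
```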

Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG

Procedia PDF Downloads 111
34590 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000d AppLicability dEpended pRocessiNg mOdel. This python-based model analyzes and converts the XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in files. The model parses the files in the given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French languages were tested, and special characters were decoded. Updates on the technical manuals were taken into consideration as well. The model was tested on different versions of the S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
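
A minimal sketch of the kind of extraction step SALERNO performs, assuming Python's standard xml.etree and pandas; the element and attribute names used here ("applic", "para") are hypothetical placeholders rather than the actual S1000D tags handled by the model.

```python
# Hypothetical sketch: parse S1000D-like XML data modules in a folder, group the
# extracted text by an applicability value, and save each group to an Excel sheet.
import pathlib
import xml.etree.ElementTree as ET
import pandas as pd

rows = []
for path in pathlib.Path("data_modules").glob("*.xml"):
    root = ET.parse(path).getroot()
    applicability = root.findtext(".//applic", default="ALL")   # placeholder tag name
    text = " ".join(p.text or "" for p in root.iter("para"))    # placeholder tag name
    rows.append({"file": path.name, "applicability": applicability, "text": text})

frame = pd.DataFrame(rows)
# Group by applicability, mirroring SALERNO's main idea, and export per group.
for applic, group in frame.groupby("applicability"):
    group.to_excel(f"extracted_{applic}.xlsx", index=False)
```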

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 102
34589 Equivalent Circuit Model for the Eddy Current Damping with Frequency-Dependence

Authors: Zhiguo Shi, Cheng Ning Loong, Jiazeng Shan, Weichao Wu

Abstract:

This study proposes an equivalent circuit model to simulate the eddy current damping force, supported by shaking table tests and finite element modelling. The model is first applied to a simple eddy current damper modelled in ANSYS, indicating that it can simulate the eddy current damping force under different types of excitation. Then, a non-contact and friction-free eddy current damper is designed and tested, and the proposed model reproduces the experimental observations. The excellent agreement between the simulated results and the experimental data validates the accuracy and reliability of the equivalent circuit model. Furthermore, a more complicated case is analysed in ANSYS to verify the feasibility of the equivalent circuit model for complex eddy current dampers, and a higher-order fractional model and a viscous model are adopted for comparison.

Keywords: equivalent circuit model, eddy current damping, finite element model, shake table test

Procedia PDF Downloads 166
34588 Forecasting Materials Demand from Multi-Source Ordering

Authors: Hui Hsin Huang

Abstract:

Downstream manufacturers order their materials from different upstream suppliers to maintain a certain level of demand. This paper proposes a bivariate model to portray this phenomenon of materials demand. We use empirical data to estimate the parameters of the model and evaluate the RMSD of the model calibration. The results show that the model achieves a better fit.

Keywords: recency, ordering time, materials demand quantity, multi-source ordering

Procedia PDF Downloads 510
34587 Detecting Overdispersion in AIDS Mortality Using the Zero-Inflated Negative Binomial Death Rate (ZINBDR) for Co-infection Patients in Kelantan

Authors: Mohd Asrul Affedi, Nyi Nyi Naing

Abstract:

Overdispersion is often present in count data, and when it occurs a Negative Binomial (NB) model is commonly used in place of the standard Poisson model. For the analysis of count data events such as mortality cases, a Poisson regression model is basically appropriate; however, it is no longer appropriate when excess zero values exist. In that case, the zero-inflated negative binomial model is appropriate. In this article, we modelled mortality cases as the dependent variable by age category. The objective of this study is to determine whether overdispersion exists in the mortality data of AIDS co-infection patients in Kelantan.
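
For reference, the zero-inflated negative binomial distribution underlying the ZINBDR model mixes a point mass at zero with an NB count component (a standard formulation, not taken from the paper):

```latex
P(Y=0) = \pi + (1-\pi)\left(\frac{r}{r+\mu}\right)^{r}, \qquad
P(Y=y) = (1-\pi)\,\frac{\Gamma(y+r)}{\Gamma(r)\, y!}
         \left(\frac{r}{r+\mu}\right)^{r}\left(\frac{\mu}{r+\mu}\right)^{y},
         \quad y = 1, 2, \dots
```

where π is the zero-inflation probability, μ the NB mean and r the dispersion parameter; overdispersion corresponds to Var(Y) exceeding E(Y).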

Keywords: negative binomial death rate, overdispersion, zero-inflation negative binomial death rate, AIDS

Procedia PDF Downloads 444
34586 On the Basis Number and the Minimum Cycle Bases of the Wreath Product of Paths with Wheels

Authors: M. M. M. Jaradat

Abstract:

For a given graph G, the set Ԑ of all subsets of E(G) forms an |E(G)|-dimensional vector space over Z2 with vector addition X ⊕ Y = (X\Y) ∪ (Y\X) and scalar multiplication 1·X = X and 0·X = Ø for all X, Y ∈ Ԑ. The cycle space, C(G), of a graph G is the vector subspace of (Ԑ, ⊕, ·) spanned by the cycles of G. Traditionally there have been two notions of minimality among bases of C(G). First, a basis B of C(G) is called d-fold if each edge of G occurs in at most d cycles of the basis B. The basis number, b(G), of G is the least non-negative integer d such that C(G) has a d-fold basis; a required basis of C(G) is a basis for which each edge of G belongs to at most b(G) elements of B. Second, a basis B is called a minimum cycle basis (MCB) if its total length Σ_{B∈B} |B| is minimum among all bases of C(G). The wreath product GρH has the vertex set V(GρH) = V(G) × V(H) and the edge set E(GρH) = {(u1, v1)(u2, v2) | u1 = u2 and v1v2 ∈ E(H); or u1u2 ∈ E(G) and there is α ∈ Aut(H) such that α(v1) = v2}. In this work, a construction of a minimum cycle basis for the wreath product of wheels with paths is presented. Also, the length of the longest cycle of a minimum cycle basis is determined. Moreover, the basis number of the same product is investigated.

Keywords: cycle space, minimum cycle basis, basis number, wreath product

Procedia PDF Downloads 246
34585 Identification of the Usage of Some Special Places in the Prehistoric Site of Tapeh Zagheh through Multi-Elemental Chemical Analysis of the Soil Samples

Authors: Iraj Rezaei, Kamal Al Din Niknami

Abstract:

Tapeh Zagheh is an important prehistoric site located in the central plateau of Iran, with settlement layers of the Neolithic and Chalcolithic periods. For this research, 38 soil samples were collected from different parts of the site, as well as two samples from outside the site as controls. The samples were then analyzed by XRF. The purpose of this research was to identify places used for particular human activities at Tapeh Zagheh by measuring the amounts of certain elements in the soil. The results of the XRF analysis show significant amounts of P and K in samples No. 3 (fourth floor) and No. 4 (third floor), probably due to activities such as food preparation and consumption. Samples No. 9 and No. 10 can be considered good examples of hearths of the prehistoric period in the central plateau of Iran; the color of these samples was completely darkened due to the presence of ash, charcoal, and burnt materials. According to the XRF results, the soil of these hearths has very high amounts of elements such as P, Ca, Mn, S, and K, and significant amounts of Ti, Fe, and Na. In addition, sample No. 14, which was taken from a domestic refuse deposit, also has very high amounts of P, Mn, Mg, Ti, and Fe and high amounts of K and Ca. Sample No. 11, which is related to soil containing large amounts of kiln wasters, shows, along with a very strong increase in Cl and Na, a significant increase in elements such as K, Mg, and S. It seems that the reason for the increase of elements such as Ti and Fe in some Tapeh Zagheh floors (for example, samples No. 1, 2, 3, 4, and 5) was the use of materials such as ochre mud or fire ash in the composition of these floors. Sample No. 13, which was taken from an oven located in trench FIX, has very high amounts of Mn, Ti, and Fe and high amounts of P and Ca. Sample No. 15, which is related to House No. VII (probably a pen or a place where animals were kept), has much more phosphate than the control samples, probably due to the addition of animal excrement and urine to the soil. Sample No. 29 was taken from the north of the industrial area of the Zagheh village (the location of the pottery kilns). The very low amounts of the indicator elements in sample No. 29 show that industrial activities did not extend to that point, and therefore this point can be considered the boundary between the residential part of the Zagheh village and its industrial part.

Keywords: prehistory, multi-elemental analysis, Tapeh Zagheh, XRF

Procedia PDF Downloads 76
34584 An Owen Value for Cooperative Games with Pairwise a Priori Incompatibilities

Authors: Jose M. Gallardo, Nieves Jimenez, Andres Jimenez-Losada, Esperanza Lebron

Abstract:

A game with a priori incompatibilities is a triple (N,v,g) where (N,v) is a cooperative game and (N,g) is a graph that establishes initial incompatibilities between some players. In these games, the negotiation has two stages. In the first stage, players can only negotiate with others with whom they are compatible. In the second stage, the grand coalition is formed. We introduce a value for these games. Given a game with a priori incompatibilities (N,v,g), we consider the family of coalitions without incompatibility relations among their players. This family is a normal set system or coalition configuration Ig. Therefore, we can assign to each game with a priori incompatibilities (N,v,g) a game with coalition configuration (N,v,Ig). Now, in order to obtain a payoff vector for (N,v,g), it suffices to calculate a payoff vector for (N,v,Ig). To this end, we apply a value for games with coalition configuration. In our case, we use the dual configuration value, which has been studied in the literature. With this method, we obtain a value for games with a priori incompatibilities, called the Owen value for a priori incompatibilities. We provide a characterization of this value.
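
The family Ig can be made concrete with a small sketch: given the incompatibility graph g, the feasible coalitions are exactly those containing no incompatible pair, i.e., the independent sets of g. The code below only illustrates this construction; it is not an implementation of the value itself.

```python
# Enumerate the coalition configuration Ig: all coalitions of N that contain no
# pair of a-priori-incompatible players (i.e., independent sets of the graph g).
from itertools import combinations

def coalition_configuration(players, incompatibilities):
    """players: iterable of player labels; incompatibilities: set of frozensets {i, j}."""
    feasible = []
    for size in range(1, len(players) + 1):
        for coalition in combinations(players, size):
            if all(frozenset(pair) not in incompatibilities
                   for pair in combinations(coalition, 2)):
                feasible.append(set(coalition))
    return feasible

N = [1, 2, 3, 4]
g = {frozenset({1, 2}), frozenset({3, 4})}   # players 1-2 and 3-4 are incompatible
print(coalition_configuration(N, g))
# No feasible coalition contains both 1 and 2, or both 3 and 4.
```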

Keywords: cooperative game, game with coalition configuration, graph, independent set, Owen value, Shapley value

Procedia PDF Downloads 109
34583 Implementation of Data Science in Field of Homologation

Authors: Shubham Bhonde, Nekzad Doctor, Shashwat Gawande

Abstract:

For the use and import of keys and ID transmitters, as well as body control modules with radio transmission, homologation is required in many countries. The final deliverables of the homologation of a product are certificates. In the world of homologation, there are approximately 200 certificates per product, with most of the certificates in local languages. It is challenging to manually investigate each certificate and extract relevant data from it, such as the expiry date, the approval date, etc. It is most important to get accurate data from the certificate, as inaccuracy may lead to missed re-homologation of certificates, which would result in a non-compliance situation. There is scope for automation in reading certificate data in the field of homologation. We use deep learning as a tool for this automation. We first trained a model using machine learning by providing the basic data of all countries. We trained this model only once, by feeding PDF and JPG files through an ETL process. Eventually, the trained model will give more accurate results. As an outcome, we get the expiry date and approval date of a certificate with a single click. This will eventually help to implement automation features on a broader level in the database where the certificates are stored. This automation will help to reduce human error to almost negligible levels.
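
As a small illustration of the extraction step (not the trained model described above), dates such as approval and expiry dates can be pulled from OCR'd certificate text with simple patterns; the field labels and date format below are hypothetical, and the variety of languages and layouts is exactly why a learned model is used instead of rules.

```python
# Hypothetical sketch: extract approval/expiry dates from OCR'd certificate text.
import re
from datetime import datetime

text = """Certificate No. 12345
Approval date: 12.03.2021
Expiry date: 12.03.2026"""

def find_date(label, text):
    """Return the first date following the given label, or None if absent."""
    match = re.search(rf"{label}\s*:\s*(\d{{2}}[./-]\d{{2}}[./-]\d{{4}})", text, re.IGNORECASE)
    if not match:
        return None
    return datetime.strptime(re.sub(r"[./-]", ".", match.group(1)), "%d.%m.%Y")

print(find_date("Approval date", text))   # 2021-03-12 00:00:00
print(find_date("Expiry date", text))     # 2026-03-12 00:00:00
```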

Keywords: homologation, re-homologation, data science, deep learning, machine learning, ETL (extract transform loading)

Procedia PDF Downloads 141
34582 A Study of Population Growth Models and Future Population of India

Authors: Sheena K. J., Jyoti Badge, Sayed Mohammed Zeeshan

Abstract:

India is the second most populous country in the world, just behind China, and is going to take first place by next year. The Indian population has grown at a remarkably higher rate than that of other countries over the past 20 years. Many scientists and demographers have formulated various models of population growth in order to study and predict the future population. Some of these models are the Fibonacci population growth model, the exponential growth model, the logistic growth model, the Lotka-Volterra model, etc. These models have been effective to an extent in predicting the population in the past. However, a detailed comparative study between population models is essential to arrive at a more accurate one. With that in mind, this research study analyses and compares the two population models under consideration, the exponential and logistic growth models, thereby identifying the more effective one. Using the census data of 2011, the approximate populations for 2016 to 2031 were calculated for 20 Indian states using both models, compared, and recorded against the actual population. On comparing the results of both models, it was found that the logistic population model is more accurate than the exponential model, and using this model we can predict the future population more effectively. This will give researchers an insight into the effective models of population growth and how effective these models are in predicting the future population.
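
For reference, the two models compared are typically written as follows (standard forms, not reproduced from the paper), where P_0 is the base-year population, r the growth rate and K the carrying capacity:

```latex
P(t) = P_0\, e^{rt} \quad\text{(exponential)}, \qquad
P(t) = \frac{K}{1 + \left(\dfrac{K - P_0}{P_0}\right) e^{-rt}} \quad\text{(logistic)}.
```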

Keywords: population growth, population models, exponential model, logistic model, fibonacci model, lotka-volterra model, future population prediction, demographers

Procedia PDF Downloads 99
34581 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. The existing keystroke dynamics authentication methods based on machine learning have a drawback in supporting relatively complex scenarios with massive data, with shortcomings in both feature extraction and model optimization. To overcome this weakness, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed by keystroke content and keystroke timing. It provides efficient continuous authentication by coupling an attention mechanism with a combination of CNN and Bi-LSTM. The model has been tested on the Open Data Buffalo dataset, and the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%. This demonstrates that the model is efficient and accurate in continuous authentication.
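
An illustrative sketch of this kind of architecture in PyTorch: a 1-D CNN over keystroke feature sequences, a bidirectional LSTM, and a simple additive attention layer feeding a classifier. The layer sizes and feature dimensions are placeholders, not the paper's settings.

```python
# Illustrative CNN + Bi-LSTM + attention sketch for keystroke-dynamics classification.
import torch
import torch.nn as nn

class ACBM(nn.Module):
    def __init__(self, n_features=4, n_users=10, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_users)

    def forward(self, x):                                    # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)     # (batch, seq_len, 32)
        h, _ = self.bilstm(h)                                # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)         # attention over time steps
        context = (weights * h).sum(dim=1)                   # (batch, 2*hidden)
        return self.classifier(context)                      # per-user logits

model = ACBM()
logits = model(torch.randn(16, 50, 4))          # 16 sequences of 50 keystroke events
print(logits.shape)                             # torch.Size([16, 10])
```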

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 129
34580 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network

Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang

Abstract:

As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but the lack of datasets leads to imperfect model learning. By analysing the data scale requirements of deep learning with the application to GUI generation in mind, it is found that collecting a GUI dataset is a time-consuming and labour-intensive project, which makes it difficult to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale datasets to produce a large amount of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent neural network can learn the sequence relationships and characteristics of the data so that the generative adversarial network generates reasonable data, which is then used to expand the Rico dataset. Relying on this network structure, the characteristics of the collected data can be analysed well, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.
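
A minimal GAN training-step sketch in PyTorch illustrates the adversarial half of the approach; the generator and discriminator here are tiny MLPs over flattened feature vectors rather than the paper's recurrent architecture, and the data are random placeholders.

```python
# Minimal GAN sketch (PyTorch): the generator learns to produce synthetic GUI feature
# vectors that the discriminator cannot distinguish from real ones.
import torch
import torch.nn as nn

latent_dim, feature_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim))
D = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, feature_dim)            # stand-in for a batch of real GUI features

# Discriminator step: distinguish real from generated samples.
fake = G(torch.randn(32, latent_dim)).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake = G(torch.randn(32, latent_dim))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```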

Keywords: GUI, deep learning, GAN, data augmentation

Procedia PDF Downloads 156