Search results for: substitutional inference
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 317

227 Application of Fuzzy Analytical Hierarchical Process in Evaluation of Supply Chain Performance Measurement

Authors: Riyadh Jamegh, AllaEldin Kassam, Sawsan Sabih

Abstract:

In modern markets, organizations face a high-pressure environment characterized by globalization, intense competition, and customer orientation, so it is crucial to identify the weak and strong points of the supply chain in order to improve performance. Performance measurement is therefore presented as an important tool of supply chain management because it enables organizations to control, understand, and improve their efficiency. This paper aims to identify supply chain performance measurement (SCPM) by using the Fuzzy Analytical Hierarchical Process (FAHP). In our real application, the performance of organizations is estimated based on four parameters: the cost parameter indicator (CPI), the inventory turnover parameter indicator (INPI), the raw material parameter indicator (RMPI), and the safety stock level parameter indicator (SSPI); these indicators vary in their impact on performance depending upon the policies and strategies of the organization. In this research, the FAHP technique is used to identify the importance of these parameters, and then a first fuzzy inference system (FIR1) is applied to identify the performance indicator of each factor depending on the importance of the factor and its value. A second fuzzy inference system (FIR2) is then applied to integrate the effect of these indicators and identify the SCPM, which represents the required output. The developed approach provides an effective tool for the evaluation of supply chain performance measurement.
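
The fuzzy-AHP weighting step described above can be sketched in a few lines. The pairwise comparison matrix below is purely illustrative (the abstract does not publish one), and the method shown is Buckley's geometric-mean variant of fuzzy AHP rather than the authors' exact procedure:

```python
# Sketch of Buckley's geometric-mean fuzzy AHP over the four indicators
# named in the abstract (CPI, INPI, RMPI, SSPI). All matrix entries are
# illustrative triangular fuzzy numbers (l, m, u), not the paper's data.

def fuzzy_ahp_weights(matrix):
    """matrix[i][j] is a triangular fuzzy number (l, m, u)."""
    n = len(matrix)
    # Fuzzy geometric mean of each row, component-wise.
    geo = []
    for row in matrix:
        l = m = u = 1.0
        for (a, b, c) in row:
            l *= a; m *= b; u *= c
        geo.append((l ** (1 / n), m ** (1 / n), u ** (1 / n)))
    # Fuzzy weights: divide each row mean by the column totals (reversed
    # bounds, so lower bound uses the largest total).
    tl = sum(g[0] for g in geo)
    tm = sum(g[1] for g in geo)
    tu = sum(g[2] for g in geo)
    fuzzy_w = [(g[0] / tu, g[1] / tm, g[2] / tl) for g in geo]
    # Defuzzify by centroid and renormalize to sum to 1.
    crisp = [(l + m + u) / 3 for (l, m, u) in fuzzy_w]
    s = sum(crisp)
    return [w / s for w in crisp]

ONE = (1, 1, 1)
# "CPI is moderately more important than the others" (invented values).
M = [
    [ONE,              (2, 3, 4),  (2, 3, 4),  (1, 2, 3)],
    [(1/4, 1/3, 1/2),  ONE,        (1, 1, 1),  (1/3, 1/2, 1)],
    [(1/4, 1/3, 1/2),  (1, 1, 1),  ONE,        (1/3, 1/2, 1)],
    [(1/3, 1/2, 1),    (1, 2, 3),  (1, 2, 3),  ONE],
]
weights = fuzzy_ahp_weights(M)  # one crisp weight per indicator
```

The crisp weights would then feed the first fuzzy inference stage as importance inputs.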

Keywords: fuzzy performance measurements, supply chain, fuzzy logic, key performance indicator

Procedia PDF Downloads 115
226 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons

Authors: Ozgu Hafizoglu

Abstract:

Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with attributional, deep structural, and causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information pattern transfer within and between domains and disciplines in science. This paper demonstrates the CMA as an evolutionary approach to scientific research. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells, their fractal and operational forms within the system itself. Visualization techniques are used to show correspondences. The distinct phases of the problem-solving process are divided as follows: encoding, mapping, inference, and response. The system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells: glial cells, axons, axon terminals, and neurons, relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond with the fractal and operational forms of brain cells: glia, axons, and neurons.

Keywords: analogy, analogical reasoning, cognitive model, brain and glials

Procedia PDF Downloads 159
225 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complexity of the dynamics of this disease and study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions in Sri Lanka at the weekly time scale. By conducting ablation studies, the lag effects of time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicates sensitivity to the vegetation index, both maximum and average. Predictions on the test data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting.
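
The vector (SEI) to human (SEIR) backbone the abstract describes can be illustrated with a toy discrete-time step; all rates and population sizes below are assumptions for illustration, not the study's fitted values:

```python
# Minimal discrete-time sketch of a vector (SEI) to human (SEIR)
# compartmental model. All parameters are invented for illustration.

def step(h, v, beta_hv=0.3, beta_vh=0.3, sigma_h=1/5, gamma=1/7,
         sigma_v=1/10, mu_v=1/14, dt=1.0):
    S, E, I, R = h
    Sv, Ev, Iv = v
    N = S + E + I + R
    Nv = Sv + Ev + Iv
    new_inf_h = beta_hv * S * Iv / Nv * dt      # vector -> human
    new_inf_v = beta_vh * Sv * I / N * dt       # human -> vector
    h2 = (S - new_inf_h,
          E + new_inf_h - sigma_h * E * dt,
          I + sigma_h * E * dt - gamma * I * dt,
          R + gamma * I * dt)
    # Vector births balance deaths, so the vector population is constant.
    births = mu_v * Nv * dt
    v2 = (Sv + births - new_inf_v - mu_v * Sv * dt,
          Ev + new_inf_v - sigma_v * Ev * dt - mu_v * Ev * dt,
          Iv + sigma_v * Ev * dt - mu_v * Iv * dt)
    return h2, v2

h, v = (9990.0, 0.0, 10.0, 0.0), (5000.0, 0.0, 50.0)
for _ in range(70):            # ten weeks at a daily step
    h, v = step(h, v)
```

In the actual hybrid model, parameters like the transmission rates would be flexible, data-driven functions of climate and socioeconomic covariates rather than constants.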

Keywords: compartmental model, climate, dengue, machine learning, social-economic

Procedia PDF Downloads 43
224 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
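
The core idea of flagging sensitive API calls by static analysis can be illustrated with a toy scanner. DLtrace itself analyzes Android apps; the sketch below instead walks a Python AST against a hypothetical sensitive-API list, purely to show the flavor of the approach:

```python
# Toy static scan: walk a Python AST and flag calls whose dotted name is
# on a (hypothetical) sensitive-API list. Real Android analysis works on
# app bytecode; this only illustrates the idea of call-site detection.
import ast

SENSITIVE = {"telephony.get_device_id", "location.get_last_location"}

def dotted_name(node):
    """Reconstruct 'a.b.c' from an Attribute/Name chain."""
    parts = []
    while isinstance(node, ast.Attribute):
        parts.append(node.attr)
        node = node.value
    if isinstance(node, ast.Name):
        parts.append(node.id)
    return ".".join(reversed(parts))

def find_sensitive_calls(source):
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = dotted_name(node.func)
            if name in SENSITIVE:
                hits.append((name, node.lineno))
    return hits

app_code = """
imei = telephony.get_device_id()
result = model.run_inference(imei)
"""
leaks = find_sensitive_calls(app_code)   # [(api name, line number)]
```

A real information flow analysis would then trace where the flagged value propagates (here, into the DL inference call), which is the part DLtrace's trace algorithm handles.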

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 126
223 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors

Authors: Jing Yuan, Hongwei Yang

Abstract:

This paper aims to understand U.S. middle school students’ mathematics achievements by examining relevant student- and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fitting and most parsimonious model is selected, on which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for Bayesian modeling is derived from the Program for International Student Assessment (PISA) in 2012, with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support the unidimensionality of the data. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R.
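
The variance-component step that motivates multilevel modeling can be sketched with simulated two-level data. The intraclass correlation (ICC) below is computed from one-way ANOVA estimators; all numbers are invented for illustration (the paper works with PISA data in R):

```python
# Sketch of a variance component analysis: estimate the intraclass
# correlation (ICC) for simulated students nested within schools.
# A nontrivial ICC is the usual evidence favoring multilevel modeling.
import random

random.seed(0)
n_schools, n_students = 50, 30
data = []
for s in range(n_schools):
    school_effect = random.gauss(0, 2.0)   # between-school sd = 2
    for _ in range(n_students):
        data.append((s, 500 + school_effect + random.gauss(0, 5.0)))

grand = sum(y for _, y in data) / len(data)
means = [sum(y for s, y in data if s == j) / n_students
         for j in range(n_schools)]
# One-way ANOVA mean squares between and within schools.
msb = n_students * sum((m - grand) ** 2 for m in means) / (n_schools - 1)
msw = sum((y - means[s]) ** 2 for s, y in data) / (len(data) - n_schools)
var_between = max((msb - msw) / n_students, 0.0)
icc = var_between / (var_between + msw)   # share of variance at school level
```

With the simulated variances (4 between, 25 within), the ICC lands near 0.14, the kind of value that would justify moving to a multilevel model.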

Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel

Procedia PDF Downloads 303
222 Allelic Diversity of Productive, Reproductive and Fertility Traits Genes of Buffalo and Cattle

Authors: M. Moaeen-ud-Din, G. Bilal, M. Yaqoob

Abstract:

Identification of genes of importance for production traits in buffalo is impaired by a paucity of genomic resources. One choice to fill this gap is to exploit the data available for cattle. The cross-species application of comparative genomics tools is a potential means to investigate the buffalo genome; however, this is dependent on nucleotide sequence similarity. In this study, gene diversity between buffalo and cattle was determined by using 86 gene orthologues. There was about a 3% difference across all genes in terms of nucleotide diversity, and 0.267±0.134 in amino acids, indicating the possibility of successfully using cross-species strategies for genomic studies. There were significantly higher non-synonymous substitutions in both cattle and buffalo; however, the difference in terms of dN − dS was similar (4.414 vs. 4.745) in buffalo and cattle, respectively. The higher rate of non-synonymous substitutions at a similar level in buffalo and cattle indicates a similar positive selection pressure. Results of the relative rate test were assessed with the chi-squared test. There was no significant difference in unique mutations between the cattle and buffalo lineages at synonymous sites. However, there was a significant difference in unique mutations at non-synonymous sites, indicating an ongoing mutagenic process that generates substitutional mutations at approximately the same rate as at silent sites. Moreover, despite common ancestry, our results indicate different divergence times among genes of cattle and buffalo. This is the first demonstration that variable rates of molecular evolution may be present within the family Bovidae.

Keywords: buffalo, cattle, gene diversity, molecular evolution

Procedia PDF Downloads 458
221 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 63
220 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches to Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are known to be well suited for sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
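
The final step, using a relation vector as attention weights over the Bi-LSTM hidden states, can be sketched as follows; the vectors below are toy values, since in the real model both the states and the relation vector are learned:

```python
# Pure-Python sketch of relation-vector attention over hidden states:
# score each time step by its dot product with the relation vector,
# softmax the scores, and take the weighted sum as the sentence vector.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(states, relation_vec):
    scores = [sum(a * b for a, b in zip(h, relation_vec)) for h in states]
    alphas = softmax(scores)                 # attention weights, sum to 1
    dim = len(states[0])
    return [sum(alphas[t] * states[t][d] for t in range(len(states)))
            for d in range(dim)]

states = [[0.1, 0.3], [0.9, 0.2], [0.4, 0.8]]   # 3 time steps, dim 2
relation = [1.0, -0.5]                           # toy relation vector
sentence_vec = attend(states, relation)
```

In the paper's model, the attended representations of premise and hypothesis would then be fed to the entailment classifier.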

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 323
219 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors

Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee

Abstract:

The decommissioning process of nuclear facilities can be said to consist of a sequence of problem-solving activities, partly because there may exist working environments contaminated by radiological exposure, and partly because there may also exist industrial hazards such as fire, explosions, toxic materials, and electrical and physical hazards. For individual hazard factors, risk assessment techniques are becoming known to industrial workers with the advance of safety technology, but the way to integrate their results is not. Furthermore, few workers have extensive past experience with decommissioning operations. Therefore, many countries around the world have been trying to develop appropriate counter-techniques in order to guarantee the safety and efficiency of the process. In spite of that, there still exists neither a domestic nor an international standard, since nuclear facilities are too diverse and unique. As a consequence, it is quite inevitable to anticipate and assess the whole risk of each situation one by one. This paper aimed to find an appropriate technique to integrate individual risk assessment results from the viewpoint of experts. Thus, on the one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps, and on the other, a hierarchical risk structure was developed. Then, a risk assessment procedure that can elicit individual hazard factors one by one was introduced with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Under an assumption of quantification and normalization of individual risks, a technique to estimate relative weight factors was tried using the conventional Analytic Hierarchical Process (AHP), and its result was reviewed with reference to the judgment of experts. Besides, taking the ambiguity of human judgment into consideration, a discussion based upon fuzzy inference was added with a mathematical case study.
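
The conventional AHP weighting step mentioned above can be sketched as a principal-eigenvector computation with a consistency check; the comparison matrix below is illustrative, not taken from the paper:

```python
# Sketch of conventional AHP: derive relative weight factors from a
# pairwise comparison matrix via power iteration on the principal
# eigenvector, then check Saaty's consistency ratio. Values are invented.

def ahp_weights(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        w2 = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w2)
        w = [x / s for x in w2]
    # Estimate lambda_max from A w ~= lambda_max * w.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
    cr = ((lam - n) / (n - 1)) / ri              # consistency ratio
    return w, cr

# Three hazard factors; factor 1 judged more important than 2 and 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, consistency_ratio = ahp_weights(A)
```

A consistency ratio below 0.1 is the usual threshold for accepting the expert judgments; fuzzy inference then addresses the ambiguity that remains even in consistent judgments.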

Keywords: decommissioning, risk assessment, analytic hierarchical process (AHP), fuzzy inference

Procedia PDF Downloads 398
218 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival

Authors: Li Yin, Xiaoqin Wang

Abstract:

Background and purpose: Cancer diagnosis is part of a complex stochastic process in which patients' personal and social characteristics influence the choice of diagnostic methods; diagnostic methods, in turn, influence the initial assessment of cancer stage; the initial assessment, in turn, influences the choice of treatment methods; and treatment methods, in turn, influence cancer outcomes such as cancer survival. To evaluate diagnostic methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage is influenced by earlier diagnosis and in turn influences subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via standard parameters, that is, the conditional survival given all stationary covariates, diagnostic methods, cancer stage and prognosis factors, and treatment methods. Instead of standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes of cancer diagnosis and treatments, which allows us to use familiar methods in the framework of single-point causal inference to accomplish the task. Achievements: We have applied this method to stomach cancer survival from a clinical study in Sweden. We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments, and the moderation of the causal effect by age and gender.

Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect

Procedia PDF Downloads 168
217 Development of Automated Quality Management System for the Management of Heat Networks

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

Any business needs stable operation and continuous improvement; therefore, it is necessary to constantly interact with the environment, to analyze the work of the enterprise from the perspective of employees, executives, and consumers, and to correct any inconsistencies in certain types of processes and in their aggregate. In the case of heat supply organizations, in addition to suppliers, local legislation must be considered, as it is often the main regulator of service pricing. In this case, the process approach used to build a functional organizational structure in these types of businesses in Kazakhstan is a challenge not only in implementation but also in the way employees' salaries are analyzed. To solve these problems, we investigated the management system of a heat supply enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work, we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, the IDEF0 notation for modeling business processes, Matlab simulation tools, and LabVIEW graphical programming. The results of the work are as follows: we determined possibilities for improving the management of a heat supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of functions for management, for the reduction of resources, and for keeping the system up to date; and an application for the analysis of the QMS based on fuzzy inference was created, with a novel organization of communication software enabling the analysis of relevant data of the enterprise management system.

Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets

Procedia PDF Downloads 342
216 Zeolite Supported Iron-Sensitized TiO₂ for Tetracycline Photocatalytic Degradation under Visible Light: A Comparison between Doping and Ion Exchange

Authors: Ghadeer Jalloul, Nour Hijazi, Cassia Boyadjian, Hussein Awala, Mohammad N. Ahmad, Ahmad Albadarin

Abstract:

In this study, we applied Fe-sensitized TiO₂ supported over embryonic Beta (BEA) zeolite for the photocatalytic degradation of the antibiotic tetracycline (TC) under visible light. Four different samples with 20, 40, 60, and 100% w/w TiO₂/BEA ratios were prepared. The immobilization of sol-gel TiO₂ (33 m²/g) over BEA (390 m²/g) increased its surface area to 227 m²/g and enhanced its adsorption capacity from 8% to 19%. To expand the activity of the TiO₂ photocatalyst towards the visible light region (λ > 380 nm), we explored two different metal sensitization techniques with iron ions (Fe³⁺). In the ion-exchange method, the substitutional cations in the zeolite in TiO₂/BEA were exchanged with Fe³⁺ in an aqueous solution of FeCl₃. In the doping technique, sol-gel TiO₂ was doped with Fe³⁺ from an FeCl₃ precursor during its synthesis and before its immobilization over BEA. The Fe-TiO₂/BEA catalysts were characterized using SEM, XRD, BET, UV-Vis DRS, and FTIR. After testing the performance of the various ion-exchanged catalysts under blue and white lights, only Fe-TiO₂/BEA 60% showed better activity than pure TiO₂ under white light, with 100 ppm initial catalyst concentration and 20 ppm TC concentration. Compared to ion-exchanged Fe-TiO₂/BEA, doped Fe-TiO₂/BEA resulted in higher photocatalytic efficiencies under blue and white lights. The 3% Fe-doped TiO₂/BEA removed 92% of TC, compared to 54% by TiO₂, under white light. The catalysts were also tested under real solar irradiation. This improvement in the photocatalytic performance of TiO₂ was due to its higher adsorption capacity from the BEA support, combined with the presence of iron ions that enhance visible light absorption and minimize the recombination effect of the charge carriers.

Keywords: tetracycline, photocatalytic degradation, immobilized TiO₂, zeolite, iron-doped TiO₂, ion-exchange

Procedia PDF Downloads 67
215 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions in the new era, especially post-pandemic. In this paper, we will reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, like within and between brain regions. The distinct phases of the problem-solving process are divided as follows: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is revealed to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain's mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to the matching conditions of analogical reasoning and relational information, the encoding, mapping, inference, and response processes, and the verification of perceptual responses in four-term analogical reasoning. Finally, we relate all these terminologies to mental leaps, mental maps, mental hops, and mental loops to make the mental model of the CMA clear.

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 144
214 Using the Bootstrap for Problems in Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different methods of bootstrapping and the use of re-sampling in statistical inference to calculate the standard error of the mean of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and to the Pareto model, giving the best approximations.
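
A minimal version of the resampling procedure described, estimating the standard error of the mean and a percentile confidence interval from nothing but the observed sample:

```python
# Basic nonparametric bootstrap: resample the data with replacement,
# recompute the statistic each time, and read off its standard error
# and a 95% percentile interval. The sample values are illustrative.
import random

random.seed(1)
sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 5.8, 4.7]

B = 2000
boot_means = []
for _ in range(B):
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
mean_hat = sum(sample) / len(sample)
boot_mean = sum(boot_means) / B
# Bootstrap standard error = sd of the resampled statistic.
se_hat = (sum((m - boot_mean) ** 2 for m in boot_means) / (B - 1)) ** 0.5
# 95% percentile confidence interval.
ci_95 = (boot_means[int(0.025 * B)], boot_means[int(0.975 * B) - 1])
```

The same loop works for any statistic (median, regression coefficient, Pareto shape estimate) by replacing the mean computation inside it.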

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 355
213 Parametric Inference of Elliptical and Archimedean Family of Copulas

Authors: Alam Ali, Ashok Kumar Pathak

Abstract:

Nowadays, copulas have attracted significant attention for modeling multivariate observations, and the foremost feature of copula functions is that they give us the liberty to study the univariate marginal distributions and their joint behavior separately. The copula parameter apprehends the intrinsic dependence among the marginal variables, and it can be estimated using parametric, semiparametric, or nonparametric techniques. This work aims to compare the coverage rates between an Elliptical and an Archimedean family of copulas via a fully parametric estimation technique.
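
One simple way to see how a copula parameter "apprehends the intrinsic dependence" is inversion of Kendall's tau, shown here for one elliptical (Gaussian) and one Archimedean (Clayton) family. Note that the abstract compares fully parametric estimation, for which this moment-style inversion is only a stand-in:

```python
# Estimating copula parameters by inverting Kendall's tau.
# Gaussian copula: tau = (2/pi) * arcsin(rho)  =>  rho = sin(pi*tau/2)
# Clayton copula:  tau = theta / (theta + 2)   =>  theta = 2*tau/(1-tau)
# The data pairs below are invented, strongly concordant values.
import math

def kendall_tau(x, y):
    n, conc = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            conc += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return conc / (n * (n - 1) / 2)

x = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.3, 8.0]
y = [1.2, 1.9, 3.5, 3.0, 5.5, 6.2, 6.9, 8.4]

tau = kendall_tau(x, y)
rho_gaussian = math.sin(math.pi * tau / 2)   # Gaussian dependence parameter
theta_clayton = 2 * tau / (1 - tau)          # Clayton dependence parameter
```

Fully parametric estimation would instead maximize the copula likelihood jointly with the marginal parameters, which is what the coverage-rate comparison in the paper is about.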

Keywords: elliptical copula, archimedean copula, estimation, coverage rate

Procedia PDF Downloads 37
212 Cognitive Footprints: Analytical and Predictive Paradigm for Digital Learning

Authors: Marina Vicario, Amadeo Argüelles, Pilar Gómez, Carlos Hernández

Abstract:

In this paper, the Computer Research Network of the National Polytechnic Institute of Mexico proposes a paradigmatic model for the inference of cognitive patterns in digital learning systems. This model leads to a metadata architecture useful for analysis and prediction in online learning systems, especially MOOC architectures. The model is in the design phase and is expected to be tested through an institutional courses project to be developed for the MOOC.

Keywords: cognitive footprints, learning analytics, predictive learning, digital learning, educational computing, educational informatics

Procedia PDF Downloads 441
211 The Problem of Now in Special Relativity Theory

Authors: Mogens Frank Mikkelsen

Abstract:

Special Relativity Theory (SRT) includes only one characteristic of light, namely that its speed is the same for all observers; by excluding other relevant characteristics of light, the common interpretation of SRT should be regarded as merely an approximative theory. By rethinking the iconic double light cones, a revised version of SRT can be developed. The revised concept of light cones acknowledges an asymmetry between past and future light cones and introduces a concept of the extended past to explain predictions as something other than the future. Combining this with the concept of photon-paired events leads to the inference that Special Relativity Theory can support the existence of Now.

Keywords: relativity, light cone, Minkowski, time

Procedia PDF Downloads 49
210 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs normally in environmental and agricultural experiments, where the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm the theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.
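
The Monte Carlo coverage assessment can be sketched as follows. The interval used here is an ordinary z-interval with the standard deviation treated as known through the coefficient of variation; it is a stand-in for illustration, not the authors' proposed intervals:

```python
# Monte Carlo estimate of coverage probability: simulate many normal
# samples with known coefficient of variation cv (so sigma = cv * mu),
# build a 95% interval each time, and count how often it covers mu.
# All parameter values are illustrative.
import random

random.seed(2)
mu, cv, n, reps = 10.0, 0.3, 25, 2000
sigma = cv * mu                   # treated as known for this sketch
z = 1.96                          # 97.5% standard normal quantile

covered = 0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    half = z * sigma / n ** 0.5
    if xbar - half <= mu <= xbar + half:
        covered += 1
coverage = covered / reps         # should sit near the nominal 0.95
```

Comparing such estimated coverages (and expected lengths) across candidate intervals is exactly the kind of evaluation the abstract describes.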

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation

Procedia PDF Downloads 357
209 Insight into the Electrocatalytic Activities of Nitrogen-Doped Graphyne and Graphdiyne Families: A First-Principles Study

Authors: Bikram K. Das, Kalyan K. Chattopadhyay

Abstract:

The advent of 2-D materials in the last decade has induced a fresh spur of growth in fuel cell technology as these materials have some highly promising traits that can be exploited to felicitate Oxygen Reduction Reaction (ORR) in an efficient way. Among the various 2-D carbon materials, graphyne (Gy) and graphdiyne (Gdy)1 with their intrinsic non-uniform charge distribution holds promises in this purpose and it is expected2 that substitutional Nitrogen (N) doping could further enhance their efficiency. In this regard, dispersive force corrected density functional theory is used to map the oxygen reduction reaction (ORR) kinetics of five different kinds of N doped graphyne and graphdiyne systems (namely αGy, βGy, γGy, RGy and 6,6,12Gy and Gdy) in alkaline medium. The best doping site for each of the Gy/ Gdy system is determined comparing the formation energies of the possible doping configurations. Similarly, the best di-oxygen (O₂) adsorption sites for the doped systems are identified by comparing the adsorption energies. O₂ adsorption on all N doped Gy/ Gdy systems is found to be energetically favorable. ORR on a catalyst surface may occur either via the Eley-Rideal (ER) or the Langmuir–Hinschelwood (LH) pathway. Systematic studies performed on the considered systems reveal that all of them favor the ER pathway. Further, depending on the nature of di-oxygen adsorption ORR can follow either associative or dissociative mechanism; the possibility of occurrence of both the mechanisms is tested thoroughly for each N doped Gy/ Gdy. For the ORR process, all the Gy/Gdy systems are observed to prefer the efficient four-electron pathway but the expected monotonically exothermic reaction pathway is found only for N doped 6,6,12Gy and RGy following the associative pathway and for N doped βGy, γGy and Gdy following the dissociative pathway. 
Further computation performed for these systems reveals that for N-doped 6,6,12Gy, RGy, βGy, γGy, and Gdy the overpotentials are 1.08 V, 0.94 V, 1.17 V, 1.21 V, and 1.04 V respectively, indicating that N-doped RGy is the most promising material among the considered ones for carrying out ORR in alkaline medium. The stability of the ORR intermediate states with the variation of pH and electrode potential is further explored with Pourbaix diagrams, and the activities of these systems in the alkaline medium are compared with prior reported B/N-doped identical systems for ORR in an acidic medium in terms of a common descriptor.
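As a rough illustration of how an ORR overpotential like those above is extracted from computed step energetics, the sketch below applies the standard thermodynamic relation η = 1.23 V − U_L, where the limiting potential U_L is set by the least exergonic electron-transfer step. The free-energy values here are made up for illustration; they are not the paper's DFT results.

```python
# Hypothetical illustration: ORR thermodynamic overpotential from per-step
# reaction free energies (eV). Values below are illustrative, not from the paper.
E_EQ = 1.23  # equilibrium potential of ORR vs. RHE, in volts

def orr_overpotential(dG_steps_eV):
    """Overpotential for a 4-electron ORR given the free-energy change (eV)
    of each one-electron step at U = 0 V; the limiting potential is set by
    the least exergonic step."""
    limiting_potential = min(-dG for dG in dG_steps_eV)  # volts (one e- per step)
    return E_EQ - limiting_potential

# Four hypothetical step free energies at U = 0 V; for an ideal catalyst each
# would be -1.23 eV, so this set is deliberately non-ideal.
steps = [-1.80, -1.40, -1.10, -0.62]
eta = orr_overpotential(steps)
print(round(eta, 2))  # → 0.61
```

The step with ΔG = −0.62 eV limits the potential here, so η = 1.23 − 0.62 = 0.61 V.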

Keywords: graphdiyne, graphyne, nitrogen-doped, ORR

Procedia PDF Downloads 96
208 A Study on Inference from Distance Variables in Hedonic Regression

Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro

Abstract:

In urban areas, several landmarks may affect housing prices and rents, so hedonic analysis should employ distance variables corresponding to each landmark. Unfortunately, the estimated effects of distances to landmarks on housing prices are generally not consistent with the true prices. These distance variables may cause magnitude errors in regression, pointing to a problem of spatial multicollinearity. In this paper, we provide some approaches for obtaining samples with less bias and a method for locating the specific sampling area to avoid the multicollinearity problem in the case of two specific landmarks.
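To make the multicollinearity issue concrete, the toy simulation below (illustrative only, not the paper's data) shows that when two landmarks lie close together, the two distance regressors become nearly collinear:

```python
# Illustrative sketch: distances from houses to two nearby landmarks are
# almost perfectly correlated, the spatial multicollinearity problem above.
import numpy as np

rng = np.random.default_rng(0)
houses = rng.uniform(0, 10, size=(500, 2))      # house locations on a 10x10 area
landmark_a = np.array([2.0, 2.0])
landmark_b = np.array([2.5, 2.2])               # a second landmark close to the first

d_a = np.linalg.norm(houses - landmark_a, axis=1)  # distance variable 1
d_b = np.linalg.norm(houses - landmark_b, axis=1)  # distance variable 2
corr = np.corrcoef(d_a, d_b)[0, 1]
print(round(corr, 3))  # close to 1: the two regressors are nearly collinear
```

With regressors this correlated, ordinary least squares cannot separate the two landmark effects reliably, which motivates the sampling-area approach described in the abstract.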

Keywords: landmarks, hedonic regression, distance variables, collinearity, multicollinearity

Procedia PDF Downloads 426
207 Problems in Computational Phylogenetics: The Germano-Italo-Celtic Clade

Authors: Laura Mclean

Abstract:

A recurring point of interest in computational phylogenetic analysis of Indo-European family trees is the inference of a Germano-Italo-Celtic clade in some versions of the trees produced. The presence of this clade in the models is intriguing, as there is little evidence for innovations shared among Germanic, Italic, and Celtic, the evidence generally used in the traditional method to construct a subgroup. One source of this unexpected outcome could be the input to the models. The datasets in the various models used so far, for the most part, take as their basis the Swadesh list, a list compiled by Morris Swadesh and then revised several times, containing up to 207 words that he believed were resistant to change among languages. The judgments made by Swadesh for this list, however, were subjective and based on his intuition rather than rigorous analysis. Some scholars used the Swadesh 200 list as the basis for their Indo-European dataset and made cognacy judgements for each of the words on the list. Another dataset is largely based on the Swadesh 207 list as well, although the authors include additional lexical and non-lexical data, and they implement ‘split coding’ to deal with cases of polymorphic characters. A different team of scholars uses a different dataset, IECoR, which combines several different lists, one of which is the Swadesh 200 list. In fact, the Swadesh list is used in some form in every study surveyed, and each dataset has three words that, when coded as cognates, seemingly contribute to the inference of a Germano-Italo-Celtic clade, since these three branches share the words among only themselves. These three words are ‘fish’, ‘flower’, and ‘man’ (in the case of ‘man’, one dataset includes Lithuanian in the cognacy coding and removes the word ‘man’ from the screened data).
This collection of cognates shared among Germanic, Italic, and Celtic that were deemed important enough to be included on the Swadesh list, without the ability to account for possible reasons for shared cognates that are not shared innovations, gives an impression of affinity between the Germanic, Celtic, and Italic branches without adequate methodological support. However, by changing how cognacy is defined (i.e., root cognates, borrowings vs. inherited cognates, etc.), we will be able to identify whether these three cognates are significant enough to infer a clade for Germanic, Celtic, and Italic. This paper examines the question of what definition of cognacy should be used for phylogenetic datasets by taking the Germano-Italo-Celtic clade as a case study, and it offers insights into the reconstruction of that clade.

Keywords: historical, computational, Italo-Celtic, Germanic

Procedia PDF Downloads 19
206 Assessment the Quality of Telecommunication Services by Fuzzy Inferences System

Authors: Oktay Nusratov, Ramin Rzaev, Aydin Goyushov

Abstract:

A fuzzy-inference-based approach to forming a modular intelligent system for assessing the quality of communication services is proposed. The basic fuzzy estimation model developed under this approach takes into account the recommendations of the International Telecommunication Union in respect of the operation of packet-switching networks based on the IP protocol. To implement the main features and functions of the fuzzy control system for the quality of telecommunication services, a multilayer feedforward neural network is used.

Keywords: quality of communication, IP-telephony, fuzzy set, fuzzy implication, neural network

Procedia PDF Downloads 438
205 Realization and Characterizations of Conducting Ceramics Based on ZnO Doped by TiO₂, Al₂O₃ and MgO

Authors: Qianying Sun, Abdelhadi Kassiba, Guorong Li

Abstract:

ZnO with wurtzite structure is a well-known semiconducting oxide (SCO), applied in thermoelectric devices, varistors, gas sensors, transparent electrodes, solar cells, liquid crystal displays, and piezoelectric and electro-optical devices. Intrinsically, ZnO is a weakly n-type SCO due to native defects (Znᵢ, Vₒ). However, substitutional doping by metallic elements such as Al and Ti gives rise to a high n-type conductivity ensured by donor centers. Under a CO+N₂ sintering atmosphere, Schottky barriers of ZnO ceramics are suppressed by lowering the concentration of acceptors at grain boundaries, inducing a large increase in the Hall mobility and thereby increasing the conductivity. The presented work concerns ZnO-based ceramics fabricated with doping by TiO₂ (0.50 mol%), Al₂O₃ (0.25 mol%), and MgO (1.00 mol%) and sintered in different atmospheres (Air (A), N₂ (N), CO+N₂ (C)). We obtained uniform, dense ceramics with ZnO as the main phase and Zn₂TiO₄ spinel as a secondary and minor phase. An important increase of the conductivity was shown for the samples A, N, and C, which were sintered under the different atmospheres. The highest conductivity (σ = 1.52×10⁵ S·m⁻¹) was obtained under the reducing atmosphere (CO+N₂). The role of doping was investigated with the aim of identifying the local environment and valence states of the doping elements. Thus, electron paramagnetic resonance (EPR) spectroscopy determines the concentration of defects and the effects of charge carriers in ZnO ceramics as a function of the sintering atmosphere. The relation between conductivity and defect concentration shows opposite behavior of these parameters, suggesting that defects act as traps for charge carriers. For Al ions, the nuclear magnetic resonance (NMR) technique was used to identify the local coordination involved.
Beyond the six- and four-coordinated Al, an additional NMR signature of the ZnO-based TCO requires analysis taking into account the grain boundaries and the conductivity through Knight shift effects. From the thermal evolution of the conductivity as a function of the sintering atmosphere, we succeeded in defining the conditions to realize ZnO-based TCO ceramics with an important thermal coefficient of resistance (TCR), which is promising for the electrical safety of devices.

Keywords: ceramics, conductivity, defects, TCO, ZnO

Procedia PDF Downloads 161
204 Yield, Economics and ICBR of Different IPM Modules in Bt Cotton in Maharashtra

Authors: N. K. Bhute, B. B. Bhosle, D. G. More, B. V. Bhede

Abstract:

The field experiments were conducted during the kharif season of 2007-08 at the experimental farm of the Department of Agricultural Entomology, Vasantrao Naik Marathwada Krishi Vidyapeeth. Studies on the evaluation of different IPM modules for Bt cotton in relation to yield, economics, and ICBR revealed that the MAU and CICR IPM modules proved superior. They were, however, on par with chemical control. Considering the ICBR and safety to natural enemies, an inference can be drawn that Bt cotton with an IPM module is the most ideal combination. Besides a reduction in insecticide use, it is also expected to ensure favourable ecological and economic returns, in contrast to the adverse effects of conventional insecticides. The IPM approach, which takes care of varying pest situations, appears to be essential for gaining higher advantage from Bt cotton.

Keywords: yield, economics, ICBR, IPM Modules, Bt cotton

Procedia PDF Downloads 238
203 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step that optimizes the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of comparisons between two potential outcomes, as E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a substantial application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, we get a transformation of the non-binary variable of interest into a binary variable, taking value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to have a pair of potential outcomes and to oppose one category (or group of categories) to another. Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: we transform the initial variable to obtain a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector (variable) E_e takes the value 0 when its category is not present and the value 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. In order to illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data.
Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector (variable) E_e takes the value 0 when the first category (the reference category) is present and the value 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in the outcome between the first level (fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party.

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 69
202 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as support better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into various categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), are proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy results. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to enable lower computational costs and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient and real-time FER system.
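The two network optimization methods mentioned above can be illustrated framework-agnostically. The sketch below uses plain NumPy on a toy weight vector; it shows the ideas of magnitude pruning and int8 quantization, not the actual Xception pipeline:

```python
# Conceptual sketch of the two optimizations: magnitude pruning zeroes the
# smallest weights; symmetric linear quantization maps the rest to int8.
import numpy as np

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization to int8 plus a float scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

w = np.array([0.02, -0.9, 0.4, -0.03, 0.75, 0.01])
w_pruned = prune(w)                 # half the weights become exactly zero
q, scale = quantize_int8(w_pruned)  # 1 byte per weight instead of 8
print((w_pruned == 0).sum())        # → 3
```

Pruning reduces computation (zero weights can be skipped), while quantization cuts memory roughly 4-8x, matching the respective roles stated in the abstract.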

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 78
201 Fuzzy Inference Based Modelling of Perception Reaction Time of Drivers

Authors: U. Chattaraj, K. Dhusiya, M. Raviteja

Abstract:

The perception reaction time of drivers is an outcome of the human thought process, which is vague and approximate in nature and also varies from driver to driver. So, in this study, a fuzzy logic based model for its prediction, which seems well suited to the problem, has been presented. The control factors, such as the age, experience, and driving intensity of the driver, the speed of the vehicle, and the distance of the stimulus, have been considered as premise variables in the model, in which the perception reaction time is the consequence variable. Results show that the model is able to explain the impacts of the control factors on perception reaction time properly.
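A minimal Mamdani-style sketch of such a model, with made-up membership functions and a two-rule base for just two of the premise variables (the real model uses the five variables listed above):

```python
# Toy fuzzy inference for perception reaction time (PRT); all membership
# function parameters and rule outputs are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_prt(age, speed):
    # Rule 1: older driver AND high speed -> long PRT; otherwise short PRT.
    older = tri(age, 40, 65, 90)
    fast = tri(speed, 60, 100, 140)
    w_long = min(older, fast)        # firing strength (AND as minimum)
    w_short = 1.0 - w_long
    # Weighted average of representative PRT values (seconds), a common
    # defuzzification shortcut in place of a full centroid computation.
    return (w_long * 2.5 + w_short * 1.0) / (w_long + w_short)

print(infer_prt(age=70, speed=110))
```

A 70-year-old driver at 110 km/h fires the "long PRT" rule at strength 0.75 here, shifting the predicted reaction time well above the 1.0 s baseline.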

Keywords: driver, fuzzy logic, perception reaction time, premise variable

Procedia PDF Downloads 269
200 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates

Authors: Takashi Mitsuishi

Abstract:

Fuzzy logic has gained acceptance in recent years in the social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is a part of set theory and mathematical logic. The Mamdani method, the most popular technique for approximate reasoning in the field of fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence in different applications such as neural networks, expert systems, and operations research. The objects of inference vary for different application fields. Some of these include time, angle, color, symptom, and medical condition, whose fuzzy membership functions are periodic. In the defuzzification stage, the domain of the membership function should be unique to obtain a unique defuzzified value. However, if the domain of the periodic membership function is made unique, an unintuitive defuzzified value may be obtained as the inference result using the center of gravity method. Therefore, the authors propose a method of circular-polar-coordinates transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, which is the periodic membership function on the circular polar coordinates. If the closed plane figure is continuous, following the continuity of the membership function, a significant amount of computation is required.
Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to decide the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and the coordinate given by the arithmetic mean of the coordinates of the polygon's vertices (the physical center of gravity). The second provides the argument of a straight line passing through the origin and the coordinate of the geometric center of gravity of the polygon. The third provides the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
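The first of the three methods can be sketched as follows; the helper name and the example polygon are hypothetical, chosen so that the expected angle is obvious by symmetry:

```python
# First method: the defuzzified argument is the angle of the line through the
# origin and the arithmetic mean ("physical center of gravity") of the
# vertex coordinates of the discretized polygon on the circular polar plane.
import math

def mean_vertex_argument(vertices):
    """vertices: list of (x, y) points of the discrete polygon;
    returns the defuzzified angle in radians."""
    mx = sum(x for x, _ in vertices) / len(vertices)
    my = sum(y for _, y in vertices) / len(vertices)
    return math.atan2(my, mx)

# A polygon symmetric about the 45-degree line defuzzifies to pi/4.
poly = [(1.0, 0.0), (2.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 2.0)]
print(round(mean_vertex_argument(poly), 4))  # → 0.7854 (pi/4)
```

The second and third methods differ only in the reference point used (polygon centroid, or the perimeter-bisecting point), with the same angle-extraction step via `atan2`.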

Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation

Procedia PDF Downloads 325
199 Using Vulnerability to Reduce False Positive Rate in Intrusion Detection Systems

Authors: Nadjah Chergui, Narhimene Boustia

Abstract:

Intrusion Detection Systems (IDSs) are an essential tool for network security infrastructure. However, IDSs have a serious problem, which is the generation of a massive number of alerts, most of them false positives, which can hide true alerts and confuse the analyst trying to identify the right alerts for reporting the true attacks. The purpose of this paper is to present a formal model of a correlation engine that reduces false positive alerts based on vulnerability contextual information. For that, we propose a formal model based on the non-monotonic JClassicδє description logic, augmented with a default (δ) and an exception (є) operator, that allows dynamic inference according to contextual information.

Keywords: context, default, exception, vulnerability

Procedia PDF Downloads 236
198 Compensatory Neuro-Fuzzy Inference (CNFI) Controller for Bilateral Teleoperation

Authors: R. Mellah, R. Toumi

Abstract:

This paper presents a new adaptive neuro-fuzzy controller equipped with compensatory fuzzy control (CNFI) in order not only to adjust membership functions but also to optimize the adaptive reasoning by using a compensatory learning algorithm. The proposed control structure includes two CNFI controllers, one used to control the master robot in force and the second to control the slave robot in position. The experimental results obtained show fairly high accuracy in terms of position and force tracking under free-space motion and hard-contact motion, which highlights the effectiveness of the proposed controllers.

Keywords: compensatory fuzzy, neuro-fuzzy, adaptive control, teleoperation

Procedia PDF Downloads 293