Search results for: sequential causal inference
1123 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments
Authors: Xiaoqin Wang, Li Yin
Abstract:
Background and purpose: In many practical settings, one estimates causal effects arising from a complex stochastic process, in which a sequence of treatments is assigned to influence a certain outcome of interest and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates often receive influences from earlier treatments as well as exert influences on subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (null paradox). Furthermore, the dimension of the parameters is huge (curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. A simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing. We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches with respect to COVID-19 mortality, and that the poor performance was largely due to Sweden's early measures during the initial period of the pandemic.
Keywords: causal effect, point effect, statistical modelling, sequential causal inference
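The sequential-treatment setting described above can be illustrated with a minimal simulation sketch. The data-generating process and variable names below are hypothetical, and plain g-computation is used as a familiar baseline rather than the authors' point-effect estimator:

```python
# Toy two-period sequence: treatment z1 -> covariate l -> treatment z2 -> outcome y.
# Estimates the joint effect of (z1, z2) by standard g-computation; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z1 = rng.integers(0, 2, n)                              # first treatment
l = 0.5 * z1 + rng.normal(size=n)                       # time-dependent covariate, affected by z1
z2 = rng.binomial(1, 1 / (1 + np.exp(-0.8 * l)))        # second treatment assigned in response to l
y = 1.0 * z1 + 1.5 * z2 + 0.7 * l + rng.normal(size=n)  # outcome

# fit a covariate model l ~ z1 and an outcome model y ~ z1 + z2 + l
gamma = np.linalg.lstsq(np.column_stack([np.ones(n), z1]), l, rcond=None)[0]
beta = np.linalg.lstsq(np.column_stack([np.ones(n), z1, z2, l]), y, rcond=None)[0]

def mean_under_regime(a1, a2):
    # g-computation: plug the covariate mean under do(z1=a1) into the outcome model
    exp_l = gamma[0] + gamma[1] * a1
    return beta[0] + beta[1] * a1 + beta[2] * a2 + beta[3] * exp_l

effect = mean_under_regime(1, 1) - mean_under_regime(0, 0)
print(f"estimated joint effect of (z1, z2): {effect:.3f} (true value 2.85)")
```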
Procedia PDF Downloads 205
1122 Application of Causal Inference and Discovery in Curriculum Evaluation and Continuous Improvement
Authors: Lunliang Zhong, Bin Duan
Abstract:
The undergraduate graduation project is a vital part of the higher education curriculum, crucial for engineering accreditation. Current evaluations often summarize data without identifying underlying issues. This study applies the Peter-Clark algorithm to analyze causal relationships within the graduation project data of an Electronics and Information Engineering program, creating a causal model. Structural equation modeling confirmed the model's validity. The analysis reveals key teaching stages affecting project success, uncovering problems in the process. Introducing causal discovery and inference into project evaluation helps identify issues and propose targeted improvement measures. The effectiveness of these measures is validated by comparing the learning outcomes of two student cohorts, stratified by confounding factors, leading to improved teaching quality.
Keywords: causal discovery, causal inference, continuous improvement, Peter-Clark algorithm, structural equation modeling
Procedia PDF Downloads 18
1121 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival
Authors: Li Yin, Xiaoqin Wang
Abstract:
Background and purpose: Cancer diagnosis is part of a complex stochastic process, in which patients' personal and social characteristics influence the choice of diagnosing methods; diagnosing methods, in turn, influence the initial assessment of cancer stage; the initial assessment, in turn, influences the choice of treating methods; and treating methods, in turn, influence cancer outcomes such as cancer survival. To evaluate diagnosing methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage receives influences from earlier diagnosis and exerts influences on subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via standard parameters, that is, the conditional survival given all stationary covariates, diagnosing methods, cancer stage, prognosis factors, and treating methods. Instead of standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes of cancer diagnosis and treatments. We are able to use familiar methods in the framework of single-point causal inference to accomplish the task. Achievements: We have applied this method to stomach cancer survival from a clinical study in Sweden. We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments, as well as the moderation of the causal effect by age and gender.
Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect
Procedia PDF Downloads 195
1120 Identification of Bayesian Network with Convolutional Neural Network
Authors: Mohamed Raouf Benmakrelouf, Wafa Karouche, Joseph Rynkiewicz
Abstract:
In this paper, we propose an alternative method to construct a Bayesian Network (BN). This method relies on a convolutional neural network (CNN) classifier, which determines the edges of the network skeleton. We train a CNN on a normalized empirical probability density distribution (NEPDF) to predict causal interactions and relationships. The goal is to find the optimal Bayesian network structure for causal inference; to this end, we search for pairwise causality under the considered causal assumptions. In order to avoid unreasonable causal structures, we consider a blacklist and a whitelist of causal directions. We tested the method on real data to assess the influence of education on the voting intention for the extreme right-wing party. We show that, with this method, we obtain a safer causal structure of variables (Bayesian Network) and are able to identify a variable that satisfies the backdoor criterion.
Keywords: Bayesian network, structure learning, optimal search, convolutional neural network, causal inference
Procedia PDF Downloads 176
1119 A Generative Adversarial Framework for Bounding Confounded Causal Effects
Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Abstract:
Causal inference from observational data is receiving wide applications in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method does not make any assumption about the data generating process and the type of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning
Procedia PDF Downloads 191
1118 Recommendation Systems for Cereal Cultivation Using Advanced Causal Inference Modeling
Authors: Md Yeasin, Ranjit Kumar Paul
Abstract:
In recent years, recommendation systems have become indispensable tools for agricultural systems. Accurate and timely recommendations can significantly impact crop yield and overall productivity. Causal inference modeling aims to establish cause-and-effect relationships by identifying the impact of variables or factors on outcomes, enabling more accurate and reliable recommendations. New advancements in causal inference models have appeared in the literature. With the advent of the modern era, deep learning and machine learning models have emerged as efficient tools for modeling. This study proposes an innovative approach to enhance recommendation systems using a machine learning-based causal inference model. By considering the causal effect and opportunity cost of covariates, the proposed system can provide more reliable and actionable recommendations for cereal farmers. To validate the effectiveness of the proposed approach, experiments are conducted using cereal cultivation data from eastern India. Comparative evaluations are performed against existing correlation-based recommendation systems, demonstrating the superiority of the advanced causal inference modeling approach in terms of recommendation accuracy and impact on crop yield. Overall, it empowers farmers with personalized recommendations tailored to their specific circumstances, leading to optimized decision-making and increased crop productivity.
Keywords: agriculture, causal inference, machine learning, recommendation system
Procedia PDF Downloads 79
1117 Personalized Intervention through Causal Inference in mHealth
Authors: Anna Guitart Atienza, Ana Fernández del Río, Madhav Nekkar, Jelena Ljubicic, África Periáñez, Eura Shin, Lauren Bellhouse
Abstract:
The use of digital devices in healthcare, or mobile health (mHealth), has increased in recent years due to advances in digital technology, making it possible to nudge healthy behaviors through individual interventions. In addition, mHealth is becoming essential in resource-poor settings due to the widespread use of smartphones in areas where access to professional healthcare is limited. In this work, we evaluate mHealth interventions in low-income countries with a focus on causal inference. Counterfactual estimation and other causal computations are key to determining intervention success and assisting in empirical decision-making. Our main purpose is to personalize treatment recommendations and triage patients at the individual level in order to maximize the entire intervention's impact on the desired outcome. For this study, the collected data include mHealth individual logs from front-line healthcare workers, electronic health records (EHR), and external variables such as environmental, demographic, and geolocation information.
Keywords: causal inference, mHealth, intervention, personalization
Procedia PDF Downloads 132
1116 Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance
Authors: Flora Babongo, Valerie Chavez
Abstract:
Inferring causality from observational data is one of the fundamental subjects, especially in quantitative finance. So far, most papers analyze additive noise models with either linearity, nonlinearity, or Gaussian noise. We fill in the gap by providing a nonlinear and non-Gaussian causal multiplicative noise model that aims to distinguish the cause from the effect using a two-step method based on Bayesian additive models for location, scale and shape (BAMLSS) and on causal additive models (CAM). We tested our method on simulated and real data and reached an average accuracy of 0.86. As real data, we considered the causality between financial indices, such as the S&P 500, Nasdaq, CAC 40 and Nikkei, and companies' log-returns. Our results can be useful in inferring causality when the data is heteroskedastic or non-injective.
Keywords: causal inference, DAGs, BAMLSS, financial index
Procedia PDF Downloads 151
1115 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application
Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz
Abstract:
Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum likelihood-based approach that includes a secondary targeting step to optimize the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable E on an outcome Y is defined in terms of a comparison between two potential outcomes, E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a substantial application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, the non-binary variable of interest is transformed into a binary variable taking the value 1 to indicate the presence of that category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to define a pair of potential outcomes and to contrast one category (or group of categories) with another. Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: we transform the initial variable to obtain a set of binary vectors (dummy variables), E = (E_e : e ∈ {1, ..., |E|}), where each vector E_e takes the value 0 when its category is not present and the value 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some additional causal questions (additional causal assumptions), leading to a different coding of E as a set of binary vectors, E = (E_e : e ∈ {2, ..., |E|}), where each vector E_e takes the value 0 when the first (reference) category is present and the value 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in outcome between the first level (held fixed) and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme right party.
Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation
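The complete disjunctive coding step described above can be sketched in a few lines of Python. The toy dataset and column names below are hypothetical, and only naive outcome contrasts are computed, not the full TMLE (which would additionally require the targeting step and, e.g., super learning):

```python
# Sketch of dummy-coding a non-binary exposure so each category can serve as a
# binary exposure in a standard (binary) TMLE. Data are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "education": ["none", "primary", "secondary", "bachelor", "graduate",
                  "secondary", "primary", "graduate"],
    "vote_far_right": [1, 1, 0, 0, 0, 1, 0, 0],
})

# Complete disjunctive coding: one 0/1 indicator per category of E.
dummies = pd.get_dummies(df["education"], prefix="E").astype(int)
data = pd.concat([df, dummies], axis=1)

# Pairwise contrasts "one category vs. all remaining categories": each E_<cat>
# column can now be handed to a binary-exposure estimator such as TMLE.
for col in dummies.columns:
    exposed = data.loc[data[col] == 1, "vote_far_right"].mean()
    others = data.loc[data[col] == 0, "vote_far_right"].mean()
    print(f"{col}: naive outcome difference = {exposed - others:+.3f}")
```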
Procedia PDF Downloads 103
1114 Explanation and Temporality in International Relations
Authors: Alasdair Stanton
Abstract:
What makes for a good explanation? Twenty years after Wendt's important treatment of constitution and causation, non-causal explanations (sometimes referred to as 'understanding' or 'descriptive inference') have become, if not mainstream, at least accepted within International Relations. This article proceeds in two parts. Firstly, it closely examines Wendt's constitutional claims and, while agreeing that there is a difference between causal and constitutional explanations, rejects the view that constitutional explanations lack temporality. In fact, the author concludes that a constitutional argument is only possible if it relies upon a more foundational, causal argument. Secondly, through theoretical analysis of the constitutional argument, this research seeks to delineate temporal and non-temporal ways of explaining within International Relations. This article concludes that while constitutional explanations, like other logical arguments, including comparative and counterfactual ones, are not truly non-causal explanations, they are not bound as tightly to the 'real world' as temporal arguments such as cause-effect, process tracing, or even interpretivist accounts. However, like mathematical models, non-temporal arguments should aim for empirical testability as well as internal consistency. This work aims to give clear theoretical grounding to those authors using non-temporal arguments, but also to encourage them, and their positivist critics, to engage in thoroughgoing empirical tests.
Keywords: causal explanation, constitutional understanding, empirical, temporality
Procedia PDF Downloads 195
1113 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper developed a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can provide hourly forecasts from current observations of weather, air pollution, and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature, and other variables. The observations can be collected in real time from the open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped industrial factories. This study completed a causal inference engine that produces an air pollution forecast for the next 12 hours related to local industrial factories. The outcomes of the pollution forecasting are produced hourly with a grid resolution of 1 km × 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission sensors (Total Suspended Particulates, TSP) and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, based on long-term data observation and calibration. These different time-series qualitative and quantitative data together support a practicable causal inference engine in the cloud for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT
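The advection-plus-random-walk particle step mentioned above can be illustrated with a toy sketch. The wind components, diffusion coefficient, release point, and grid below are hypothetical and are not taken from the paper:

```python
# Toy particle dispersion: advect particles by a mean wind and add a Gaussian
# random-walk step for diffusion, then bin onto a 1 km x 1 km grid.
import numpy as np

rng = np.random.default_rng(42)

n_particles = 10_000
dt = 3600.0                     # one-hour step, in seconds
u, v = 2.0, -1.0                # assumed wind components (m/s), eastward / northward
diffusion = 50.0                # assumed isotropic diffusion coefficient (m^2/s)

# all particles released at the (hypothetical) stack location
x = np.zeros(n_particles)
y = np.zeros(n_particles)

for _ in range(12):             # 12 hourly steps, matching the 12-hour forecast horizon
    x += u * dt + rng.normal(scale=np.sqrt(2 * diffusion * dt), size=n_particles)
    y += v * dt + rng.normal(scale=np.sqrt(2 * diffusion * dt), size=n_particles)

# bin particle positions onto a 1 km x 1 km grid to get a relative concentration map
grid, xedges, yedges = np.histogram2d(
    x, y, bins=60, range=[[-30_000, 90_000], [-90_000, 30_000]]
)
print("cell with the highest particle count:", np.unravel_index(grid.argmax(), grid.shape))
```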
Procedia PDF Downloads 87
1112 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features
Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi
Abstract:
Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning-based approach that trains a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation
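A generic CNN sentence classifier of the kind used for relation classification can be sketched as follows. This is not the authors' architecture; the vocabulary size, embedding dimension, filter settings, and label set are assumptions for illustration:

```python
# Minimal CNN text classifier: embed tokens, apply 1-D convolutions of several
# widths, global max-pool, and classify (e.g. cause, cause-reversed, no-relation).
import torch
import torch.nn as nn

class CausalRelationCNN(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=100, n_filters=64,
                 kernel_sizes=(2, 3, 4), n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # one 1-D convolution per kernel size, applied over the token dimension
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        # convolve, apply ReLU, then global max-pool over time for each filter
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # (batch, n_classes)

model = CausalRelationCNN()
dummy_batch = torch.randint(1, 10_000, (8, 40))        # 8 sentences of 40 token ids
print(model(dummy_batch).shape)                        # torch.Size([8, 3])
```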
Procedia PDF Downloads 732
1111 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field in science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference that relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
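To make the likelihood-free idea concrete, the following sketch runs ABC rejection on a much simpler CTMC (an immigration-death process) than the Repressilator; the rates, prior, summary statistic, and tolerance are all hypothetical:

```python
# ABC rejection for a toy CTMC with intractable likelihood: simulate from the
# model under parameters drawn from the prior and keep draws whose simulated
# summary statistic is close to the observed one.
import numpy as np

rng = np.random.default_rng(1)

def simulate_final_state(birth, death, t_end=8.0, x0=0):
    """Gillespie simulation of an immigration-death CTMC; returns the state at t_end."""
    t, x = 0.0, x0
    while True:
        total = birth + death * x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < birth / total else -1

true_death = 0.5
observed = np.array([simulate_final_state(5.0, true_death) for _ in range(20)])

accepted = []
for _ in range(1000):
    candidate = rng.uniform(0.05, 2.0)       # prior on the unknown death rate
    simulated = np.array([simulate_final_state(5.0, candidate) for _ in range(20)])
    if abs(simulated.mean() - observed.mean()) < 1.0:   # tolerance on the summary statistic
        accepted.append(candidate)

print(f"{len(accepted)} accepted draws; approximate posterior mean: {np.mean(accepted):.2f}")
```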
Procedia PDF Downloads 202
1110 The Effects of the Inference Process in Reading Texts in Arabic
Authors: May George
Abstract:
Inference plays an important role in the learning process, and it can lead to rapid acquisition of a second language. When learning a non-native language, i.e., a critical language like Arabic, students depend on the teacher's support most of the time to learn new concepts. The students focus on memorizing the new vocabulary and stress learning all the grammatical rules. Hence, the students become mechanical and cannot produce the language easily. As a result, relying heavily on the teacher, they are unable to predict the meaning of words in context; they cannot link their prior knowledge or even identify the meaning of words without the support of the teacher. This study explores how the teacher guides students' learning during the inference process and what processes of learning can direct students' inference.
Keywords: inference, reading, Arabic, language acquisition
Procedia PDF Downloads 531
1109 Elucidation of the Sequential Transcriptional Activity in Escherichia coli Using Time-Series RNA-Seq Data
Authors: Pui Shan Wong, Kosuke Tashiro, Satoru Kuhara, Sachiyo Aburatani
Abstract:
Functional genomics and gene regulation inference have readily expanded our knowledge and understanding of gene interactions with regard to expression regulation. With the advancement of transcriptome sequencing in time series comes the ability to study the sequential changes of the transcriptome. The method presented here works to augment existing regulation networks accumulated in the literature with transcriptome data gathered from time-series experiments to construct a sequential representation of transcription factor activity. This method is applied to a time-series RNA-Seq data set from Escherichia coli as it transitions from growth to stationary phase over five hours. Investigations are conducted on the various metabolic activities in gene regulation processes by taking advantage of the correlation between regulatory gene pairs to examine their activity on a dynamic network. In particular, the changes in metabolic activity during phase transition are analyzed with a focus on the pagP gene as well as other associated transcription factors. The visualization of the sequential transcriptional activity is used to describe the change in metabolic pathway activity originating from the pagP transcription factor, phoP. The results show a shift from amino acid and nucleic acid metabolism to energy metabolism during the transition to stationary phase in E. coli.
Keywords: Escherichia coli, gene regulation, network, time-series
Procedia PDF Downloads 372
1108 An Analysis of Sequential Pattern Mining on Databases Using Approximate Sequential Patterns
Authors: J. Suneetha, Vijayalaxmi
Abstract:
Sequential pattern mining involves applying data mining methods to large data repositories to extract usage patterns. Sequential pattern mining methodologies are used to analyze the data and identify patterns. The patterns have been used to implement efficient systems that can recommend based on previously observed patterns, make predictions, improve the usability of systems, detect events, and, in general, help in making strategic product decisions. In this paper, we analyze the performance of approximate sequential pattern mining, defined as identifying patterns approximately shared by many sequences. Approximate sequential patterns can effectively summarize and represent the databases by identifying the underlying trends in the data. We conduct an extensive and systematic performance study over synthetic and real data. The results demonstrate that ApproxMAP is effective and scalable in mining large sequence databases with long patterns.
Keywords: multiple data, performance analysis, sequential pattern, sequence database scalability
Procedia PDF Downloads 340
1107 Conditions for Fault Recovery of Interconnected Asynchronous Sequential Machines with State Feedback
Authors: Jung–Min Yang
Abstract:
In this paper, fault recovery for parallel interconnected asynchronous sequential machines is studied. An adversarial input can infiltrate one of the two submachines comprising the parallel composition of the considered asynchronous sequential machine, causing an unauthorized state transition. The control objective is to elucidate the condition for the existence of a corrective controller that makes the closed-loop system immune against any occurrence of adversarial inputs. In particular, an efficient existence condition is presented that does not need the complete modeling of the interconnected asynchronous sequential machine.
Keywords: asynchronous sequential machines, parallel composition, corrective control, fault tolerance
Procedia PDF Downloads 229
1106 Causal Modeling of the Glucose-Insulin System in Type-I Diabetic Patients
Authors: J. Fernandez, N. Aguilar, R. Fernandez de Canete, J. C. Ramos-Diaz
Abstract:
In this paper, a simulation model of the glucose-insulin system for a patient with Type 1 diabetes is developed using a causal modeling approach under system dynamics. The OpenModelica simulation environment has been employed to build the so-called causal model, while the glucose-insulin model parameters were adjusted to fit recorded mean data from a diabetic patient database. Model results have been obtained under different conditions of three-meal glucose and exogenous insulin ingestion patterns. This simulation model can be useful for evaluating glucose-insulin performance in several circumstances, including insulin infusion algorithms in open loop and decision support systems in closed loop.
Keywords: causal modeling, diabetes, glucose-insulin system, OpenModelica software
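For readers who want a runnable point of comparison, the sketch below integrates a Bergman-type minimal model of glucose-insulin dynamics in Python. It is not the authors' OpenModelica causal model, and all parameter values and the meal/insulin input profiles are illustrative assumptions:

```python
# Bergman-type minimal model with three meals and three insulin boluses over one day.
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, p3 = 0.028, 0.025, 1.3e-5   # glucose effectiveness / insulin action rates (1/min)
n, Gb, Ib = 0.09, 90.0, 10.0        # insulin clearance, basal glucose (mg/dL), basal insulin (mU/L)

def meal_input(t):
    # three meals (minutes after midnight), each a decaying glucose appearance term
    meals = [(480, 5.0), (780, 6.0), (1140, 5.5)]
    return sum(a * np.exp(-(t - tm) / 40.0) for tm, a in meals if t >= tm)

def insulin_input(t):
    # simple exogenous insulin boluses shortly before each meal
    boluses = [(470, 1.5), (770, 1.8), (1130, 1.6)]
    return sum(a * np.exp(-(t - tb) / 30.0) for tb, a in boluses if t >= tb)

def minimal_model(t, y):
    G, X, I = y
    dG = -p1 * (G - Gb) - X * G + meal_input(t)   # plasma glucose
    dX = -p2 * X + p3 * (I - Ib)                  # remote insulin action
    dI = -n * (I - Ib) + insulin_input(t)         # plasma insulin
    return [dG, dX, dI]

sol = solve_ivp(minimal_model, (0, 1440), [Gb, 0.0, Ib], max_step=1.0)
print(f"peak glucose over the day: {sol.y[0].max():.1f} mg/dL")
```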
Procedia PDF Downloads 330
1105 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization
Authors: Yihao Kuang, Bowen Ding
Abstract:
With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm uses a proximal policy-update strategy: it allows relatively large updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved policies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can reuse each batch of collected experience for multiple update epochs, effectively utilizing experience data for training and enhancing sample utilization. This means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
Keywords: reinforcement learning, PPO, knowledge inference
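For reference, the constrained ("clipped") policy update that characterizes PPO, written in its standard textbook form (not specific to this paper), is:

```latex
L^{\mathrm{CLIP}}(\theta)
  = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

where \hat{A}_t is an advantage estimate and \epsilon controls how far the new policy may move from the old one in a single update.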
Procedia PDF Downloads 243
1104 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization
Authors: Yihao Kuang, Bowen Ding
Abstract:
With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. Therefore, in recent years, it has become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm uses a proximal policy-update strategy: it allows relatively large updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved policies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can reuse each batch of collected experience for multiple update epochs, effectively utilizing experience data for training and enhancing sample utilization. This means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
Keywords: reinforcement learning, PPO, knowledge inference, supervised learning
Procedia PDF Downloads 67
1103 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems
Authors: Yas Barzegaar, Atrin Barzegar
Abstract:
The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems (FIS) in two different layers to analyze the risk of a wheat flour product manufacturing system. The first layer of the model consists of four Fuzzy Inference Systems with three criteria. The output of each of the Physical, Chemical, Biological, and Environmental Failure subsystems is an input to the final manufacturing-system FIS. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts' opinions. The second step is the fuzzification process to convert crisp inputs into fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine; and in the final step, the defuzzification process converts the fuzzy output into real numbers.
Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment
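A single Mamdani-style inference step (fuzzify, apply IF-THEN rules, defuzzify by centroid) can be sketched without external libraries. The membership functions, rule base, and input value below are hypothetical and far simpler than the five-FIS, two-layer model described above:

```python
# Minimal Mamdani fuzzy inference: one crisp input, three rules, centroid output.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# fuzzification of a hypothetical "contamination" score on a 0-10 scale
contamination = 6.5
low    = tri(contamination, -5, 0, 5)
medium = tri(contamination,  0, 5, 10)
high   = tri(contamination,  5, 10, 15)

# rule base: IF contamination is <level> THEN risk is <level>; each rule's firing
# strength clips its consequent membership function
risk_axis = np.linspace(0, 10, 1001)
risk_low    = np.minimum(low,    tri(risk_axis, -5, 0, 5))
risk_medium = np.minimum(medium, tri(risk_axis,  0, 5, 10))
risk_high   = np.minimum(high,   tri(risk_axis,  5, 10, 15))

# aggregation (max) and centroid defuzzification into a crisp risk score
aggregate = np.maximum.reduce([risk_low, risk_medium, risk_high])
risk_score = np.sum(risk_axis * aggregate) / np.sum(aggregate)
print(f"crisp risk score: {risk_score:.2f} / 10")
```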
Procedia PDF Downloads 102
1102 Causal Relationship between Corporate Governance and Financial Information Transparency: A Simultaneous Equations Approach
Authors: Maali Kachouri, Anis Jarboui
Abstract:
We focus on the causal relationship between governance and information transparency, as well as the interrelation among the various governance mechanisms. This paper employs a simultaneous equations approach to examine this relationship in the Tunisian context. Based on an 8-year dataset, our sample covers 28 listed companies over 2006-2013. Our findings suggest that internal and external governance mechanisms are interdependent. Moreover, in analyzing the causal effect between information transparency and governance mechanisms, we found evidence that information transparency tends to increase good corporate governance practices.
Keywords: simultaneous equations approach, transparency, causal relationship, corporate governance
Procedia PDF Downloads 354
1101 Fuzzy Inference System for Diagnosis of Malaria
Authors: Purnima Pandit
Abstract:
Malaria remains one of the world's most deadly infectious diseases and, arguably, the greatest menace to modern society in terms of morbidity and mortality. To choose the right treatment and to ensure a quality of life suitable for a specific patient's condition, early and accurate diagnosis of malaria is essential. It reduces transmission of the disease and prevents deaths. Our work focuses on designing an efficient, accurate fuzzy inference system for malaria diagnosis.
Keywords: fuzzy inference system, fuzzy logic, malaria disease, triangular fuzzy number
Procedia PDF Downloads 297
1100 Influence of Causal Beliefs on Self-Management in Korean Patients with Hypertension
Authors: Hyun-E Yeom
Abstract:
Patients' views about the cause of hypertension may influence their present and proactive behaviors to regulate high blood pressure. This study aimed to examine the internal structure underlying causal beliefs about hypertension and the influence of causal beliefs on self-care intention and medication compliance in Korean patients with hypertension. The causal beliefs of 145 patients (mean age = 57.7) were assessed using the Illness Perception Questionnaire-Revised. An exploratory factor analysis was used to identify the factor structure of the causal beliefs, and the factors' influence on self-care intention and medication compliance was analyzed using multiple and logistic regression analyses. A four-factor structure comprising psychological, fate-related, risk, and habitual factors was identified, and the psychological factor was the most representative component of causal beliefs. The risk and fate-related factors were significant factors affecting lower intention to engage in self-care and poor compliance with medication regimens, respectively. The findings support the critical role of causal beliefs about hypertension in driving patients' current and future self-care behaviors. This study highlights the importance of educational interventions corresponding to patients' awareness of hypertension for improving their adherence to a healthy lifestyle and medication regimens.
Keywords: hypertension, self-care, beliefs, medication compliance
Procedia PDF Downloads 351
1099 Bayesian Network and Feature Selection for Rank Deficient Inverse Problem
Authors: Kyugneun Lee, Ikjin Lee
Abstract:
Parameter estimation in inverse problems often suffers from unfavorable conditions in the real world. Useless data and many input parameters make the problem complicated or insoluble. Data refinement and reformulation of the problem can resolve such difficulties. In this research, a method to solve the rank-deficient inverse problem is suggested. A multi-physics system that has rank deficiency caused by response correlation is treated. Impeding information is removed, and the problem is reformulated into sequential estimations using a Bayesian network (BN) and subset groups. First, subset grouping of the responses is performed. Feature selection with singular value decomposition (SVD) is used for the grouping. Next, BN inference is used for sequential conditional estimation according to the group hierarchy. The directed acyclic graph (DAG) structure is organized to maximize the estimation ability. The variance ratio of response to noise is used to pair the estimable parameters with each response.
Keywords: Bayesian network, feature selection, rank deficiency, statistical inverse analysis
Procedia PDF Downloads 314
1098 Fuzzy Inference System for Risk Assessment Evaluation of Wheat Flour Product Manufacturing Systems
Authors: Atrin Barzegar, Yas Barzegar, Stefano Marrone, Francesco Bellini, Laura Verde
Abstract:
The aim of this research is to develop an intelligent system to analyze the risk level of a wheat flour product manufacturing system. The model consists of five Fuzzy Inference Systems (FIS) in two different layers to analyze the risk of a wheat flour product manufacturing system. The first layer of the model consists of four Fuzzy Inference Systems with three criteria. The output of each of the Physical, Chemical, Biological, and Environmental Failure subsystems is an input to the final manufacturing-system FIS. The proposed model, based on Mamdani Fuzzy Inference Systems, gives a performance ranking of wheat flour product manufacturing systems. The first step is obtaining data to identify the failure modes from experts' opinions. The second step is the fuzzification process to convert crisp inputs into fuzzy sets; the IF-THEN fuzzy rules are then applied through the inference engine; and in the final step, the defuzzification process converts the fuzzy output into real numbers.
Keywords: failure modes, fuzzy rules, fuzzy inference system, risk assessment
Procedia PDF Downloads 75
1097 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. The algorithm is applied to high-frequency coefficients for compression/encoding. It starts by converting every three coefficients to a single value; this is accomplished using three different keys. The decoding/decompression uses a search method called the QSS (Quick Sequential Search) Decoding Algorithm, presented in this research and based on sequential search, to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as conventional sequential search, could retrieve the encoded/compressed data independently from the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: matrix minimization algorithm, decoding sequential search algorithm, image compression, DCT, DWT
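The core idea, folding three coefficients into one value with three keys and recovering them by searching candidate triplets, can be sketched as follows. The key values, the bounded coefficient range, and the exhaustive search strategy here are hypothetical illustrations, not the paper's exact algorithm:

```python
# Toy encode/decode: three coefficients -> one value via a key vector, then a
# sequential search over candidate triplets to recover the exact coefficients.
import itertools
import numpy as np

keys = np.array([0.8126, 0.2917, 0.1034])   # hypothetical key values
coeff_range = range(-8, 9)                   # assumed bounded high-frequency coefficients

def encode(triplet):
    return float(np.dot(keys, triplet))

def decode(value, tol=1e-9):
    # sequential search over all candidate triplets until the encoded value matches
    for candidate in itertools.product(coeff_range, repeat=3):
        if abs(encode(candidate) - value) < tol:
            return candidate
    raise ValueError("no matching triplet found")

original = (3, -5, 7)
compressed = encode(original)
print(decode(compressed))    # -> (3, -5, 7), provided the keys make the mapping unique
```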
Procedia PDF Downloads 150
1096 Design and Implementation of Testable Reversible Sequential Circuits Optimized Power
Authors: B. Manikandan, A. Vijayaprabhu
Abstract:
Conservative reversible gates are used to design reversible sequential circuits. The sequential circuits considered are flip-flops and latches. The conservative logic gates used are the Feynman, Toffoli, and Fredkin gates. We present the design of two-vector testable sequential circuits based on conservative logic gates. All sequential circuits based on conservative logic gates can be tested for classical unidirectional stuck-at faults using only two test vectors. The two test vectors are all 1s and all 0s. The designs of two-vector testable latches, master-slave flip-flops, and double edge-triggered (DET) flip-flops are presented. We also show the application of the proposed approach toward 100% fault coverage for single missing/additional cell defects in the quantum-dot cellular automata (QCA) layout of the Fredkin gate. The conservative logic gates are evaluated in terms of complexity, speed, and area.
Keywords: DET, QCA, reversible logic gates, POS, SOP, latches, flip flops
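As a concrete reminder of what "conservative" and "reversible" mean here, the following minimal Python check exercises the Fredkin (controlled-swap) gate over all inputs; it is an illustration only, not the paper's QCA design:

```python
# Fredkin gate: controlled swap. It preserves the number of 1s (conservative)
# and is its own inverse (reversible), verified exhaustively below.
from itertools import product

def fredkin(c, a, b):
    """If the control c is 1, swap a and b; otherwise pass the inputs through."""
    return (c, b, a) if c == 1 else (c, a, b)

for bits in product((0, 1), repeat=3):
    out = fredkin(*bits)
    assert sum(out) == sum(bits)      # conservative: Hamming weight preserved
    assert fredkin(*out) == bits      # reversible: applying it twice restores the input
    print(bits, "->", out)
```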
Procedia PDF Downloads 304
1095 The Parallelization of Algorithm Based on Partition Principle for Association Rules Discovery
Authors: Khadidja Belbachir, Hafida Belbachir
Abstract:
Following the expansion of physical storage media and the ceaseless need to accumulate ever more data, the sequential algorithms for association rule mining have proved to be ineffective. Thus, the introduction of new parallel versions is imperative. In this paper, we propose a parallel version of the sequential algorithm 'Partition'. The latter is fundamentally different from other sequential algorithms because it scans the database only twice to generate the significant association rules. Consequently, the parallel approach does not require much communication between the sites. The proposed approach was implemented for an experimental study. The obtained results show a great reduction in execution time compared to the sequential version and the Count Distributed algorithm.
Keywords: association rules, distributed data mining, partition, parallel algorithms
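The two-scan Partition idea lends itself to a compact sketch: mine locally frequent itemsets in each partition in parallel (first scan), then verify the union of local candidates against the whole database (second scan). The transactions, support threshold, and restriction to itemsets of size one and two below are toy assumptions, not the paper's implementation:

```python
# Partition-style frequent itemset mining with one worker process per partition.
from itertools import combinations
from multiprocessing import Pool

TRANSACTIONS = [
    {"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
    {"bread", "milk", "butter"}, {"bread", "milk"}, {"milk"},
    {"bread", "butter"}, {"bread", "milk", "butter"},
]
MIN_SUPPORT = 0.5   # fraction of transactions

def local_candidates(partition):
    """Return itemsets (size 1 and 2) that are frequent within this partition."""
    threshold = MIN_SUPPORT * len(partition)
    items = sorted(set().union(*partition))
    candidates = [frozenset(c) for r in (1, 2) for c in combinations(items, r)]
    return {c for c in candidates if sum(c <= t for t in partition) >= threshold}

if __name__ == "__main__":
    partitions = [TRANSACTIONS[:4], TRANSACTIONS[4:]]
    with Pool(2) as pool:                       # first scan: one worker per partition
        union_of_candidates = set().union(*pool.map(local_candidates, partitions))

    # second scan: count every global candidate once over the full database
    threshold = MIN_SUPPORT * len(TRANSACTIONS)
    frequent = {c for c in union_of_candidates
                if sum(c <= t for t in TRANSACTIONS) >= threshold}
    print(sorted(map(sorted, frequent)))
```

The scheme is sound because any globally frequent itemset must be locally frequent in at least one partition, so the union of local candidates misses nothing and only the second scan touches the full database.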
Procedia PDF Downloads 416
1094 A Model of Empowerment Evaluation of Knowledge Management in Private Banks Using Fuzzy Inference System
Authors: Nazanin Pilevari, Kamyar Mahmoodi
Abstract:
The purpose of this research is to provide a model based on a fuzzy inference system for evaluating the empowerment of knowledge management. The first prototype of the model was developed based on a study of the literature. In the next step, experts were provided with the model, and after consensus-based revisions using the fuzzy Delphi technique, the components and indices of the research model were finalized. Culture, structure, IT, and leadership were considered as dimensions of empowerment. Then, in order to collect and extract data for the fuzzy inference system based on knowledge and experience, the experts were interviewed. The values obtained from the designed fuzzy inference system made it possible to review and assess the organization's knowledge management empowerment. After the design and validation of the systems to measure the indices (knowledge management empowerment and the inputs to the fuzzy inference system) in AYANDEH Bank, a questionnaire was used. In the case of this bank, the system output indicates that knowledge management empowerment, culture, organizational structure, and leadership are at a moderate level, while information technology empowerment is relatively high. Based on these results, the status of knowledge management empowerment in AYANDEH Bank was moderate. Finally, some suggestions for improving the bank's current situation were provided. According to the review of prior research, the use of powerful fuzzy inference system tools for the assessment of knowledge management and knowledge management empowerment, and such an assessment in the field of banking, are the innovations of this research.
Keywords: knowledge management, knowledge management empowerment, fuzzy inference system, fuzzy Delphi
Procedia PDF Downloads 359